AWS makes Lambda cold start latency a thing of the past with SnapStart


At its re:Invent kickoff keynote tonight, AWS announced a small but important update to Lambda, its serverless platform, that tackles one of the most common complaints about the service. Typically, when a function hasn't been invoked for a while, Lambda shuts its virtual machine down, and spinning a new one up for the next request (the dreaded cold start) still takes noticeable time, even with improvements like faster Firecracker microVMs. With SnapStart, AWS addresses this by taking snapshots of a customer's initialized Lambda functions and then simply resuming from those, without going through the usual initialization process each time.

Cold start times have long been one of the biggest complaints about Lambda, even though, as Peter DeSantis, AWS's senior VP of Utility Computing, noted in today's keynote, spiky workloads are exactly what Lambda (and other serverless platforms) were built for. With its Firecracker microVMs, AWS already cut cold start times from multiple seconds to well under a second. Now the company promises a further 90% improvement in cold start times by using Firecracker's snapshotting feature.

The new feature is now available to all Lambda users, though it has to be explicitly enabled on existing functions, and for now it only works for Java functions that use the Corretto runtime.
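For illustration, here is a minimal sketch of what enabling SnapStart on an existing function could look like using the AWS SDK for Java v2; the function name `my-java-function` is a placeholder, and the builder names follow the Lambda UpdateFunctionConfiguration API as documented at launch, so treat the details as an assumption rather than an official sample.

```java
import software.amazon.awssdk.services.lambda.LambdaClient;
import software.amazon.awssdk.services.lambda.model.PublishVersionRequest;
import software.amazon.awssdk.services.lambda.model.SnapStart;
import software.amazon.awssdk.services.lambda.model.SnapStartApplyOn;
import software.amazon.awssdk.services.lambda.model.UpdateFunctionConfigurationRequest;

public class EnableSnapStart {
    public static void main(String[] args) {
        try (LambdaClient lambda = LambdaClient.create()) {
            // Turn on SnapStart for an existing function ("my-java-function" is a placeholder).
            lambda.updateFunctionConfiguration(UpdateFunctionConfigurationRequest.builder()
                    .functionName("my-java-function")
                    .snapStart(SnapStart.builder()
                            .applyOn(SnapStartApplyOn.PUBLISHED_VERSIONS)
                            .build())
                    .build());

            // SnapStart applies to published versions, so publishing a new version
            // triggers the one-time initialization-and-snapshot step.
            lambda.publishVersion(PublishVersionRequest.builder()
                    .functionName("my-java-function")
                    .build());
        }
    }
}
```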

Once SnapStart is enabled and a new function version is published, Lambda runs the standard initialization once, then creates an encrypted snapshot of the memory and disk state and caches it for reuse. When the function is invoked, Lambda resumes from that cached snapshot instead of initializing from scratch. Cached snapshots are removed after 14 days of inactivity.
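To make that concrete, here is a hedged sketch of a Java handler written with SnapStart in mind: the expensive setup lives in static initialization so it runs during the one-time init and ends up captured in the snapshot, while the per-invocation handler stays lightweight. The class, field, and data names are illustrative, not taken from any AWS sample.

```java
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;

import java.util.Map;

public class ProductLookupHandler implements RequestHandler<Map<String, String>, String> {

    // Runs once during initialization. With SnapStart enabled, the state built
    // here (loaded config, parsed reference data, warmed caches, etc.) is part
    // of the encrypted snapshot, so restored invocations skip this work.
    private static final Map<String, String> CATALOG = loadCatalog();

    private static Map<String, String> loadCatalog() {
        // Placeholder for expensive setup: reading files, loading classes,
        // building lookup tables and so on.
        return Map.of("sku-123", "Mechanical keyboard", "sku-456", "USB-C dock");
    }

    @Override
    public String handleRequest(Map<String, String> event, Context context) {
        // Per-invocation work stays small; it runs after the snapshot is restored.
        String sku = event.getOrDefault("sku", "unknown");
        return CATALOG.getOrDefault(sku, "not found");
    }
}
```

For code that holds state a snapshot cannot safely carry across a restore, such as open network connections, AWS also documents runtime hooks (via the org.crac API) that run before the snapshot is taken and after it is restored.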

As DeSantis also noted, improvements like this will enable more users to bring their workloads to a platform like Lambda. The company already saw this with the launch of Firecracker on Lambda, he explained.


