For reference - Lambda functions used to be billed at 100ms intervals. My Node.js function usually only takes 37-40ms to run, so this is a pretty good advancement for cost savings.
> ...we really want you to be able to pay for what you use.
Cloudflare Workers has the right pricing model. They only charge for CPU time and not wall time. They also do not charge for bandwidth.
> Lots of sub 100ms workloads...
AWS Lambda (or Lambda@Edge), as it stands, is 10x more expensive for sub-50ms workloads (Workers does allow up to 100ms at the 99.9th percentile) that can fit in 128MB of RAM.
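Back-of-the-envelope, using list prices as published around now (treat the exact numbers as assumptions; this ignores free tiers and data transfer, and Lambda@Edge's roughly 3x pricing would widen the gap further):

```js
// Rough cost sketch for 100M sub-50ms requests/month at 128MB.
// Prices are assumptions taken from the public pricing pages and may be stale.
const invocations = 100e6; // 100M requests/month
const durationSec = 0.05;  // 50ms wall time per invocation
const memGB = 0.125;       // 128MB

// API Gateway (HTTP API) + Lambda, billed on wall time (now per 1ms):
const apiGatewayUSD = (invocations / 1e6) * 1.0;  // ~$1.00 per 1M requests
const lambdaReqUSD = (invocations / 1e6) * 0.2;   // $0.20 per 1M requests
const lambdaGBsUSD = invocations * durationSec * memGB * 0.0000166667; // per GB-second
console.log('API GW + Lambda ~$', (apiGatewayUSD + lambdaReqUSD + lambdaGBsUSD).toFixed(0)); // ~$130

// Workers (Bundled plan): $5/month including 10M requests, then $0.50 per 1M;
// up to 50ms of CPU time and all bandwidth are included in that price.
const workersUSD = 5 + Math.max(0, (invocations - 10e6) / 1e6) * 0.5;
console.log('Workers ~$', workersUSD.toFixed(0)); // ~$50
```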
That's because keeping track of request state is not free. Ask an edge router. If you have a request open, even though it's not doing CPU, that request has to be in a queue somewhere, tracked for a response that can be transmitted back.
I don't know the infra costs of operating lambda, but my guess is that it's far from CPU-dominated.
I would not be surprised if the Cloudflare pricing model makes a trade-off where CPU-bound workloads pay for more of the infra than the rest. It's a valid trade-off to make as a business offering, and it might be feasible given the mixture of workloads. Whether it's the right way is debatable. Whether this model can be tanked by an army of actors taking advantage of CPU-insensitive pricing remains to be seen; perhaps it's an acceptable risk, one you can observe and protect against.
Yet, if you're a Cloudflare user, all of your edges are there, so it doesn't matter. We use Workers extensively for "edge"-related things; Lambda, never for that, but for working with S3 buckets, sure. They feel similar, but differently specialized.
They're not easily comparable (I tried using Cloudflare Workers before going back to AWS). Lambda@Edge runs Node or Python. Cloudflare Workers runs V8 with "worker isolates" which has a few more caveats, an imperfect but improving dev experience, and doesn't work with a lot of npm packages.
What would be really useful for my use case (running browser tests on a schedule) is if Cloudflare workers actually supported running full headless chromium automation in addition to just V8 isolates. Right now I'm using puppeteer/playwright + Lambda, but would love to have more options.
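For anyone curious, here's a minimal sketch of the Lambda side of that setup, using the community chrome-aws-lambda package (which bundles a Lambda-compatible Chromium) with puppeteer-core; the handler shape is illustrative:

```js
// Minimal sketch: headless Chromium in Lambda via chrome-aws-lambda.
// The event payload shape (event.url) is an assumption for illustration.
const chromium = require('chrome-aws-lambda');

exports.handler = async (event) => {
  const browser = await chromium.puppeteer.launch({
    args: chromium.args,
    executablePath: await chromium.executablePath,
    headless: chromium.headless,
  });
  try {
    const page = await browser.newPage();
    await page.goto(event.url || 'https://example.com', { waitUntil: 'networkidle0' });
    return { title: await page.title() };
  } finally {
    await browser.close();
  }
};
```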
Workers aren't the same as Lambdas; they're super slim JS environments. At a 50ms max runtime, most browsers won't even start, let alone fetch and process a page.
No, to be clear, I'm saying you are comparing things that are way more different than our friends at Cloudflare would like you to think. Workers aren't brought up in any of the convos I have with customers.
It's a quick Google: 128MB max memory, 6 concurrent outgoing connections max, 1MB code size limit. The use case here is a subset of what AWS Lambda can handle. The supported languages also differ (only things that compile to JS/WASM work on Cloudflare Workers).
I haven't looked deeply, so please correct me if I'm wrong, but I understand there are also restrictions on the built-in APIs available [1] and on the npm packages supported for Node.js.
I would assume some of the above contributes to the price difference.
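For context, a Worker is a service-worker-style script running in a V8 isolate, not a Node process, which is where a lot of the npm incompatibility comes from. A minimal sketch (the origin URL is a placeholder):

```js
// Service-worker-style Workers API: no Node built-ins, just the V8
// isolate plus web-platform APIs like fetch, Response, and URL.
addEventListener('fetch', (event) => {
  event.respondWith(handleRequest(event.request));
});

async function handleRequest(request) {
  const url = new URL(request.url);
  // e.g. proxy to an origin and rewrite at the edge; origin is a placeholder
  const upstream = await fetch('https://origin.example.com' + url.pathname);
  return new Response(upstream.body, {
    status: upstream.status,
    headers: { 'content-type': upstream.headers.get('content-type') || 'text/plain' },
  });
}
```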
It isn't about the products, it is about the pricing model in a similar market.
Second, for sub 50ms workloads [0], Workers is absolutely a superior solution to API Gateway + Lambda or Cloudfront + Lambda at Edge if the workloads can fit 128MB RAM and package/compile to 1MB JavaScript or WASM executables, in terms of cost, speed, latency, ease of development etc
[0] For Workers, 50ms is all CPU time and that is definitely not the case with Lambda which may even charge you for the time it takes to setup the runtime to run the code and time spent doing Network IO and bandwith and RAM and vCPUs and what not.
Based. "That's just an edge case. Our customers love this service!"
It's like going to a restaurant that uses bottled water instead of tap water, and they don't provide an answer as to what the benefits of bottled water are.
But you're telling us that Lambda's prices are justifiably higher because of the strong vendor lock-in? AWS is starting to sound more like Oracle. Ironic. :)
Besides the fact that Cloudflare's part of the Bandwidth Alliance with GCP and other infrastructure providers from which AWS is conspicuously absent, Cloudflare's also slowly but surely building a portfolio of cloud services.
Lambda's pricing is indeed higher than Cloudflare Workers for sub-50ms workloads (that fit in 128MB of RAM).
Cloudflare's alliance with other infrastructure providers means Cloudflare's platform isn't really limited to "API" workloads. And that's not counting the fact that Cloudflare recently announced Workers Unbound for workloads that need to run longer (up to 30 minutes), though then they do charge for bandwidth.
The question here isn't the price change itself (which is in some sense mainly about balancing short functions and long functions, removing the penalty for short functions), it's where the pricing stands overall vs Cloudflare.
This comment would be much more useful if you gave some clear examples of the difference (presumably something you get on Lambda that makes it worth more per ms than Cloudflare).
>> AWS Lambda (or Lambda at Edge), as it stands, is 10x more expensive for sub 50ms workloads
Not sure about this; most use cases of Lambda use other resources and do not exist in a vacuum. Comparisons should be made between complete systems, not just parts.
Not if you're actually taking up that much cache storage, but bandwidth has plenty of examples of high usage on low tiers. They usually allow it as long as you're not adversely affecting the rest of the network, since the lines are already paid for (which is the right approach, IMO).
Chris, while I've seen the change in my accounts on regular Lambda, I don't yet see it on Lambda@Edge. I think Lambda@Edge is the place where we'd benefit from this change the most, because many L@E scenarios take single-digit milliseconds, and the cost of L@E is 3x regular Lambda.
Any word on whether we'll also see this change on L@E billing?
Yes, to be clear, this change was just for Lambda. L@E is honestly a completely different service run by a different part of AWS that just happens to share parts of our core worker platform. I am not 100% aware of when they might adjust their own pricing on this, but I also couldn't share any roadmap here (sorry).
How does that even work? Lambda seems like a challenge even with the entirety of the datacenter resources to work with. Running it in constrained edge environments with a VM per function seems like black magic.
The naming is a bit of a misnomer: today L@E doesn't run at the edge (in our PoPs). When you deploy, it copies to every region, and then CloudFront routes you to the lowest-latency region for your request.
Okay, nice. And if I would like, say, 32 vCPUs? We have an application today with a huge degree of parallelism, currently on an external cloud provider that offers dedicated machines at very affordable pricing. Would really like to use Lambdas instead, though.
I would love to see this as well: having 96-vCPU Lambda instances (or instances that match the biggest C-family instance you have) would solve a lot of problems for me. The execution model of Lambda (start a runtime, handle requests, AWS handles pool management) feels much easier to use than managing a pool.
Someone from AWS once commented to me that "if you're ever having to manage a pool rather than letting us manage it, that's a gap in our services".
Just out of curiosity, what are you getting out of your 160 million CPU cycles? Are you mostly on the CPU, or mostly waiting for something (database call or whatever)?
I want to do something that a low-level hacker could do in 100 clock cycles with hardcoded bit twiddling and some AVX-512, but I want to use Node.js, so I'm gonna need at least 100 million clock cycles to parse all the npm modules...
Not sure why you need a whole bunch of npm modules to do bit twiddling in a performance-sensitive Lambda function. Sounds like you just don't like JavaScript.
Edit: You're spending like 80ms on cold-start of your lambda function, plus network overhead. If you can spare that, you can likely spare the half a millisecond for the 999,900 cycles you're complaining about.
So this confirms there is a lot of competition in the serverless space: AWS Lambda, Azure Functions, Google Cloud Functions, and serverless containers like Knative and Google Cloud Run...
Just out of curiosity, could you share what kind of things you use it for?
I've never used Lambda, but any time I have a function that I need to run in response to some event or periodically (that's what Lambda is, right?), it's set up in a background worker specifically because it's long and slow, as anything fast can be done synchronously without the overhead.
For longer tasks, spinning up an EC2 or Beanstalk instance is probably the way to go.
As for what to use it for, we used it in our application (deployed to Netlify which uses Lambda under the hood) where lambdas operated like a 'proxy' to various 3rd party API suppliers (Commercetools, Adyen, some age verification service), and those too would use a lambda function to ping back at us (e.g. when payment was confirmed). Worked pretty well, although in retrospect I would've preferred a 'normal', monolithic server to do the same thing.
Enough that in 2019 it was the most popular topic at re:Invent (our big user conference) and that today per our re:Invent announcement almost half of all new compute workloads in Amazon are based on it. Pretty heavily used across different industries and verticals.
I rarely use Lambda, but I use a lot of Google firebase-functions for the majority of my server code. From my experience, Lambda/firebase-functions/Azure Functions are very popular. One simple use case I can point to is the payment-successful return hook from payment servers like Stripe. It's a tiny task which just logs the payment-success info and triggers an email, etc.
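A minimal sketch of that kind of hook in firebase-functions (the function name and email step are hypothetical, and real code should verify Stripe's webhook signature before trusting the body):

```js
// Sketch of a Stripe payment webhook as an HTTPS firebase function.
const functions = require('firebase-functions');

exports.stripeWebhook = functions.https.onRequest(async (req, res) => {
  const event = req.body;
  if (event.type === 'checkout.session.completed') {
    console.log('payment confirmed:', event.data.object.id);
    // await sendConfirmationEmail(event.data.object.customer_email); // hypothetical helper
  }
  res.sendStatus(200);
});
```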
Not necessarily. For low frequency workloads with reasonably long step times, Lambda can still make sense. (E.g. When videos appear in this S3 bucket, process them.)
You might only drop videos in once a week, but when you do you want to run some code against them. There are plenty of distributed workflow reasons to run long running Lambdas infrequently rather than spinning up and down an EC2 instance.
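The wiring for that is small. A sketch, assuming an S3 event trigger on the bucket (the processing step is a placeholder):

```js
// Sketch of an S3-triggered Lambda: it fires only when a video lands
// in the bucket, so nothing runs (or bills) the rest of the week.
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

exports.handler = async (event) => {
  for (const record of event.Records) {
    const bucket = record.s3.bucket.name;
    const key = decodeURIComponent(record.s3.object.key.replace(/\+/g, ' '));
    const { Body } = await s3.getObject({ Bucket: bucket, Key: key }).promise();
    await processVideo(key, Body);
  }
};

async function processVideo(key, bytes) {
  // placeholder: transcode, thumbnail, etc.
  console.log(`processing ${key} (${bytes.length} bytes)`);
}
```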
Lambdas are underpowered and often poor choices for compute-heavy workloads. Unless there's an urgency to processing infrequent videos, it might make more sense to backlog messages to the queue and use spot instances for draining the queue and processing videos, especially from a cost perspective. Though I acknowledge that this is a more complex setup.
As was mentioned by qvrjuec in a sibling comment, the hardware is limited. I seem to remember CPU speeds listed alongside available memory for AWS Lambdas, but the pricing page seems to just list memory now [0]. At the highest end, you're still limited to ~10.2GB of memory, which is considerably lower than what's available via EC2. And while I have no personal experience with the finer-grained EC2 pricing that was announced [1], it sounds like it may be a better fit for the scenario described above. We can nitpick on these architectural details, but my response was largely that there are other architectural alternatives that could be more ideal, especially in response to a comment that seems to dismiss the value of pricing at finer time intervals.
> We can nitpick on these architectural details, but my response was largely that there are other architectural alternatives that could be more ideal, especially in response to a comment that seems to dismiss the value of pricing at finer time intervals.
Not trying to nitpick anything; just curious what was meant by "underpowered". Seems like there's still a breadth of compute-intensive use cases that are more appropriate for lambda--e.g., cost is more sensitive than latency and I have too low a volume of requests for a dedicated EC2 instance to make economic sense. This has been where I've spent most of my career, but no doubt there are many use cases where this doesn't hold.
Limitations on hardware one can run a lambda function on and constraints on execution time mean they are "underpowered" compared to other options, like ECS Fargate tasks.
Does Fargate allow you to run on beefier hardware? I know you can bring your own hardware with vanilla ECS. I’m aware of the execution time constraints (15 minutes), but I thought we were talking about 60s?
Right, but the first thought I had was: couldn't you fan out and run a Lambda for each frame or a group of related frames? (E.g., batch HLS processing would be really easy!) If so, you're back to short Lambdas again. It's really the sweet spot for using Lambda after all: lots of big jobs can be broken down into lots of little jobs, etc.
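A hypothetical coordinator could fan out with async invokes, something like this (the worker function name and payload shape are made up):

```js
// Hypothetical fan-out: one coordinator invokes a worker Lambda
// asynchronously per chunk of frames, so each worker stays short.
const AWS = require('aws-sdk');
const lambda = new AWS.Lambda();

async function fanOut(videoKey, totalFrames, chunkSize = 250) {
  const invokes = [];
  for (let start = 0; start < totalFrames; start += chunkSize) {
    invokes.push(
      lambda.invoke({
        FunctionName: 'process-frame-chunk', // hypothetical worker function
        InvocationType: 'Event',             // async fire-and-forget
        Payload: JSON.stringify({
          videoKey,
          start,
          end: Math.min(start + chunkSize, totalFrames),
        }),
      }).promise()
    );
  }
  await Promise.all(invokes);
}
```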
Plausibly. But that might be more effort than just writing the code to ingest a video file (or some other big data blob) in the simplest, most straightforward way possible.
Lambda has a 15-minute limit. I'm not sure exactly how it compares to EC2, but for a low-duty-cycle application it still makes sense! It is also pretty easy to combine a Lambda with SNS or SQS.
Instead of processing just the 10 messages SQS delivers per Lambda invocation, we process those 10, then start polling for more using the same Lambda, and don't stop until the Lambda is just about to die.
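In sketch form, the pattern leans on context.getRemainingTimeInMillis() (the queue URL and the work itself are placeholders):

```js
// Sketch of the drain-until-almost-dead pattern: process the delivered
// batch, then keep long-polling the same queue until the invocation is
// close to its timeout. Messages fetched manually must also be deleted
// manually, unlike the batch SQS delivers to the trigger.
const AWS = require('aws-sdk');
const sqs = new AWS.SQS();
const QueueUrl = process.env.QUEUE_URL; // placeholder

exports.handler = async (event, context) => {
  await processBatch(event.Records.map((r) => r.body));

  while (context.getRemainingTimeInMillis() > 30000) { // keep a 30s safety margin
    const { Messages } = await sqs
      .receiveMessage({ QueueUrl, MaxNumberOfMessages: 10, WaitTimeSeconds: 5 })
      .promise();
    if (!Messages || Messages.length === 0) break;

    await processBatch(Messages.map((m) => m.Body));
    await sqs
      .deleteMessageBatch({
        QueueUrl,
        Entries: Messages.map((m) => ({ Id: m.MessageId, ReceiptHandle: m.ReceiptHandle })),
      })
      .promise();
  }
};

async function processBatch(bodies) {
  // placeholder: whatever the real work is
  for (const body of bodies) console.log('processing', body);
}
```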
Forgive me if this is naive, but why not trigger a lambda for each message separately? I think they’ll automatically reuse lambdas instead of spinning down
If Amazon reduces costs behind the scenes, they can maintain the same revenue while lowering prices for everyone, by helping people deploy previously cost-prohibitive infrastructure.
(i.e., if people now use 1.5x as many Lambdas, AWS can lower prices by 1/3 and keep the same revenue, since 1.5 × 2/3 = 1, and everyone wins).
The Jevons paradox applies to compute. In economics, the Jevons paradox occurs when technological progress increases the efficiency with which a resource is used, but the rate of consumption of that resource rises anyway due to increasing demand.
The more efficient (or cheaper) compute gets, the more uses we find for it, raising its consumption.
People think about pricing as a zero sum game. In reality, there are very few things in the world that are zero sum games.
This is also a competitive market. AWS has to fight against GCP and Azure for new customers, and in a competitive market you don't get a guarantee to make up the money you lose somewhere else.