AWS Lambda pricing now per ms (amazon.com)
629 points by astuyvenberg on Dec 1, 2020 | 275 comments



For reference - Lambda functions used to be billed at 100ms intervals. My Node.js function usually only takes 37-40ms to run. So this is a pretty good advancement for cost savings.


Awesome! That was the idea here. Lots of sub 100ms workloads and we really want you to be able to pay for what you use.

- Chris, Serverless@AWS


> ...we really want you to be able to pay for what you use.

Cloudflare Workers has the right pricing model. They only charge for CPU time and not wall time. They also do not charge for bandwidth.

> Lots of sub 100ms workloads...

AWS Lambda (or Lambda at Edge), as it stands, is 10x more expensive for sub 50ms workloads (Workers does allow up to 100ms for the 99.9th percentile) that can fit 128MB RAM.

https://medium.com/@zackbloom/serverless-pricing-and-costs-a...
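A back-of-the-envelope version of that comparison (a sketch, not the article's exact math; the price constants are my assumptions from the ~2020 public price lists, and bandwidth, which Workers doesn't charge for at all, is excluded):

```python
# Rough cost per 1M requests for a sub-50ms, 128MB HTTP function.
# All constants below are assumptions from the ~2020 public price lists.
GB = 128 / 1024        # memory in GB
billed_s = 0.1         # old billing: sub-50ms rounded up to 100ms

lambda_compute = 1e6 * GB * billed_s * 0.0000166667   # Lambda $/GB-second
lambda_total = lambda_compute + 0.20 + 3.50           # + Lambda requests + API Gateway (REST)

edge_compute = 1e6 * GB * billed_s * 0.00005001       # Lambda@Edge $/GB-second
edge_total = edge_compute + 0.60 + 1.00               # + L@E requests + CloudFront requests

workers_total = 0.50                                  # Workers flat per-million request price

print(f"Lambda + API Gateway:     ${lambda_total:.2f}")   # ~$3.91
print(f"Lambda@Edge + CloudFront: ${edge_total:.2f}")     # ~$2.23
print(f"Workers:                  ${workers_total:.2f}")
```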


That's because keeping track of request state is not free. Ask an edge router. If you have a request open, even though it's not doing CPU, that request has to be in a queue somewhere, tracked for a response that can be transmitted back.

I don't know the infra costs of operating lambda, but my guess is that it's far from CPU-dominated.

I would not be surprised if the Cloudflare pricing model is making a tradeoff to make CPU-bound workloads pay for more of the infra than the rest. It's a valid trade-off to make as a business offering, and it might be feasible given the mixture of workloads. Whether it's the right way is debatable. Whether this model can be tanked by an army of actors taking advantage of CPU-insensitive pricing remains to be seen, or is an acceptable risk that you can take (which you can observe and protect against).


Except that none of the rest of your infrastructure is there, and that APIs represent just a non-majority part of Lambda workloads.


Yet, if you're a Cloudflare user, all of your edges are there - so it doesn't matter. We use Workers extensively for "edge" related things. Lambda, never - but for working with S3 buckets, sure. They feel similar, but differently specialized.


Lambdas are UDFs for S3.


To be clear, you're acknowledging Cloudflare has a much better pricing model but just not as many other services yet?


They're not easily comparable (I tried using Cloudflare Workers before going back to AWS). Lambda@Edge runs Node or Python. Cloudflare Workers runs V8 with "worker isolates" which has a few more caveats, an imperfect but improving dev experience, and doesn't work with a lot of npm packages.


What would be really useful for my use case (running browser tests on a schedule) is if Cloudflare workers actually supported running full headless chromium automation in addition to just V8 isolates. Right now I'm using puppeteer/playwright + Lambda, but would love to have more options.


Headless browser tests seem to be quite far away from the problems cloudflare workers are trying to solve.

https://developers.cloudflare.com/workers/platform/limits

Workers aren't the same as lambdas, they are super slim JS environments. At 50ms max runtime most browsers won't even start, let alone fetch and process a page.


CloudWatch Synthetics may fit your usecase? https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitori...

Then there are more specialized browser-testing providers like LambdaTest.com and BrowserStack.com


and the venerable WPT (private instances, not the free public one at webpagetest.org)


I wish Google Cloud supported something similar to Lambda@Edge, but I think my only alternative is Cloudflare Workers at the moment.


No, to be clear, I'm saying you are comparing things that are way more different than our friends at Cloudflare would like you to think. They aren't brought up in any of the convos I have with customers.


> No, to be clear, I'm saying you are comparing things that are way more different than our friends at Cloudflare would like you to think.

Care to expand on that? What exactly do you mean by "things are way more different"?


Last I looked Cloudflare Workers had different limits and constraints:

https://developers.cloudflare.com/workers/platform/limits

It's a quick Google. 128MB max memory, 6 concurrent outgoing connections max, 1MB code size limit. The use case here is a subset of what AWS Lambda can handle. The supported languages also differ (only things that have a JS / wasm conversion for Cloudflare Workers).

I haven't looked deeply, so please correct me if I'm wrong, but I understand there's also restrictions on the built-in APIs available [1] and npm packages supported for NodeJS.

I would assume some of the above contributes to the price difference.

1 - https://developers.cloudflare.com/workers/runtime-apis/web-s...


This is pretty standard AWS marketing spiel. They make vague assertions of "ah but you're not considering the big picture...." with no details


If you want to compare two things that are different in a ton of ways, don't be mad when someone points it out.


It isn't about the products, it is about the pricing model in a similar market.

Second, for sub 50ms workloads [0], Workers is absolutely a superior solution to API Gateway + Lambda or CloudFront + Lambda@Edge if the workloads can fit 128MB RAM and package/compile to 1MB JavaScript or WASM executables, in terms of cost, speed, latency, ease of development, etc.

[0] For Workers, 50ms is all CPU time, and that is definitely not the case with Lambda, which may even charge you for the time it takes to set up the runtime to run the code, plus time spent doing network IO, and bandwidth and RAM and vCPUs and whatnot.


Based. "That's just an edge case. Our customers love this service!"

It's like going to a restaurant that uses bottled water instead of tap water, and they don't provide an answer as to what the benefits of bottled water are


Not from AWS, but the isolation model is completely different.

On Lambda you get a full environment inside an OS container. On CloudFlare you get a WASM process.

The Lambda model is more compatible which can be a real benefit.


But you're telling us that Lambda's prices are justifiably higher because of the strong vendor lock-in? AWS is starting to sound more like Oracle. Ironic. :)

Besides the fact that Cloudflare's part of the Bandwidth Alliance with GCP and other infrastructure providers from which AWS is conspicuously absent, Cloudflare's also slowly but surely building a portfolio of cloud services.


This reply is in bad faith. He did not attempt to "justify" the pricing with "vendor lock-in". Indeed, the prices went down, not up.


Lambda's pricing is indeed higher than Cloudflare Workers for sub 50ms workloads (that fit 128MB RAM).

Cloudflare's alliance with other infrastructure providers means Cloudflare's platform isn't really limited to "API" workloads. This is discounting the fact that Cloudflare recently announced Workers Unlimited for workloads that need to run longer (up to 30 mins), though then they do charge for bandwidth.


The question here isn't the price change (which is in some sense mainly about balancing short functions and long functions, removing the penalty for short functions), it's where the pricing is at overall vs Cloudflare.


This comment would be much more useful if you gave some clear examples of the difference (presumably something you get on Lambda that makes it worth more per ms than Cloudflare).

Otherwise it's just "AWS said, Cloudflare said"


It's nice to get a semi-official confirmation of AWS pricing strategy: create lock in, then overcharge.


>> AWS Lambda (or Lambda at Edge), as it stands, is 10x more expensive for sub 50ms workloads

Not sure about this, most use cases of Lambda use other resources and do not exist in a vacuum. Comparison should be made using complete systems not only parts.


> They also do not charge for bandwidth.

Is there fine print on this? Can I put 100TB / mo through their caching servers at the lowest $20 price tier?


Not if you're actually taking up that much cache storage, but for bandwidth there are plenty of examples of high usage on low tiers. They usually allow it as long as you're not affecting the rest of the network adversely, since the lines are already paid for (which is the right approach IMO).


Yes, but you'll probably get an email about it.


Very high bandwidth usage for Cloudflare Workers workloads is not against ToS according to Cloudflare's CEO: https://news.ycombinator.com/item?id=20790857


Chris, while I've seen the change in my accounts on regular Lambda, I don't yet see it on Lambda@Edge. I think Lambda@Edge is the place where we'd benefit from this change the most, because many L@E scenarios take single-digit milliseconds, and the cost of L@E is 3x regular Lambda.

Any word on whether we'll also see this change on L@E billing?


Yes, to be clear this change was just for Lambda. L@E is honestly a completely different service run by a different part of AWS that just happens to share parts of our core worker platform. I am not 100% aware of when they might adjust their own pricing on this, but also couldn't share any roadmap here (sorry).


How does that even work? Lambda seems like a challenge even with the entirety of the datacenter resources to work with. Running it in constrained edge environments with a VM per function seems like black magic.


The naming is a bit of a misnomer: today L@E doesn't run at the edge (in our PoPs); when you deploy, it copies to every region and then CloudFront routes you to the lowest-latency region for your request.


So is L@E multi region then? Like if I have two concurrent requests across the globe are they serviced from two locations?


Yes, but you deploy it from one region.


Wait, are you Chris Munns from Music2Go? If so, massively small world. My email is username @setec.io; would love to hear from you.

-Harlan, the ex-intern


It me.


I have only a lambda@edge function which usually runs between 10-20ms.

If this also covers lambda@edge, this will save us quite some money.


Hi, are there any plans on offering instances with more CPU cores than the maximum of 2 that I guess you have today?


Yes, announced today you can go up to 10GB / 6 vCPUs: https://aws.amazon.com/blogs/aws/new-for-aws-lambda-function...


Okay, nice. And if I would like 32 vCPUs? I have an application today that has a huge degree of parallelism, but am utilizing an external cloud provider that offers dedicated machines with very affordable pricing. Would really like to use lambdas instead though.


Interesting. Not possible today. We'd still encourage paralyzation up through multiple concurrency of functions being executed.


I would love to see this as well: having 96-vCPU Lambda instances (or instances that match the biggest C-family instance you have) would solve a lot of problems for me. The execution model of Lambda (start a runtime, handle requests, AWS handles pool management) feels much easier to use than managing a pool.

Someone from AWS once commented to me that "if you're ever having to manage a pool rather than letting us manage it, that's a gap in our services".


"paralyzation" -> parallelization, yeah? :)


It was a long day yesterday, thank you :)


Hey Chris. Happy to see your awesome trajectory from Meetup admin to The Serverless AWS guy.


::waves at Yuri::

Thanks! It's been a fun/interesting 4 years in this space :)


Just out of curiosity, what are you getting out of your 160 million CPU cycles? Are you mostly on the CPU, or mostly waiting for something (database call or whatever)?


I want to do something that a low level hacker could do in 100 clock cycles with hardcoded bit twiddling and some avx-512, but I want to use nodejs, so I'm gonna need at least 100 million clock cycles to parse all the npm modules...


Not sure why you need a whole bunch of npm modules to do bit twiddling in a performance-sensitive lambda function. Sounds like you just don't like Javascript.

Edit: You're spending like 80ms on cold-start of your lambda function, plus network overhead. If you can spare that, you can likely spare the half a millisecond for the 999,900 cycles you're complaining about.


So this confirms there is a lot of competition in the serverless space: AWS Lambda, Azure Cloud Functions, Google Cloud Functions, serverless containers like Knative, Google Cloud Run...


Just out of curiosity, could you share what kind of things you use it for?

I've never used Lambda, but any time I have a function that I need to run in response to some event or periodically (that's what Lambda is, right?), it's set up in a background worker specifically because it's long and slow, as anything fast can be done synchronously without the overhead.


Lambda is not specifically made for long and slow tasks; AWS Lambda specifically has a maximum execution time of 15 minutes (and you have to explicitly configure it to do so, see https://aws.amazon.com/about-aws/whats-new/2018/10/aws-lambd....).

For longer tasks, spinning up an EC2 or Beanstalk instance is probably the way to go.

As for what to use it for, we used it in our application (deployed to Netlify which uses Lambda under the hood) where lambdas operated like a 'proxy' to various 3rd party API suppliers (Commercetools, Adyen, some age verification service), and those too would use a lambda function to ping back at us (e.g. when payment was confirmed). Worked pretty well, although in retrospect I would've preferred a 'normal', monolithic server to do the same thing.
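For concreteness, that kind of proxy lambda is roughly the following (a minimal sketch; the supplier URL and response fields are made up for illustration):

```python
import json
import urllib.request

SUPPLIER_URL = "https://api.example-supplier.test/orders"  # hypothetical 3rd party endpoint

def handler(event, context):
    # Forward the incoming request body to the third-party API...
    body = (event.get("body") or "{}").encode()
    req = urllib.request.Request(
        SUPPLIER_URL, data=body,
        headers={"Content-Type": "application/json"}, method="POST",
    )
    with urllib.request.urlopen(req, timeout=5) as resp:
        upstream = json.loads(resp.read())

    # ...and translate the supplier's response into our own API shape.
    return {
        "statusCode": 200,
        "body": json.dumps({"orderId": upstream.get("id"), "status": upstream.get("state")}),
    }
```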


Warm or cold?


How many people actually use lambda? Always came off as a gimmick to me.


Enough that in 2019 it was the most popular topic at re:Invent (our big user conference) and that today per our re:Invent announcement almost half of all new compute workloads in Amazon are based on it. Pretty heavily used across different industries and verticals.


I rarely use lambda but I use a lot of Google firebase-functions for the majority of my server code. From my experience lambda/firebase-functions/Azure-functions are very popular. One simple use case I can mention is the payment-successful return-hook from payment servers like Stripe. It's a tiny task which just logs payment-success info and triggers an email, etc.


They can seem not that useful by themselves. Their power comes from the integration with the rest of AWS services.


> So this is a pretty good advancement for cost savings.

For some people. Those cost savings are made up somewhere else. Ultimately, Amazon is not a loss leader.


… yes? I don't use AWS Lambda at all, so I don't see any cost savings. It's implicit that not everyone will see the savings.


Why only for some people? It always rounded up to 100ms. If your function took 101ms, it was billed as 200ms.


If it takes 60 seconds, that's 600 of the old 100ms steps. Saving part of a 601st step isn't that much of a saving.

It matters close to 0.


If a process takes that long, lambda would be a poor architectural choice.


Not necessarily. For low frequency workloads with reasonably long step times, Lambda can still make sense. (E.g. When videos appear in this S3 bucket, process them.)

You might only drop videos in once a week, but when you do you want to run some code against them. There are plenty of distributed workflow reasons to run long running Lambdas infrequently rather than spinning up and down an EC2 instance.


Lambdas are underpowered and often poor choices for compute-heavy workloads. Unless there's an urgency to processing infrequent videos, it might make more sense to backlog messages to the queue and use spot instances for draining the queue and processing videos, especially from a cost perspective. Though I acknowledge that this is a more complex setup.


I haven’t heard that lambdas are “underpowered” before, but I’m interested to learn more. Could you elaborate just a bit on why they are underpowered?


As was mentioned by qvrjuec in a sibling comment, hardware is limited. I seem to remember CPU speeds listed alongside available memory for AWS Lambdas, but the pricing page seems to just list memory now[0]. At the highest end, you're still limited to ~10.2GB of memory, which is considerably lower than what's available via EC2. And while I have no personal experience with the EC2 finer-grained pricing that was announced[1], it sounds like that approach may be a better approach to the described scenario above. We can nitpick on these architectural details, but my response was largely that there are other architectural alternatives that could be more ideal; especially in response to a comment that seems to dismiss the value of pricing at finer time intervals.

[0] https://aws.amazon.com/lambda/pricing/

[1] https://www.cnbc.com/2017/09/18/aws-starts-charging-for-ec2-...


> We can nitpick on these architectural details, but my response was largely that there are other architectural alternatives that could be more ideal; especially in response to a comment that seems to dismiss the value of pricing at finer time intervals.

Not trying to nitpick anything; just curious what was meant by "underpowered". Seems like there's still a breadth of compute-intensive use cases that are more appropriate for lambda--e.g., cost is more sensitive than latency and I have too low a volume of requests for a dedicated EC2 instance to make economic sense. This has been where I've spent most of my career, but no doubt there are many use cases where this doesn't hold.


Limitations on hardware one can run a lambda function on and constraints on execution time mean they are "underpowered" compared to other options, like ECS Fargate tasks.


Does Fargate allow you to run on beefier hardware? I know you can bring your own hardware with vanilla ECS. I’m aware of the execution time constraints (15 minutes), but I thought we were talking about 60s?


Right but the first thought I had was, couldn’t you fan out and run a lambda for each frame or a group of related frames? (E.g. Batch HLS processing would be really easy!) If so, you’re back to short lambdas again. It’s really the sweet spot for using Lambda after all: lots of big jobs can be broken down into lots of little jobs, etc.
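Roughly what that fan-out could look like (a sketch; the `process-chunk` worker function and the frame-range chunking are made up for illustration):

```python
import json
import boto3

lam = boto3.client("lambda")

def fan_out(total_frames, chunk_size=500):
    # Fire one async invocation per frame range; each worker stays a short lambda.
    for start in range(0, total_frames, chunk_size):
        payload = {"start_frame": start, "end_frame": min(start + chunk_size, total_frames)}
        lam.invoke(
            FunctionName="process-chunk",        # hypothetical worker function
            InvocationType="Event",              # async: don't wait for the result
            Payload=json.dumps(payload).encode(),
        )
```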


Plausibly. But that might be more effort than just writing the code to ingest a video file (or some other big data blob) in the simplest, most straightforward way possible.


If your process takes more than 10 seconds, the extra 100ms charge is pretty much noise.


Lambda has a 15 minute limit, I'm not sure exactly how it compares to EC2 but for a low duty cycle application it still makes sense! It is also pretty easy to combine a lambda with SNS or SQS


My lambdas run for 15 minutes. I feel that they're still a great choice :)


Huh... that's the limit of lambda functions, are you doing some sort of work in 15 minute chunks?


Instead of processing 10 messages off of SQS per lambda we process 10, then start polling for more using the same lambda, and don't stop until the lambda is just about to die.
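Roughly, for anyone curious (a sketch; the queue URL and the processing are placeholders). `context.get_remaining_time_in_millis()` is how we know the lambda is about to die:

```python
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/example-queue"  # placeholder

def handler(event, context):
    # Keep draining the queue until ~30s of budget remains, then exit cleanly.
    while context.get_remaining_time_in_millis() > 30_000:
        resp = sqs.receive_message(
            QueueUrl=QUEUE_URL,
            MaxNumberOfMessages=10,
            WaitTimeSeconds=10,          # long poll
        )
        for msg in resp.get("Messages", []):
            process(msg["Body"])         # placeholder for the real work
            sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])

def process(body):
    pass  # real processing goes here
```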


Forgive me if this is naive, but why not trigger a lambda for each message separately? I think they’ll automatically reuse lambdas instead of spinning down


I used an extreme to show the point. At 800 ms the savings are also close to 0.


Yes, but even then it's still a saving, however small, not a loss, so I fail to see the point of this.


If Amazon reduces costs behind the scenes, they can maintain the same revenue while lowering prices for everyone, by helping people deploy previously cost-prohibitive infrastructure.

(i.e., if people now use 1.5x as many Lambdas, they can lower costs by 1/3, and everyone wins).


The Jevons paradox applies to compute. In economics, the Jevons paradox occurs when technological progress increases the efficiency with which a resource is used, but the rate of consumption of that resource rises due to increasing demand. The more efficient (or cheaper) compute gets, the more uses we find for it, raising the consumption of it.

People think about pricing as a zero sum game. In reality, there are very few things in the world that are zero sum games.


This is also a competitive market. AWS has to fight against GCP and Azure for new customers, and in a competitive market, you don't get a guarantee to make up the money you lose somewhere else.


Necessary change. Now writing stuff in fast languages suddenly matters for cost, changing the landscape of when these solutions might become viable for a given use case.


And you can pick any language you'd like as it supports arbitrary containers now too: https://news.ycombinator.com/item?id=25267182


This should be at the top of this thread! This is huge and fixes my #1 gripe with Lambda -- that managing dependencies is "non-standard" and you can't use tools like Docker. Plus, the 250MB limit is brutal.

This is really, really exciting!


Oh that's quite big news if you run an app that has to be deployed cross cloud.

We use very little serverless at the moment, because the three clouds we need to deploy to have infuriating differences between their execution and deployment environments. E.g. how they manage dependencies, the runtimes, how you describe and deploy each function.

Compared at least to K8s, where the containers you build run just fine wherever you put them.


Wow, I haven’t read yet, but I’m very curious if this affects the size limits on the lambdas. I’ve tried to build lambdas for a small Python use case, but by the time I imported pandas and a few other libraries I exceeded the 250 mb limit and migrated to ECS/Fargate, but the startup times were much longer.


https://twitter.com/chrismunns/status/1333825503464214530

Docker containers can be up to 10GB, traditional lambdas are still limited to 250MB.


That's me. We've got some fun things we do behind the scenes to keep Lambda container image support snappy. So yes, up to 10GB artifacts with container image support.


[flagged]


I don't know how this is related to my comment. In my case we used a slow language (although Python's poor performance and its large bundle sizes are only indirectly related at best) and we had to spend a lot more by moving to ECS/Fargate. If we were using Go, our bundle sizes would've been 30x smaller (I checked) and would've fit in a lambda easily. Not only would it have fit easily, it would have made a lot of progress before the Python version even finished importing its dependencies. And on top of all of that, it would have out-performed the Python version by a good order of magnitude. If anything, my anecdote supports the idea that Amazon wants you to use fast languages, especially now that they offer per-ms pricing for lambdas.


The thing you put inside the container still has to speak Lambda's API, but yes, pretty cool.


For per ms billing to matter, you probably want to limit your language choices.


True: Last week HN had this to say about AWS + Rust:

https://news.ycombinator.com/item?id=25200324


[flagged]


For a 12-hour old account you sure are trolling this thread pretty damn hard. Maybe take a step back, catch your breath?


We are drowning in so much cynicism these days. Can't we just accept that some things are actually good news?


I've been here for a while resisting the temptation to write a sarcastic comment. Speed has been the opposite of what matters for decades. Every change is always trading speed for something else. And suddenly some offer by Amazon is going to change that? Seems unlikely.


Speed has never not mattered; whoever told you that hand waved over a ton of nuance and did you a disservice. The reality is that for a lot of work loads an increase in speed is not worth the tradeoff (key word) of increased maintenance burden.

All else being equal, faster services are cheaper to run. Faster services can service more requests per compute/memory resource, which means you don't have to buy as many servers/containers/whatever. This is particularly important if you're being billed by the ms, which is the context we're talking about here.


Speed has always mattered, though I agree we are light-years away from optimization levels once considered standard. OTOH, so are we WRT complexity of applications.

Amazon is not going to change software development per se, but at least at some of their customers' sites calculations will be done on how many hours can be allocated for an n% reduction in runtime. So, if you live in an amazon-universe, this is a real "game changer". Bystanders may chuckle ;)


It sounds like you are assuming that faster code means you need to sacrifice something which has negative consequences. If you know upfront you need faster code you may choose a statically compiled language and I don't see it as a sacrifice.


In case it helps Terraform users estimate their cloud costs, I updated https://www.infracost.io to support the new ms-based Lambda pricing (https://github.com/infracost/infracost/pull/248/files, it'll be in the next infracost release).

I'm interested to hear what people think about https://www.infracost.io/docs/usage_based_resources - longer term we could extend that to fetch average_request_duration from cloudwatch or datadog.


Wow amazing project! Another reason to start looking at terraform for my next project


Very cool project, well done.


This is my favorite news so far. Hard to imagine beating this one, in terms of actual impact for me.

Very tired of this: `Duration: 58.62 ms Billed Duration: 100 ms`

Very happy about this: `Duration: 48.74 ms Billed Duration: 49 ms`


I am curious to see if this will mean a shift to more efficient languages (Go or Rust) for Lambda services, as usually people default to JS


From the research I did, here's how languages stack up in Lambda runtime (lowest first):

1. Python & JS

2. Go

3. C# & Java

I couldn't find any data on Rust.

The understanding at the time was that Python & JS runtimes are built-in, so the interpreter is "already running". Go is the fastest of the compiled languages, but just can't beat the built-in runtimes. C# and Java were poorest as they're spinning up a larger runtime that's more optimized for long-running throughput.

https://docs.aws.amazon.com/lambda/latest/dg/best-practices....

https://medium.com/the-theam-journey/benchmarking-aws-lambda...

https://epsagon.com/development/aws-lambda-programming-langu...

https://read.acloud.guru/comparing-aws-lambda-performance-of...

Of course, benchmarks like this only go so far. Use as a starting point for your own evaluation; not as an end-all-be-all.


I’m not sure I’m interested in a hello world benchmark if it takes Python 5 seconds to import its dependencies in the real world.


Dependencies for the dynamic languages matter A LOT! Take a look at what it'll cost you for requiring the AWS SDK in Node.js, for your cold starts https://theburningmonk.com/2019/03/just-how-expensive-is-the....


This is very true. Just importing Django + Django Rest Framework + some other minor libraries in Google App Engine (standard) leads to painfully slow response times when a new instance spins up. Like, more than 10s to spin up an instance. Although App Engine seems to be 3-4 times slower than my desktop computer from 2014 on this particular task. I wonder if AWS lambda is better.


In the real world you don't import 5 seconds worth of dependencies into a lambda, and a 5 second boot time for a longer-lived service is acceptable.


> In the real world you don't import 5 seconds worth of dependencies into a lambda

Laughs in data science.

> a 5 second boot time for a longer-lived service is acceptable.

Not every application can tolerate the occasional 5-second-long request. Just because Python can cold boot "hello world" 3 seconds faster than Go doesn't mean that's going to hold in the real world.


You're mixing arguments here. It's not the occasional 5-second long request, it's "the app doesn't start serving requests for 5 seconds".

Using data science tooling in a lambda seems iffy, especially ones that are not production ready. And good luck getting such libraries in go.

Python cold booting an interpreter 3 seconds faster than Go is a big deal, especially if your target execution time is <50ms and you've got a large volume of invocations, and are not being silly and importing ridiculously heavy dependencies into a lambda for no reason other than to make a strange point about Python being unsuitable for something nobody should be doing.


> You're mixing arguments here. It's not the occasional 5-second long request, it's "the app doesn't start serving requests for 5 seconds".

Lambdas cold-start during requests. So the unlucky request that triggers a cold start eats that cold start.

> Using data science tooling in a lambda seems iffy, especially ones that are not production ready.

Nonsense, there are a lot of lambdas that just load, transform, and shovel data between services using pandas or whathaveyou. Anyway, don't get hung up on data science; it was just an example, but there are packages across the ecosystem that behave poorly at startup (usually it's not any individual package taking 1-2s but rather a whole bunch of them scattered across your dependency tree that take 100+ms).

> And good luck getting such libraries in go.

Go doesn't have all of the specialty libraries that Python has, but it has enough for the use case I described above.

> Python cold booting an interpreter 3 seconds faster than Go is a big deal, especially if your target execution time is <50ms and you've got a large volume of invocations

According to https://mikhail.io/serverless/coldstarts/aws/languages/, Go takes ~450ms on average to cold start which is still up a bit from Python's ~250ms. To your point, if you're just slinging boto calls (and a lot of lambdas do just this!) and you care a lot about latency, then Python is the right tool for the job.

> not being silly and importing ridiculously heavy dependencies into a lambda for no reason other than to make a strange point about Python being unsuitable for something nobody should be doing.

Not every lambda is just slinging API requests--some of them actually have to do things with data. Maybe someone is transforming a bit of audio as part of a pipeline or doing some analysis on a CSV or something else. Latency probably matters to them, but they still have to import things to get their work done. And according to https://mikhail.io/serverless/coldstarts/aws/#does-package-s... (at least for JavaScript) just 35mb of dependencies (which will buy you half of a numpy iirc) causes cold start performance to go from ~250ms to 4000ms.

My rule of thumb (based on some profiling) is that for every 30mb of Python dependency, the equivalent Go binary grows by 1mb, moreover, it all gets loaded at once (as opposed to resolving each unique import to a location on disk, then parsing, compiling, and finally loading it). Lastly, Go programs are more likely to be "lazy"--that is, they only run the things they need in the main() part of the program whereas Python packages are much more likely to do file or network I/O to initialize clients that may or may not be used by the program.


Curious why it would take 5 seconds?

The way I'm using lambda, I compile the lambda build image beforehand which contains the python packages already installed, and the only "time" restraint is that of the lambda spinning up itself.

If you ran e.g. "pip install -r requirements.txt" inside the lambda, then yes it would take time to install the packages.


Installing packages onto the system (“pip install”) is different than the interpreter importing them (loading them when the interpreter hits an “import” statement). Not only is it resolving imports into file paths and loading them into memory, but it’s also executing module-level code which tends to be quite common in Python, so it’s not at all uncommon for imports to take 5s or more.

Meanwhile in Go, dependencies are baked into the executable so there is no resolving of dependencies, and the analog to “module level code” (i.e., package init() functions) are discouraged and thus much less common and where they occur they don’t do as much work compared to the average Python package.
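A quick way to see that cost on your own machine (pandas here is just a stand-in for any heavy dependency):

```python
import time

start = time.perf_counter()
import pandas  # module-level code in pandas and its own imports runs right here
print(f"import pandas took {time.perf_counter() - start:.2f}s")

# CPython can also break the cost down per module:
#   python -X importtime -c "import pandas"
```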


Interesting, I see what you mean, but in my time working with python I've never seen that as an issue. Perhaps in different domains such as big data it might be a problem.


The numbers you link don’t support your ranking, unless you’re specifically ranking by cold start alone. Even then it doesn’t make sense to group Python and Node but not Go, as Node and Go are significantly closer than Node to Python.


Interesting, thanks for sharing! This is the opposite of what most people would expect.


A lot of this was based around the fact that we've seen languages become just so much more performant. This includes Go/Rust/etc, but a lot of Node.js workloads are also sub 100ms, or fast enough that they'd benefit from this pretty well.

- Chris, Serverless@AWS


I've got bad experiences with go startup (i.e., cold runs). They're much more expensive than I would have expected. If node can indeed run in 40ms (as https://news.ycombinator.com/item?id=25267211 says), then I'm surely going back to JS.


What is your experience? Go is an AOT compiled language so the only thing I could imagine you running into on startup is loading the binary into memory? There's not a cold-start issue with Go, as it's not an optimizing JIT.

Edit: Bizarre. Seems like Go on lambda does actually have slower cold start than JS or Python. I wonder if it's just that the binary is likely larger than the equivalent JS source code? https://levelup.gitconnected.com/aws-lambda-cold-start-langu...


My experience is that Go cold start takes around 900ms. Processing (parsing a JSON message, verifying it with one DynamoDB look-up, and storing it in DynamoDB) then takes between 11ms and 105ms. Go does use less memory than node, though, and that also counts in Lambda.

I hadn't expected it either, but it loads node faster. Perhaps via some VM trick?


I'm not sure about lambda, but cloudflare workers use v8 isolates to avoid starting a new process at all.


> The function did nothing except emit ‘hello world’.

A more realistic benchmark would be parsing a 1kb protobuf blob and printing some random key from it.

(this would require importing a non-stdlib parser)

Without knowing how it's implemented, my guess is that they're conserving python/v8 processes, so that they're not cold-starting the interpreter on each lambda execution.

You can't [1] do the same thing for a Go binary, so they have to invoke a binary, which might involve running some scans against it first.

This leads to some pretty counterintuitive conclusions! If you want minimal latency (in Lambda!!), you really should be using JS/Python, I guess.

[1]: OK. Maybe you could. Go has a runtime after all, although it's compiled into the binary! I have never heard of anybody doing this, but I'd love to read something about it. :)


Go binaries are pretty large compared to say, C.


Go is not AOT compiled.


... yes it is?


I wrote this in another comment here, but just so you don't miss it: Don't.

Dependencies for the dynamic languages matter A LOT! Take a look at what it'll cost you for requiring the AWS SDK in Node.js, for your cold starts https://theburningmonk.com/2019/03/just-how-expensive-is-the...

Personal benchmarks puts Rust as the most optimal language that I've tried to run on AWS Lambda so far.


same. I went through the trouble of implementing my function in Rust (Rocket), and it's actually quite slow because (a) startup is slow and (b) async/await is still pretty painful to use so I'm blocking on IO


JS is a great choice for Lambda thanks to great cold start performance. I'm seeing runtimes in the 40ms to 100ms range.

Most of the time in Lambda is usually spent waiting for IO, which is slow in any language. If you’re using Lambda for heavy computation, that’s not a great choice.


Yep, we ran ~50M Node invocations last month on a small function. AVG was around 100ms but lots of sub 50ms invocations too.


On the one hand I read that JS Lambdas were often already under 100ms (30-50ms)

On the other hand I heard legends about under 10ms Rust Lambdas.


That's the point- billed per ms, a Lambda that executes in 5ms is 10x cheaper than one that takes 50ms. Billed per 100ms interval, the total cost of the two is the same.


Yes,

I'm not questioning that 5 is a tenth of 50. I'm questioning the Rust speed :D


I just checked cloudwatch for my rust-based function, I'm now being billed 1ms :D


IME Lambda functions are mostly sitting around waiting on I/O, so I don't think it would make much of a difference for those workloads. The important technical factors for those workloads are startup time and I/O capabilities...JS is strong in both of those areas. For simple Lambda functions JS still seems like a great choice, along with Go. Rust would be overkill IMO unless you need to share a codebase or aren't I/O bound or have some other unique requirements.


I've learned that AWS pricing tends to improve over time, and I appreciate it. I just recently switched from a startup offering authorization to AWS Cognito because the startup kept raising their price(s).

It's nice to see this drop, though I'm sure Amazon does it due to competition as well.



It's nice to see that AWS is using their economy of scale to reduce their own costs, and passing that on to the consumer.


I don't think they are doing it to be nice to consumers. I think they are doing it to cost less than competitors.


FWIW I was informed by an AWS employee that their internal philosophy is to keep pricing at cost+ levels, which is a strategic play - it forces the operations to remain lean and discourages many competitors from trying to wedge themselves into the cost-price gap.

Fat profit margins attract competition; this is what happened when the Oracle/Unix combo was chewed up by Microsoft Windows/SQL from the bottom, and then Linux/MySQL started chewing up Microsoft from their bottom. It's a dog-eat-dog world.


"Your margin is my opportunity."

- Jeff Bezos


Isn't that the same in practice?

You get customers by being nice to them. Being nice to customers means competitive pricing, high quality support, good documentation, easy integration, etc. It's all driving towards the same goal.


Naming good documentation and high quality support in conjunction with AWS is a bit weird to me. Though parent was talking about using their scale to improve prices. They might just reduce their margins at the moment.


It's not necessarily the same in outcome. Undercutting competitors can be a temporary thing. As soon as the competitors are eliminated you jack the prices up. Doing it to be nice to customers can potentially last even after competitors go belly up. Then again, Google's motto used to be "do no evil" (basically be nice to customers). That obviously went the way of the dodo bird.


>>As soon as the competitors are eliminated you jack the prices up

And then you provide a competitor or startup another opportunity.


Eventually. But Amazon has the headspace to drop prices for as long as they need to kill the new competition. Only someone like Google or MS will be able to keep up as long as they can automate a lot and use money from ads or software licenses to prop up their cloud business.


I didn't mean to imply that they did it to be kind, I think it's clear they made this change to be more competitive in the market place.

It's still great to see.


Are you implying that they are trying to make money?


I'm implying that they are trying to undercut competition to drive them out of business so they can then raise prices across the board.


I think everyone is already aware of the fact that AWS is a company.


It’s more that computing gets cheaper over time because of advancements in cpu performance and lowering of storage costs.

AWS doesn’t really ever have a reason to raise costs, it doesn’t have to lowball costs to attract customers in the first place.


I used to own some internal services where we had a model very similar to AWS for cost recovery.

It’s an interesting model because apps either optimize for or happen to fall into “loopholes” where some customers end up getting more value than others or may turn into a financial liability at scale.

For example, think about authentication... charging per auth will mean that some use cases will be nearly free, as some external users may only sign in once per quarter. But charging a flat rate has the opposite effect. You have to design the service and tweak the metrics and rates to make it work.


> I just recently switched from a startup offering authorization to AWS Cognito because the startup kept raising their price(s).

Maybe because they know Cognito is horrible? ;)

OTOH, I have never seen DynamoDB prices decreasing.


The introduction of DynamoDB on-demand pricing was a huge price reduction for some workloads with the additional benefit of also reducing the complexity of scaling capacity as well.


On demand pricing dropped our DynamoDB cost by ~20%.


What don't you like about cognito? I've been able to solve every auth problem I've encountered with it.


I am impressed that computation is billed by the ms nowadays.

I'm an ignorant in AWS Lambda but how do you know if their ms measurement is accurate? Is there any way to verify this?


You are billed by the execution time of the function. So from the millisecond we hand the event over to you until you return a response or timeout.

- Chris - Serverless@AWS


Are 15 minute max execution times still the norm?


Yes. I didn't see anything about relaxing those limits today in the keynote. One can hope and dream...


I saw the news around running container images as artifacts which is welcome. Still the 15 min limit restricts a lot of use cases.

https://aws.amazon.com/blogs/aws/new-for-aws-lambda-containe...


As far as I know, if you run for more than a few seconds Lambda's cost will _really_ not be worth it. One should prefer ECS or Batch.


On Fargate? I agree.


It's still measured by wall clock time - not CPU time. I'd love to see them bill for actual CPU time.


They would be willing to do this probably if you let them evict your entire workload from memory during the period you were not paying for it, and then were able to charge you for CPU time and some additional charge to reload workload into memory from hibernation.

Most workloads ALSO hold memory (which is a key constraint) over the entire wall clock time, and the delays and impacts/costs of hibernating out the memory and then bringing it back so you can just be charged for CPU time may not make sense.


They could also charge you some rate for CPU seconds + Gb-seconds of memory used? Sort of the ultimately flexible cloud platform. You could apply the same sort of thinking to other resources, but it works best for CPU/memory I think.


That would be ideal, as it more closely fits consumption to billing. But potentially harder for end users to reason about. AWS's bills are already notorious.


I'd bet a significant portion of their margins come from this, given Lambda's focus on I/O-bound workloads.


If the profits exist there, then others might eventually find it advantageous to have this metric in the future.

Maybe even accounting for strong and weak nuclear forces (not sarcasm...we are engineering at the quantum level now, soon it will be a part of a business metric. Instead of 'equipment' being servers, it might be the domain-space-time used).


You pay for more than CPU cycles. You are paying for those cycles to occur at a particular wall time and for some segment of memory to be reserved during that time as well.


Exciting! As a primarily C & C++ programmer this makes me happy. Also, I see that there's now examples for C++ that don't involve "step 1, download nodejs". Progress!


More detailed breakdown from AWS's James Beswick: https://acloudguru.com/blog/engineering/building-more-cost-e...


this is a good reference, thanks


Interesting: the German version (and other non-English versions, if I parse that correctly) of the page still mentions rounding up to 100ms while the English version says 1ms. Cache? Not yet translated? Different pricing model?


Oh, that absolutely changes the price calculation for Lambda. Historically the 100ms minimum billing interval made Lambda significantly more expensive than EC2 for large numbers of work loads.


My needs were different, and the EC2 vs Lambda difference was state vs stateless.

This won't save me a ton of money; but at least I won't have to guess whether bumping up to the next CPU+MEM tier will get me from 101ms to 99ms


This is great. For many high velocity, low runtime workloads this is going to result in significant cost savings.


2020 is the year when companies will gladly pay 20x as much as a dedicated server in a DC because cloud.


But... But... It's cloud. And savings.


This means that using bigger CPUs (by allocating more RAM) just became even more important.


That was my first thought. Over a lot of the curve it's almost free to run on a bigger slice. When you get down to just a few billing quanta the math didn't work out in your favor.


This is great, AWS.

A feature I'd really like next is secrets as environment variables like ECS.

Retrieving SecretsManager secrets and SSM Secure Parameters in application code is messy and provides significant friction for developers on my team.
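For reference, the in-code version looks roughly like this (a sketch; the parameter name is a placeholder). Compare that to just reading an environment variable:

```python
import boto3

ssm = boto3.client("ssm")

# Fetched once per execution environment (module scope), not on every invocation.
DB_PASSWORD = ssm.get_parameter(
    Name="/myapp/prod/db_password",   # placeholder parameter name
    WithDecryption=True,
)["Parameter"]["Value"]

def handler(event, context):
    # ... connect to the database with DB_PASSWORD ...
    return {"statusCode": 200}
```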


I’m confused. This is available already for more than a year (maybe 2).

https://docs.aws.amazon.com/AmazonECS/latest/developerguide/...


Yes, it's in ECS, which is why I said "like ECS"

I'm asking for Lambda to have externally supplied secrets


I find it annoying to have all this pricing per second, and now per millisecond. It's really hard for my mind to visualize what `$0.0000000021 per millisec` actually is.

Being billed by the millisecond does not mean that you should give a pricing per millisecond.

I prefer Digital Ocean or Heroku's approach of billing by the second, but giving the price per month. How the hell is `$0.0000000021 per millisec` better than `$5/month, billed by the millisecond`? If I know that my workload will be about 20% of a dedicated CPU, I know that I'll end up paying about $1 per month.


There is simply an enormous amount of assumptions that would go into estimating anything else, because ms is the only correct metric. Lambda billing for a month? What on earth does that say? 10 invocations running 15 minutes? 90,000 invocations running 100ms? (Those two are equivalent btw).

If I know my function takes around ~35ms ballpark, and I will probably invoke it 5,000 times per day, then I can calculate my monthly: 0.0000000021 $/ms * 35ms * 5,000 * 30 = 0.011 $/month.

AWS usually shows a neat example of usecase and what the billing would be on their pricing pages.


Gotta be sure to have that number of 0s exactly right.


Anyone know why there's a hard limit of 15 minutes for Lambda (and 9 minutes for Google Cloud Functions)? Still seems really weird to me.


Safety.

No function runs forever.

Also if you want to reboot/repurpose the server, 15 minutes is max wait time.

And finally, you don't want people running long jobs here when you have a solution there.


Bin packing. It's a lot easier to distribute workloads around a cluster of compute if you can guarantee the maximum runtime.


If I had to imagine designing a system like Lambda, people running really long operations would really throw a wrench in things.

Maybe you could let users indicate the operation will take a long time... but if the user knows the operation is long running in advance, why not just guide them to a more suitable system?


One way around this limitation while still using serverless (for Python only) is Glue Python shell jobs. They can run for hours if not days, and default to 1 vCPU and 1GB of memory for 2.75¢ an hour.


To limit the scope and type of applications people use Lambda for to those Lambda is best at running.

15 mins max runtime simplifies the resource management and avoids abuse. If you have workload for long running jobs, then that should go to something like AWS EKS/Batch/SageMaker.

That being said, things can change, if more and more people require long running capacity for Lambda (though I am skeptical of that, as Lambda abstracts the underlying hardware away and is supposedly flexible to the requirements)


How long would you like it to be?

Chris Munns - Lead of Dev Advocacy for Serverless@AWS


Now that someone's listening: I don't mind the 15min Lambda timeout, but it would be great to get rid of the Lambda + API Gateway 30s & ~6MB limits. Those always bite unaware devs in the butt and workarounds for them take quite a bit of effort.


Yes.

But being real, we hear you on this one. I can't comment on API Gateway's roadmap here but this is something both teams are aware of. It is the way it is today for a valid reason. But this is def something we hear pretty often.

- Chris


My lambdas run indefinitely. It's a bit silly, but basically every lambda spins up a bunch of threads, pulls messages, and then pushes them into internal buffers to be processed. There are reasons for this.

What I care about is:

* Scale to 0, and automatic scaling up without configuring it

* Automatic patching of the OS

* Fault isolation

Lambda gives me that. So each one runs for 15 minutes, processing all data in an SQS queue.

I do wonder if Fargate would be cheaper per millisecond? Dunno.


If you want sequential processing of the data in the SQS queue, something which works really well today is to create a state machine in AWS Step Functions which triggers an AWS Lambda function which then pulls data from SQS and processes it. Using a condition in the state machine, this can be done in a loop, so when the AWS Lambda function reaches its timeout, another one gets triggered as long as there is still data in the SQS queue.

If data doesn't have to be processed sequentially an option is to configure the AWS Lambda function to get invoked for new data in the SQS queue [1], so you don't have to care about manually fetching data from SQS at all.

[1]: https://docs.aws.amazon.com/lambda/latest/dg/with-sqs.html


Yep, but I prefer to care about manually fetching data from SQS. It's a weird system, but due to our data model there are many benefits to processing as many messages in a given lambda as possible.


This might be useful to simplify that model for you :)

https://aws.amazon.com/about-aws/whats-new/2020/11/aws-lambd...


Pretty cool, but not quite the model I want.


Today, for a nonstop workload you might save money with Fargate or ECS.


Yeah, I'd be curious to see how much money we'd save with Fargate.


I'm going to assume since you're processing off a queue, the lambdas are not serving a waiting user. If you do the work to orchestrate onto Fargate, you might as well go the rest of the way and move to ECS. Then you can use spots, where real savings kick in.

Someone mentioned step functions above. All of our steps run on spots. We also have some tasks like you have that read off queues and do processing, which also all run on spots.


For raw compute, Fargate is ~1/2 the cost of Lambda. But you'd have to orchestrate the launch yourself. It could be worth it though depending on your workload


As long as possible. Our jobs usually finish well within the limit but the top 1% hit the limit.

One example we've been wrestling with is a merge operation. Usually it's merging about 1000 records which completes in a few seconds. But every once in a while someone kicks off a job that tries to merge 1,000,000 records and it times out.

We want the benefits of serverless (scale down to zero, up to infinity at the drop of a hat) but these edge cases mean we're having to evaluate other options.

An hour or two would be a good start; then it'd cover 99.9% of requests. With a few hours we could add more nines :)


If your lambda function detects that it is going to hit the timeout you could have it launch a Fargate container to handle the long merge. Fargate is essentially a long-lived Lambda.


It’s also probably much easier to share code between those two implementations with the new container based lambdas :)


That sounds like you need a queue of some sort


Make it indefinite and price exponentially.

;)


thar be dragons :)


I have many workloads where 15 minutes may be just a little too short for comfort. An hour timeout would have me reaching for Lambda more often.


Lambda isn't designed for long running processes. Keeping the runtime limit lower makes it a lot easier to operate the underlying metal because you can move things around every N minutes, where N is the runtime limit. For long-running processes, something like Fargate might be a better fit on the AWS side of things.


Fargate works well, especially after they fixed some of the pricing issues there. Then as a next step you can spin up a fleet of EC2 or EKS or something


> Fargate works well especially after they fixed some of the pricing issues there.

Was this recent, should I take a look at this again? My issue with Fargate as recently as a year ago was that running the same workload on ECS (if you can use your cluster nodes efficiently) was half the cost (even without reserved instances).


Yup! In general, I prefer Fargate over Lambda because of cold starts. It's a little bit more management overhead over Lambda and it is a bit lower level. But I think it's worth it.


It's a new pattern, so it's only relevant for some use cases. I don't think it's meant to solve every case, including the case of a long running job.

It's best suited for jobs that can be broken down into tons of small individual computations, or to respond directly to HTTP requests. If you can fit your pipeline / application into that model it's usually beneficial: Instant scaling, retries, reliable etc. Mixed with other concepts like SQS you can build pretty powerful things without having to pay when there's no load.


Is there a way to use lambda where I can use ffmpeg to watermark and downscale a 4K video? Possibly some system where I can throw a lot of computing at the job and get it done quickly. Right now on my vps it takes multiple minutes to get done and scaling up the vps for one feature or dedicating another server for it is overkill.


I don't see why not, assuming it's less than the lambda max run time (15 minutes I believe). Using a python script and an included binary of ffmpeg might work. Note: You'll need to write to the tmp space of the lambda as that's the only place that allows writing to the filesystem, and then you'll need to upload it to S3 or elsewhere.

Lambdas are good at batch jobs where you might need to kick off a few of them but not have a dedicated system for it. I've used it to automate manual customer support tasks that are sporadic in requests.
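A rough sketch of the flow, assuming an ffmpeg binary is bundled (via a layer or the new container images) at /opt/bin/ffmpeg and that the video fits in /tmp and the 15-minute limit:

```python
import subprocess
import boto3

s3 = boto3.client("s3")
FFMPEG = "/opt/bin/ffmpeg"  # assumed location of a bundled ffmpeg binary

def handler(event, context):
    bucket, key = event["bucket"], event["key"]   # assumed input shape
    src, out = "/tmp/in.mp4", "/tmp/out.mp4"      # /tmp is the only writable path

    s3.download_file(bucket, key, src)
    subprocess.run(
        [FFMPEG, "-i", src,
         "-i", "/opt/share/watermark.png",        # assumed bundled watermark image
         "-filter_complex", "[0:v]scale=1280:-2[v];[v][1:v]overlay=10:10",
         "-c:a", "copy", out],
        check=True,
    )
    s3.upload_file(out, bucket, f"processed/{key}")
```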


I'd check out Amazon Elastic Transcoder (https://aws.amazon.com/elastictranscoder/).


Another submission today seems to point to exactly what you're looking for - https://aws.amazon.com/blogs/aws/new-for-aws-lambda-function...



I think now is a good time to use this kind of service.

https://callbackfy.com

It's essentially a way to save some money by avoiding long http requests by buffering requests and sending a callback when the result is complete.


Using a service with a TLS certificate error... probably not.


The url is just a redirection to this:

https://rapidapi.com/kadukeitor/api/callbackfy2/


I wonder if this change will push more people to consider rewriting code to more memory and CPU-efficient programming platforms, for example from java/c# to C or Rust?


I thought you were limited in the languages they permit, the fastest of which is Go.


No. You could always shell out to any language from node/python, but for quite a while even that is not required, lambda can directly run any binary.

https://aws.amazon.com/blogs/opensource/rust-runtime-for-aws... https://docs.aws.amazon.com/lambda/latest/dg/runtimes-custom...
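The "shell out" variant mentioned above is roughly this (a sketch; ./bin/worker stands in for whatever compiled binary you ship alongside the handler):

```python
import json
import subprocess

def handler(event, context):
    # Hand the event to a bundled native binary (Rust, Go, C, ...) over stdin
    # and return whatever JSON it prints to stdout.
    result = subprocess.run(
        ["./bin/worker"],            # assumed path of the bundled binary
        input=json.dumps(event),
        capture_output=True,
        text=True,
        check=True,
    )
    return json.loads(result.stdout)
```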


There will be more surprises in bills; imagine if your Lambda is making a call to an API and this API suddenly slows down for any reason


How does this change increase the likelihood of that happening?


Before, the 100x coarser granularity would hide a lot more variation if your lambdas were quick.


It would be a surprise of less savings, but it would still be a saving on the tail with the smaller granularity. Perhaps keep your old budgets for a while and be glad at the surprise discount.


I was considering migrating my Node.js lambdas to .NET Core, but I guess cold start times will have a higher impact on billing.


Were lots of memory tiers (e.g. 256MB) removed? They aren't present on the pricing page anymore.


It is up to 10GB now


What sort of workload is that going to make a significant difference to?


Anyone know what the co2_price is? It's missing from the page.


I hope Azure follows this move soon :)


They might change their 128MB rounding-up structure, but they already had per-1ms pricing, with the minimum execution time and memory for a single function execution being 100ms and 128MB respectively.

(Google bills per 100ms and has 6 pre-defined memory sizes to pick from.)


$66 for running an AWS Lambda with 128MB for a year seems very expensive.
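(At the standard rate of roughly $0.0000166667 per GB-second, 0.125 GB running nonstop for 31,536,000 seconds works out to about $66 per year, before request charges.)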


If you're running 100% of the time, don't use a Lambda. If you're only running for 10 seconds a day, a Lambda is cheaper than owning your own hardware.

Similarly, if a car rental costs $20/day, it would be silly to say that that's expensive because it will cost you $73,000 after 10 years. The needs of a car rental and car ownership are different.


Generally speaking, functions don't run continuously. If your use case is 100% compute 100% of the time, it's not a great fit for lambda.


If you're running it continuously for a year, then you should probably be using a reserved instance and not lambda. Lambda is for short bursts, not 1 year of continuous use.


Not necessarily, especially in a large company setting. Lambda allows you to scale and take care of spiky traffic. It also doesn't require server management. Servers are a pain in the butt.


True, but ECS and K8s both abstract away the hardware and offer cheaper 24/7 workloads, and they get past the cold-start problem (at the expense of slower responses to spikes).


And it costs even more if you need that Lambda function to have access to the internet.

A possibly useful comparison:

A Raspberry Pi 3 (~6.5 watts) costs $6.83 per year to run full-time (at 12 cents per kWh). You get a full computer with much more I/O.
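(6.5 W × 24 h × 365 days ≈ 57 kWh per year; at $0.12/kWh that's about $6.83.)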

However, there are a lot of other factors to consider:

- Initial cost of the hardware

- Time and energy spent maintaining/configuring the device

- Physical maintenance of the device - power/network/physical management/etc

- Lack of immediate access to I/O ramp-up and global replication

- Lack of direct integration into other AWS services/etc

AWS Lambda isn't a magic bullet, but it offers a lot of convenience to offset time and money spent on a DIY approach. I run a small static site/service that never breaks Lambda's free tier, but most of the cost goes to hosting a NAT gateway for it to have access to the internet. The benefit of hassle-free, global access to the service I built and the underlying services it runs on (Lambda/AWS in general) makes it worth the cost. I could set up the same service at home on a Raspi for pennies by comparison, but if my home internet goes down while I'm away, or a dog chews on an ethernet cable, it's a headache that I have to deal with personally - or I have to do remote tech support for whoever is home: "Okay, you should see a Raspberry Pi. No...it's not a food. It's a computer. Whatever... I know it's a strange name. Anyway, is the cable plugged in? Do you see a blinking light?"

I find it similar to using a rental car while visiting a foreign city vs. just driving yours across the country. You might save some money, but it comes with extra time, maintenance, potential roadblocks (literal and figurative), and breakdowns that you'll have to deal with personally. It's really up to personal preference.


An RPi 3 has an upfront cost of over 10 dollars. With Lambda you can start with pennies.


And write in any language, have no OS/patches/updates/hardware to look after, get built-in logging, a complex but robust authentication system, etc., etc.


The service bills in milliseconds, and you're complaining about the price for a whole year? For $66/year with a 3-year commit, maybe get a T4g small with 2GB of RAM vs. 0.1GB?


Yes, and buying a server, powering/cooling it, and maintaining it over a three- or five-year lifetime at 5% utilization is also expensive.

Lots of choices are expensive, pick the one that’s best for your use case.


I wonder if any of the downstream service providers like Vercel and Netlify will pass this along.


Does Lambda still lag on wake-up (cold starts)?


There are warm and cold runs. If it’s really important you can pre-heat by calling the lambda every so often. It would be nice if they had some option for this.
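A minimal sketch of that pinging approach, assuming a scheduled job (e.g. an EventBridge rule or a cron elsewhere) calls something like this; the function name and payload are hypothetical, and the real handler would short-circuit when it sees the warm-up flag:

    import boto3

    lambda_client = boto3.client("lambda")

    def ping():
        # Fire-and-forget invoke just to keep a warm container around.
        lambda_client.invoke(
            FunctionName="my-function",   # hypothetical name
            InvocationType="Event",
            Payload=b'{"warmup": true}',
        )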


There's Provisioned Concurrency for Lambda which keeps an amount of Lambdas warm at a price, so you don't have to keep pinging your function yourself to pre-heat.


This is news to me and exactly what I would have expected to exist. Good to know!


I suppose I can ping it upon page visit (for web apps). That never occurred to me until now.


All fine and dandy, until it is not. Lambda = massive vendor lock-in, which means that once you start depending on it, it'll be hard to rip it out of your system and replace it with something else. My startup used to rely heavily on Lambda, and frankly I wish we never had - so much AWS-specific complexity that is just not worth the trouble.

Their own representative admits it[0].

We moved everything to containerized workflows and sleep much better (and at lower maintenance cost).

[0]: https://news.ycombinator.com/item?id=25268049


That is not at all what my words say, and I won't reply to that thread, which was started by a former competitor to troll this conversation today.

The perceived lock-in is really no different than consuming other technologies. You make a trade-off on what you want to manage vs. handoff to a managed service. For many customers the benefits are well worth it.

It's fine that Lambda wasn't for you, but you aren't being clear here about what issues you saw; you're just waving the lock-in boogeyman that so many misunderstand.


Cost, lack of debugging capabilities, terrible developer tools, cryptic documentation that misses some key scenarios; I could go on. Now, I get it: you're at AWS, so you would never openly and bluntly come out and say that what you -really- want is to lock in your users, because that brings AWS money. But that's the reality. See my earlier comment as to why it made sense for our company to move away from Lambda.


What even is your point here and in this thread? Acquiring customers and giving them an incentive to stay is a cornerstone of any enterprise.

Saying Lambda is bad because of lock-in is like saying their VPC offering is bad because of lock-in, or IAM is bad because of lock-in. It's not a generic component that you can flip between providers; Lambdas usually respond to a specific event from an AWS service, and nobody really gives a rat's about deploying their specific Lambda to a cloud they don't use.

So yeah, it seems like a real awesome idea to avoid vendor lock-in for a small Python function that responds to S3 change events from an SQS queue and updates a DynamoDB table with some values.

If you want "generic" lambdas go check out serverless.com.


I'm pretty sure we've reached an inflection point with some "technical architects" where they spend more time worrying about vendor lock-in and doing technical gymnastics to reach portability nirvana instead of just shipping decent code and products that make money.


YMMV, of course. The problem with lock-in is that you start designing code for a given vendor, AWS in this case. Once you find that the vendor is prohibitive, either cost- or functionality-wise, you have to spend your development resources untangling the mess rather than shipping value.


After using Lambda quite a lot recently, I think the vendor lock-in argument is overstated, especially with things like the Serverless framework.

The lock-in comes more from the other AWS products you end up using, such as CloudFormation and DynamoDB, than from Lambda itself, which in most cases can be a thin wrapper around an Express server or similar and could be hosted anywhere.
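As a rough Python analogue of that wrapping idea (swapping Express for Flask and using the aws-wsgi adapter; all names here are illustrative), the framework app stays portable and the Lambda-specific surface is just the handler:

    import awsgi
    from flask import Flask, jsonify

    app = Flask(__name__)   # a plain, portable WSGI app

    @app.route("/hello")
    def hello():
        return jsonify(message="hello")

    def handler(event, context):
        # Thin adapter: translate the API Gateway event to WSGI and back.
        # Anywhere else, the same `app` can be served by gunicorn or similar.
        return awsgi.response(app, event, context)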


There's definitely a balance. I tend to be more concerned about things which store or update data durably — testing a Lambda function is relatively easy, and if you're calling other services but your code isn't a complete mess that's a relatively manageable problem, but something like DynamoDB poses both a migration challenge (especially for a running system) but also questions about correctness if you aren't really careful about how you handle things like concurrent access. That doesn't mean they're not worth using but it definitely tells me where you want robust validation, testing, etc. since it's a lot harder to recover from missing/corrupted data than it is to resolve a 500 error on a particular endpoint.


https://www.serverless.com/ abstracts a bit, so there's less lock-in, but I agree that typical Lambda workflows aren't just plain code you can move to a different vendor. A big perk is the easy triggers for gluing things together, like new S3 file -> run Lambda.


I use Serverless framework even when I only want to deploy on Lambda. It's such a wonderful tool.


We use λ with Go and the change between the service and its λ counterpart is really minimal.

FYI, we're using https://github.com/awslabs/aws-lambda-go-api-proxy which supports most Go web frameworks.

Edit: We also use DynamoDB and there lies the real vendor lock-in with AWS imho.


Your Lambda function can be a container.


Why would you want to put it in Lambda then when there are better options?


There is no better option for temporally sparse compute. If your job runs all the time there is no benefit to these systems, but if provisioning it full-time would waste money because it mostly sits idle, there is no real alternative.

Lambda is neither "worse" nor "better" in any general sense. It's just another option that might apply given a particular scenario, and one that got significantly cheaper today.



