1) The pricing is listed as per invocation, but it is also billed by CPU seconds, and on low-volume function usage it happens that I exceed the free quota exactly there, as the invocation numbers remain very low.
2) The functions take about 10-15s to execute on a cold start. Every execution that happens more than about a minute after the previous one is a cold execution, and it is billed as a 15s execution even if the actual script runs for 200ms.
3) The CPU seconds are calculated from time and VM memory, so if your function makes a call to another API and sits idle for 5s waiting for the response to arrive, you are still billed for 5s at full memory usage.
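To make the billing point concrete, here is a small sketch of how wall-clock billing punishes cold starts and idle waits. The rates below are illustrative assumptions, not actual Google Cloud Functions prices:

```javascript
// Sketch: estimate the billed cost of one invocation, assuming (hypothetical
// rates for illustration) a per-invocation fee plus a per-GB-second fee
// charged on wall-clock time, idle waits included.
function billedCost({ wallMs, memoryMb }) {
  const PER_INVOCATION = 0.0000004; // assumed rate, USD
  const PER_GB_SECOND = 0.0000025;  // assumed rate, USD
  // Billing counts wall-clock time: 5s waiting on an upstream API
  // costs the same as 5s of actual CPU work.
  const gbSeconds = (wallMs / 1000) * (memoryMb / 1024);
  return PER_INVOCATION + gbSeconds * PER_GB_SECOND;
}

// A 200ms script behind a 15s cold start is billed for the full ~15.2s:
const cold = billedCost({ wallMs: 15200, memoryMb: 256 });
const warm = billedCost({ wallMs: 200, memoryMb: 256 });
console.log((cold / warm).toFixed(0)); // ~19x more per request under these assumed rates
```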
Which makes me wonder why I wouldn't spin up a DO droplet instead. Maybe it's good at very high volume, but from experience I know that a $5 NodeJS droplet can handle hundreds of concurrent requests, which would probably cost a lot on Google Cloud Functions. At low volume a DO droplet performs much better anyway.
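The flat-rate-vs-metered comparison boils down to a break-even volume, which you can sketch in one line (the per-request figure here is an assumed all-in cost, not a quoted price):

```javascript
// Sketch: monthly request volume at which metered function billing
// overtakes a flat $5/month droplet. Rates are illustrative assumptions.
const DROPLET_USD_PER_MONTH = 5;
const FN_USD_PER_REQUEST = 0.00001; // assumed all-in (invocation + GB-seconds)

const breakEven = Math.ceil(DROPLET_USD_PER_MONTH / FN_USD_PER_REQUEST);
console.log(breakEven); // 500000 requests/month under these assumptions
```

Below that volume the flat-rate box is cheaper; the catch, as noted above, is that idle-wait billing can push the effective per-request cost well above your naive estimate.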
> The functions take about 10s-15s to execute on cold start
This may be a bit of an exaggeration and may vary with your deployment (from experience, it takes up to 5s at most), but I agree: there is a very noticeable cold-start time, which makes it unsuitable for business-critical services. It is probably only good for things like document conversions.
This is also something you can optimize down to a small number of seconds. Crazy things such as bundling and minifying NodeJS apps can make a difference, though that may make debugging harder.
There is some latency in the infrastructure itself, but I have found that most of the delay is pulling the image and actually starting the app. So small images with few layers and a fast boot help a ton.
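A minimal sketch of that image-layout advice, assuming a plain Node app with a `server.js` entry point (adjust names to your project):

```dockerfile
# Cold-start-friendly image: slim base, production-only deps, few layers.
FROM node:20-slim
WORKDIR /app
ENV NODE_ENV=production
# Dependencies in their own layer, so it caches independently of source edits
COPY package.json package-lock.json ./
RUN npm ci --omit=dev
COPY . .
CMD ["node", "server.js"]
```

The point is that the pull cost scales with total image size and layer count, so skipping dev dependencies and build toolchains in the final image directly cuts the cold-start tax described above.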