Netlify Edge Functions: A new serverless runtime powered by Deno (netlify.com)
295 points by csmajorfive on April 19, 2022 | hide | past | favorite | 160 comments


This is great news. I'm really rooting for a successful trend of Serverless runtimes, mainly as a weapon against rising cloud deployment costs.

While the general trend today is to back the serverless environment with JavaScript runtimes (Cloudflare runs its edge on top of V8, Netlify uses Deno, most other serverless runtimes use Node.js), I'm optimistic that WebAssembly will take over this space eventually, for a bunch of reasons:

1. Running a WASM engine in the cloud means running user code with full security controls, but with a fraction of the overhead of a container or Node.js environment. Even the existing JavaScript runtimes come with WebAssembly execution support out of the box, which means these companies can launch support for WASM with minimal infra changes (see the sketch after this list).

2. It unlocks the possibility of running a wide range of languages, so there's no lock-in to the language that the serverless provider mandates.

3. Web pages as ancient as the early '90s are rendered perfectly even today in the most modern browsers, because the group behind the web standards strives for backward compatibility. WebAssembly's specifications are driven by those same folks, which means WASM is the ultimate format for any form of code to exist. Basically, a WASM binary is future-proof by default.
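To make point 1 concrete, here's a minimal sketch (mine, not the commenter's) of executing a WASM module from Deno with zero extra dependencies; it assumes a hypothetical compiled module add.wasm that exports add(a: i32, b: i32) -> i32:

    // WASM support ships with the V8-based runtime itself; nothing to install.
    const bytes = await Deno.readFile("add.wasm"); // hypothetical module
    const { instance } = await WebAssembly.instantiate(bytes);
    const add = instance.exports.add as (a: number, b: number) => number;
    console.log(add(2, 3)); // 5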

I've published my (ranty) notes on why Serverless will eventually replace Kubernetes as the dominant software deployment technique, here - https://writer.zohopublic.com/writer/published/nqy9o87cf7aa7...


I fully agree with your take. I think JS-first computation is very good for short-term adoption (since running JS on the Edge is probably the most popular use case), but eventually the need to run other programming languages at the Edge will likely eclipse the JS use case.

At Wasmer [1] we have been working actively towards this future. Lately more companies have also been doing awesome work on these fronts: Lunatic, Suborbital, Cosmonic (WasmCloud) and Fermyon (Spin). However, each of us has a different take/vision on how to approach the future of computation at the Edge. I'm very excited to see what each approach will bring to the table.

[1] https://wasmer.io/


> a fraction of the overhead of a container

I mean, only in theory or when looking at it from the right angle, right? Or are you only comparing against JavaScript (unclear)? WASM is still much slower than native code. Containers spend most of their time executing native code; the "overhead" of containers is at the boundaries, and it's minor compared to the slowdown from moving native code to WASM. In the future WASM may approach native performance, but it's not there now. I'm 100% certain that transitioning my native-code-in-containers workloads to WASM would make them slower, not faster.


Edge functions are typically run intermittently, with their runtime stopped to free up resources between runs. Therefore a big factor is startup and shutdown speed. Containers are pretty bad there. Deno is better, and WASM is unbeatable, especially with things like Wizer[0].

[0] https://github.com/bytecodealliance/wizer


Deno can, in theory, do the pre-initialization that Wizer does for JS too. We have all of the infrastructure for it; we just haven't gotten around to actually implementing it yet.


Issue 3335. I know. I'm watching it. :))


Makes sense when talking about edge functions, but then OP started talking about Kubernetes. Our Kubernetes workloads don't resemble that at all; there's virtually none of that container startup/shutdown overhead to be concerned about. Most weeks, no containers are started or stopped at all.


Also, a common spec for serverless is badly needed. Serverless code should be portable between different cloud providers; otherwise there's vendor lock-in and a much greater opportunity to price gouge.

Anybody know if a common API for serverless components is being worked on?


Check out Knative; it's used for Google Cloud Run: https://knative.dev/docs/


What is the common spec for Knative? It seems like one runs an app as normal in a container and then the special scaling sauce is handled by Knative.

When I think of a portable spec for serverless, I think of something more like a Trait or an Interface that needs to be implemented, less an app hosting model. If you think about it like that, then the WebAssembly component model [1] would be a great fit for defining an interface that could be implemented in a variety of languages.

[1]: https://github.com/WebAssembly/component-model/blob/main/des...
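A hypothetical sketch, in TypeScript terms, of the kind of contract such a spec might pin down (the interface name is mine, not from the component model):

    // Any language that can satisfy this shape could, in principle, be
    // deployed to any conforming provider; the component model aims at
    // exactly this sort of language-neutral contract.
    interface ServerlessFunction {
      handle(request: Request): Promise<Response>;
    }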


Yes, exactly. Of course experimentation and competition is good to settle on a solid set of building blocks, but in the long run serverless code should absolutely be portable between vendors.

There's a strong incentive for vendors to not allow this, as it reduces their pricing power... but I think we will see it eventually, one way or another.


That would only be implementable by languages with a WASM output target though. Knative says that any container that responds to HTTP will work, and you could host those in different environments. It's much more general.


There's the Serverless Framework, which is easy to use, and there's Terraform, which is more powerful.

Relevant: https://xkcd.com/927/


It doesn’t avoid vendor lock-in. Instead of a cloud provider, it is now Terraform.


The problem is the database. Distributed, serverless-first SQL databases like Spanner, CockroachDB and PlanetScale are very, very expensive.


We're building PolyScale[1] to address this problem. PolyScale is a serverless edge cache for databases so you can easily distribute your reads.

We are opening up early access to our connection pooling features in the next couple of weeks, which allows FaaS platforms like Netlify, Cloudflare, etc. to create large numbers of ephemeral connections without impacting your origin database, as well as reducing connection latency significantly.

[1] https://www.polyscale.ai/


I was looking at the PolyScale docs and found the following:

  PolyScale evaluates each SQL query in real-time and when it detects a DML query i.e. a SQL INSERT, UPDATE or DELETE, it extracts the associated tables for the query. Then, all data for the table(s) in question are purged from the cache, for every region globally.
at https://docs.polyscale.ai/how-does-it-work/#smart-invalidati...

Isn't clearing the cache for entire tables on every DML, which may change only a single record, too heavy-handed? And how does this affect the performance of the cache when multiple DML queries run every minute?

Also, can you please give the docs link for the connection pooling feature?


That’s right. Currently the auto invalidation is somewhat of a blunt instrument in that it will blow away all cache data related to the table(s) as default. That approach favors consistency over performance, but is also a natural fit to some query traffic patterns. You can also switch it off if you so desire. The next iteration that is imminent for release can be much more surgical, invalidating based on more of the query details.

Connection pooling docs are coming soon as part of the feature's early access launch. Feel free to drop me an email and I can let you know when it's released. I'm ben at our domain.


Really, really dumb question. I've seen a lot of Node/Python/etc. serverless offerings. Is there something where you just provide a binary and it's executed each time?

For example, I write a simple single responsibility piece of code in Go `add_to_cart.go` and build it, deploy it, and somehow map it to some network request. dot slash pass args, and return the result?

No need to have containers or runtime?


That's how every FaaS (Function as a Service) offering works.

A caveat is that most non-trivial applications need something more than running a function.

You might need secrets management, ephemeral and non-ephemeral storage, relational databases, non-relational databases, dependency management, AAA capabilities, observability, queues/async, caching, custom domains... That's where said offerings differ.

EDIT: actually most FaaS offerings take code as input, not binaries. I'm not sure if that was the relevant part of your question. If it was, then yeah I don't know of such service.


You can do that on Lambda: https://docs.aws.amazon.com/lambda/latest/dg/golang-handler....

AWS still needs the container/runtime to stop your code getting access to other things on the same physical computer.


Find a cheap apache web host that allows cgi-bin.


This is exactly what we are building at TinyFunction.com. Just write your function code in JavaScript or Python in the browser and click deploy to get a URL. Take a look at https://TinyFunction.com

Appreciate any suggestions or feedback.


Site not loading on my phone. Just a white screen with a green comment button. It's really cool that you're building something like that, though; wish you all the best!


It's possible you'd run into environment issues. JS or Python code (or a container) doesn't have to care as much what OS or architecture it's run on. A raw binary could pierce that abstraction and make the service more complicated to offer.


AWS Lambdas work that way.


I agree about WASM. I am sort of worried that Deno may be too late tbh. Why would I bother with an interpreted language at all when I can code in any language I want and run it anywhere with WASM?


The "any language" advantage of WASM is theoretical. Each language is at a different level of support of WASM, libraries aren't all caught up and at the same point for all languages, etc...

I love the promise of WASM, but every time I look at it I get lost in a sea of acronyms, and my optimistic ideas of using language X with library Y on runtime Z are dashed because there is some missing piece somewhere.

If anything, the "any language" thing creates a giant matrix of potential pitfalls for the programmer.

In comparison, the combination of JS/TS, the browser API and a solid std lib looks pretty good for some problems.


Agreed. There is also the lack of a GC in WASM, and Deno's use of existing web standard APIs.


What is "any language" these days? I feel like WebAssembly's day will come when one of those is Javascript, and so far that hasn't happened.

Go's support is pretty good (with tinygo offering a tiny runtime more suited to this application). Rust appears to support compiling directly to WebAssembly, and there are some smaller languages like AssemblyScript and Lua with support. I'm guessing plain C works fine. Then there are projects that compile the runtime for interpreted languages to WebAssembly, so you can theoretically run things like Python.

Nobody is writing applications in C or AssemblyScript, so that leaves rust or go. If you're using one of those languages, though, you can just (cross-)compile a binary and copy it to a VM that is on some cloud provider's free tier, so this isn't really easing any deployment woes. It was already as easy with native code, so WebAssembly isn't adding much stuff here. (The isolation aspect was interesting in the days before Firecracker, but now every computer has hardware virtualization extensions and so you can safely run untrusted native code in a VM at native speeds.)

Anyway, I always wanted WebAssembly for two things: 1) To compile my React apps to a single binary. 2) To use as a plugin system for third-party apps (so I don't have to recompile Nginx to have OpenTracing support, for example). The language support hasn't really enabled either, so I'm a little disappointed. (Disappointed isn't really fair. I've invested no effort in this, and I can't be disappointed that someone didn't make a really complicated thing for me for free. But you know what I mean.)


> The isolation aspect was interesting in the days before Firecracker...

I don't think Firecracker's existence makes WASM's isolation uninteresting. First, I think you are looking at way more resources running a full VM (even a "micro" VM) compared to a WASM runtime. I think startup times are not comparable either, so if that matters you'll find WASM to be the way to go.

Second, WASM's capability-based security model is wonderful for giving the untrusted code just the things it needs to work with. With a VM, you have to stitch things together with shared directories, virtual eth bridges, Linux capabilities, maybe some cgroups, and who knows what else. (Granted, you may need to do some of that with WASM too, but less so.)


WASM still needs an interface layer to interact with the outside world (filesystem, etc.). My money is on WASI, but Deno becoming the interface layer has some advantages, mainly that most WASM-supporting languages already have tooling around JavaScript FFI.


Deno supports wasm. :p


You can write Cloudflare Workers in WASM today.

As far as I can tell from the outside, that's still "WASM-called-by-Javascript", and many of their JS optimizations don't work the same way. E.g. if a Worker calls JS `fetch` and returns that `Response`, they recognize that and remove the JS from the data path; same is not true for WASM at this time.


I think that optimization should still work when using Wasm, unless the Wasm code does something silly like manually pump the stream (read from one, write to the other), but I think you'd have to go out of your way to do that, and anyway the same is true of pure-JavaScript code.
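A hedged sketch of the pass-through pattern being described, in module-syntax Worker form (placeholder logic, not Cloudflare's documented optimization contract):

    export default {
      async fetch(request: Request): Promise<Response> {
        // Returning the upstream Response object as-is lets the runtime
        // splice the byte stream past the isolate. Manually pumping the
        // stream (reading chunks and re-writing them) would keep the
        // function on the data path instead.
        return fetch(request);
      },
    };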


Ooh, that is great news. I've been itching to write a "proxy thing" in Rust instead of JS.


Hmm, I had a look into WASM runtimes, and the idea of deploying something to a server as a lightweight execution environment seems interesting (I think of Firecracker from AWS for VMs).

To be honest, on the server side of things containers are so nice because 99% of the time they include all the dependencies you need to run the app.


> I'm really rooting for a successful trend of Serverless runtimes, mainly as a weapon against rising cloud deployment costs.

How would that work? Don't these tend to facilitate cloud lock-in or at least be cloud-only in the sense that they make it hard to operate your own metal infrastructure?


There is Knative for that. But really, folks are using serverless to _not_ have to operate their own iron and to save themselves an ops team.


Sure. Just saying those savings only work if the cloud vendors are in fact charging reasonable rates on serverless compute, storage, and bandwidth. In my experience cloud vendors love to push that line, but if you do the math, the savings over DIY are sometimes questionable. It depends on your workload.


Fastly uses WASM for their edge compute offering.


in what world is serverless cheaper?


The one where you only need a few invocations: with serverless, you aren't paying for an idling VM all of the other time. Arguably you may save yourself an ops team, too.


> mainly as a weapon against rising cloud deployment costs.

Cloud Functions is literally code you're running in the cloud. And the moment you approach their limit(ation)s, you will see the same "rising cloud deployment costs".

> Running a WASM engine on the cloud means ... a fraction of the overhead of a container or nodejs environment

You do realise that there are languages other than JavaScript and runtimes other than Node.js? That there are environments other than cloud functions? And that you can skip that overhead entirely by running a different language in a different environment? Or even run Rust in AWS Lambda if you so wish?

> so there’s no lock-in with the language that the Serverless provider mandates.

And at the same time you're advertising for a runtime lock in. This doesn't compute.

> Web pages that are as ancient as the early 90s are perfectly rendered even today... Basically, it means a WASM binary is future proof by default.

It's not future proof.

Web pages from the '90s are not actually rendered perfectly today, because browsers didn't agree on a standard rendering until the late 2000s, and many web pages from the '90s and 2000s targeted a specific browser's feature set and rendering quirks. Web pages from the '90s are rendered well enough (and they had few things to render to begin with).

As the web's standards approach runaway asymptotic complexity, their "future-proofness" is also questionable. Chrome broke audio [1], browsers are planning to remove alert/confirm/prompt [2], some specs are deprecated after barely seeing the light of day [3], some specs are just shitty and require backtracking or multiple additional specs on top to fix the most glaring holes, etc.

> I've published my (ranty) notes on why Serverless will eventually replace Kubernetes as the dominant software deployment technique

"Let's replace somewhat unlimited code with severely limited, resource constrained code running in a slow VM in a shared instance" is not a good take.

[1] https://www.usgamer.net/articles/google-chromes-latest-updat...

[2] https://dev.to/richharris/stay-alert-d

[3] https://chromestatus.com/feature/4642138092470272 and https://www.w3.org/TR/html-imports/


I sought to understand "serverless", but every deployment diagram I have ever seen shows things that look an awful lot like they run on... servers?

Maybe I don't get the idea (and honestly I was too lazy to put in the legwork), but when I hear something like "serverless" I imagine some p2p JavaScript federated decentralized beast, where the shared state is stored through magic and tricks with the users' clients, and there is literally no server anywhere to be found.

Instead it seems like a buzzword (?) for a weirdly niche way of running things that someone with a 4 Euro/Month nginx instance that hosts 10 websites will probably never understand.

Maybe I also don't need to understand, because I know how to leverage static content, caching, fast Rust reverse proxy services and client-side JavaScript to develop fast web stuff that gets the job done.


> Instead it seems like a buzzword (?) for a weirdly niche way of running things that someone with a 4 Euro/Month nginx instance that hosts 10 websites will probably never understand.

To me, serverless means that I as the developer don't have to do ongoing server maintenance work. A 4 Euro/month setup sounds great, until you find out that you never enabled log rotation and filled up the disk space, or your certificate refresh was improperly configured and now you don't have SSL, or your site gets popular for a day and the site slows to a crawl unless you add an instance.

The dream of serverless is that I can deploy code in a “set it and forget it” manner. Stuff can still break at the application layer, but should work the same at the infrastructure layer in a year as they do today, and auto-scaling happens automatically.


> but should work the same at the infrastructure layer in a year

Serious doubt, there. This brave new world seems to be entirely focused on making it an incredibly fragile world with your code scattered to the winds. It's bad enough dealing with library semver breakages in a monolithic app. I can't imagine tracking a dozen serverless functions running god-knows-where with whatever resources some cloud service decides to allocate for you today. Billing is opaque as a black hole. Which I'm sure is more a feature than a bug, for these cloud providers.

> and auto-scaling happens automatically.

wheeze


> but should work the same at the infrastructure layer in a year

It's been my experience. For example, I've had periodic data fetching jobs last for years without giving them any thought. In some cases I've gone back years later and found them still chugging away, obediently putting data where I told them to years earlier. The one exception I can think of is when Lambda EOL'd Python 2.7, but that happened about 12 years after Python 3's initial release.

I've found the same to be true of web services. I have one that's been running continuously for 5+ years that I actually forgot about until just now.

> wheeze

Why?


Agreed. My company focuses on writing business logic that actually provides value to our customers and spends roughly zero time configuring web infrastructure. Everything just works, our cloud costs are dirt cheap (especially when compared to the cost of labor), and the performance is better than if we had used a traditional server, since code runs on servers very near our users rather than in a fixed location. We also save time, stress and money by not needing to hire cynical, behind-the-times sysadmins like deckard1.

Maybe that wouldn't be the case if my company weren't a B2B SaaS, i.e. if we were constrained by the scaling concerns of a mass-market consumer web app (specifically one that couldn't scale via smart caching policies, which honestly is a minority of use cases), but for our use case it makes plenty of sense.

If you’re worried about cost overruns from auto scaling, you just set a billing limit and deal with it when you get close. Anyway the code we push to serverless is literally just the business logic we would have written anyway so there’s virtually no platform lock-in. And honestly my serverless costs are so cheap that it’ll be a long while before we bother touching them.


But where does the code run physically? Of course on a server, otherwise it wouldn't be reachable from the net. But who maintains those servers? Is there some contract with those who maintain it?

In my experience, if you run things professionally you have to set up log rotation purely for legal reasons anyway. Is serverless without logs? Or how would you ensure there that privacy-relevant data is logged only for the legally allowed periods?

How do you do SSL on serverless and who is in control of the certs that guarantee safe communications between you and your customers? If it is not you, are they somehow contractually bound to keep your user data private?


> In my experience if you run things professionally you have to set up log rotation purely for legal reasons anyways. Is serverless without logs? Or how would you there ensure to log privacy relevant data only for the legally allowed periods?

AFAIK the big providers use log rotation by default. I just know that I've been running some low-stakes serverless projects for years and have always been able to access recent logs, and never worried about disk space. Privacy law is a good point I hadn't thought of (in the context of these projects), though.

> How do you do SSL on serverless

In the case of Netlify, they already manage the certificate if you point your domain at them and click a button, so it works automatically with their functions. Same story with Cloudflare. AWS and Google make you jump through a few more hoops, or you can host the endpoint from one of their domains and piggyback on their certificate.

I imagine the security practices of all four would hold up to the security practices of a 4 Euro / month VPS host.


> Instead it seems like a buzzword (?) for a weirdly niche way of running things that someone with a 4 Euro/Month nginx instance that hosts 10 websites will probably never understand.

That's exactly it. It's a way to intermittently run a piece of code in a managed environment. You're basically guaranteed that the environment is set up, and that the code will start up and execute. That is basically it.

People are extremely enamoured with it, for no apparent reason. The only use case I've found for myself so far is running small analytics BigQuery queries to look for easily detectable anomalies once a day and send a Slack message if something's wrong. This way you avoid setting up a separate Kubernetes job, etc. Makes no sense outside of GCP.


"Server" can mean many things. A few (obvious) meanings:

* A role in the client-server model. One machine makes requests and asks for info or things to be done, and the other end executes the request.

* The physical (or virtual) machine that runs the processes that fulfill the server role

* A class of "serious" machines that do important stuff somewhere not directly facing the end user (desktop or mobile device).

* A unit of administration: the thing you log in to with ssh, where you install/update software, rotate logs, organize user accounts, create groups, handle file permissions, craft backup scripts and cron jobs, handle disk space, and generally care about filesystem health, etc. etc. etc.

As you correctly pointed out, the "serverless" buzzword clearly talks about code that has to run on some machine, which is not your desktop or mobile device, so it's still about the client-server model and it's still running on servers, which ultimately have to be administered by somebody, somehow.

The "less" suffix in the buzzword means that that person is not you. You don't have to manage the server.

It's hard to find a better word. Sysadminless? NoPet? JustRunIt? FocusOnMyCode?

All names have their drawbacks. My main qualm with serverless is not that there are servers involved, but that it's not clear what is serverless and what it's not.

For example, is Kubernetes a serverless platform? As a user of it (not an admin) you don't need to worry about any of the good old sysadmin chores, i.e. you don't manage the actual servers where your code runs.

OTOH, generally when people talk about serverless they don't talk about abstractions like Kubernetes, but usually about going one step further up the abstraction ladder, imagining a world that's not only "serverless" but "processless", where you don't build software and deploy it somewhere, but where you write some "functions" and map them to some endpoint, and the system takes care of figuring out how to build, deploy and manage the full lifecycle.


Netlify for me is a prime example of a great company gone wrong by raising too much VC money. The basic product of Netlify is a great one: build and host static sites without the need to mess with any of the tech stack. For us developer folk, this should be easy: run the build command of any static site generator and stick the results into an S3 bucket. And yet, something this simple became hugely popular, even with developer companies (see HashiCorp's quotes on Netlify).

This could have been a great story, but then tons and tons of VC money came in, and now you have to think of ways to make the valuation worth it and make the product sticky: so now we have Deno-powered edge functions, Lambda-esque applications, HTML-embedded forms and many other features used only by the long tail of their customer base, while they changed their pricing to charge by Git committers and have had daily short downtimes of 1 to 5 minutes for the past month (monitored by external services, as they wouldn't reflect that in their status page).

Soon, they’ll sell the company to some corp like Akamai or similar “enterprise” outfit leaving us high and dry.

There is a lot of money in building businesses that do boring stuff that just makes people's lives easier. But when you take VC money, you need to build a moat to fend off the cloud providers from the bottom, capture value from developers at the top, and everything in between.


I’d be interested in building the bootstrapped “git push and we build and publish”, aka “heroku for static site compilers”

Chime in if you’d like to be one of the first few customers. If there’s enough interest here’s how I’d play it:

1. I won’t raise VC money. I know how to build a SaaS business without it—I bootstrapped Poll Everywhere from $0 to $10m+.

2. My motivations these days are to build low-complexity products. Ideally they're "evergreen", meaning I can ship a core feature set that I know will be the same in 10 years. The feature I'm selling there is stability.

3. I like to price things in a way that makes them accessible to as many people as possible while being sustainable for the business so it can operate for a long time with the support it needs for customers.


I built something that takes care of the publish part, but not the static site generation part: https://github.com/newbeelearn/sserver. Right now it only has one user, i.e. me :-)


How would you convince people to go with you over Cloudflare Sites, Vercel, Netlify etc? Is the pitch "we do what they do, but less" compelling?


I’d position it as “we do exactly what needs to be done: push, build, deploy” and talk to the benefits of deploying static websites without all the complexity of edge functions and the complicated pricing that goes with it.

I’d also speak to the idea that the service is shooting for longevity and stability by not adding a bunch of whiz bang stuff needed to justify PM salaries or impress VCs that will be sunset later.


FWIW, we've been building this exact thing over at Read the Docs for the past 11 years :)

* https://docs.readthedocs.io/en/latest/about.html

* https://docs.readthedocs.io/en/latest/story.html

Historically we've only supported Sphinx & MkDocs, but we're looking to expand into serving all docs tooling with a versioned URL scheme, plus search indexing & backend APIs that are docs-specific.


I think this is a natural and fine extension of the Netlify platform. They've had various "serverless functions" for a few years that's mostly been out of the way if you don't need it.

It fits within their goal of a 'heroku for frontend websites', for easily deploying sites.


I guess Netlify still offers the basic static site hosting, which can be anything from drag-drop to easy to set up automated github deployments. I mean, it's not like Netlify offers a worse static hosting service post-funding, right? With VC funding they've just built out more features. Not to mention I think they've always aimed to build out the "JAM" stack and support as many frameworks as possible.


https://news.ycombinator.com/item?id=31025183

No experience from the Netlify of old to compare with, though.


Netlify pricing has always been confusing to me, but I'm not entirely sure why. I guess I'm more accustomed to pay-as-you-go in this space (CFW) than tiered plans (Netlify bundles their features into starter/pro/business).

It seems that the free plan is 3M invocations/mo, starter is 15M/mo, and business is 150M/mo, but there aren't any ways to increase those limits (business says to contact them for higher limits).

Personally I'd prefer true pay-as-you-go without hard limits, even if it's a bit more expensive. To me the point is to sign-up-and-forget-it without having to worry if I'm within those limitations.


Netlify's creative pricing is what lost me as a customer. They decided to start reading our git commits to decide how much to charge us. Instead of charging for usage of bandwidth and build minutes they decided to charge based on how many different authors we had--even though those people never interacted with Netlify or even knew how we were deployed. If we didn't hurry up and migrate to Render.com this would have taken our bill from $1.5k/year to over $25k/year.


Wow - really surprised at this move by Netlify. It looks like that's a new policy[1] where a "Member" is not just someone who can log in to the Netlify UI and manage a site, but anyone who can trigger a build.

Relevant quote from the article outlining the policy changes:

> For sites connected to private Git repositories on Pro and Business teams, Git contributors will need to be team members in order to trigger builds.

> Teams will only be billed for the number of team members. Currently, Git contributors are people who trigger builds on your team’s site(s). Moving forward, in order to trigger builds, Git contributors who aren’t Team Members, such as people in the ‘Contributors via Git’ section, Reviewers, or people not on the team entirely, will need to have their deploy approved by a team Owner.

> Once their deploy is approved, they’ll be invited to become a Team Member and can deploy without approval from then on. If their deploy is rejected, their build won’t run and they will not be added as a team member to your monthly bill.

> This change does not apply to sites linked to public repositories or sites on Starter or Open Source plan teams.

So it sounds like you could limit your costs by limiting your team Owners.

This pricing doesn't seem like a good value proposition to me. I see Netlify as a web host and CDN which has products very comparable to some of Cloudflare's products. In those spaces billing is generally based on usage, not number of seats.

What you get from Netlify doesn't scale with the number of seats you pay for.

If I have 1 member on the Business plan I'll pay $99/mo and get 1.5TB of bandwidth per month. If I have 5 members on the Business plan, I'll pay $495/mo and still only get 1.5TB of bandwidth. Hardly seems fair or reasonable.

[1] https://answers.netlify.com/t/upcoming-changes-to-netlify-pl...


> Personally I'd prefer true pay-as-you-go without hard limits, even if it's a bit more expensive.

Please, a hard no to that. That's the worst aspect of AWS, Azure and all those huge new hosting centers: it's hard to calculate the real cost and set a budget.

I don't know about Netlify, but the old Linode (before it got acquired) was flexible with the "hard" limits in a plan. For example, if your site got slashdotted / Digged (or was that dug?) and suddenly saw a spike in resource usage, exceeding the limits, they were quite accommodating in not charging their users for the unexpected extra usage. Linode wouldn't even mind an occasional surge in resources a few times a year, but if it happened more frequently, they would recommend that you upgrade to a more suitable plan. They earned a lot of goodwill that way from clients, who really appreciated that their server / site wasn't unexpectedly taken offline because of a resource crunch they hadn't paid for and / or anticipated.


Like another comment said, different strokes for different folks.

I would much rather pay overage fees than have my site go down due to a hard limit, but I would also like the option to choose the opposite.

That way you appeal to both sides of the scalability-vs-predictability crowd.


I have no special insight into Netlify, so this is (educated) speculation: there's an important difference between pay-as-you-go compute providers, like AWS, and Netlify: Netlify is a platform, their value is not derived from the workloads they process, so charging (or not charging) based on compute doesn't align with their value proposition. The value of Netlify is that it's an end to end platform, taking a business from having some code to having a live website, where compute is just one component of the entire value proposition.

The marginal cost of a request is probably negligible, hence the tens of millions of requests included, but there is a cost associated with each user making use of their platform because it includes a lot more than just compute, and that's the value they're charging for.

I think if you're looking for a compute provider that offers pay as you go billing in order to minimise your costs, then Netlify probably isn't the platform for you, and you'd be better off using their service provider directly (in this case, Deno, but many Netlify alternatives use Lambda, Cloudflare Workers etc.).


Different strokes for different folks.

This has been one of the big knocks on AWS: that a poor little old lady can set up a "free" AWS account, and then when her website (and accompanying Lambda function) goes viral, she gets hit with a $100k bill from uncle Jeff.


Would the lady prefer to let her site go down?

I don't understand this way of thinking. One of the main benefits of serverless is scalability, peace of mind for precisely when you go viral.

If you're doing something good, especially if you're selling something good, all you want is to go viral. And if you went viral, you don't mind paying the AWS costs, which should be tiny compared to your revenue. You just need to care about your unit economics.


Sure, but I think the GP was referring to a situation where revenue doesn’t match the traffic.

Imagine the grandma scenario. Let’s say that she cooks and sells artisanal jams and jellies. With the help of her granddaughter, she creates a TikTok that goes viral. Her web store immediately sells out, and the traffic from the video hammers her website. She cannot react fast enough to enable back orders and so most of those visits go to waste.

Putting aside the technical absurdities (why is she hosting on a lambda, etc), in this scenario, grandma is up a creek.

If this scenario were real, I would feel really bad for the grandma with a huge bill and not enough revenue to cover it, but I would be livid at whatever imbecile decided to set her up with such a ridiculous hosting paradigm.

“But it only costs pennies a month to run!*”

Yeah, until she goes viral. This scenario right here is why services like Squarespace et al are still valuable. You’ll pay a few extra bucks a month, but if you go viral, you won’t go bankrupt when the bill is due.


Then get people's emails to notify them when stock is up again.

The worst thing that could happen is having your site down when many people want what you have.

In any business, the hardest part and most expensive, by far, is sales and marketing.

You wanna throw it all away to save 100 bucks on Amazon? That's insane!..


And how about a fun educational project you aren't expecting visitors for and make zero revenue from? AWS bills can far exceed 100 bucks. Paying thousands for... Exposure? That's insane!..


> Personally I'd prefer true pay-as-you-go without hard limits, even if it's a bit more expensive. To me the point is to sign-up-and-forget-it without having to worry if I'm within those limitations.

Sure, if you can set a max budget. Otherwise, you'd constantly have to worry about the unbounded cost.


I would love to jump over to something like Vercel or Netlify Edge, but maddeningly none of these platforms give you control over the cache key. I have pages that are server-side rendered with Cache-Control headers, but because our visitors come with unique tracking params on the end of their URL (e.g. from Mailchimp or Branch), we would essentially have no cache hits.

It seems the only way to have control over this is to write your own Cloudflare Workers. There must be a better way? I can't imagine this is an infrequent problem for people at scale.


So far, Netlify Edge Functions run before the cache layer, so you can actually use a minimal function to rewrite the URL to remove all unique params, etc., and then let it pass through our system to a Netlify Function, which runs behind our caching layer.

For anything you can do at build time as static HTML pages, we already strip query parameters from cache keys.
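A minimal sketch of that rewrite approach, modeled on the edge function example further down this thread (the param list is hypothetical, and it assumes context.rewrite accepts a path with a query string):

    export default async (request: Request, context: Context) => {
      const url = new URL(request.url);
      // Drop tracking params so the cache key stays stable across visitors
      for (const p of ["utm_source", "utm_medium", "utm_campaign", "mc_cid"]) {
        url.searchParams.delete(p);
      }
      return context.rewrite(url.pathname + url.search);
    };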


Interesting, thanks... do you have any docs on how we might achieve this with Next.js? Am I right in thinking we would have essentially a custom Edge Function first that handles query params, and then a second Edge Function that renders the Next app?


I work at Netlify on framework integrations. Next has beta support for running the whole app at the edge, and Netlify supports that. If you create your own custom edge functions, they will run first, so you can do just that. You can also run Next "normally" (i.e. in the Node-based Netlify Functions) and run your own edge functions in front of that. In those you can modify the Request in any way you'd like before passing it on to the origin.


Yeah I'm very intrigued by running the whole app on the edge (in their Edge Runtime).

This sounds pretty promising. I'll take a dive and see if I can get it working, thanks for the tip!


Cloudflare Transform Rules let you rewrite URLs on the fly https://developers.cloudflare.com/rules/transform/


I'm biased, but there is a better way: give developers a high-performance method of programmatically manipulating the cache key from JavaScript. That's what we created with EdgeJS: https://docs.layer0.co/guides/caching It's less work to write and higher performance than dealing with edge functions or workers for routine tasks like this.


Why don't you just link to an API route, consume the tracking params, set a cookie, and redirect to a statically rendered page?


Time to first paint


> There must be a better way?

You're experiencing friction trying to use something in a way that it's supposed to not be used. (I.e., click-tracking by junking up URLs.) You could look for an answer, or you could take a step back, evaluate your expectations, and then decide not to do what you're trying to do.


Unfortunately this is an enormous business and asking them to stop all tracking is well outside of my remit.


At the end of the day, though, no matter how big the business is, it is the result of someone agreeing to fulfill their wishes.


Aaron from Deno here, happy to answer any questions you may have!


Thanks! Any useful pros and cons vs Cloudflare Workers?


One of the big reasons for going with Deno is that it's an open runtime closely based on web standards. You can download the open source Deno CLI, and all code written for our edge layer will run exactly the same there.

As more and more front-end frameworks start leaning on running part of their code at the edge, we felt it was important to champion an open, portable runtime for this layer vs a proprietary runtime tied to a specific platform.


Cloudflare uses V8, and the client is open: https://github.com/cloudflare/wrangler


Hm yes, the fact I can't run Cloudflare Workers somewhere else is a worry. Fair point.


Workers can have persistent storage attached to them (as a KV store), I can't see whether this has anything similar.


Also Workers can talk to Durable Objects which is super nice


Yes, and I love the minimal pricing of both. Just paying for real compute time; even calling an API pauses the pricing while it waits for a response.


Yes - that is really good.


Looks like it comes with TypeScript support.


Anything running JS comes with some TS support; you just have to transpile it before releasing :) I'm not sure why shipping the transpiler on the production server rather than keeping it in your CI is a good idea, but I think that's what Deno is doing.


> I'm not sure why shipping the transpiler on the production server rather than keeping it in your CI is a good idea, but I think that's what Deno is doing.

IMHO, the decoupling of the build step and the runtime step in JavaScript was a terrible mistake. I've wasted hours just trying to find tsconfig settings that are compatible with the other parts I'm using. Shipping a transpiler with a known-good configuration alongside the runtime forces everyone to write their packages in a way that's compatible with that configuration, instead of creating a wild west.

The current state of modules and npm reminds me a bit of the bad old "php.ini" days, where you would have to make sure you enabled the language features required by the code you wanted to import. What a mess.


> I've wasted hours just trying to find tsconfig settings that are compatible with the other parts I'm using.

Deno only “solves” that problem by not having a legacy ecosystem, and that’s only if you stick to the happy path of only using modules with first class Deno support. If you try to tap into the vast Node ecosystem, where Deno’s lacking, through e.g. esm.dev, you can waste hours just as easily. Even packages that claim Deno support sometimes have minor problems.


I understand that it might be a problem for the browser target, but Node.js is pretty easy to target (at least I never had any issue).

Also, speaking of the wild west, Deno did not even manage to have their TS be the same as everyone else's: apparently they import with the .ts file extension, while everyone else uses .js. I feel like this creates more mess than it fixes.


True, but one feature I enjoy about Cloudflare Workers is that I can just edit them in the browser, even on devices without Node.js installed.


Question about https://edge-functions-examples.netlify.app/example/rewrite

    export default async (request: Request, context: Context) => {
      return context.rewrite("/something-to-serve-with-a-rewrite");
    };
I'm surprised that the function is async but context.rewrite() doesn't use an await. Is that because the rewrite is handed back off to another level of the Netlify stack to process?


Promises are flat, so if an async function or promise callback returns a promise, the result is just a promise, not a Promise<Promise>.

Using async for functions that do not use await is still a good idea because thrown errors are converted to rejected promises.

`return await` can be useful because it signals that the value is async, causes the current function to be included in the async stack trace, and completes local try/catch/finally blocks when the promise resolves.
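A tiny sketch of the flattening behavior; both functions below have the same return type:

    // Both have type Promise<string>; promises never nest.
    async function viaReturn(): Promise<string> {
      return Promise.resolve("hello"); // not Promise<Promise<string>>
    }
    async function viaAwait(): Promise<string> {
      return await Promise.resolve("hello"); // adds a frame to async stack traces
    }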


Actually `context.rewrite` returns a `Promise<Response>`. The `async` isn't necessary here, but it also doesn't particularly hurt. You can return a `Promise` from an async function no problem.


Since it's being returned it doesn't really matter whether `.rewrite()` is returning a promise or not. `return await x` is mostly equivalent to `return x` within an async function.


Is Netlify running Deno on their own edge, rather than on Deno.com's Deploy? Is this also what Slack, Vercel (?), and Supabase do?


Netlify and Supabase use Deno's infrastructure for code execution (https://deno.com/deploy/subhosting). Vercel hosts their edge functions on Cloudflare (nothing to do with Deno). Slack's Deno runtime is hosted on AWS.


Are you willing to talk a bit about how Deno Deploy works internally? I think you have an internal build of Deno that can run multiple isolates (unlike the CLI, which basically runs one). How do you limit the blast radius in case of a vuln in Deno?

Kenton Varda did a pretty great writeup on CF worker security [0]. Would love to see Deno Deploy do something similar.

[0] https://blog.cloudflare.com/mitigating-spectre-and-other-sec...


We probably will eventually. A talk like this takes a _lot_ of time to prepare though, so it's not on the top of our priority list. But it will happen eventually.

The TLDR is that Deno Deploy works pretty similarly to CFW in that it can run many isolates very tightly packed on a single machine. The isolation strategy differs slightly between CFW and Deploy, but both systems make extensive use of "defense in depth" strategies where you minimize the blast radius by stacking two or more defenses against the same issue on top of each other. That makes it _much_ more difficult to escape any isolation - instead of breaking out of one sandbox, you might have to break out of two or three layers of isolation.

These levels of isolation could happen at different layers. For example network restrictions could be initially restricted by an in-process permission check, then additionally a network namespace, and finally a routing policy on the network that the machine is connected to. Imagine this, but not just for network, but also for compute, storage, etc.


It's really great to see an open source project coming up with a viable business plan to support the project's development.


Glad to see the Deno Company is getting a piece of the pie! Funding open source projects is tricky, but it seems like you're figuring it out.


OK, I stand corrected.


How many Deno instances might an edge server run? Does each tenant have an instance or is there multi-tenancy? What interesting tweaks have you made making a cloudified offering of Deno tailored for http serving?


We're building a highly multi-tenant "isolate cloud" (think VMs => containers => isolates, as compute primitives).

The isolate hypervisor at the core of our cloud platform is built on parts of Deno CLI (since it has a modular design), but each isolate isn't an instance of Deno CLI running in some kind of container.

Isolate clouds/hypervisors are less generic, and thus less flexible, than containers, but that specialization allows novel integration and high density/efficiency.


We've been paying Netlify customers for 2 years now. While I appreciate the new features, the core platform has been becoming unreliable in the past 6 months. We've had a decent amount of downtime.

I do not recommend them anymore. We will move somewhere else.

Almost every few days we get a report that some customers can’t access our site from where they are. Our US east engineers can confirm that their POP is down.

Netlify’s status page says everything is working, but in reality it’s not.

Netlify as a CDN has failed for us on its core promise.


Does anyone know how those compare to regular Netlify Functions, other than running on the edge nodes? The main difference I’ve found is that they have much stricter CPU time budgets, but it seems to me that the use cases overlap quite a bit.


This is a new concept for me, what is the use case for edge functions?


You have a hosted static web app but want to dynamically change the <meta> tags in your index.html to provide a unique url preview for each route (/about, /careers, etc)


Why would one want to do this? For analytics?


When you share a url like a news article on Twitter or send it on Slack, iMessage etc, you get a small preview widget showing you the headline and a photo from the story.

The way this is generated: Twitter/iMessage scrapes the HTML of the url you are sharing, looks for <meta name="og:image" content="https://imgur/adfstdd">, and displays whatever image is in the content field. Similarly, the widget title is populated from <meta name="og:title" content="About Us">.

On a static HTML site there is only one index.html, so all routes have the same meta tags unless you overwrite the index.html file using an edge worker.
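A hedged sketch of what that overwrite could look like as a Netlify-style edge function. The context.next() call (fetching the downstream response) mirrors the examples elsewhere in this thread, and the route-to-title map is hypothetical:

    export default async (request: Request, context: Context) => {
      const response = await context.next(); // the static index.html
      const titles: Record<string, string> = {
        "/about": "About Us",
        "/careers": "Careers",
      };
      const title = titles[new URL(request.url).pathname];
      if (!title) return response;
      // Swap the generic og:title for a route-specific one
      const html = (await response.text()).replace(
        /<meta name="og:title" content="[^"]*">/,
        `<meta name="og:title" content="${title}">`,
      );
      return new Response(html, response);
    };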


> In a static html site there is only one index.html so all routes have the same meta tags unless you overwrite the index.html file using an edge worker.

I guess this is specifically for SPA static sites? I would expect a static html site to have different html files for each page, like blog/2020-09-09-title.html and that could then have its own metatags, so no need for this dynamic rewriting, unless I'm missing something.


Correct. This is only for SPAs that use client side routing like react-router. React SPAs are quite common with startups in my experience.


Crawlers like Google's run JavaScript now though, so your example is outdated.


This isn't only for googlebot but Twitterbot, slackbot, facebookbot and [fill-in-blank]bot that fetch your page to generate a url preview in their app when links from your domain are shared.


If that's your concern it probably makes more sense to just be server-rendering at that point


My main concern is cost. Cloudflare workers are free.


So is Vercel


For one (minor) thing, they're a great way to add certain HTTP headers which can't be handled through other means. I use a Cloudflare Worker to give my site the necessary headers for its Content Security Policy (some parts of which can't be added via a <meta> tag[0]), as well as the nonces[1] for that CSP. This only scratches the surface, of course.

[0]: https://content-security-policy.com/examples/meta/

[1]: https://content-security-policy.com/nonce/
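A minimal sketch of that header-adding pattern as a module-syntax Worker (the policy string is a placeholder; real nonce handling would also rewrite the HTML body):

    export default {
      async fetch(request: Request): Promise<Response> {
        const upstream = await fetch(request);
        const headers = new Headers(upstream.headers);
        // Placeholder policy; a real one would include per-request nonces
        headers.set("Content-Security-Policy", "default-src 'self'");
        return new Response(upstream.body, {
          status: upstream.status,
          statusText: upstream.statusText,
          headers,
        });
      },
    };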


The first link in the article points to this: https://www.netlify.com/products/#netlify-edge-functions

How to use them: Drop JavaScript or TypeScript functions inside an edge-functions directory in your project.

Use cases: Custom authentication, personalize ads, localize content, intercept and transform requests, perform split tests, and more.


> Use cases

> A bunch of server functionality

Why is that the use case? I don't see how an edge function can be faster than a centralized server endpoint if it has to reach out to literally any other component of the system involved in auth / persistence


It's just serverless architecture. It may not fit every project.

We use the concept to add meta tags to our client-side-rendered webapp for search indexing. We can decouple our client app from our server and deploy each separately. We use "serverless functions" to add some meta tags that need to be added server-side.


I mean, it's incredibly easy to deploy client apps separately from the server in a lot of ways

To me, the only value I see for Edge compute is when some chunk of data requires processing going one way across the network, and that processing can be done entirely locally. I suppose what you describe with the meta tag qualifies in this case, otherwise I think serverless architecture looks like a pretty sweet deal for the cloud companies promoting it.


You are replying to a message that very explicitly didn't say "A bunch of server functionality". Specifically, it said:

> Custom authentication

You can most definitely authenticate requests based on signed tokens and the like, meaning you don't necessarily need to reach any other component in the system.

> personalize ads

Same here. You can most definitely pick different ads depending on some cookie value or such; no need to reach anywhere. Even if you want to track which ads you've served, that can be done _after_ the response is sent to the user, meaning the extra latency of going to your persistence layer isn't perceived by the user.

> localize content

Again, you can have your translations in your edge function (or some edge cache if that platform supports it) and apply them at the edge. Admittedly this sounds like the shakiest use-case.

> intercept and transform requests

You can implement redirections, security headers, etc. in this layer. No need to go to your persistence layer.

> perform split tests

Same idea here. You can have multiple versions of (cached) pages and serve one or the other depending on the user's cookies, ip (country?), some frequency, etc.

You may be doing all these things from your backend layer, which is arguably easier, but it doesn't mean that they can't be offloaded to the edge and have a positive impact.
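For the authentication case specifically, here's a minimal sketch of verifying an HMAC-signed token entirely at the edge with the Web Crypto API (the "payload.signature" token format and base64 signature encoding are assumptions for illustration):

    // Returns true if the signature matches; no backend round-trip needed.
    async function verifyToken(token: string, secret: string): Promise<boolean> {
      const [payload, signature] = token.split(".");
      const key = await crypto.subtle.importKey(
        "raw",
        new TextEncoder().encode(secret),
        { name: "HMAC", hash: "SHA-256" },
        false,
        ["verify"],
      );
      const sig = Uint8Array.from(atob(signature), (c) => c.charCodeAt(0));
      return crypto.subtle.verify("HMAC", key, sig, new TextEncoder().encode(payload));
    }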


> You are replying to a message that very explicitly didn't say "A bunch of server functionality".

Right, let's see...

> You can most definitely authenticate requests based on signed tokens and the like

Sure, you can; you just have to give up the ability to ever invalidate tokens.

> personalize ads

How can this not just be done on the client?

> localize content

How can this not just be done on the client?

> intercept and transform requests

How can I implement redirections and security headers here? What context does the Edge function have to do something meaningful here that couldn't have been done on the client?

> perform split tests

I would prefer to serve my static content from static content hosts with caching capabilities, not Edge functions

I am saying this: edge functions and serverless are loss leaders for cloud vendors to get you to integrate deeply into their systems. Using them for their tiny, imperceptible gains in the face of the massive engineering effort (complexity) and risk of vendor lock-in is ridiculous. These use cases do not justify binding yourself to a cloud vendor.


> if it has to reach out to literally any other component of the system

Maybe it doesn't have to "reach out to literally any other component". Sometimes code can be self-contained. That's why they're called "Edge Functions" and not "Edge Services".

The advantage is that edge functions are physically closer to the customer (lower latency) and can be updated at will, all at once (unlike client code; you cannot force a user to update their app).


Even if it doesn't reduce latency, it could reduce bandwidth and load on other servers. (Or increase it, if it's used badly.)


I can't imagine a successful argument that you use fewer compute cycles spinning up edge functions to interact with your app than just hitting an already-running API endpoint on an already-running server. And rearranging your load just because you can isn't worth consideration.


Serverless API functions: like if you were going to use AWS Lambda functions to add interactivity or simple APIs to a site without having to manage and run a full server.


I was told to use Firebase Cloud Functions literally yesterday.

You can pre-parse and pre-process JSON responses to minimize the payload size and customize it for your frontend's needs. It makes dealing with client secrets and configuration easier too, I believe. I didn't want to rewrite a bunch of backend code, so this was one of the simplest solutions.
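A hypothetical, runtime-agnostic sketch of that pre-processing idea (the export-default handler shape matches the edge examples in this thread; the upstream URL and field names are placeholders):

    export default async (): Promise<Response> => {
      const upstream = await fetch("https://api.example.com/items");
      const items: Array<Record<string, unknown>> = await upstream.json();
      // Forward only the fields the frontend actually renders
      const slim = items.map(({ id, name, price }) => ({ id, name, price }));
      return new Response(JSON.stringify(slim), {
        headers: { "content-type": "application/json" },
      });
    };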


What are the cold start times like? Compared to say Cloudflare Workers where they claim you can have no cold starts?


Deno Deploy (https://deno.com/deploy) uses the same optimizations as CFW to achieve effectively 0ms cold starts.

Netlify Edge Functions are still in beta and don't have all of the same optimizations yet, but we're going to be working with Netlify over the next few months to enable these optimizations to Netlify Edge Functions too.


Thank you! It's so impressive that you are able to achieve 0 ms cold starts in Deno Deploy. That and CFWs are game changers.


I don't want edge functions, I want edge appliances. An edge function means I still have to run my own janky devops for that specific appliance. Edge IPv6 Appliances or Bust.


Great stuff! Well done!!


What advantage does this have over something like https://deno.dev?


How does this compare to cloudflare workers?

CF always seems so cheap compared to alternatives, if you ever expect to scale beyond the developer plans.


Does this complement or compete with Deno Deploy?


Big-time red flag that they're using Deno's infra... wouldn't trust that.


It sounds like Netlify is essentially reselling a third-party service here. Isn't operating infrastructure Netlify's job? Why outsource this? Can requests end up taking circuitous paths where Netlify and Deno's infra don't line up?


Netlify also uses AWS and Rackspace. They are in the business of selling PaaS on top of IaaS by adding value to developer workflows.

They could host their own infra at a large enough scale when that makes sense, the same way AWS decided after many years to make their own chips (Graviton), but that is not their core identity, just like AWS is not a chip manufacturer.


It looks like Netlify is essentially reselling Deno Deploy as is, not building a higher-level service on top of it. And latency matters in CDNs.


(I work at Netlify, and worked on this)

It uses Deno Deploy for the actual execution of the function, but the whole workflow around it, routing and middleware API, integration with frameworks and the CDN are all Netlify. It's similar to how Netlify Functions use AWS Lambda for execution. It does add latency, but it's tens of ms, because it uses Deno Deploy nodes that are very close to the Netlify edge nodes. Deno Deploy is awesome, but Netlify has a much more complete platform, so the combination is best of both worlds.


Just integrating into their own workflows and other apps could be sufficient value, only time will tell.


Why?


care to elaborate?


Deno is a local runtime, similar to Node.js, not an infrastructure provider.



That's discussing their subhosting offering, which is unrelated to the subject of this thread, the actual runtime.


You're being dense; that's from this thread, and it explains that this Netlify offering uses Deno's Deploy infrastructure.


You're right. I missed the link to the Deno blog and only saw 'Deno runtime' in this article.



