
> Here, “a Node.js server” refers to one Node.js server. A single, unique instance of a Node.js server (with or without Docker) with no horizontal scaling and no zero-downtime deploys is not a viable deployment strategy for serious projects.

Ah, the penny drops. The idea that you can't run a traditional server and must rely on a serverless vendor if you're "serious", and that Next.js is not convenient for Netlify, is at the core of this blog, camouflaged by concerns about openness. I most certainly agree with openness, and quite honestly I dislike Next.js and Vercel, but suggesting that sidestepping lock-in altogether by simplifying down to traditional techniques is not "serious" makes me bristle a little, especially when you say we can't do something we have been doing for many, many years without your platform.



> Ah, the penny drops. The idea that you can’t run a traditional server and must rely on serverless vendor if you’re “serious”

That's not at all how you should read this. They later give an example of exactly the kind of problems you'll run into once you start needing to horizontally scale your Next.js servers (e.g. as pods in k8s, which is not serverless):

> The issue of stale data is trickier than it seems. For example, as each node has its own cache, if you use revalidatePath in your server action or route handler code, that code would run on just one of your nodes that happens to process that action/route, and only purge the cache for that node.
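A toy sketch of that failure mode (plain Node.js, with hypothetical names; this is not the actual Next.js internals): each "node" keeps its own in-process cache, so a purge triggered on one node never reaches the other.

```javascript
let renders = 0;

// Each app node has its own in-process cache, like Next.js's default
// in-memory/file-system cache handler running on separate machines.
class AppNode {
  constructor() {
    this.cache = new Map();
  }
  render(path) {
    if (!this.cache.has(path)) {
      this.cache.set(path, `rendered:${path}#${++renders}`);
    }
    return this.cache.get(path);
  }
  // A revalidatePath call only purges the cache of the node that
  // happened to process the server action / route handler.
  revalidatePath(path) {
    this.cache.delete(path);
  }
}

const nodeA = new AppNode();
const nodeB = new AppNode();

const a1 = nodeA.render('/blog'); // rendered:/blog#1
const b1 = nodeB.render('/blog'); // rendered:/blog#2

// The action lands on node A and revalidates /blog there.
nodeA.revalidatePath('/blog');

const a2 = nodeA.render('/blog'); // fresh: rendered:/blog#3
const b2 = nodeB.render('/blog'); // still the stale rendered:/blog#2

console.log(a1 !== a2); // true: node A re-rendered
console.log(b1 === b2); // true: node B still serves stale content
```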

Seeing as a Node.js server running Next.js with SSR or ISR (otherwise you'd just serve static files, which I personally prefer) is not known for great performance, you will quickly run into the need to scale up your application once you hit any meaningful amount of traffic.

You can then try to keep scaling vertically to avoid the horizontal pains, but even that has limits, seeing as Node.js is single-threaded: the templating work of stringing together HTML will simply take too long (that is, compute always blocks; only I/O can be yielded).
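The "compute always blocks" point is easy to demonstrate in a few lines of plain Node.js: a timer due in 10 ms cannot fire while a synchronous loop (standing in for expensive HTML templating) occupies the event loop.

```javascript
const start = Date.now();
let timerFiredAfter = null;

// Ask for a callback in 10ms...
setTimeout(() => { timerFiredAfter = Date.now() - start; }, 10);

// ...then hog the event loop with ~100ms of synchronous "templating" work.
while (Date.now() - start < 100) { /* spin: pure compute, nothing yields */ }

// The timer still has not run: compute blocked the event loop the whole time.
console.log(timerFiredAfter); // null

setImmediate(() => {
  console.log(timerFiredAfter >= 100); // true: the 10ms timer fired late
});
```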

The common solution for this in Python, Ruby, and JS/Node.js is to run more instances of your program. They could still be on the same machine, but voila! You are now in horizontal-scaling land, and will run into the cache issues mentioned above.

There was not really anything in the article that should have led you to believe this was a "serverless only" issue, so I think the bashing of Netlify here is quite unwarranted.


> (e.g. as pods in k8s, which is not serverless):

> There was not really anything in the article that should have led you to believe this was a "serverless only" issue, so I think the bashing of Netlify here is quite unwarranted.

It's not, because you can use an external cache like Redis [1]. You can scale to hundreds of instances with an external Redis cache and you'll be fine. The problem is that you can't operate at Netlify's scale with a simple implementation like that. Netlify can't afford to run a Redis instance for every Next.js application without significantly cutting into their margins (not just from compute cost; running and managing millions of Redis instances at scale won't work).

Clearly Vercel has its own in-house cache service that it has priced into its model. Netlify could run a Redis instance per application, though more realistically it needs its own implementation of a multi-tenant caching service that is secure, scalable, cost-effective, and fits their operational model. They are not willing to invest in that.

[1] https://github.com/vercel/next.js/tree/canary/examples/cache...
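The fix the parent describes can be sketched with a shared Map standing in for Redis (method names are hypothetical, and real Redis calls would be async): once every instance reads and purges against the same store, revalidation becomes visible everywhere.

```javascript
// One shared store (Redis in a real deployment), many app instances.
const sharedStore = new Map();

class SharedCacheHandler {
  // A real Redis client is async; kept synchronous here for brevity.
  get(key) {
    return sharedStore.get(key) ?? null;
  }
  set(key, value) {
    sharedStore.set(key, value);
  }
  revalidate(key) {
    sharedStore.delete(key); // the purge is visible to every instance
  }
}

const instanceA = new SharedCacheHandler();
const instanceB = new SharedCacheHandler();

instanceA.set('/blog', 'rendered:/blog#1');
console.log(instanceB.get('/blog')); // 'rendered:/blog#1': B sees A's entry

instanceA.revalidate('/blog');
console.log(instanceB.get('/blog')); // null: the purge reached B too
```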


Interesting and definitely something platforms must take into consideration.

Now, back to the post: implementing a custom cache is not something Netlify is strongly complaining about. They are mostly asking for some documentation and reasonably stable APIs. Other frameworks seem to provide that.


> Netlify could run a redis instance per application, though more realistically it needs its own implementation of a multi-tenant caching service that is secure, can scale, cost effective, and fits their operational model. They are not willing to invest in that.

But they have done that, as they say in the post.

Disclosure: used to work at Netlify, now work at Astro


The ability to self-host on serverless compute is just a subset of the challenges. Even with your own VPS or ECS setup, I think it's legitimate to say a serious project requires server redundancy, and as soon as you have more than one running server it opens up requirements to synchronize cache state, which is a significant challenge. This has nothing to do with serverless architecture.


It's their whole business model, to convince developers that their walled garden is the only viable option. I've met a lot of newish developers that believe it too.


> suggesting that side stepping lock in altogether by simplifying down to traditional techniques is not “serious” makes me bristle a little

This is a strawman. You're misinterpreting the word "serious". They are using it to mean scalable, not as a judgment of importance or ability. At some point in the scaling process it becomes more effective to scale to another machine than to stay on a single one, at which point you need a lot of other primitives, like the article mentions, e.g. a shared cache with proper invalidation mechanisms. If you don't need scale, then you're right, you don't have to worry about this. I will also note that it is slightly odd to use a framework like Next.js if you aren't running (or planning to run) at scale, because most of its features (e.g. SSR) are entirely performance-oriented. Essentially, the whole point of the article is that despite it being "open source", you cannot run Next.js at scale yourself without a massive investment of your own.


> Essentially, the whole point of the article is that despite being "open source" you cannot run next.js at scale yourself without a massive investment of your own.

I don't know about that. Asking a service provider to provide an implementation for a cache interface isn't a "massive investment". It's an investment, sure, but it's the type of investment that should be customizable per provider, depending on their needs, the technologies they want to bet on, etc. It seems to me the problem is that Netlify isn't comfortable putting in the investment to have a Next.js-specific cache service. That's understandable, considering they don't control the framework and to them it's just another option, so they don't want to invest in it too much.


(disclaimer: Netlify employee) The big challenge with the cache interface at the moment is not using Redis (personally, I love Redis). It's that this interface is far from a straightforward GET/SET/DELETE. Rather, you need to learn the default implementation and all its nuances (for different payload types), and duplicate/transform all of that logic.

The division of labor between what the framework does and what platform developers (or any other developer working on a high-scale/high-availability deployment) need to do has to be fixed. If that happens - plus better docs - you should be able to "just use" Redis.
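For reference, a minimal sketch of what such a handler looks like. Next.js does let you register a custom handler (via `cacheHandler` in next.config.js) with `get`/`set`/`revalidateTag` methods; the entry and context shapes below are simplified assumptions, and the nuance described above lives in the many payload kinds a real handler has to cover.

```javascript
// Sketch of a custom cache handler (e.g. ./cache-handler.js). Method names
// follow the public Next.js docs; entry/ctx shapes are simplified guesses.
class CacheHandler {
  constructor(options) {
    this.options = options;
    this.store = new Map(); // a real handler would talk to Redis etc.
  }
  async get(key) {
    return this.store.get(key) ?? null;
  }
  // In reality `data` differs per payload type (fetch cache, rendered
  // routes, ...), which is where the complexity mentioned above lives.
  async set(key, data, ctx = {}) {
    this.store.set(key, {
      value: data,
      lastModified: Date.now(),
      tags: ctx.tags ?? [],
    });
  }
  // Tag invalidation is why a plain GET/SET/DELETE store isn't enough: you
  // need a tag-to-keys index (or a full scan, as here) to purge correctly.
  async revalidateTag(tag) {
    for (const [key, entry] of this.store) {
      if (entry.tags.includes(tag)) this.store.delete(key);
    }
  }
}

module.exports = CacheHandler;
```

Per the docs, this would be wired up in next.config.js with `cacheHandler: require.resolve('./cache-handler.js')`, alongside `cacheMaxMemorySize: 0` to disable the default in-memory cache.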



