More generally, the entire concept of using middleware which communicates using the same mechanism that is also used for untrusted user input seems pretty wild to me. It divorces the place you need to write code for user request validation (as soon as the user request arrives) from the middleware itself.
Allowing ANY headers from the user except a whitelisted subset also seems like an accident waiting to happen. I think the mindset of ignoring unknown/invalid parts of a request as long as some of it is valid also plays a role.
The framework providing crutches for bad server design is also a consequence of this mindset - are there any concrete use cases where the flow for processing a request should not be a DAG? Allowing recursive requests across authentication boundaries seems like a problem waiting to happen as well.
> More generally, the entire concept of using middleware which communicates using the same mechanism that is also used for untrusted user input seems pretty wild to me.
That's basically the same way phone phreaking worked back in the day. Time is a flat circle.
LLMs have the same problem a la "ignore previous requests".
The fundamental problem is that you always either need two signalling paths or you have to specially encode all user content so that it can never conflict with the signalling.
Those are both a pain in the ass, so people always try to figure out how to make in band signalling work.
There are mechanisms for this, like signed headers or extra auth tokens, but using those here should immediately illustrate the absurdity of a framework using headers internally to pass information to other parts of the framework.
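To make the "signed headers" idea concrete, here is a minimal sketch (all names hypothetical, and the secret would really come from the environment, not a literal): the middleware HMACs the header value with a secret the client never sees, and the backend only trusts the header if the signature verifies.

```typescript
import { createHmac, timingSafeEqual } from "crypto";

// Hypothetical shared secret, known only to the middleware and the backend.
// In practice this would come from configuration, never a literal.
const SECRET = "middleware-shared-secret";

// Middleware side: append an HMAC so the value can't be forged by a client.
function signHeader(value: string): string {
  const mac = createHmac("sha256", SECRET).update(value).digest("hex");
  return `${value}.${mac}`;
}

// Backend side: accept the header only if the signature verifies.
function verifyHeader(signed: string): string | null {
  const i = signed.lastIndexOf(".");
  if (i < 0) return null;
  const value = signed.slice(0, i);
  const mac = Buffer.from(signed.slice(i + 1), "hex");
  const expected = createHmac("sha256", SECRET).update(value).digest();
  // Compare in constant time; reject on length mismatch first so
  // timingSafeEqual doesn't throw.
  if (mac.length !== expected.length || !timingSafeEqual(mac, expected)) {
    return null;
  }
  return value;
}
```

Which, as said, works — but the fact that you'd need it at all for framework-internal signalling is the absurd part.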
Relevant parallel to this is the x-forwarded-for header and (mis)trusting it for authz.
This seems like a consequence of Vercel pushing that weird "middleware runs on edge functions" thing on NextJS, and b/c they are sandboxed they have no access to in-memory request state so the only way they can communicate w/ the rest of the framework is via in-band mechanisms like headers.
Unfortunately, in-band signalling seems to be the norm when dealing with HTTP. There isn't really a standard mechanism for wrapping up an HTTP request in a standard format and delivering it, plus some trusted metadata, over HTTP to another service.
Or if there is, and I've somehow missed it, please *please* share it with me.
Just use MIME multipart content-type to wrap an HTTP message inside another. This is commonly done for batching requests. Here is an example of how it might look: https://cloud.google.com/storage/docs/batch#http
That misses the point. The OP's original use case is for a middleware to wrap a client request. The middleware would reject such multipart requests from the client.
The middleware doesn't have to reject it. It could decide to just wrap it and pass it along. The backend code can then distinguish what was sent by the client from what was added by the middleware. And that's the point. The middleware can do as little or as much filtering as it desires, without causing any confusion to the backend.
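As a rough sketch of what that wrapping could look like (boundary and part layout are illustrative, not any particular standard's exact framing): the raw client request goes into an `application/http` part, and the middleware's trusted metadata goes into a separate part it alone controls.

```typescript
// Minimal sketch: wrap an untrusted client request plus trusted middleware
// metadata into one multipart body. The backend parses the parts, so client
// content can never masquerade as middleware signalling.
function wrapRequest(
  rawRequest: string,
  metadata: Record<string, string>
): string {
  const boundary = "mw-boundary-7f3a"; // in practice, a random value
  const metaPart =
    `--${boundary}\r\nContent-Type: application/json\r\n\r\n` +
    JSON.stringify(metadata);
  const reqPart =
    `--${boundary}\r\nContent-Type: application/http\r\n\r\n` + rawRequest;
  return `${metaPart}\r\n${reqPart}\r\n--${boundary}--\r\n`;
}
```

Even if the client's inner request itself contains multipart content, it stays inert inside its part; the signalling lives in a channel the client can't reach.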
The absurd part to me is that this is all internal to the framework, why on earth does NextJS need to wrap up an HTTP request and re-send it...to itself...?
(I think the answer is because of the "requirement" that middleware be run out-of-process as Vercel edge functions.)
That "article" looks like AI generated slop. It suggests `if (request.headers.has('x-middleware-subrequest'))` in your middleware as a fix for the problem, while the whole vulnerability is that your middleware won't be executed when that header is present.
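A check that can actually run is one at the trust boundary, before the framework sees the request at all. A hedged sketch (the proxy shape is hypothetical; the header name is the real one from the CVE):

```typescript
// Sketch: strip the internal header at the edge/reverse proxy, upstream of
// the framework. Unlike a check inside middleware — which the exploit
// prevents from executing — this runs unconditionally on every request.
function stripInternalHeaders(
  headers: Record<string, string>
): Record<string, string> {
  const out: Record<string, string> = {};
  for (const [name, value] of Object.entries(headers)) {
    if (name.toLowerCase() === "x-middleware-subrequest") continue;
    out[name] = value;
  }
  return out;
}
```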
You’re right - I was specifically referring to it giving a concrete example (which may or may not be correct) of the vulnerability as opposed to the main article just pointing in the direction of the header.
> Allowing ANY headers from the user except a whitelisted subset also seems like an accident waiting to happen.
I'm going to disagree on this. Browsers and ISPs have a long history of adding random headers, a website can't possibly function while throwing an error for any unknown header. That's just the way HTTP works.
This is clearly a case of the Next devs being silly. At a minimum they should have gone with something like `-vercel-` as the prefix instead of the standard `x-` so that firewalls could easily filter out the requests with a wildcard.
Even if they had to make things go through headers (a bad idea in and of itself, in-band signalling always causes issues), the smart move would have been to make it a non-string, such that clients would not be able to pass in a valid value.
1) Plain HTTP, go wild with headers. No system should have any authenticated services on this.
2) HTTP with integrity provided by a transport layer (so HTTPS, but also HTTP over Wireguard etc for example). All headers are untrusted input, accept only a whitelisted subset.
With this framing, I don't think it's unreasonable for a given service to make the determination of which behaviour to allow.
I guess browser headers are still a problem. But you can get most of the way by dropping them at the request boundary before forwarding the request.
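A sketch of that boundary drop, assuming a forwarding service you control (the whitelist below is illustrative, not exhaustive): unknown headers are silently dropped rather than rejected, so browser and ISP extras don't break anything but also never cross the boundary.

```typescript
// Minimal sketch: only an explicit whitelist of headers survives the trust
// boundary; everything else is dropped before the request is forwarded.
const ALLOWED_HEADERS = new Set([
  "host",
  "accept",
  "content-type",
  "content-length",
  "authorization",
  "cookie",
]);

function whitelistHeaders(
  headers: Record<string, string>
): Record<string, string> {
  const out: Record<string, string> = {};
  for (const [name, value] of Object.entries(headers)) {
    if (ALLOWED_HEADERS.has(name.toLowerCase())) out[name] = value;
  }
  return out;
}
```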
https://zeropath.com/blog/nextjs-middleware-cve-2025-29927-a...
This looks trivially easy to bypass.