In practice today a CA would want to retain actual physical control over the infrastructure signing for your intermediate, meaning the hardware would live in their data centre and they would administer it. Presumably you'd be paying for that (rack space in a high-availability secure data centre), so it's not going to be free, and it doesn't give you any capability you wouldn't have via something like ACME.
Giving you physical control over the intermediate basically means they're staking their entire reputation on you doing what you promised and never screwing up.
You could imagine this working with a constrained intermediate, except I can more or less guarantee that the day after you commit to such a thing you'll discover a client you care about can't handle the constraint, so you ask for it to be relaxed, whereupon you are back in exactly the same situation.
Mozilla requires CAs to tell them about every unconstrained intermediate, so there's actually a complete list you can go look at. It is not large.
What is an unconstrained intermediate, and what about a constrained intermediate (one which could only issue certificates for subdomains of a given domain)?
The main use case I have in mind is a service which provides HTTPS to its clients, where the clients serve content from their own endpoints. Think of home devices. The proper way to do this is to generate a private key on the device, send the public key somewhere, and receive a proper certificate for device-0123.example.com. This could be automated with Let's Encrypt and DNS verification, but there's a rate limit of 50 certificates per registered domain per week, which would require registering plenty of domains and rotating through them. Using an intermediate certificate would solve that issue.
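To make that provisioning flow concrete, here is a minimal device-side sketch using Python's cryptography package (my choice of tooling, not something from the question); the device name device-0123.example.com is taken from the example above, and however the CSR actually gets submitted and signed (ACME, a deal with a CA, etc.) is out of scope:

```python
# Device-side sketch: generate a key locally and build a CSR for this device's name.
# Assumes the third-party "cryptography" package; the backend that validates DNS and
# returns the signed certificate is not shown.
from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.x509.oid import NameOID

DEVICE_NAME = "device-0123.example.com"  # name from the example above

# The private key is generated on the device and never leaves it.
key = ec.generate_private_key(ec.SECP256R1())

# The CSR carries the public key plus the DNS name we want certified.
csr = (
    x509.CertificateSigningRequestBuilder()
    .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, DEVICE_NAME)]))
    .add_extension(
        x509.SubjectAlternativeName([x509.DNSName(DEVICE_NAME)]), critical=False
    )
    .sign(key, hashes.SHA256())
)

# This PEM blob is what gets shipped off to whatever completes validation and
# hands back the certificate for device-0123.example.com.
csr_pem = csr.public_bytes(serialization.Encoding.PEM)
print(csr_pem.decode())
```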
X.509 has a feature called constraints, where you say e.g. "This is a CA but it must only issue for names under .foo.example" or "This is a CA but it can only issue for S/MIME" and to make these effective you set a bitflag that means "If you are parsing this certificate and you don't understand this part, the certificate is invalid for you".
The consequence of that second element is that clients which don't understand the constraint (and thus wouldn't know which leaves under the constrained intermediate are or are not trustworthy) mustn't trust the intermediate at all. This means that if you need those clients, you cannot use constraints they don't understand, because they'll reject your entire intermediate.
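For illustration, here is a rough sketch of the two extensions that make an intermediate name-constrained, again using Python's cryptography package purely as an example (the names and throwaway keys are mine, not anything a real CA would ship); the critical=True flag on the name constraints is exactly the "reject this if you don't understand it" bit described above:

```python
# Sketch of a name-constrained intermediate, using the "cryptography" package.
# Key material and the parent CA are stubbed with throwaway keys so the snippet runs.
import datetime

from cryptography import x509
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.x509.oid import NameOID

ca_key = ec.generate_private_key(ec.SECP256R1())   # stand-in for the parent CA's key
int_key = ec.generate_private_key(ec.SECP256R1())  # the intermediate's own key


def name(cn):
    return x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, cn)])


now = datetime.datetime.now(datetime.timezone.utc)

builder = (
    x509.CertificateBuilder()
    .subject_name(name("Constrained intermediate"))
    .issuer_name(name("Parent CA"))
    .public_key(int_key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + datetime.timedelta(days=365))
    # "This is a CA" (and path_length=0 means it may only issue leaves, not further CAs).
    .add_extension(x509.BasicConstraints(ca=True, path_length=0), critical=True)
    # "It must only issue for names under foo.example": per RFC 5280 a DNS constraint
    # of "foo.example" covers foo.example and every name beneath it. critical=True is
    # the bitflag discussed above: a client that doesn't understand name constraints
    # must reject this certificate outright.
    .add_extension(
        x509.NameConstraints(
            permitted_subtrees=[x509.DNSName("foo.example")],
            excluded_subtrees=None,
        ),
        critical=True,
    )
)

intermediate_cert = builder.sign(ca_key, hashes.SHA256())
print(intermediate_cert.extensions)
```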
Mozilla defines constrained as it pertains to the problems they care about, so for example they do not consider a CA constrained if it lacks a constraint they require.
For your use-case there are two practical options:
1. Most suitable for commercial projects, e.g. you're a startup selling a new IoT device: talk to a commercial CA and work out a deal where you get what you need. Outfits like Sectigo strike deals like this all the time. They understand that you don't want to pay high per-certificate prices for this problem, but on the other hand you may be able to guarantee a minimum volume, which makes commercial sense for them compared to piecemeal orders.
2. For a hobby project, talk to Let's Encrypt about getting an exemption to the rate limit for your specific application, or, if in fact the devices are owned by third parties, ask whether the domain should be on the Public Suffix List and thus exempt from the per-domain rate limit anyway. (The PSL also means these devices can't share HTTP cookies with each other, which might well be exactly what you wanted anyway.)
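As a rough illustration of why the PSL matters here: both Let's Encrypt's per-domain rate limit and browser cookie scoping use the PSL to decide where the "registered domain" boundary sits. A quick sketch with the third-party tldextract package (my choice of library, not something from the thread):

```python
# Sketch: how the Public Suffix List sets the "registered domain" boundary.
# tldextract bundles (and can refresh) a copy of the PSL.
import tldextract

ext = tldextract.extract("device-0123.example.com")
print(ext.registered_domain)  # "example.com"
print(ext.subdomain)          # "device-0123"

# Today example.com is not a public suffix, so every device falls under one
# registered domain -- and one rate-limit bucket. If example.com were added to
# the PSL, device-0123.example.com would itself become the registered domain:
# a separate rate-limit bucket, and cookies set by one device couldn't be
# scoped to cover the others.
```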