Hacker News

If you're self-hosting, IMO following Vercel is not the model. Use KEDA and K8s, and if you need compute at the edge, lean into Cloudflare. That way you stay standardized, and your vendor lock-in is for best-in-class edge support.
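As a rough sketch of the KEDA side (deployment name, queue, and env var are placeholders, not from the comment), a ScaledObject that scales a Deployment on queue depth looks like:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: api-scaler              # placeholder name
spec:
  scaleTargetRef:
    name: api                   # the Deployment to scale (placeholder)
  minReplicaCount: 1
  maxReplicaCount: 20
  triggers:
    - type: rabbitmq            # any supported KEDA scaler works here
      metadata:
        queueName: jobs
        mode: QueueLength
        value: "50"             # target messages per replica
        hostFromEnv: RABBITMQ_HOST
```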


I get the appeal, but personally I stay away from k8s. I don't mind putting in work to set up my deployment pipeline, but on a day-to-day basis I just want to push code to my repo and occasionally edit environment variables. That's the sweet spot I was trying to hit.


you can do that with k8s and argo?
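For context, the push-to-deploy flow with Argo CD amounts to one manifest like this (repo URL, path, and names are placeholders):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app                  # placeholder
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/app-manifests  # placeholder repo
    targetRevision: main
    path: deploy
  destination:
    server: https://kubernetes.default.svc
    namespace: my-app
  syncPolicy:
    automated: {}               # sync automatically on every push
```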


Sure, I just wanted a simple and user-friendly experience.


Not trying to be dismissive here, but if you're deploying regularly and you have plans to scale, putting in the time to learn and automate K8s pays off very quickly. What you're doing here gives me ORM vibes: good for training wheels and for helping people who don't know the underlying stuff be productive quickly, but ultimately a source of problems you wouldn't hit if you hadn't invested in a leaky abstraction.


Are you suggesting K8S is free of leaky abstractions?


I'm suggesting that for this case it's closer to ground truth than a hand-rolled Vercel clone, because it's been battle-tested and tweaked HEAVILY.


Containers on top of K8S is radically less ground-truthy than this project, and you say "heavily tweaked" like it's a good thing. I doubt there is a single person alive who understands even half of those tweaks.


K8s for APIs and job processing, and CF workers for hosting sites at the edge is kind of the dream for me right now. Have you used their new-ish VPC networking to handle secure networking between the two?

https://www.cloudflare.com/press/press-releases/2025/cloudfl...


I have done a good amount of integration work with Cloudflare in zero-trust military environments. I used Cloudflare Tunnels to do edge integration for services in a seamless way; if you need more flexibility (zero-trust clients and integrations), WARP is the way to go.
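As a sketch of the tunnel side (tunnel name, hostnames, and paths are placeholders), a cloudflared config that fronts an internal service looks roughly like:

```yaml
# /etc/cloudflared/config.yml
tunnel: example-tunnel
credentials-file: /etc/cloudflared/example-tunnel.json
ingress:
  - hostname: api.example.com
    service: http://localhost:8080   # internal service behind the tunnel
  - service: http_status:404         # catch-all rule cloudflared requires
```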

The pattern I like is to shard customer data in D1+R2, keyed by a customer-specific DO, and have that front your core services via tunnels, then have a shared-state database fronted by hypertunnel. I like KEDA scaling of modular monoliths, with Cloudflare containers as a burst fallback if SLAs are in danger of going red. It gives you resilience and scalability without boxing you into cloud specifics, and if you're going to be married to one cloud provider it should be Cloudflare: if they go down, it's going to be an internet-wide thing.
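The per-customer sharding part of that pattern can be sketched as a Worker that routes each request to a customer-specific Durable Object. This is a minimal sketch, not the commenter's code: the binding name CUSTOMER_SHARD, the X-Customer-Id header, and the structural types are all illustrative assumptions.

```typescript
// Structural types so the sketch is self-contained (in a real Worker
// these come from @cloudflare/workers-types).
export interface ShardStub {
  fetch(req: Request): Promise<Response>;
}
export interface ShardNamespace {
  idFromName(name: string): unknown;
  get(id: unknown): ShardStub;
}
export interface Env {
  CUSTOMER_SHARD: ShardNamespace;
}

// Route each request to the Durable Object that owns that customer's
// D1/R2 data; the DO can then front core services over a tunnel.
export async function routeToShard(req: Request, env: Env): Promise<Response> {
  const customerId = req.headers.get("X-Customer-Id") ?? "anonymous";
  // idFromName is stable: the same customer always lands on the same DO.
  const id = env.CUSTOMER_SHARD.idFromName(customerId);
  return env.CUSTOMER_SHARD.get(id).fetch(req);
}
```

The stable `idFromName` mapping is what makes the DO the shard: all of a customer's traffic serializes through one object, which keeps their D1/R2 state consistent without cross-shard coordination.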




