Hey HN, we're Sebastian and Mish, and we're working on EdgeNode, a global serverless deployment platform.
Most applications are hosted in a single location, which makes the experience worse for users farther away due to high latency. Higher latency decreases conversion rates, impairs user interactions, and ultimately reduces customer satisfaction.
Amazon and Google operate their services globally to eliminate this issue and increase service reliability. However, setting up and maintaining such an infrastructure is technically challenging and very costly.
With EdgeNode, we want to enable companies to scale globally in a cost-effective and developer-friendly way.
EdgeNode runs your existing apps (Next.js, Ruby on Rails, Laravel, Spring, Django) or Docker containers globally on the edge, no modifications required. Your application scales down to zero instances when there are no active requests, so you only pay for the resources you use.
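A deployment for such a platform might be described with a config along these lines (entirely hypothetical — this is not EdgeNode's actual format, just an illustration of the scale-to-zero, run-everywhere idea):

```yaml
# Hypothetical EdgeNode-style deployment config (illustrative only)
app: my-rails-app
image: ./Dockerfile        # or a framework preset like "rails"
regions: all               # replicate to every edge region
scaling:
  min_instances: 0         # scale to zero when there are no active requests
  max_instances: 10
port: 3000
```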
We're committed to making global deployments as simple as possible. To achieve that, we will offer a one-click deployment process, eliminating complex configuration and minimizing the time and effort required to get your application up and running on the edge.
Please sign up for Early Access and let us know what you think!
Makes sense, so if I already have a tRPC application, what's the transition like to Garph? Or do I need to have it already in Garph when first starting out (ie hard to transition)?
Depends on your application size. The hardest part to transition will probably be the client.
We will provide a migration path in our documentation and build some utilities to automate the work required. You can join our Discord to get an update when we have something to share (linked in the repo).
One big difference in philosophy here is that tRPC is not designed to be used for cases where you have third party consumers. It's built for situations where you control both ends. (That said, you can use trpc-openapi for the endpoints that are public)
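A minimal sketch of what "controlling both ends" buys you — plain TypeScript with no tRPC, and the names here are illustrative, not tRPC's actual API:

```typescript
// A shared contract module imported by both server and client.
// Renaming or retyping a field is a compile error on both sides at once,
// which is why this style assumes you control both ends.
type GreetInput = { name: string };
type GreetOutput = { message: string };

// Server-side implementation of the contract.
function greetHandler(input: GreetInput): GreetOutput {
  return { message: `Hello, ${input.name}!` };
}

// Client-side call site: the compiler enforces the same types,
// so server and client cannot drift apart within one deployment.
const result: GreetOutput = greetHandler({ name: "HN" });
console.log(result.message); // → "Hello, HN!"
```

Third-party consumers don't share this compile step, which is why tRPC points them at trpc-openapi instead.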
On versioning: it's 2023 & in most cases, you can solve versioning in other ways than maintaining old API versions indefinitely. For RN there's OTA, for web you just need to reload the page or "downgrade" your SPA-links to a classic link to get a new bundle (did an example here https://twitter.com/alexdotjs/status/1627739926782480409)
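For the web case, the "downgrade your SPA-links to a classic link" trick boils down to a check like this (a sketch — the version source and names are made up, e.g. a build-time constant vs. a `/version` endpoint):

```typescript
// Decide whether an in-app navigation should become a full page load,
// so the browser fetches the newest bundle instead of staying on the stale one.
function shouldHardNavigate(bundleVersion: string, latestVersion: string): boolean {
  return bundleVersion !== latestVersion;
}

// In a link component: fall back to a classic <a>-style navigation when stale.
// Returns a tagged string here so the logic is testable without a browser;
// a real app would call window.location.assign(href) for the "hard" case.
function navigate(href: string, bundleVersion: string, latestVersion: string): string {
  return shouldHardNavigate(bundleVersion, latestVersion)
    ? `hard:${href}`   // full page load → new bundle
    : `spa:${href}`;   // client-side router push
}
```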
Also, we'll release tooling to help keep track of changes in cases where you can't update the clients as easily.
GraphQL is amazing but it isn't a silver bullet either, it has its own complexity that you have to accept as well.
> On versioning: it's 2023 & in most cases, you can solve versioning in other ways than maintaining old API versions indefinitely. For RN there's OTA, for web you just need to reload the page or "downgrade" your SPA-links to a classic link to get a new bundle (did an example here https://twitter.com/alexdotjs/status/1627739926782480409)
I would suggest you think more deeply about this problem.
In all the examples you listed there is a population on the new version and one on the old, even in the happy case. Distributed systems are not instant, and changes take time to propagate. Publishing a new version of your website / RN bundle to a CDN does not mean all edge locations are serving it. Long-lived single-page applications (Gmail, for example) are not typically refreshed often. For small applications this may not be an issue -- but at a scale of millions of users, the population that now receives a 500 due to a bad request is significant, even if clients run both versions for only seconds or minutes while the backend supports just the latest.
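One standard mitigation for that skew window is to make the backend accept both shapes during the rollout instead of 500ing. Roughly (the field names are invented for illustration):

```typescript
// Old clients send { user_name }, new clients send { userName }.
// During the rollout window the server normalizes both shapes
// instead of rejecting the older one with a 400/500.
type OldPayload = { user_name: string };
type NewPayload = { userName: string };

function normalize(payload: OldPayload | NewPayload): { userName: string } {
  if ("userName" in payload) return { userName: payload.userName };
  return { userName: payload.user_name };
}
```

The old shape only gets dropped once telemetry shows no client is still sending it.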
> Also, we'll release tooling to help keeping track of changes in cases where you can't update the clients as easily.
The ease of updating the clients doesn't solve this issue, and avoiding leaking it into your framework because it's messy or introduces constraints won't make it go away.
Thanks for the input! We have thought about it a lot.
The biggest challenge with tRPC right now is that the API is transient, so it's not obvious when you might be breaking clients in flight, and you often can't guarantee perfect sync of deployments, as you're rightly pointing out.
Once we have some more tooling around it, you'll be able to get the benefits of a traditional API where you consciously choose when to break the spec, but with the productivity benefits of not having to manually define it. I think that will scale pretty well.
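The kind of check such tooling could run is a diff of the generated spec against the last released one — a toy sketch (my illustration, not the actual planned tooling):

```typescript
// A toy breaking-change check over response shapes:
// removing or retyping a field is breaking; adding one is not.
type Shape = Record<string, string>; // field name -> type name

function breakingChanges(oldShape: Shape, newShape: Shape): string[] {
  const problems: string[] = [];
  for (const [field, type] of Object.entries(oldShape)) {
    if (!(field in newShape)) problems.push(`removed field: ${field}`);
    else if (newShape[field] !== type) problems.push(`retyped field: ${field}`);
  }
  return problems;
}
```

Run in CI against the previously published shape, this turns "did I just break clients in flight?" into a failing build instead of a production 500.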
> One big difference in philosophy here is that tRPC is not designed to be used for cases where you have third party consumers. It's built for situations where you control both ends. (That said, you can use trpc-openapi for the endpoints that are public)
I'm a happy tRPC user, and this is my use case. Our web application has no client other than our web frontend. I can't see a situation when it would (and I would bet this is true for most web applications), so I am very happy with how tRPC has worked out.
I did recently create a more limited data API, and for that I used express-zod-api [0] which I like very much.
> Could you please tell us more about your strong opinion against test generation?
I don't believe generated tests can test anything relevant; to me they're just mental load. One day they break, who knows why, and you have to fix them without knowing how they work or what they do, even doubting the value they add.
To me tests should be business focused and thoroughly thought out. Knowing that your endpoint returns a JSON with a certain format is far from enough:
- it also has to return data that makes sense, and generated tests cannot give me that
- and more importantly, the API endpoints that are the most useful to test have side effects. A PUT will probably modify data in a database somewhere. A POST may trigger an asynchronous action. Inputs and outputs are worth testing thoroughly, but you have to check the side effects too. In a good test you'll check what's in the db before and after your actions. That's why I love Venom: it allows me to precisely check the state of my whole system.
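A side-effect-checking test of the kind described might look like this, sketched against an in-memory stand-in for the database (the handler and store are invented for illustration):

```typescript
// In-memory stand-in for the database.
const db = new Map<string, { title: string }>();

// The endpoint under test: a PUT that upserts a record (the side effect).
function putArticle(id: string, body: { title: string }): void {
  db.set(id, body);
}

// Business-focused test: assert the db state before AND after the call,
// not just the shape of the response.
function testPutCreatesRow(): void {
  if (db.has("a1")) throw new Error("precondition failed: row must not exist yet");
  putArticle("a1", { title: "Hello" });
  const row = db.get("a1");
  if (!row || row.title !== "Hello") throw new Error("row not written as expected");
}
testPutCreatesRow();
```

A schema-generated test can only ever cover the response-shape half of this; the before/after state checks are exactly the part that has to be thought out by hand.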
To be fair, I also have strong feelings against generated code in general because it's always been an impediment more than anything. I worked with JHipster for some time (a Java generator) and it's still nightmare fuel to me.
Again, take this with a grain of salt, others may have a different vision on how testing should work.
You can generate tests from your OpenAPI definition using the “import from OpenAPI” button on our website.
The tests are generated from your request/response examples. If there are no examples given/available, we will generate examples for you based on the schema provided.
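Schema-based example generation presumably works along these lines (a guess at the mechanism, not their actual implementation):

```typescript
// Derive a plausible example value from a minimal JSON-Schema-like type.
type Schema =
  | { type: "string" }
  | { type: "integer" }
  | { type: "boolean" }
  | { type: "object"; properties: Record<string, Schema> };

function exampleFor(schema: Schema): unknown {
  switch (schema.type) {
    case "string": return "string";
    case "integer": return 0;
    case "boolean": return true;
    case "object": {
      // Recurse into each property to build a full example object.
      const out: Record<string, unknown> = {};
      for (const [key, sub] of Object.entries(schema.properties)) {
        out[key] = exampleFor(sub);
      }
      return out;
    }
  }
}
```

Such a generator can only produce type-valid values, not necessarily semantically "positive" ones — which is exactly the question the reply below raises.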
Thanks for the quick answer! Are the examples generated based on the types? As in, if a field is set as an integer, will the tests pass arbitrary integers? But then how does it know how to produce positive examples? Do you maybe have some documentation on this functionality?
Disclaimer: I am building EdgeNode with my friend.