Thanks for asking. I initially had them like you suggested, but the way I was doing it made it hard to keep responsive. For the hero especially, the words didn't line up without lots of adjusting.
I'm not really good at css though, so there's that.
I wonder if we can build a Next.js app with this new `edge-runtime` mode, and host it on any platform that supports `edge-runtime` [0] APIs (like Cloudflare Workers (I think Vercel uses it?) and Deno Deploy).
If yes, that's truly amazing. It'll empower more and more people to run stuff at the Edge. I'm also working on an open-source alternative to the above offerings [1], so I would love to be able to run and support Next.js on it.
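To make it concrete, here's the kind of route I'm imagining: a minimal sketch assuming the `experimental-edge` flag from Next.js 12.2 (the exact config key may change), where only Web-standard Request/Response APIs are used:

```ts
// pages/api/hello.ts - a Next.js Edge API Route (sketch)
import type { NextRequest } from 'next/server';

// Opt this route into the edge runtime instead of Node.js.
export const config = { runtime: 'experimental-edge' };

export default async function handler(req: NextRequest): Promise<Response> {
  // Only Web-standard APIs (Request, Response, URL, fetch) are used here,
  // which is what should make it portable to Workers or Deno Deploy.
  const name = new URL(req.url).searchParams.get('name') ?? 'world';
  return new Response(JSON.stringify({ hello: name }), {
    headers: { 'content-type': 'application/json' },
  });
}
```

If the runtime really is just the standard `edge-runtime` API surface, a handler like this shouldn't care which platform it ends up on.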
Great news! Feel free to DM me on Twitter (@tomlienard) if you want to chat about this, as I also want to support Next.js in the future for my runtime.
We started Digger 1.5 years ago and launched it last summer. It was well received but didn't exactly take off as we hoped. It clearly needed more work. We also realised that, as a startup of 2 at the time, we were trying to solve way too many problems at once; it is simply impossible to build a full-featured platform with a tiny team.
We then launched a number of more focused products, all powered by the same Digger engine. 2 did better than the other 5 or so. Lemon was an alternative UI for AWS; Alicorn was a multi-cloud offering for containers. Those launches helped us realise that some of the core architectural assumptions we had were wrong. We also needed to rework the UX because people were getting confused by the split between Services and Environments, as well as separation of Infrastructure and Software deployments.
So we went back to the drawing board. We radically simplified the UX, removed the confusing parts, introduced keyless AWS connection with a narrowed-down permission scope, and many other things. The question remained: which use case do people care about most? To answer it we started launching smaller products again, all powered by the Digger engine with tweaked configurations.
Many people liked AWS Bootstrap. It allows you to quickly configure your AWS account to run frontend, backend and a database of your choice. Quite literally bootstrap.
Another thing that was well received was Terragen. We made it all about auto-generation of Terraform. As soon as the user connects their AWS account, they can export the generated Terraform into their GitHub.
With OpsFlow we took the learnings from AWS Bootstrap and Terragen and made the UI even simpler. It no longer bothers the user with optional stuff, it is all moved to the new Settings page. And it's centered around 2 simple types of building blocks - Apps and Resources.
OpsFlow is the closest we got so far to making something people want. Still a long way to go though :)
I would love to see more in-depth explanations about Oxygen [0]. I'm currently creating a similar runtime [1] based on V8 Isolates so that would be interesting to compare.
Look for a blog post about Oxygen in the coming weeks! Initially, we're partnering with Cloudflare using Workers for Platforms [0] so Oxygen's runtime shares many of the same APIs you'd expect to see in a Cloudflare Worker [1].
Isn't this the nature of trying to capture more profit? Of course, once proven, they would like to vertically integrate; I take this as true in all circumstances.
Same same, the SSR piece means they’d have to build some kind of storefront PaaS, and it wasn’t immediately obvious what the enabling runtime is (or more interestingly for me, how it ticks).
I’m currently building a FaaS runtime using v8 isolates, which I hope to open-source soon. That’s actually not that hard, since isolates are, well, isolated from each other.
Performance-wise, it’s also very promising. For a simple hello world, the "cold start" (which is mostly compile time) is around 10ms, and on subsequent requests it runs in 1ms.
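To give an idea of the approach (a simplified sketch, not my actual code; the isolated-vm package and the numbers here are just illustrative):

```ts
import ivm from 'isolated-vm';

// Keep "warm" isolates per function so only the first request pays the
// compile cost (the ~10ms cold start); later runs reuse the compiled script.
const warm = new Map<string, { script: ivm.Script; context: ivm.Context }>();

async function invoke(fnId: string, source: string): Promise<string> {
  let entry = warm.get(fnId);
  if (!entry) {
    const isolate = new ivm.Isolate({ memoryLimit: 128 }); // per-tenant memory cap (MB)
    const context = await isolate.createContext();
    const script = await isolate.compileScript(source);    // the expensive part
    entry = { script, context };
    warm.set(fnId, entry);
  }
  // Warm path: no compilation, just execution inside the tenant's isolate.
  return entry.script.run(entry.context, { timeout: 50 });
}

// invoke('hello', '"hello " + "world"').then(console.log);
```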
Can you give a link to this? Cloudflare (Workers) and Deno (Deploy) both use v8 isolates for their runtimes, and I believe some significant clients run production code on them (huge clients like Vercel and Supabase use these solutions).
Edit:
> If you execute untrusted JavaScript and WebAssembly code in a separate process from any sensitive data, the potential impact of SSCA is greatly reduced. Through process isolation, SSCA attacks are only able to observe data that is sandboxed inside the same process along with the executing code, and not data from other processes.
I do run isolates in separate processes to prevent security issues, even if that may not be enough. Still an early prototype for now.
As long as you run each customer in a separate OS-level process, you should be good. But then, that is not much different from Lambda or other FaaS implementations.
For now, each process runs many isolates, but a single server runs many processes. Cloudflare has implemented a similar mechanism [1]:
> Workers are distributed among cordons by assigning each worker a level of trust and separating low-trusted workers from those we trust more highly. As one example of this in operation: a customer who signs up for our free plan will not be scheduled in the same process as an enterprise customer. This provides some defense-in-depth in the case a zero-day security vulnerability is found in V8.
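A rough sketch of that kind of trust-based grouping (hypothetical names, far from production-grade):

```ts
import { fork, ChildProcess } from 'node:child_process';

type Trust = 'free' | 'paid' | 'enterprise';

// One pool of worker processes per trust level ("cordon"), so a V8 zero-day
// in a free-tier isolate can only observe other free-tier tenants.
const cordons = new Map<Trust, ChildProcess[]>();

function processFor(trust: Trust): ChildProcess {
  let pool = cordons.get(trust);
  if (!pool) {
    // 'isolate-host.js' is a hypothetical entry point that hosts many
    // isolates inside one OS process.
    pool = [fork('isolate-host.js'), fork('isolate-host.js')];
    cordons.set(trust, pool);
  }
  // Naive load balancing: pick a random process within the cordon.
  return pool[Math.floor(Math.random() * pool.length)];
}

// A free-plan customer never shares a process with an enterprise customer.
processFor('free').send({ run: 'tenant-123', code: '1 + 1' });
```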
Because a subscription pricing model brings in more money and a more predictable revenue stream. Most people selling apps on a pay-once basis can't afford to support it and develop new features indefinitely, so they move on to the next project and it goes into maintenance mode, only getting updates when a new OS update breaks it.
Also needs to pay for the cloud servers running it lol.
> goes into maintenance mode, only getting updates when a new OS update breaks it.
I don't see anything wrong with this. Users buy software because it solves their current problem now, rather than a possible feature in the future. The revenue stream is definitely more reliable though. Just that as a user I wouldn't mind if the software I bought today stays that way forever, and as a developer I wouldn't mind developing software to completion then leaving it as that.
Ok I can't thank you enough for exposing me to this channel. The aesthetics and vim and the complete coding is so relaxing. Do you have any other channel recommendations like this?
Similar to Lambda@Edge, Cloudflare also offers Workers (https://workers.cloudflare.com/), which is the same idea but with only a JavaScript runtime (no Node.js; they run V8 isolates directly), so I believe it's significantly faster.
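For anyone who hasn't seen one, a Worker is basically just a fetch handler against Web-standard APIs; roughly (module syntax):

```ts
// A complete Cloudflare Worker: no Node.js, just the Web-standard
// Request/Response/URL APIs running directly on V8 isolates.
export default {
  async fetch(request: Request): Promise<Response> {
    const url = new URL(request.url);
    return new Response(`Hello from the edge, you asked for ${url.pathname}`, {
      headers: { 'content-type': 'text/plain' },
    });
  },
};
```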
CloudFront Distributions cannot pass the request to CloudFront Functions before sending to the origin. In other words, they cannot be used to modify origin request/responses. They can only modify the viewer request/responses. [0]
Only Lambda@Edge can handle the scenario I described, and it is also AWS's recommended solution. [1]
A viewer request function can, of course, be used to modify the request before it is sent to the origin. The difference is only related to caching — viewer request happens before the cache lookup, origin request happens after the cache lookup when CF has decided to make an origin request.
For the stated purpose, either function is fine:
With Lambda@Edge you'd use origin request if you were caching these paths, because your function would be called less often so your costs would be lower.
With CF Functions you can only do the pre-cache-lookup modification, so it will be called for every request, but the cost is much lower than Lambda@Edge so it may not matter, and maybe you were not caching these paths anyway in which case it's virtually identical.
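For example, the pre-cache-lookup rewrite as a CloudFront Function would look roughly like this (a sketch: the deployed function has to be plain JavaScript, and the event shape below is simplified):

```ts
// Simplified shape of the viewer-request event.
interface ViewerRequestEvent {
  request: { uri: string };
}

// Runs on every viewer request, before the cache lookup, and rewrites the URI
// that CloudFront then uses for both the cache key and the origin request.
function handler(event: ViewerRequestEvent) {
  const request = event.request;
  if (request.uri.startsWith('/api/')) {
    request.uri = request.uri.replace(/^\/api/, ''); // e.g. /api/users -> /users
  }
  return request;
}
```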
Modifying the URL in a viewer request results in the viewer request changing accordingly. I explained that in detail in my responses to other comments.
Basically, using the Viewer Request trigger and modifying the URI there causes CloudFront to force a 301 back to the user for the new URI, because it applies to the viewer. So in the example of /api/users, if you strip /api in the viewer request, CloudFront removes /api from the request URI itself: the client requests /api/users but gets sent to /users (read: the server returns a 301 with location: /users when you hit /api/users) because of your viewer rewrite. You end up hitting your frontend instead of your backend, because for the request to reach your backend, the viewer request still has to contain /api. Therefore you cannot strip it in the viewer request; you must do it in the origin request, which CloudFront Functions do not support.
True, if it is down, then that means AWS is down (well, not necessarily, obviously). :D But honestly, if they want to monitor AWS, they gotta pick something else for exactly this reason: something that is not down when AWS is.
But we're talking about a status page, which should be basically static. In its simplest form you need a rack in 2+ random colos and a few people to manage the page update framework. Then you make teams submit the tests that are used to validate the SLA. Run the tests from a few DCs and rebuild the status page every minute or two.
Maybe add a CDN. This shit isn't rocket science and being able to accurately monitor your own systems from off infrastructure is the one time you should really be separate.
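Conceptually it's just this kind of loop (hypothetical probe URLs, nothing AWS-specific):

```ts
import { writeFileSync } from 'node:fs';

// Each team submits a check that validates its SLA from outside AWS.
interface Check {
  service: string;
  url: string;
}

const checks: Check[] = [
  { service: 'S3 (us-east-1)', url: 'https://example-bucket.s3.amazonaws.com/ping' }, // hypothetical probe
  { service: 'EC2 API', url: 'https://ec2.us-east-1.amazonaws.com/ping' },            // hypothetical probe
];

async function rebuildStatusPage(): Promise<void> {
  const rows = await Promise.all(
    checks.map(async ({ service, url }) => {
      try {
        const res = await fetch(url, { signal: AbortSignal.timeout(5000) });
        return `<tr><td>${service}</td><td>${res.ok ? 'OK' : 'DEGRADED'}</td></tr>`;
      } catch {
        return `<tr><td>${service}</td><td>DOWN</td></tr>`;
      }
    }),
  );
  // Plain static HTML: push it to a couple of colos or a CDN and you're done.
  writeFileSync('status.html', `<table>${rows.join('')}</table>`);
}

// Rebuild every minute or two.
setInterval(rebuildStatusPage, 60_000);
```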
They have a bazillion Alexa and Kindle devices out there that they could monitor from, heh heh. At least let that phone-home behaviour do something useful, like notice AWS is down.
AWS wouldn't monitor itself from a competitor, of course.
Why not? The big tech companies use each other all the time.
For example, set up a new firewall on macOS and you can see how many times Apple pulls data from Amazon or Azure or other competitors' APIs and services.
But the idea that Amazon or Microsoft or Google would host anything at Apple is pretty out there.
Apple uses their competitor's services because they can't build their own cloud and host their own shit. The big boys don't use competitors for services they are capable of building themselves.
A similar reason drives businesses to host `status.product.bigcorp` on a different server. And if your product is a cloud then your suggestion makes sense.
Considering the sea of bright green circles, reds might stand out, but blues get lost in a fast scroll. Perhaps fade or mute the green icons to improve the visibility of the non-green ones, which are the interesting information?
Now that they're back up, they're not reporting any problems. How is it supposed to work? It looks like it is just repeating the status reported on the Amazon status page.
Yes - but here, you can also save host credentials (IP, port, user, password/key...) on a remote server. This data is E2E encrypted, and the key to decrypt it is a hash of your master password (the hash is stored in the credential vault of your OS, never your clear password, and the remote server only stores the encrypted host data).
This allows you to connect to your account on the app from any computer and be able to connect to your saved SSH hosts.
This also means that only your master password can unlock the encrypted data, so if you lose it, there's no way to recover the data.
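To make the scheme concrete, the client-side part looks roughly like this (a simplified sketch, not the exact code; scrypt and AES-256-GCM here are illustrative choices):

```ts
import { scryptSync, randomBytes, createCipheriv, createDecipheriv } from 'node:crypto';

// Derive the encryption key from the master password. Only this derived key
// (never the clear password) is kept in the OS credential vault.
function deriveKey(masterPassword: string, salt: Buffer): Buffer {
  return scryptSync(masterPassword, salt, 32);
}

// Encrypt the host credentials before they ever leave the machine;
// the remote server only stores this opaque blob.
function encryptHosts(hosts: object, key: Buffer): { iv: Buffer; tag: Buffer; data: Buffer } {
  const iv = randomBytes(12);
  const cipher = createCipheriv('aes-256-gcm', key, iv);
  const data = Buffer.concat([cipher.update(JSON.stringify(hosts), 'utf8'), cipher.final()]);
  return { iv, tag: cipher.getAuthTag(), data };
}

function decryptHosts(blob: { iv: Buffer; tag: Buffer; data: Buffer }, key: Buffer): object {
  const decipher = createDecipheriv('aes-256-gcm', key, blob.iv);
  decipher.setAuthTag(blob.tag);
  return JSON.parse(Buffer.concat([decipher.update(blob.data), decipher.final()]).toString('utf8'));
}

// Lose the master password and the derived key is gone: the blob is unrecoverable.
```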
Note: you will be able to self-host the server if needed.