
I wonder why the headline and the other gradient titles are images. Why not text with linear-gradient and background-clip?
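
Something along these lines is what I had in mind (a rough sketch; the selector and colors are made up):

    .gradient-title {
      background: linear-gradient(90deg, #ff5f6d, #ffc371);
      -webkit-background-clip: text; /* prefixed version for older WebKit/Blink */
      background-clip: text;
      color: transparent; /* let the gradient show through the glyphs */
    }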


Thanks for asking. I initially had them like you suggested, but the way I was doing it made them hard to keep responsive. For the hero especially, the words didn't line up without a lot of adjusting.

I'm not really good at css though, so there's that.


I wonder if we can build a Next.js app with this new `edge-runtime` mode and host it on any platform that supports the `edge-runtime` [0] APIs, like Cloudflare Workers (which I think Vercel uses?) and Deno Deploy.

If so, that's truly amazing. It'll empower more and more people to run stuff at the edge. I'm also working on an open-source alternative to the offerings above [1], so I would love to be able to run and support Next.js on it.

edit: formatting

[0]: https://edge-runtime.vercel.app/

[1]: https://github.com/lagonapp/serverless
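
For context, the programming model here is the Web-standard fetch handler, roughly like this (a sketch in the Cloudflare Workers module style; the route and payload are made up):

    export default {
      async fetch(request) {
        const { pathname } = new URL(request.url);
        if (pathname === '/api/hello') {
          return new Response(JSON.stringify({ hello: 'world' }), {
            headers: { 'content-type': 'application/json' },
          });
        }
        return new Response('Not found', { status: 404 });
      },
    };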


In the future, yes, absolutely! Cloudflare is working on support for hosting Next.js. And I'd be happy to chat with you about your solution.


Great news! Feel free to DM me on Twitter (@tomlienard) if you want to chat about this, as I also want to support Next.js in the future for my runtime.


Is there any public mention of Cloudflare's work on this? I can't find anything


I don't understand why there are two apps and domains (doing the same thing?), one redirecting to the other.

Is your startup named OpsFlow (previously digger.dev)? Or is OpsFlow a new product from Digger?


We started Digger 1.5 years ago and launched it last summer. It was well received but didn't exactly take off as we hoped. It clearly needed more work. We also realised that, as a startup of 2 at the time, we were trying to solve way too many problems at once; it is simply impossible to build a full-featured platform with a tiny team.

We then launched a number of more focused products, all powered by the same Digger engine. 2 did better than the other 5 or so. Lemon was an alternative UI for AWS; Alicorn was a multi-cloud offering for containers. Those launches helped us realise that some of the core architectural assumptions we had were wrong. We also needed to rework the UX because people were getting confused by the split between Services and Environments, as well as separation of Infrastructure and Software deployments.

So we went back to the drawing board. We radically simplified the UX, removed the confusing parts, introduced a keyless AWS connection with a narrowed-down permission scope, and many other things. The question remained: which use case do people care about most? To answer it we started launching smaller products again, all powered by the Digger engine with tweaked configurations.

Many people liked AWS Bootstrap. It allows you to quickly configure your AWS account to run frontend, backend and a database of your choice. Quite literally bootstrap.

Another thing that was well received was Terragen. We made it all about auto-generation of Terraform. As soon as the user connects their AWS account, they can export the generated Terraform into their GitHub.

With OpsFlow we took the learnings from AWS Bootstrap and Terragen and made the UI even simpler. It no longer bothers the user with optional stuff; that is all moved to the new Settings page. And it's centered around 2 simple types of building blocks: Apps and Resources.

OpsFlow is the closest we got so far to making something people want. Still a long way to go though :)


I would love to see more in-depth explanations about Oxygen [0]. I'm currently creating a similar runtime [1] based on V8 Isolates so that would be interesting to compare.

[0]: https://shopify.dev/custom-storefronts/oxygen

[1]: https://github.com/lagonapp/serverless/


Look for a blog post about Oxygen in the coming weeks! Initially, we're partnering with Cloudflare using Workers for Platforms [0] so Oxygen's runtime shares many of the same APIs you'd expect to see in a Cloudflare Worker [1].

[0]: https://blog.cloudflare.com/workers-for-platforms/

[1]: https://shopify.dev/custom-storefronts/oxygen/worker-runtime...


> initially

Does that mean you want to roll out your own runtime in the future?


Isn't this the nature of trying to capture more profit? Of course, once proven, they would like to vertically integrate; I take this as true in all circumstances


Certainly not out of the picture, but Cloudflare is a terrific solution for us right now.


Same same, the SSR piece means they’d have to build some kind of storefront PaaS, and it wasn’t immediately obvious what the enabling runtime is (or more interestingly for me, how it ticks).


I’m currently building a FaaS runtime using V8 isolates, which I hope to open-source soon. That’s actually not that hard, since isolates are, well, isolated from each other.

Performance-wise, it’s also very promising. For a simple hello world, the "cold start" (which is mostly compile time) is around 10ms, and on subsequent requests it runs in 1ms.
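
To illustrate the general idea (a sketch using the isolated-vm package for Node, not necessarily how my runtime is implemented; the memory limit and script are placeholders):

    import ivm from 'isolated-vm';

    // One isolate per deployment, with a hard memory cap.
    const isolate = new ivm.Isolate({ memoryLimit: 128 });
    const context = await isolate.createContext();

    // Compilation dominates the "cold start"; the compiled script can be
    // cached and re-run cheaply on subsequent requests.
    const script = await isolate.compileScript('"Hello " + "world"');
    console.log(await script.run(context)); // "Hello world"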


It doesn’t worry you that the v8 team specifically tells you not to do this?

eta link: https://v8.dev/docs/untrusted-code-mitigations#sandbox-untru...


Can you give a link to this? Cloudflare (Workers) and Deno (Deploy) both use V8 isolates for their runtimes, with, I believe, some significant clients running production code (huge clients like Vercel and Supabase use these solutions).

Edit:

> If you execute untrusted JavaScript and WebAssembly code in a separate process from any sensitive data, the potential impact of SSCA is greatly reduced. Through process isolation, SSCA attacks are only able to observe data that is sandboxed inside the same process along with the executing code, and not data from other processes.

I do run isolates in separate processes to prevent security issues, even if that may not be enough. Still an early prototype for now.


I'm talking about this: https://v8.dev/docs/untrusted-code-mitigations#sandbox-untru...

As long as you run each customer in a separate OS-level process, you should be good. But then, that is not much different from Lambda or other FAAS implementations.


For now, each process runs many isolates, but a single server runs many processes. Cloudflare has implemented a similar mechanism [1]:

> Workers are distributed among cordons by assigning each worker a level of trust and separating low-trusted workers from those we trust more highly. As one example of this in operation: a customer who signs up for our free plan will not be scheduled in the same process as an enterprise customer. This provides some defense-in-depth in the case a zero-day security vulnerability is found in V8.

[1] https://blog.cloudflare.com/mitigating-spectre-and-other-sec...
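
A very rough sketch of that kind of split in Node (the trust levels and worker script here are hypothetical):

    import { fork } from 'node:child_process';

    // One OS process ("cordon") per trust level: a V8 escape in the low-trust
    // process cannot read memory belonging to higher-trust tenants.
    const cordons = new Map();

    function processFor(trustLevel) {
      if (!cordons.has(trustLevel)) {
        // worker.js would host many isolates for tenants of this trust level.
        cordons.set(trustLevel, fork('./worker.js', [trustLevel]));
      }
      return cordons.get(trustLevel);
    }

    processFor('free').send({ deployment: 'abc', event: 'fetch' });
    processFor('enterprise').send({ deployment: 'xyz', event: 'fetch' });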


The blogpost is from two years ago.


The guy from Inkdrop [0] makes a living with his note-taking app. He also has a YouTube channel [1] which I found very relaxing.

[0] https://www.inkdrop.app

[1] https://www.youtube.com/c/devaslife


Uses a subscription pricing model though. I wonder why he chose that model. Personally I love the Sublime model: buy a license, use it anywhere.


Because a subscription pricing model brings in more money and a more predictable revenue stream. Most people selling apps on a pay-once basis can't afford to support them and develop new features indefinitely, so they move on to the next project and the app goes into maintenance mode, only getting updates when a new OS update breaks it.

Also needs to pay for the cloud servers running it lol.


> goes into maintenance mode, only getting updates when a new OS update breaks it.

I don't see anything wrong with this. Users buy software because it solves their current problem now, not for some possible feature in the future. The revenue stream is definitely more reliable though. It's just that as a user I wouldn't mind if the software I bought today stayed that way forever, and as a developer I wouldn't mind developing software to completion and then leaving it at that.


I think it syncs the data between instances of the app (in the cloud; 10GB or something like that).


By default, it's synced on Inkdrop's servers, but you can self-host your own DB if you want. It's well described in the docs: https://docs.inkdrop.app/manual/synchronizing-in-the-cloud


... in which case paying per month makes no sense.


Ok, I can't thank you enough for exposing me to this channel. The aesthetics, the vim, and the complete coding sessions are so relaxing. Do you have any other channel recommendations like this?


I follow him on YT. His videos have amazing aesthetics, and the tech content displayed is simply mind-boggling to me as a non-tech person.


Seems to be doing quite well too, considering all the hardware on his channel.


Similar to Lambda@Edge, Cloudflare also offers Workers (https://workers.cloudflare.com/), which is the same thing but with only a JavaScript runtime (no Node.js; they use V8 directly), so I believe it's significantly faster.


CloudFront now has CloudFront Functions, which is similar: a very stripped-down JS environment that runs at the edge locations.
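
A CloudFront Function is just a handler(event) written in that restricted JS dialect, something like this (a sketch; the header name is made up):

    // Viewer-request function: runs at every edge location, before the cache lookup.
    function handler(event) {
      var request = event.request;
      request.headers['x-served-by'] = { value: 'cloudfront-function' };
      return request;
    }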


CloudFront Distributions cannot pass the request to CloudFront Functions before sending to the origin. In other words, they cannot be used to modify origin request/responses. They can only modify the viewer request/responses. [0]

Only Lambda@Edge can help the scenario which I provided, which is also AWS's recommended solution. [1]

[0] https://docs.aws.amazon.com/AmazonCloudFront/latest/Develope...

[1] https://aws.amazon.com/blogs/architecture/serving-content-us...


A viewer request function can, of course, be used to modify the request before it is sent to the origin. The difference is only related to caching — viewer request happens before the cache lookup, origin request happens after the cache lookup when CF has decided to make an origin request.

For the stated purpose, either function is fine:

With Lambda@Edge you'd use origin request if you were caching these paths, because your function would be called less often so your costs would be lower.

With CF Functions you can only do the pre-cache-lookup modification, so it will be called for every request, but the cost is much lower than Lambda@Edge so it may not matter, and maybe you were not caching these paths anyway in which case it's virtually identical.


Modifying the URL in a viewer request results in the viewer request changing accordingly. If you look at my other responses to other comments, I explained that in detail.

Basically, using the Viewer Request trigger and modifying the URI there causes CloudFront to force a 301 to the new URI for the client, because it's for the viewer. So in the /api/users example: if you strip /api in the viewer request, CloudFront literally removes /api from your request URI, meaning the client accesses /api/users but the server sends them to /users instead (read: the server returns a 301 with location: /users when you hit /api/users) because of your viewer rewrite. You end up hitting your frontend instead of your backend, because for the request to reach your backend the viewer request has to keep /api in it. Therefore you cannot strip it in the viewer request; you must do it in the origin request, which is not supported by CloudFront Functions.
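
For reference, the origin-request version with Lambda@Edge looks roughly like this (a sketch; the /api prefix is from the example above):

    // Lambda@Edge "origin request" handler: rewrites the URI after the cache
    // lookup, just before CloudFront forwards the request to the origin.
    exports.handler = async (event) => {
      const request = event.Records[0].cf.request;
      if (request.uri.startsWith('/api/')) {
        request.uri = request.uri.replace(/^\/api/, '');
      }
      return request;
    };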


I tried to monitor the services' status using https://stop.lying.cloud, but it's also hosted on AWS and is down too.


If they're monitoring AWS downtime they might want to rethink this.


How come? It's accurate.


True, if it is down, then that means AWS is down (not necessarily, obviously). :D But honestly, if they want to monitor AWS, they gotta pick something else for this reason, something that is not down when AWS is.


I guess it depends on whether you like your FALSE's encoded as timeouts :)


Well... Yes. Hahahah


Work smarter, not harder


AWS should monitor itself from Azure or GCP; even DO or Linode would make more sense.

Eating your own dog food shows confidence, but monitoring is a different dimension: there you need to use anything but your own dog food.


It's the only realistic multi-cloud provider scenario I can ever come up with that I would consider actually implementing...


AWS wouldn't monitor itself from a competitor, of course, but they could just as well silo a team and isolate DCs to do independent self-auditing.


I don't know about AWS, but I know a lot of us uptime-monitoring makers use (and pay for) competitors' products to know if we're down.


Rightly so. My point is a company can self-audit without having to pay a competitor.


I think that is inherently riskier because you never know on what axis you will have a failure and it is difficult to exclude all shared axes.


But we're talking about a status page, which should be basically static. In its simplest form you need a rack in 2+ random colos and a few people to manage the page-update framework. Then you make teams submit the tests that are used to validate the SLA. Run the tests from a few DCs and rebuild the status page every minute or two.

Maybe add a CDN. This shit isn't rocket science, and being able to accurately monitor your own systems from outside your own infrastructure is the one time you should really be separate.


That applies when you use competitors too.

They could have a related outage, or even a coincidentally timed one


Absolutely. And even if it’s cheaper to use the competition, an expensive custom solution will be found.


They have a bazillion Alexa and Kindle devices out there that they could monitor from, heh heh. At least let that phone-home behaviour do something useful, like noticing AWS is down.


> AWS wouldn't monitor itself from a competitor, of course

Why not? The big tech companies use each other all the time.

For example, set up a new firewall on macOS and you can see how many times Apple pulls data from Amazon or Azure or other competitors' APIs and services.


Apple is not a competitor to AWS or Azure in any way. They offer no infrastructure/platform as a service that I am aware of.


Apple and Amazon are competitors. Apple and Microsoft are competitors.

The postulation was that Apple and Amazon weren't competitors. Not that they're not competitors in a specific niche.


But the idea that Amazon or Microsoft or Google would host anything at Apple is pretty out there.

Apple uses their competitors' services because they can't build their own cloud and host their own shit. The big boys don't use competitors for services they are capable of building themselves.


And yet video.nest.com (Google) resolves to an Amazon load balancer.


A similar reason drives businesses to host `status.product.bigcorp` on a different server. And if your product is a cloud then your suggestion makes sense.


Yeah, I homed https://stop.lying.cloud out of us-west-2. Oops.


Considering the sea of bright green circles, reds might stand out, but blues get lost in a fast scroll. Perhaps fade or mute the green icons to improve the visibility of non-green, which is the interesting information?


The brand is strong if you’re really the owner


How does this service work?

It seems to have all the look and feel of AWS, and somehow has more up to date info than the official AWS status page?


It's the same info - it just changes all blues to yellows and all yellows to reds. :)


I had no idea!

Pretty funny actually.


Now that they're back up, they're not reporting any problems; how is it supposed to work? It looks like it is just repeating the status reported on the Amazon status page.


It is. It's just the AWS status page run through a transformation function to:

1. Remove all the thousand green services that no one cares about when looking at AWS status

2. Upgrade all yellows to reds because Amazon refuses to list anything as "down" no matter how bad the outage is.

3. Insert a snarky legend
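
In other words, roughly this (a hypothetical sketch, not the site's actual code; the status feed helper and field names are made up):

    const services = await fetchAwsStatusFeed(); // assumed helper returning [{ name, status }]
    const interesting = services
      .filter((s) => s.status !== 'green')                                // 1. hide the sea of healthy services
      .map((s) => (s.status === 'yellow' ? { ...s, status: 'red' } : s)); // 2. upgrade yellows to reds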


I mean, sounds like it's working as intended then?


Funny I didn't know that and assumed it was okay


That’s hilarious


Yes, but here you can also save host credentials (IP, port, user, password/key, ...) on a remote server. This data is E2E encrypted, and the key to decrypt it is a hash of your master password (the hash is stored in the credential vault of your OS, but never your clear password, and the remote server only stores the encrypted host data).

This allows you to connect to your account on the app from any computer and be able to connect to your saved SSH hosts.

This also means that only your master password can unlock the encrypted data, so if you lose it, there's no way to recover the data.

Note: You will be able to self-host the server if needed.
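
Conceptually, the client-side flow is something like this (a sketch with Node's crypto; the KDF and cipher choices are illustrative, not necessarily what the app uses):

    import { scryptSync, randomBytes, createCipheriv } from 'node:crypto';

    const masterPassword = 'correct horse battery staple'; // entered by the user, never uploaded

    // Derive the encryption key from the master password.
    const salt = randomBytes(16);
    const key = scryptSync(masterPassword, salt, 32);

    // Encrypt the host entry locally; the server only ever sees ciphertext.
    const host = JSON.stringify({ ip: '203.0.113.10', port: 22, user: 'root' });
    const iv = randomBytes(12);
    const cipher = createCipheriv('aes-256-gcm', key, iv);
    const encrypted = Buffer.concat([cipher.update(host, 'utf8'), cipher.final()]);
    const authTag = cipher.getAuthTag();

    // Upload { salt, iv, encrypted, authTag }: all useless without the master password.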

