
Deno is NPM compatible


Only recently... and yes, to Deno's great credit. Deno has an amazing team and this isn't commentary about their hard work; just disagreement with one of the early decisions.


`is:open is:issue label:bug label:"node compat"` -> 203 issues currently open.


Neat. But for a relevant query, try searching for "npm" instead. 0 issues listed. Further, in the `deno_npm` repo, which handles npm resolution for the Deno CLI, there are two open issues. One is a feature request, and the other is not clearly an important issue.


> Neat. But for a relevant query, try searching for "npm" instead

`is:issue is:open npm` 467 open

https://github.com/denoland/deno/issues?q=is%3Aissue+is%3Aop...

`is:issue is:open label:bug npm` 199 open

https://github.com/denoland/deno/issues?q=is%3Aopen+is%3Aiss...

For the record, "NPM compat" probably does include many feature requests (legitimately), not just bugs.


I did make a mistake. Thanks for pointing that out.

However, the point that deno is npm compatible is still true, contrary to what you originally said.


Deno is arguably not "fully" NPM compatible, because you still need to prefix imports with "npm:" in a lot of cases. I know because I've filed PRs to that effect -- adding the prefix is a breaking change for many widely-deployed libraries that need to maintain older Node compat. That should not be necessary for a runtime that is "fully" NPM compatible.
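
Concretely, the difference is between import specifiers like these (a minimal sketch; `express` is just a stand-in package):

  // Deno-style npm specifier: explicit "npm:" prefix, optionally with a version.
  import express from "npm:express@4";

  // The bare specifier Node and existing npm packages are written against. A
  // shared library that switched to the "npm:" form above would break on Node,
  // which is the compatibility tension being described.
  // import express from "express";

  const app = express();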

Aside from that, Node compliance is a much larger ball of wax than simple NPM compliance, and as far as I know Deno is not there and will not be there for some time.

My gripe with Deno is the choice to hard-fork. That is core to the idea; so, we will disagree. No big deal. As a result of Deno's choice, people now have many runtimes to choose from: Bun, LLRT, and the one I have written on top of GraalJs, Elide. Many others, too.

Most of these runtimes shoot for as full NPM / Node compat as they can muster without intolerable contortions in the code. Why? Because a ton of software out there runs on NPM, or on Node APIs. Users think of server-side JavaScript and Node as the same thing -- this is not true; Node is both the "Node APIs" and the V8-based engine underneath it.

But this is exactly what I mean: for Deno to claim that Oracle does not "use" the JavaScript trademark requires a completely ignorant stance about technologies like GraalJs. Last we benched, Elide (on top of GraalJs) outperformed Deno by a wide margin. Oracle has invested years of engineering work into it. Enough that it outperforms Node (by a long shot) and Deno, and typically ties with Bun. So why does Deno get to say what happens with the trademark? It makes no sense to me.

I might even agree that Oracle should not own it or control it the way they do. I don't know. But I do know that Deno's demands are Node-centric, and the JavaScript ecosystem is simply way bigger than that.


I think the history, causes, and goals here are getting a bit wobbly. Ryan has been outspoken about what he thinks he did wrong with Node and why he decided to make Deno. I'm obviously not him, so I can't speak much about all of that.

Node compatibility is interesting because it's only useful until it's not. If all these other runtimes eat away enough at node's share, it won't make sense to be compatible with node anymore.

Just to finish the original discussion, as much as I love talking about the current state of js (heh) runtimes: the mention of graaljs supports deno's argument. Oracle, for whatever reason, used "js" rather than "JavaScript".


GraalJs is just a name. It was created by Oracle so they can use the trademark if they want to. I have no idea what you are talking about.

> Ryan has been outspoken about what he thinks he did wrong with Node and why he decided to make Deno

Yes, I explained that we disagree. Clearly he also feels he was wrong about Deno’s NPM choices since he reversed course and added NPM support. So Ryan agrees with me, I guess.

> If all these runtimes eat away enough at node’s share

WinterCG and other efforts are formalizing APIs that can be shared across runtimes. Which is great. Many of these APIs are at least informed by, if not outright standardized versions of, Node APIs. They are here to stay.


JavaScript is the world's most popular programming language.

https://www.statista.com/statistics/793628/worldwide-develop...


That graph can't be right. Everyone knows it's a toss-up between Rust and Clojure. :)


Sorry, but the second largest "language" on that graph is HTML/CSS. I mean that's not even a programming language.


So ignore that entry


That's poor stats, to be polite. SQL is not a programming language; PL/SQL and its variants are. Without those scripting extensions you can't embed much logic in it.

I also wonder how they count multi-language users; I could easily mark 10 of those, if not more.

> HTML/CSS were the most commonly used programming languages

I am sorry, but that can't be taken seriously. By that standard I know a ton of Excel developers, and wait till I get to the PowerPoint ones.


You clearly have not read the post :D


If you mean:

Brendan Eich, the creator of JavaScript and a co-signatory of this letter, wrote in 2006 that “ECMAScript was always an unwanted trade name that sounds like a skin disease.”

Then this is not a matter of trademark or identity. It is only a matter of marketing. If this is just a matter of trademark, my comment remains equally valid. Just use a different name.


"Why should I change? He's the one who sucks."

Oracle gets no real benefit from the trademark, and getting everyone to stop calling it JavaScript is basically impossible. It would be better for everyone for Oracle to just abandon the trademark.

I don't expect Oracle to actively release the trademark, but it would be better if they did.


Don't worry about other people. Just worry about yourself. It is either your problem or it isn't a problem at all.


How many books and blog posts with "JavaScript" in the title are there? Several quadrillion is my guess.


It is practically abandoned, but unless the USPTO or a US court says that that is so, it is not legally abandoned. That is a problem because the confusion and uncertainty about the trademark remain until it is legally considered abandoned.

That's why we need to file a petition with the US Patent and Trademark Office.


like 90% of the DMA is just this haha


Except that is not what Google is doing. They have exclusive access to the one line that is preinstalled for all houses. Only they can use it. And if you want a different provider, you can't use that same line. You have to pay for the installation of a line from that provider with your own cash.


Huh, Chrome doesn't come preinstalled unless you are talking about ChromeOS.

I guess I just don't see the problem in a feature like this in a third party browser software that is completely optional to install and use and has lots of alternatives.


Yeah, so people with a choice of browsers might prefer it not be the one exposing exclusive APIs for its parent company, and it might affect that company's "we're not evil" image.


This also works in all Chromium-based browsers, such as Edge, Vivaldi, and Brave.


I agree it is very useful! This is also how I discovered this in the first place.

But that is not at all my point. The point is that google.com web properties have access to an API and a browser capability that is not available to its competitors. Google only allows reading CPU info for itself.

The reason the data is not available to everyone is that it would be a huge tracking vector. Same reason we don't allow webpages to read the device hostname, or username, or Chrome profile name. Google exposes this to google.com because it trusts itself. That is what poses the antitrust issue, though.


No, this is used by Google Meet right now. Open the "Troubleshooting" panel in meet.google.com in Chrome, and you'll see live system wide CPU usage reporting :)


In principle, that's something that could be allowed without giving access "to" Google/the site owner—even allowing the site author to provide their own functions for formatting and drawing the values—and thus could be allowed for _any_ website. Designing and implementing it is a fun technical problem, so it's a wonder why it wasn't, considering the motivations of a typical programmer (and those at Google especially).


How would you make an API accessible on the browser side but prevent the return values from being sent to the server? Somebody would surely find a way to use it for user fingerprinting.

Edit: I guess if you only want to make a local debug tool, you could make it callable only from a completely isolated sandbox. Maybe?


> How would you make an API accessible on the browser side but prevent the return values from being sent to the server?

Create an API for starting a "performance-metrics visualization Service Worker", that takes two things from a page as input:

1. the service-worker script URL

2. the handle of a freshly-allocated WebGL Canvas (which may or may not already be attached to the DOM, but which has never yet received any WebGL calls.) This Canvas will have its ownership moved to the Service Worker, leaving the object in the page as only an opaque reference to the Canvas.

The resulting Service Worker will live in a sandbox such that it 1. doesn't have network access, 2. can receive postMessage calls, but not make them; and 3. doesn't have any write access to any storage mechanism. Other than drawing on the Canvas, it's a pure consumer.

Also, obviously, this special sandbox grants the Service Worker the ability to access this performance API, with the metrics being measured in the context of the page that started the Worker.

The Service Worker is then free to use the info it gathers from making perf API calls, to draw metrics onto the moved Canvas. It's also free to change how/what it's drawing, or quit altogether, in response to control messages posted to it from the page.

The page can't introspect the moved Canvas to see what the Service Worker has drawn. All it can do is use the Canvas's now-opaque handle to attach/detach it to the DOM.
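
A rough TypeScript sketch of how a page might drive such an API, if it existed (every name here, startPerfVisualizationWorker included, is hypothetical and invented for illustration):

  // Hypothetical page-side usage of the sandboxed perf-visualization worker.
  const canvas = document.createElement('canvas');  // fresh, no WebGL calls yet
  document.body.appendChild(canvas);                // page may still place it

  // Hypothetical API: ownership of `canvas` moves into the worker's sandbox;
  // afterwards the page holds only an opaque handle it can attach/detach.
  const vizWorker = await (navigator as any)
    .startPerfVisualizationWorker?.('/perf-viz-worker.js', canvas);

  // One-way control channel: page -> worker only.
  vizWorker?.postMessage({ chart: 'cpu', theme: 'dark' });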


The worker could still send the data back to the page via side-channels.

For example, by using up resources like the CPU, the GPU, or RAM in timed intervals. The page would then probe for the performance fluctuations of these resources and decode the data from the pattern of the fluctuations.
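
A rough sketch of the kind of channel being described, assuming the worker can at least burn CPU and the page can time itself (the numbers and the threshold are illustrative):

  // Worker side: "transmit" one bit per fixed time slot -- busy-loop for a 1,
  // stay idle for a 0.
  async function transmit(bits: number[], slotMs = 200) {
    for (const bit of bits) {
      const end = performance.now() + slotMs;
      if (bit) {
        while (performance.now() < end) Math.sqrt(Math.random()); // load the CPU
      } else {
        await new Promise((r) => setTimeout(r, slotMs));          // leave it idle
      }
    }
  }

  // Page side: in each slot, time a fixed probe workload; if it ran slower than
  // a calibrated threshold, the worker was busy during the slot, so read a 1.
  async function receive(slots: number, thresholdMs: number, slotMs = 200) {
    const bits: number[] = [];
    for (let i = 0; i < slots; i++) {
      const t0 = performance.now();
      let acc = 0;
      for (let j = 0; j < 1_000_000; j++) acc += Math.sqrt(j);    // probe workload
      const probeMs = performance.now() - t0;
      bits.push(probeMs > thresholdMs ? 1 : 0);
      await new Promise((r) => setTimeout(r, Math.max(0, slotMs - probeMs)));
    }
    return bits;
  }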


IMO that shouldn't be part of the threat model. I could run an ad right now that consumes CPU in timed intervals and estimates CPU usage using a microbenchmark to communicate with JS on other pages. This sort of fingerprinting and these bits-per-minute side channels are impractical to block. You'd have to give each origin its own CPU cores, cache partitions, etc.


Sigh. You don't prune threats you can't control from a threat model; you document them so that the consumers and maintainers of the target of assessment can intelligently reason about the threats as the product evolves.


If a page can already deduce performance fluctuations all on its own, then you don't need a special access-limited performance API, do you? Just have the page do whatever you're imagining could be done to extract this side-channel info on the performance of the host — and then leak the results of that measurement over the network directly.

(I imagine, if such measurements done by pages are at-all distinguishable from noise, that they are already being exfiltrated by any number of JS user-fingerprinting scripts.)


A page can deduce performance fluctuations. It just needs to do the same calculation multiple times and measure the times.

The issue with the API is that it provides specifics about the CPU like "Apple M2 Max". If you give this info to a worker, the worker can encode it into a side-channel and send it to the page.


I imagine you could "solve" this (for a painful and pointless value of "solve") by 1. only allowing the Service Worker to do constant-time versions of operations (like the constant-time primitives that cryptographic code uses), and 2. not allowing this special Service Worker the ability to ever... execute a loop.

But at that point, you've gone so far to neutering the page-controlled Service Worker, that having a page-controlled Service Worker would be a bit pointless. If the Service Worker can only do exactly one WebGL API call for each metric timeseries datapoint it receives, then the particular call it's going to be making is something you could predict perfectly in advance given the datapoint. So at that point, why have the page specify it? Just let the browser figure out how to render the chart.

So I revised the design to do exactly that: https://news.ycombinator.com/item?id=40929284


After some of the replies, I gave this a bit more thought, and I came up with an entirely-different design that (IMHO) has a much better security model — and which I personally like a lot better. It eschews webpage-supplied Arbitrary Code Execution altogether, while still letting the user style the perf-visualization charts to match the "theme" of the embedding page. (Also, as a bonus, this version of the design leans more heavily on existing sandboxing features, rather than having to hypothesize new ones!)

1. Rather than having the page-perf API be a Web API "only available on specific origins" or "only available to weirdly-sandboxed Service Workers", just make it a WebExtension API. One with its own WebExtension manifest capability required to enable it; where each browser vendor would only accept WebExtensions requesting that particular capability into their Extension Stores after very thorough vetting.

(Or in fact, maybe browser vendors would never accept third-party WebExtensions with this capability into their Extension Stores; and for each browser, the capability would only be used in a single extension, developed by the browser vendor themselves. This would then be analogous to the existing situation, where the Web API is only available on a first-party domain controlled by the browser vendor; but as this would rely on the existing WebExtensions capabilities model, there would be no need for a separate one-off "WebAPI but locked to an origin" security-model. Also, unlike with a "WebAPI but locked to an origin" capability, you could play with this capability locally in an unpacked extension!)

2. Browsers that want to offer this "visualize the performance of the current page" ability as a "thing the browser can do", would just bundle their first-party WebExtension that holds this capability [and the "access current tab" capability] as a pre-installed + force-enabled component of the browser, hidden from the extensions management view. (I believe that this is already a thing browsers do — e.g. I believe Chrome implements its PDF viewer this way.)

3. This WebExtension would consist, at its core, of an extension page (https://developer.mozilla.org/en-US/docs/Mozilla/Add-ons/Web...), that could be embedded into webpages as an iframe. This extension page would contain a script, which would use the WebExtension version of the page-perf API to continuously probe/monitor "the tab this instance of the extension-page document lives within." On receiving a perf sample, this script would then render the sample as a datapoint, in a chart that lives in some sense in the extension page's own DOM. There'd be no Service Worker necessary.

(Though, for efficiency reasons, the developer of this WebExtension might still want to split the perf-API polling out into a Service Worker, as a sort of "weak-reference perf provider" that the extension page subscribes to messages from + sends infrequent keepalive polls to. This would ensure that the Service Worker would unload — and so stop polling the page-perf API — both whenever the extension page's tab goes inactive, and whenever the browser unloads extension Service Workers generally [e.g. whenever the lid of a laptop is closed.] The page-perf API could itself be made to work this way... but it's easier to take advantage of the existing semantics of Service Worker lifetimes, no?)

4. But is there an existing API that allows a web-origin page to access/embed an extension page, without knowing the extension's (browser-specific) extension ID? Yes! Just like native apps can register "app intents", WebExtensions can register URI protocol handlers! (https://developer.mozilla.org/en-US/docs/Mozilla/Add-ons/Web...) The performance-visualization WebExtension would register a protocol_handler for e.g. ext+pageperf:// , and then a page that wants to render its own perf would just create an <iframe allowTransparency="true" src="ext+pageperf://..." /> and shove it into the DOM. The privileged-origin-separation restrictions for iframes, would then stop the (web-origin, lower-privilege-level) page from introspecting the DOM inside the (extension-origin, higher-privilege-level) iframe — accomplishing the same thing that the "make the Canvas into an opaque handle" concept did in the previous design, but relying 100% on already-standardized security semantics of existing browser features.
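
A sketch of what step 4 might look like in practice. The `protocol_handlers` manifest key is a real WebExtension feature (Firefox requires custom schemes to start with "ext+"); the "pageperf" scheme, the extension page URL, and the element IDs are invented for illustration:

  // In the hypothetical extension's manifest.json:
  //   "protocol_handlers": [{
  //     "protocol": "ext+pageperf",
  //     "name": "Page performance visualization",
  //     "uriTemplate": "/pageperf.html#%s"
  //   }]

  // Web-page side: embed the extension page without knowing its extension ID.
  const perfIframe = document.createElement('iframe');
  perfIframe.setAttribute('allowtransparency', 'true');
  perfIframe.src = 'ext+pageperf://chart?metrics=cpu';   // hypothetical URI
  document.querySelector('#perf-panel')?.appendChild(perfIframe);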

---

But how would the webpage style the extension page to match itself? Well, that depends on what the extension page is doing to render the metrics.

An elegant assumption would be that the page-perf extension page is creating + incrementally updating an SVG DOM that lives embedded in the page's DOM — on each frame, appending new vector elements (e.g. bezier control-points) to that SVG's DOM to represent new datapoints in the time-series. (I believe e.g. Grafana's charting logic works this way.)

If this is the case, then all the parent page needs, is a way to pass a regular old CSS stylesheet to the extension page; which the extension page could then embed directly into its own <head>. Just have the parent page construct a <style> Element, and then call:

    perfIframe.postMessage('applyStyles', {transfer: constructedStyleElement})
(And the beauty of doing that is that once you embed the transferred <style> from the webpage into the extension page, the very same privileged-origin-separation logic will kick in again, preventing the extension page from loading insecure web-origin content from an origin not in the extension's manifest's whitelisted origins. Which in turn means that, despite the webpage author being able to write e.g. `background: url(...)` in their stylesheet, and despite the extension page doing nothing to explicitly filter things like that out of the stylesheet before loading it, the extension page would get to the "making a network request" part of applying that style, hit a security violation, and fail the load. Thereby neutering the ability of the page developer to use web-origin URL-references within a stylesheet as a side-channel for communicating anything about the metrics back to them!)
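
For concreteness, a sketch of both sides of that hand-off, here passing the stylesheet as plain CSS text rather than a transferred <style> element (names are illustrative):

  // Web-page side: send the theme to the embedded extension page.
  const css = '.pageperf-chart { --series-color: #7aa2f7; font: 12px system-ui; }';
  perfIframe.contentWindow?.postMessage({ type: 'applyStyles', css }, '*');

  // Extension-page side: apply it; any url(...) in the CSS would still be
  // fetched under the extension's own origin policy, not the web page's.
  window.addEventListener('message', (event) => {
    if (event.data?.type !== 'applyStyles') return;
    const style = document.createElement('style');
    style.textContent = String(event.data.css);
    document.head.appendChild(style);
  });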

---

And all of this would be easily made cross-browser. You'd need a little standalone "Browser Support for Embedded Performance Visualizations" spec, that specifies two things:

1. a well-known URI scheme, for each browser's built-in page-perf extension to register a protocol_handler for. (Since it's standardized, they can drop the "ext+" part — it can just be `pageperf://`.)

2. a fixed structure for the page-perf extension page's metrics-visualization DOM — i.e. a known hierarchy of elements with specific `id` and `class` attributes (think CSS Zen Garden) — so that the same webpage-author-supplied stylesheets will work to style every browser's own metrics-chart implementation. (This would seem constraining, but remember that WebComponents exist. Standardize a WebComponent for the metrics chart, and how styles affect it. Browser vendors are then free to implement that WebComponent however they like in their own impl of the extension. Very similar to how browser vendors are free to implement the internals of a new HTML element, actually — in fact, in theory, for maximum efficiency, browsers could even implement this WebComponent's shadow-DOM in their renderers in terms of a custom "internal" HTML element!)
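
A sketch of the kind of component surface such a spec might pin down (the element name and the part/class names here are invented; a real spec would fix them):

  // Each browser's built-in extension would register something like this,
  // free to implement the internals however it likes behind the fixed surface.
  class PagePerfChart extends HTMLElement {
    connectedCallback() {
      const shadow = this.attachShadow({ mode: 'open' });
      shadow.innerHTML = `
        <figure class="pageperf">
          <figcaption class="pageperf-title" part="title">CPU</figcaption>
          <svg class="pageperf-series" part="series"></svg>
          <ul class="pageperf-legend" part="legend"></ul>
        </figure>`;
    }
  }
  customElements.define('pageperf-chart', PagePerfChart);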


The browser has CORS and CSP to sandbox similar activities to this.


Why not just ask for permission when needing that sort of data?


Some people won't give it, and then they will have less data.


Those hypothetical programmers at Google could start by doing a Manifest V4 that would be like V3 but actually useful and privacy-respecting. I’ll believe it when it happens.


Right, Meet is derived from the Hangouts codebase; I still think they'll probably just delete it. Meet is a stable product -- how valuable is this special privilege now?


This is interesting to me because you have all the right facts and are reasoning well with them. But, we end up at: "Yeah you're right it wasn't killed, just a rebrand, so they'll probably just delete the code for it"

I worked at Google, and I can guarantee ya people don't go back and change names in old code for the latest rebrand done for eyewash 4 layers above me. Not out of laziness, either, it just has 0 value and is risky.

Also, video conference perf was/is a pretty big deal (cf. the variety of sibling comments pointing out where it is used, from gSuite admin to client app). It is great on ye olde dev machine, but it's very, very hard on the $300 Wintel/Chromebook machines thrown at line-level employees.

FWIW, they shouldn't have hacked this in, I do not support it. And I bet they'll just delete it anyway because it shouldn't have been there in the first place. Some line-level employee slapped it in because, in the wise words of Ian Hickson: "Decisions went from being made for the benefit of users, to the benefit of Google, to the benefit of whoever was making the decision."


Sure, I was sloppy in my use of the term "dead". Hangouts the product/brand ceased to exist, Hangouts the codebase lives on. It was ever thus. I worked at Google too, y'know ;)


Cheers


Google videoconferencing runs astronomically better on a $300 Chromebook than on a $2500 Intel Mac.


Heh, 100% agree. I switched to a Chromebook when WFH started because of it. It couldn't handle it on an external display, but at least it wasn't painfully bad.


This decision was to the benefit of users if it got videoconferencing off the ground before Zoom came along.

(I swear, sometimes I think the Internet has goldfish-memory. I remember when getting videoconferencing to work in a browser was a miracle, and why we wanted it in the first place).


Okay.

Pretending you said something conversational, like: "is that quote accurate in this case? The API may have literally enabled the creation of video conferencing. I, for one, remember we didn't used to have it."

I see.

So your contention is:

- if anyone thinks a statsd web API, hidden in Chrome, available only to Google websites is worth questioning

- they're insufficiently impressed by video conferencing existing

If I have that right:

I'm not sure those two things are actually related.

If you worked at Google, I'm very intrigued by the idea that we can only collect metrics via a client-side web API for statsd, available only to Google domains.

If you work in software, I'm extremely intrigued by the idea that video conferencing wouldn't exist without a client-side web API for statsd, available only to Google domains.

If you have more details on either, please, do share


Scoping the data collection to Google domains is a reasonable security measure because you don't want to leak it to everybody. And in general, Google does operate under the security model that if you trust them to drop a binary on your machine that provides a security sandbox (i.e. the browser), you trust them with your data because from that vantage point, they could be exfiltrating your bank account if they wanted to be.

But yes, I don't doubt that the data collection was pretty vital for getting Hangouts to the point it got to. And I do strongly suspect that it got us to browser-based video conferencing sooner than we would have been otherwise; the data collected got fed into the eventual standards that enable video conferencing in browsers today.

"Could not have" is too strong, but I think "could not have this soon" might be quite true. There was an explosion of successful technologies in a brief amount of time that were enabled by Google and other online service providers doing big data collection to solve some problems that had dogged academic research for decades.


To be more clear:

After your infelicitous contribution, you were politely invited to consider that _a client-side web API for CPU metrics, available only on Google domains,_ isn't necessary for _collecting client metrics_.

To be perfectly clear: they're orthogonal. Completely unrelated.

For some reason, you instead read it as an invitation to continue fantasizing about WebRTC failing to exist without it


What would the alternative be?

(Worth noting: Google Hangouts predates WebRTC. I think a case can be made that big data collection of real users machine performance in the real world was instrumental for hammering out the last mile issues in Hangouts, which informed WebRTC's design. I'm sure we would have gotten there eventually, my contention is it would have taken longer without concrete metrics about performance).


I made this.

  +------------------+
  |   Web Browser    |
  | +--------------+ |
  | |  WebRTC      | |
  | |  Components  | |
  | +------+-------+ |
  |        |         |
  | +------v-------+ |    +---------------+
  | | Browser's    | |    |   Website     |
  | | Internal     | |    | (e.g. Google  |
  | | Telemetry    | |    |  Meet)        |
  | +------+-------+ |    |               |
  |        |         |    |  (No direct   |
  | +------v-------+ |    |   access to   |
  | |  CPU Stats   | |    |   CPU stats)  |
  | |  (Internal)  | |    |               |
  +------------------+    +---------------+
           |
           | WebRTC metrics
           | (including CPU stats as needed)
           v
  +------------------+
  |  Google Servers  |
  | (Collect WebRTC  |
  |    metrics)      |
  +------------------+
Another attempt, in prose:

I am referring to two alternatives to consider:

A) Chrome sends CPU usage metrics, for any WebRTC domain, in C++

B) as described in TFA: JavaScript, running on allow-listed Google sites only, collects CPU usage via a JavaScript web API

There's no need to do B) to launch/improve/instrument WebRTC; in fact, it would be bad to only do B), given that WebRTC implementers are a much less biased sample for WebRTC metrics than Google implementers of WebRTC.

I've tried to avoid guessing at what you're missing, but since this has dragged out for a day, I hope you can forgive me for guessing here:

I think you think there's a _C++ metrics API for WebRTC in Chrome-only, no web app access_ that _only collects WebRTC on Google domains_, and from there we can quibble about whether its better to have an unbiased sample or if its Google attempting to be a good citizen via collecting data from Google domains.

That's not the case.

We are discussing a _JavaScript API_ available only to _JavaScript running on Google domains_ to access CPU metrics.

Additional color commentary to further shore up there isn't some WebRTC improvement loop this helps with:

- I worked at Google, and it would be incredibly bizarre to collect metrics for improvements via B) instead of A).

- We can see via the rest of the thread this is utilized _not for metrics_, but for features such as gSuite admins seeing CPU usage metrics on VC, and CPU usage displayed in Meet in a "Having a problem?" section that provides debug info.


I also worked at Google, and this kind of telemetry collection doesn't seem surprising to me at all. I don't know if you are / were familiar with the huge pile of metrics the UIs collect in general (via Analytics). I never worked on anything that was cpu-intense enough to justify this kind of back-channel, but I don't doubt we'd have asked for it if we thought we needed it... And you'd rather have this as an internal Google-to-Google monitor than punch a big security hole open for any arbitrary domain to query.

JS is easier to debug (even with Google's infrastructure), and they have no need of everyone else's videoconference telemetry (which when this was added, would have been, iirc, Flash-based).

I believe what Google learned via this closed loop informed the WebRTC standard, hence my contention it got us there faster. Unless I've missed something, this API had been collecting data since 2008; WebRTC came 3 years later.

I think you've misunderstood my question regarding "What would the alternative be?" I meant what would the alternative be to collecting stats data via a private API only on Google domains when we didn't have a standard for performance collection in browsers? We certainly don't want Google railroading one into the public (with all the security concerns that would entail). And I guess I'm just flat out not surprised that they would have dropped one into their browser to simplify debugging a very performance intensive service that hadn't been supported in the browser outside plugins before. Is your contention that they should have gone the flash route and done a binary as a plug-in, then put telemetry in the binary? Google had (And mostly still has) a very web-centric approach; doing it as a binary wouldn't be in their DNA.


It was just updated to an extension manifest v3 version, and someone went to the trouble of setting up some sort of field test ID mess for it on top of all the nonsense. It doesn't seem like anyone is planning to get rid of it anytime soon.

But the Git history of it is fascinating, starting with the initial merge that got it in, which went with the old-school trick of "just call X to explain why this is needed" to get your stuff merged. Then every non-trivial change ever made to it is inevitably auto-reverted due to some failure before being resubmitted -- this must be the "unparalleled Google developer environment" in action; nobody can, or bothers to, run the tests on a piece of software this big. Half the commits are various formatting nonsense. One third is my favorite: someone making a change to an extension API only to realize the fucking hangout guys sneaked an actual extension into the code base and they will have to update that one to reflect their change. I can feel their anger personally.


It works perfectly well in Firefox without it, so I guess not much.


Unsure how it's reported back now, but I believe (it's been a while since I've dug in there) it's also exposed as a metric for Google Workspace administrators to monitor client perf during said calls as well.

(but yeah it would just be easier to yoink it)


We do not use GKE.

The Google infra that we do use in this hot path (GFE via Google Global External Load Balancers, and Colossus via Google Cloud Storage) is the same infra that powers serving static assets for Google internal services.


The SLO (objective) is an uptime of 100%. That means that we have no error budget to use for scheduled maintenance or anything of that sort. This means that we cannot use software in this hot path that would require scheduled maintenance (i.e. a relational database that requires periodic downtime for major version upgrades). We additionally minimize risk here: no code that is written by us sits in the path that targets 100% uptime. I.e. if it breaks, it's due to an upstream failure within Google's web serving infrastructure.

If we were to provide an SLA (an agreement, stating the minimum level of service to a customer) for this service, it would not be 100%. It would be 99.99%. This is to avoid risk. But we can still have a higher internal target than the provided SLA.

If we have to make all changes in a way that requires that we do not even have 8 seconds of downtime a year (but 0 seconds of downtime), that significantly changes how you design a system and roll out changes.

TLDR: SLA != SLO
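
For concreteness, the back-of-the-envelope error budgets those targets imply (a quick sketch; just the standard uptime arithmetic):

  const minutesPerYear = 365.25 * 24 * 60;

  function downtimeBudgetMinutes(uptimeTarget: number): number {
    return minutesPerYear * (1 - uptimeTarget);
  }

  console.log(downtimeBudgetMinutes(0.9999)); // ~52.6 minutes/year allowed under a 99.99% SLA
  console.log(downtimeBudgetMinutes(1.0));    // 0 minutes/year under a 100% SLO -- no budget at all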


Hi, that makes sense, thank you - I didn't realize that this was meant in terms of "we have to choose technologies that never ever have to have maintenance"; that would have been a better way to put it. Thanks :)

