Naive question, but isn’t every output token generated in roughly the same non-deterministic way? Even if it uses its actual history as context, couldn’t the output still be incorrect?
Have you ever seen those posts where AI image generation tools completely fail to generate an image of the leaning tower of Pisa straightened out? Every single time, they generate the leaning tower, well… leaning. (With the exception of some more recent advanced models, of course)
From my understanding, this is because modern AI models are basically pattern extrapolation machines. Humans are too, by the way. If every time you eat a particular kind of berry, you crap your guts out, you’re probably going to avoid that berry.
That is to say, LLMs are trained to give you the most likely text (their response) to follow some preceding text (the context). From my experience, if the LLM agent loads a history of the commands it has run into its context, and one of those commands is a deletion command, the subsequent text is almost always “there was a deletion.” Which makes sense!
So while yes, it is theoretically possible for things to go sideways and for it to hallucinate in some weird way (which grows increasingly likely if there’s a lot of junk clogging the context window), in this case I get the impression it’s close to impossible to get a faulty response. But close to impossible ≠ impossible, so precautions are still essential.
Yes, but Claude Cowork isn't just an LLM. It's a sophisticated harness wrapped around the LLM (Opus 4.5, for example). The harness does a ton of work to keep the number of tokens sent and received low and to keep the context carried between calls small. This applies to other coding agents to varying extents as well.
Asking for the trace is likely to involve the LLM just telling the harness to call some tools, such as the Bash tool with grep to find the line numbers in the trace file for the command. It can do this repeatedly until the LLM thinks it has found the right block. Then those line numbers are passed to the Read tool (by the harness) to get the command(s), and finally the output of that read is added to the response by the harness.
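To make that concrete, here's a rough sketch in Python of how a harness could service that kind of request. The tool names, the trace file path, and the grep pattern are all my assumptions for illustration, not Claude's actual internals; the point is just that the quoted command gets copied back from disk by the harness rather than regenerated by the model.

```python
import subprocess

TRACE_FILE = "session_trace.log"  # assumed trace location, purely illustrative


def bash_tool(command: str) -> str:
    """Harness-side tool: run a shell command and return its stdout."""
    result = subprocess.run(command, shell=True, capture_output=True, text=True)
    return result.stdout


def read_tool(path: str, start: int, end: int) -> str:
    """Harness-side tool: return lines start..end (1-indexed, inclusive) verbatim."""
    with open(path, encoding="utf-8") as f:
        lines = f.readlines()
    return "".join(lines[start - 1:end])


# 1. The model asks the harness to grep the trace for the suspect command.
hits = bash_tool(f"grep -n 'rm -rf' {TRACE_FILE}")

# 2. The harness turns the first hit ("LINENO:text") into a line number.
first_line = int(hits.splitlines()[0].split(":", 1)[0]) if hits else 0

# 3. The surrounding block is read back verbatim and pasted into the reply,
#    so the quoted command comes from the file, not from the model's memory.
if first_line:
    print(read_tool(TRACE_FILE, max(1, first_line - 2), first_line + 2))
```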
The LLM doesn't get a chance to reinterpret or hallucinate until it says it is very sorry for what happened. Also, the moment it originally wrote (hallucinated?) the commands is when it made its oopsy.
After reading this comment thread, I got curious and went through his history. While I agree the prose reeks of LLM tells, the messaging seems a little too nuanced and correct for 100% LLM use. Also, he's directly confirmed using an LLM to write clearly, as English is not his primary language.
@miwa, thank you for taking the time to look into my history. It is encouraging to hear that you felt the "nuance" in my words, as I struggle a lot to balance my thoughts with the limitations of translation tools. Your comment gives me the confidence to keep trying.
Actually I wanted to engage with you on the original comment on this thread, which was unfortunately flagged. In a separate thread you discussed Shugyo and the value of repetitive training. I find this topic particularly relevant for this thread as I am a lifelong fighting game player, but have only recently given serious thought to the craft of fighting games. Not just in playing, but in how they're made.
I've been focusing strictly on my execution lately after I was able to find a method to slow the framerate of the game down. The inspiration came from my musician days where guitar practice consisted mostly of very slow, deliberate repetitions of scales and exercises. The immediate goal was to be able to do the exercise. But the secondary, and perhaps more important, goal was to do the exercise without tension. Trying to consciously do both is challenging. It is only when the exercise has been repeated enough that it is internalized and I can draw my attention to tension.
So in the same way that a scale is like a "combo" of notes, fighting game execution requires very similar timing and awareness of internal tension. Translating this mode of practice means repeating the same simple combos that I used to take for granted, but in a very deliberate and intentional way. I'm talking hour-long sessions of the same kick, kick, kick, quarter-circle-back+kick sequence. As a result I feel much greater confidence in my execution.
But also, slowing the game down and doing practice in this way has actually brought a greater appreciation of the design of fighting games. To really internalize when a button should be pressed to successfully execute a combo, a player should anchor their timing to visual and auditory cues. SNK does a really good job of this with their hit spark animations. Attending to when it appears and when it recedes gives a visual indication of the necessary timing, which is something easily overlooked by casual and even veteran players.
All this to say that there is a subtle and profound undercurrent of craftsmanship that I now appreciate in fighting games.
There's no better time to play fighting games than right now. Street Fighter 6 has one of the best training rooms that I've seen. I also will slow the game down to 50% speed when internalizing a new combo sequence.
There's something zen and therapeutic about sitting in the training room, working on the same combo over and over. Really working it into the muscles so that it becomes fluid and effortless in a real match.
Absolutely! Although I feel 6's combo structure is... Stifled. For the most part every character has roughly the same combo pattern, but I still find satisfaction in learning and executing.
@sanwa, thank you for such a profound and passionate comment. As a banker, I’ve seen many businesses, but your perspective as a musician adds a beautiful layer to the concept of "Shugyo."
I especially resonate with your method of "slowing down the tempo." In my peak days 30 years ago, I used to perform Guile’s Sonic Boom and Somersault Kick as naturally as breathing. I now realize that this "effortless" state was only possible because of the slow, deliberate repetitions I did back then, just as you described.
By slowing down until all tension is gone, you are not just learning a move—you are removing the "noise" from your mind and body. This is the ultimate "subtraction" and the only way to "Forge the Steel." Whether it's a guitar scale or the core philosophy of a 500-year-old company, the logic is the same: true strength is born from quiet, intentional repetition.
Thank you for sharing your journey of Shugyo. It’s an honor to find a fellow traveler here.
This is a great project! I like and use Wayland but the portal protocols and extension mechanism do leave a lot to be desired. Wayland is still quite a way behind Windows and macOS in terms of what productivity users need.
An X11 rewrite with some security baked in is an awesome approach. Will be watching!
I thought for a long time that rather than move to Wayland, we could come up with a tidied-up version of X. Sounds like a good and useful project, I hope it progresses.
If you take the time to read through that (very partial) list of cruft and footguns in X11 it probably makes it a little easier to understand why a clean-slate approach was able to attract momentum and why many hands-on involved developers were relatively tired of X11. Critics would of course respond that backwards compatibility is worth the effort and rewrites are often the wrong call, etc. It's the Python 2/3 debate and many others.
Realistically, a rewrite would keep an X11 compatibility layer and just do the same thing Wayland did: make a new protocol.
Just... without all the mess that turned out to be, at best, a wash and, at worst, outright negative, causing problems for everyone involved.
And nearly all of the "advantages" come down to "the server is built from scratch", not "the protocol was the limitation".
Python 3 was actively antagonistic to Python 2 code for no reason other than to lecture us about how we were doing things wrong, writing code to support 2 and 3 to help transition was dumb etc etc.
For example, in Python 2 you could explicitly mark unicode text with u"...". That was actively BLOCKED in Python 3.0, which was supposedly about unicode support! The irony was insane; they could have just no-op'd the u"". I got totally sick of the "expert" language designers with no real-world code-shipping responsibilities lecturing me. Every post about this stuff was met by comments from pedantic idiots. So every string had to have a helper function around it. Total and absolute garbage. They still haven't explained to my satisfaction why they couldn't support u"..." to allow an easier transition to 3.
Luckily sanity started prevailing around 3.5 and we started to see a progression - whoever was behind this should be thanked. The clueless unicode-everything was walked back and we got % for bytes so you could work with network protocols again (where forcing unicode would be STUPID given the installed base). We got u"" back.
By 3.6 we got back to reasonable path handling on windows and the 3 benefits started to come without antagonistic approaches / regressions from 2. But that was about 8 years? So that burnt a lot of the initial excitement.
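To illustrate the two pain points, here's a tiny snippet that only became legal again as 3.x matured (u"" literals were re-allowed by PEP 414 in 3.3, and %-formatting on bytes came back with PEP 461 in 3.5):

```python
# Valid on Python 2.7 and on 3.3+ thanks to PEP 414; a SyntaxError on 3.0-3.2.
label = u"café"

# Valid on Python 2.7 and on 3.5+ thanks to PEP 461 (bytes interpolation),
# which is what you want when assembling wire protocols, not unicode text.
request = b"GET %s HTTP/1.1\r\n" % b"/index.html"

print(label)
print(request)
```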
> Python 3 was actively antagonistic to Python 2 code for no reason other than to lecture us about how we were doing things wrong, writing code to support 2 and 3 to help transition was dumb etc etc.
> [...]
> By 3.6 we got back to reasonable path handling on windows and the 3 benefits started to come without antagonistic approaches / regressions from 2. But that was about 8 years? So that burnt a lot of the initial excitement.
So it's a great analogy. Wayland started out proudly proclaiming that it intentionally didn't support features in the name of "security" but everyone should "upgrade" because this was totally better, and has been very slowly discovering that actually all the stuff it willfully dropped was useful and has mostly evolved back to near feature parity with Xorg.
Uhm no? As I mentioned, Wayland is simple because it was designed with the idea that there will be many implementations. It turns out that once you have many implementations, you can't just implement screen recording in one implementation and directly integrate with that implementation, because someone might use a different implementation. This then necessitates extensions for features that go beyond displaying things.
15 years ago I tried it and got that path error.
1 year ago I tried again and still got the same error.
I'm well aware that it's simple enough to fix. But I was baffled that the same error was still there.
I dunno there's a lot to pick from when it comes to "worst designed"!
It's definitely not well designed though.
And I agree about recommending it to beginners. Sure, a for-loop and a simple function look very friendly and easy, but good luck explaining to them why they can't import from a file in a different directory...
It was always an option, but "just" needed someone to dedicate all their time to it and pull in a group of long term maintainers. The real question is what will happen with the project in 2 years and will it be stable for day to day use.
The fact that you can "assume Vulkan exists" helps a lot (both hardware and software renderers exist). Do remember: Wayland predates Vulkan by almost a full decade.
In addition, you can offload OpenGL compatibility to Zink (again leaning into Vulkan).
> pull in a group of long term maintainers.
"Use new cool language" seems to be a prerequisite for this nowadays ...
You can't "assume Vulkan exists". Any pre-2016 hardware won't have proper hardware support for Vulkan and that's a lot of hardware still in use. Software renderers are unworthy of any serious consideration due to the perfomance drawbacks.
Just use OpenGL. I don't know when this trend to overcomplicate everything using Vulkan began, but I hate it.
Nvidia had a Vulkan driver for Kepler, which launched in 2012, and AMD had support all the way back to GCN 1.0 (also 2012). Intel did have issues supporting it; I can't recall whether it was for hardware reasons or just lack of desire for a driver.
Vulkan has substantial advantages for multi-threaded code, as well as exposing the underlying asynchronous nature of running code on the GPU. That's the kind of thing you want to be able to control in a desktop compositor, where controlling vsync and present timing is very important.
I don't really understand what is supposedly missing in Wayland for productivity users? At work I have been using gnome with the wayland backend for years at this point and I can't really figure out anything that's missing.
Accessibility is apparently a big problem with Wayland. E.g., the most popular (only?) app that supports hardware eye trackers on Linux does not work with Wayland, and states that it likely never will, as Wayland does not provide what it needs to add support (it is also the most popular app for voice/noise control). Even basic things like screen readers are apparently still an issue with Wayland. Without a strong accessibility story, systems running Wayland would have been banned at my last employer (a college).
Personally, I have a 3200x2400 e-ink monitor with a bezel that covers the outer few columns of pixels. I use a custom modeline to exclude those columns from use, and a fractional scaling of 0.603x0.5 on this now-3184x2400 monitor to get an effective resolution of 1920x1200. Zero idea how to accomplish this with Wayland; I do not think it is possible, but if anyone knows a way, I am all ears.
I ran into at least ten issues without solutions/workarounds (like the issue with my monitor) when I tried to switch this year, after getting a new laptop. I reverted to a functional, and productively familiar, setup with X.
The xdg-desktop-portal stuff is still too immature. For example, my friend wanted my help after upgrading his Pop!_OS to 24.04, and 24.04 replaced GNOME with COSMIC. COSMIC had no RemoteDesktop portal (and still doesn't have it), so we couldn't use RustDesk like we always did without him installing a GNOME session just for that.
I've been an i3 user for almost two decades, but eventually switched to Sway. To this day there's no InputCapture portal, so I can't use Synergy with Sway, forcing me to switch back to i3 while I'm working.
It's been over 10 years of things like that. There's always SOMETHING missing.
Screenshots are just completely broken. People always tell me to use other apps like flameshot but IME it just doesn't work and I don't want to have to mess around so much to take screenshots.
I'm still using Wayland because it's what came with my distro (EndeavourOS, GNOME), but it's really strange how it came broken out of the box.
Hmm, you mention in the README that it only works in a privileged container. This of course negates the security benefits Wayland supposedly has over X11, so it doesn't seem ideal.
My desktop is a bit long in the tooth (22.04), but I've long given up on trying to screen shot or screen share from Wayland. I have my Macbook sitting next to it and use it for those things, where it works basically flawlessly.
Kind of waiting for 26.04 to upgrade at this point, but I'm not really expecting any of this to be better yet.
edit: If I had it to do over again, I wouldn't have gone Wayland at 22.04.
Window positioning? You cannot position the window, you cannot send a hint, nothing? So my pop-up with GTK4 will randomly be placed somewhere, anywhere, without any control. OK, GTK4 went further and also removed popups without a parent, so you hack around that with an invisible anchor window and then write platform-specific code for sane platforms that CAN, of course, move the window. And let's not talk about window icons that you have to put somewhere on the file system.
Have you considered that someone might want to make a compositor where each window is projected onto a facet of a hypercube and windows must be placed in 4 dimensions? These are important use cases we should support; we should make cross-platform software as difficult as possible to develop for Linux by removing features that have been standard on desktop operating systems for decades.
I must correct you! Wayland has not and indeed cannot remove features because Wayland is a “protocol”. It is the compositors that are removing features.
This dilution of responsibility should make you feel much better.
It's not technically behind on window positioning. Rather, it was a deliberate choice not to support it. You can very reasonably object to that, but it is sorta a necessary measure to prevent clickjacking.
And there are common-sense mitigations: if a new program I've never seen before drops an actionable control under my cursor, maybe just default to not immediately accepting the next input to it, so I have a chance to see it.
I mean, you can create alternate APIs that would work for the pop-up use case: you could have a command to create a new window positioned relative to the current window’s coordinate space.
That limited capability still has a risk of denial attacks (just throwing up pop-ups that extend beyond the current window’s boundaries), but those can be mitigated in a number of ways (limit the new window’s boundaries to the current window’s, or just limit how many windows can be opened, etc.).
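A minimal sketch of what that could look like, purely hypothetical (none of these names are real Wayland requests; actual Wayland popups are constrained through xdg_positioner, which is in a similar spirit): place the child in the parent's coordinate space, then clamp it to the parent's bounds.

```python
# Hypothetical parent-relative placement request; illustration only.
from dataclasses import dataclass


@dataclass
class Rect:
    x: int
    y: int
    width: int
    height: int


def place_child(parent: Rect, rel_x: int, rel_y: int, width: int, height: int) -> Rect:
    """Place a child window at (rel_x, rel_y) in the parent's coordinate space,
    clamped so it can never extend past the parent's bounds (the mitigation
    suggested above)."""
    width = min(width, parent.width)
    height = min(height, parent.height)
    x = parent.x + max(0, min(rel_x, parent.width - width))
    y = parent.y + max(0, min(rel_y, parent.height - height))
    return Rect(x, y, width, height)


# A popup requested half off the parent's right edge gets pulled back inside it.
print(place_child(Rect(100, 100, 800, 600), rel_x=750, rel_y=50, width=200, height=150))
```

A compositor could also cap how many such children a client may have open at once, which covers the pop-up-spam concern.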
BS, Windows and macOS can't even do proper window management for a start, and it just goes downhill from there. You can perhaps install various weird third-party things, but it does not come with it by default.
If you took people who absolutely never tried any computing, and gave them macos, windows, and for example Plasma, they would NOT consider windows or macos to be ready for the desktop. If you go 15 years back, even way more so.
Even in the early 2000s, Windows was so hilariously crappy that you had to make floppy disks to even get to install the thing. If PCs didn't come preloaded with Windows, regular users would never ever be able to install it, versus the relative ease with which a typical Linux distribution installed. This is also one of the big reasons that when their Windows slowed down from being a piece of shit with 1000000 toolbars, people threw it out and bought a new one, despite the fact that a reinstall would have solved it.
Windows in early 2000s didn't even detect your early 2000s SATA drive
Windows in early 2023 didn't even detect the network card it needed to download network card drivers. After changing mobos I needed to boot into linux to download network drivers for windows...
Windows in early 2025 still uses SCSI emulation to talk to NVMe, and only now has the server edition got a proper driver
Windows in early 2025 still needs virtio driver injection to boot properly as a VM without IDE emulation
"Drivers working out of the box" were never Windows' strong suit
> Windows in early 2025 still uses SCSI emulation to talk to NVMe, and only now has the server edition got a proper driver
You can enable this in Win 11 25H2. I have it enabled on my box. Doesn't seem to make that much of a difference, so it's more or less a moot point that it has been using SCSI emulation.
> Windows in early 2025 still needs virtio driver injection to boot properly as a VM without IDE emulation
> Windows in the early 2000s installed just fine without a floppy directly from CD or PXE booting.
When was it that SATA became the norm? I'm thinking circa 2001-ish, and what was the latest Windows then? I'm thinking Windows XP. Let's try to remember: did Windows XP include SATA drivers on the installation medium? Oh wait, it didn't. There wasn't even AHCI at the time, and Windows XP didn't include a single SATA driver for any of the chipsets of the day.
> A Window Manager and Window Server don't come by default with Linux... It's always an install-time option on the major distros.
Desktop distributions generally come with a desktop environment selected by default, or prompt you to choose between a few. One feature that has been there since more or less forever is Alt + left/right mouse click to move/resize windows, which is significantly better than hunting for the title bar or corners. For an operating system called "Windows", it's pretty hilarious that it has the worst window management of them all, don't you think?
> If you took people who absolutely never tried any computing, and gave them macos, windows, and for example Plasma, they would NOT consider windows or macos to be ready for the desktop.
There's some truth to this. I've been installing fresh Windows 11s on family computers this holiday season, and good lord is it difficult to use.
The number of tweaks I had to configure to keep actively hostile programs from ravaging disk reads/writes (HDD pain), freezing and crashing, or throwing invasive popups was absurd.
As someone who came from Windows, and has used Linux as my primary OS for 15 years, and MacOS here and there (cos work provided laptop), I can tell you that Linux was not ready for prime time 15 years ago. Today, I feel it is, but definitely not 15 years ago.
By prime time I mean being comfortable enough to install it for a non-technical user. Even during Ubuntu's Unity days it didn't feel like I could install it on a computer for my parents or siblings to use as a daily driver.
My parents did fine with Linux. My mom still does; it's certainly less maintenance effort from me than Windows would require.
It has been fine for non-technical users since at least early GNOME 2, if you're ready to help them set up and maintain it. Semi-technical users (Windows power users, gamers, &c: people who like to install and configure things, but fear the deep dark abyss of the terminal) were and remain more problematic.
The Unity days were the nadir of Linux desktop UX: it was when GNOME 2 was gone and 3 was not yet there. Still better than contemporaneous Windows 8, though.
I can bet there's no OS that is easy to install for a non-technical user. And that starts from booting the installation media. Give someone an OS with their software already installed and they will use whatever OS that is.
People are always task-oriented, not tool-oriented, unless they're nerds.
I had Kubuntu installed on my grandfather's computer for a year. I ended up replacing it with Windows because my aunt likes to install stuff on it. But my grandfather was happy with it. He only needed a working web browser and a program to use the TV tuner.
15 years back, people were given Windows, macOS, and Linux, and they voted on which OSes were ready for the desktop and which were not. The only BS is your inflammatory contribution to this topic.
Nope. Macs were expensive machines that games did not run on, and Linux was pushed by almost no one.
It was not a war over "which desktop is easier to use"; it was "which system can run the stuff I need". And if "the need" was "video games and office stuff", your only choice was Windows.
They were not. They purchased what was in the stores, which was only Windows. All the way from the first Windows to Windows XP it was the biggest pile of shit imaginable. The average user wouldn't even have half a chance of installing it, and certainly couldn't use it with any kind of reasonableness; it was a giant mess, it was just the mess people were used to. Most people would throw out their computer and buy a new one when Windows became slow, because of course it gradually becomes slower, makes perfect sense, no?
KDE from 15 years back was HUGELY better than Windows at the time, and frankly, also than Windows now.
Windows is reasonably OK, but MacOS' window management has always been really terrible.
Just think through the many different iterations over the years of what the green button on the deco does, which still isn't working consistently, same as double-clicking the title bar. Not to mention that whatever the Maximize-alike is that you can set title bar double-click to (the options being Zoom and Fill, buried in settings somewhere) is different from dragging the title bar against the top of the screen and choosing a single tile. Which is different from Control-clicking the green button. Maybe. It depends on the app.
What a mess.
Both of them miss (without add-ons) niche convenience features I cherish, such as the ability to pin arbitrary windows on top, but at least the basics in Windows work alright, and moreover predictably and reliably. Window management in MacOS just feels neglected and broken.
There may be many other ways in which MacOS shines as a desktop OS, and certainly in terms of display server tech it has innovated by going compositing first, but the window manager is bizarrely bad.
There is at least one area where both MacOS and Windows suck: handling window focus. MacOS regularly has trouble tracking focus across multiple monitors and multi-window apps, making it unusable with keyboard only. And Windows just loves to steal focus at the most inappropriate moments.
>> Windows and macOS can't even do proper window management for a start
> Well they certainly manage them better than x11 and wayland.
X11 doesn't manage windows. You'd know this if you used it, and if you've used it, you'd know why some consider the window management on Windows and MacOS very primitive.
> X11 doesn't manage windows. You'd know this if you used it, and if you've used it, you'd know why some consider the window management on Windows and MacOS very primitive.
Sure. Windows and macos are also fallible. But there has never been a project that competes with these two brands that can boast a similar commitment to stability and usability.
I don't use a Mac, but have you ever used Windows?
I mean, maybe you have, but if you are not fussy then at worst MacOS is quirky and Windows and Linux are identical and merely have different icons.
If you pay a little bit of attention you will notice that on linux things seem more flexible and intuitive.
If you are very finicky, there is nothing that comes close to X11 window managers when it comes to window management flexibility, innovation and power.
Windows allows you to launch applications from a menu or via search. You can switch between windows with a mouse or keyboard shortcuts. Windows can either be floating, arranged in pseudo-tiled layers, or full screen. KDE can pretty much do the same under Wayland. Ditto for Gnome under Wayland, albeit to a lesser degree. That covers the bases for most people.
X11 window managers were a mixed bag. While there were a few standouts, most of the variation was in the degree to which they could be configured and how they were configured. There may be fewer compositors for Wayland because of the difficulty in developing them, but the ones that do exist do stand out.
> I don't use a Mac, but have you ever used Windows?
I have
> I mean, maybe you have, but if you are not fussy then at worst MacOS is quirky and Windows and Linux are identical and merely have different icons.
Neither have keybindings that make any sense. The other failures are secondary
> If you pay a little bit of attention you will notice that on linux things seem more flexible and intuitive.
Only for windows refugees that have never used Mac OSX
> If you are very finicky, there is nothing that comes close to X11 window managers when it comes to window management flexibility, innovation and power.
Unless you want to copy and paste, or have consistent key bindings cross applications, or take screenshots. Sure
I can agree on Windows, but there is no such thing as "keybindings that don't make sense" on a proper Linux WM, given that you can literally make up any keybindings you want. I mean this strictly from a window management perspective; yes, applications running in those windows often have their own idea of what good UX is, and this clashes. That's just a trade-off of Linux, and to a lesser extent Windows, not being complete walled gardens.
> that have never used Mac OSX
I have _used_ Mac OSX. It was and continues to be a confusing experience every time. I'm not saying that this would be the case if I bothered to learn it, but in all the times I have used it, I have failed to see any feature which would make me want to switch to it over i3 or which I feel like is missing in i3. Really it doesn't seem like there is any way of making it act remotely close to i3. Tiling as an option on top of whatever Mac OSX has is just as appealing to me as tiling on top of what Windows has.
> Unless you want to copy and paste, or have consistent key bindings cross applications, or take screenshots. Sure
I've never had copy and paste fail on Linux. The only issues I've had are with more modern applications not implementing the selection properly, which is a feature you don't have on Windows in the first place. No idea about Macs.
Screenshots have always worked and will continue to work (the way I want them to) because I can, as mentioned, bind any key to any action.
I use Firefox personally; where do people who care about privacy go? For those of you who’ve already given up on Firefox (I can understand why...), where did you go?
I'm also still a FF user, but I'm eyeing Waterfox [1] and Floorp [2], both FF forks. Waterfox has the stronger privacy focus of the two, but Floorp doesn't strike me as being any less private than vanilla FF.
I'm surprised nobody has mentioned Librewolf. It has sensible defaults and just works. I barely care about the privacy aspects; I just don't want to touch the settings.
> If an individual site took on the infra challenges themselves, would they achieve better? I don’t think so.
The point is that it doesn’t matter. A single site going down has a very small chance of impacting a large number of users. Cloudflare going down breaks an appreciable portion of the internet.
If Jim’s Big Blog only maintains 95% uptime, most people won’t care. If BofA were at 95%.. actually same. Most of the world aren’t BofA customers.
I'm not sure I follow the argument. If literally every individual site had an uncorrelated 99% uptime, that's still less available than a centralized 99.9% uptime. The "entire Internet" is much less available in the former setup.
It's like saying that Chipotle having X% chance of tainted food is worse than local burrito places having 2*X% chance of tainted food. It's true in the sense that each individual event affects more people, but if you removed that Chipotle and replaced it with all local places, the total amount of illness would still be strictly higher; it's just tons of small events that are harder to write news articles about.
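To put rough numbers on the two failure models (these are just the hypothetical availabilities from the thread, not measurements):

```python
# Toy comparison: many independent sites at 99% vs. everything behind one
# provider at 99.9%. Availability figures are the thread's hypotheticals.
HOURS_PER_YEAR = 24 * 365
N_SITES = 10_000  # arbitrary illustration size

indep_down = N_SITES * 0.01 * HOURS_PER_YEAR     # each site down ~1% of the time
central_down = N_SITES * 0.001 * HOURS_PER_YEAR  # all sites down together 0.1% of the time

print(f"independent 99%  : {indep_down:,.0f} site-hours of downtime per year")
print(f"centralized 99.9%: {central_down:,.0f} site-hours of downtime per year")
```

Total unavailability is ten times lower in the centralized case; the difference is that it arrives as a few incidents where everything is down at once, which is the visible, newsworthy failure mode.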
No, it's like saying that if one single point of failure in a global food supply chain fails, nobody's going to eat today. That's in contrast to a supplier failing to supply a local food truck today: its customers will just have to go to the restaurant next door.
Ah ok, it is true that if there are a lot of fungible offerings, then worse but uncorrelated uptime can be more robust.
I think the question then is how much of the Internet has fungible alternatives such that uncorrelated downtime can meaningfully be less impactful. If you have a "to buy" shopping list, the existence of alternative shopping-list products doesn't help you: when the one you use is down it's just down; the substitutes cannot substitute on short notice. Obviously for some things there are clear substitutes, but actually I think "has fungible alternatives" is mostly correlated with "being down for 30 minutes doesn't matter"; it seems that the things where you want the one specific site are the ones where availability matters more.
The restaurant-next-door analogy, representing fungibility, isn't quite right. If BofA is closed and you want to do something in person with them, you can't go to an unrelated bank. If Spotify goes down for an hour, you're not likely to become a YT Music subscriber as a stopgap even though they're somewhat fungible. You'll simply wait, and the question is: can I shuffle my schedule instead of elongating it?
A better analogy is that if the restaurant you'll be going to is unexpectedly closed for a little while, you would do an after-dinner errand before dinner instead and then visit the restaurant a bit later. If the problem affects both businesses (like a utility power outage) you're stuck, but you can simply rearrange your schedule if problems are local and uncorrelated.
If a utility power outage is put on the table, then the analogy is almost everyone relying solely on the same grid, in contrast with being wired to a large set of independent providers, or even using their own local solar panels or whatever autonomous energy source.
Look at it as a user (or even operator) of one individual service that isn’t redundant or safety-critical: if choice A has 1/2 the downtime of choice B, you can’t justify choosing choice B by virtue of choice A’s instability.
The world dismantled landlines, phone booths, mail order catalogues, fax machines, tens of millions of storefronts, government offices, and entire industries in favor of the Internet.
So at this point no, the world can most definitely not “just live without the Internet”. And emergency services aren’t the only important thing that exists to the extent that anything else can just be handwaved away.
Terrible examples. GitHub and Google aren't just websites that one would place behind Cloudflare to try to improve their uptime (by caching, reducing load on the origin server, shielding from DDoS attacks). They're their own big tech companies, running complex services at a scale comparable to Cloudflare.
> If Cloudflare is at 99.95% then the world suffers
if the world suffers, those doing the "suffering" need to push that complaint/cost back up the chain - to the website operator, which would push the complaint/cost up to Cloudflare.
The fact that nobody did - or just verbally complained without action - is evidence that they didn't really suffer.
In the meantime, BofA saved the cost of getting their site to 99.95% uptime themselves (presumably Cloudflare does it more cheaply than they could individually). So the entire system became more efficient as a result.
> The fact that nobody did - or just verbally complained without action - is evidence that they didn't really suffer.
What an utterly clueless claim. You're literally posting in a thread with nearly 500 posts of people complaining. Taking action takes time. A business just doesn't switch cloud providers overnight.
I can tell you in no uncertain terms that there are businesses impacted by Cloudflare's frequent outages that have started work on shedding their dependency on Cloudflare's services. And it's not just because of these outages.
Risky. If I interview somebody and their resume is inflated or wrong, then at best you as a candidate have wasted my time reviewing your resume and scheduling interviews and whatnot, and now you're starting from a disadvantage because my first impression of you is one of being misled. We're a high-trust organization, and anything that casts doubt on your integrity puts you at a disadvantage. If I'm interviewing you, it's because I considered you against the torrent of other applicants, and I likely excluded someone more qualified than you based on your misrepresentation. That also doesn't work in your favor.
If you are a truly exceptional dev in your previous field and can convince me of that, along with an up-front and transparent explanation of why you lied to me as our first interaction, it is possible to overcome this. However, that is a pretty small pool of people.
Definitely fair but for the candidate considering this, it's a numbers game. They're just looking to get their foot in the door for a new career path. You care, and you're right to care, but there will be others who don't.
Then next time, it's no longer a lie and they can (in theory) get by on merit
Not trolling, asking as a regular user