I see these posts a lot, but this really does not match my experience. I find I run into many more bugs in kde than in gnome or other desktop environments. This bug made kde absolutely unusable for me: https://bugs.kde.org/show_bug.cgi?id=365255
(I think this bug is still present in X11, but I've moved on to Wayland.)
The other bug I run into constantly is that "exposé" sometimes makes all the windows invisible. The only fix is logging out and logging in again. I've seen this across a number of different distros. Gnome is mostly boring and just works for me.
I have the same problem with KDE, which is why I use gnome. I've tried a lot of the mainstream KDE distros like KDE neon, Kubuntu and Fedora KDE, and a few times I've had it break within a day of installing. Most of the time SDDM was at fault. I should switch it out, but I can't be bothered, especially with gnome working flawlessly.
I wanted to use Wayland, but with both NVIDIA and AMD I would get this obnoxious display bug that would make all my open windows black, so I'm stuck with X11. For whatever reason this doesn't happen with Wayland+Intel.
I used to have this problem with NVIDIA, but I don't think I've seen it since upgrading to the latest major version or two (41/42) of Fedora. That bug might have been fixed in recent versions of drivers/Wayland/desktop environments.
I've been using Wayland KDE on my gaming machine recently, and it somehow made my cursor movement stutter. Switching to KDE X11 or Gnome Wayland solved the issue.
I hope they address this kind of thing prior to making it a default.
I use Wayland KDE but have shortcuts for Steam to launch within Gamescope [].
# normal steam with experimental HDR enabled, native resolution for my laptop and 165hz refresh rate.
alias steam 'steam-run gamescope -f --hdr-enabled -e -W 2560 -H 1440 -r 165 -- steam'
# full screen gamepad ui
alias steam_gamepadui 'steam-run gamescope -f --hdr-enabled -e -W 2560 -H 1440 -r 165 -- steam -gamepadui'
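Those are csh/fish-style aliases, by the way; in bash or zsh the first one would look like:

  # bash/zsh equivalent
  alias steam='steam-run gamescope -f --hdr-enabled -e -W 2560 -H 1440 -r 165 -- steam'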
Computer is an Asus G513qy, a full-AMD laptop with discrete graphics. Janky thermal design, no Thunderbolt... so more like a stationary desktop. Wouldn't buy something like this again, but for now it does the job and runs great with any modern distro.
I've been using Wayland + KDE (currently Plasma 5.27.8) for a very long time now on my machine with an Nvidia GPU (first a 1080 Ti, then a 4090). There are annoying bugs, but they're not showstoppers. I'm not someone who whines about problems like these, especially because I'm at the forefront of the Linux desktop, on a known-bad combination, so there are bound to be bugs :P
For example, a persistent bug that hasn't gone away in all this time is flaky wake from sleep. Sometimes, one of my monitors breaks in such a way that it drops down to 640x480 res and can't be changed back unless I do a list of different rituals. Another bug is that KDE's "Night Color" feature doesn't work: whenever it turns on, it completely freezes the display every few minutes (although unfreezing it is just a matter of pressing the Meta key, so it's not a big deal). I keep sleep and Night Color on, even though they're broken, in the hopes that I'll one day wake up to an update that fixes them. I'm 99% sure those are Nvidia driver bugs, and Nvidia has been working to improve their Wayland support recently (for example, there's an upcoming Night Light fix that I'm excited for: https://www.omgubuntu.co.uk/2023/10/gnome-night-light-nvidia...). Being able to follow progress like this is fun for me.
I'm sure there are some other issues that I'm not remembering, but overall the desktop is completely 100% usable and reliable (for both working and gaming). My only other machine is a laptop, and that too is running KDE+Wayland (though no Nvidia). I have not switched back to X11 once in the months (year?) since I switched to Wayland, and have not missed anything.
EDIT: also, I should mention that I'm on Fedora Kinoite. I feel like a lot of the problems people have with KDE/Wayland/Nvidia come down to bad configuration somewhere. If you're on a similar machine, I recommend you try switching to Kinoite, since it's designed to just work out of the box and is pretty much impossible to break.
All of that works for me on AMDGPU, so it's likely not a KDE bug.
Same here. I have been using Wayland with sway and KDE, depending on the machine (some with Intel GPUs, but no Nvidia), for a few years now and have nothing to complain about (Fedora, Arch and Ubuntu).
This might very well be an Nvidia problem; their driver comes with its own suspend functions, which replace the default ones during install (it usually tells you on the dnf install screen).
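If you want to check whether that's happening on your machine, the driver on most distros ships systemd units for this; something like:

  # see which NVIDIA suspend/resume hooks are installed and enabled
  systemctl list-unit-files | grep -i nvidia
  systemctl status nvidia-suspend.service nvidia-resume.service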
> 80% of executives say they would have approached their company’s return-to-office strategy differently if they had access to workplace data to inform their decision-making.
Envoy's not exactly an unbiased source here, since they sell software to collect this data.
So what this really says is that 80% of executives would have used a different approach if they had data about the subject. Which... is pretty lackluster? If you don't change your strategy in response to new data, you're bad at your job.
It says they would change their approach. It does not say they would change their position.
So while some may have been more open to letting employees work from home, others may have just decided they would've used different arguments against it, now that they know the ones they used didn't hold up (as was widely forecast).
I know for a fact that many HAD data on employee willingness and on productivity, and it didn't lead to 3-days-in-the-office RTO. That followed interests in retail value and government taxes tied to "reviving" downtown, and similar ideas of centralized economic growth (which only benefits the few who have investments in those dense office zones, and actually damages decentralization, horizontal growth and environmental improvements).
Interesting. While 80% said they would approach it differently, 66% felt they would have improved layout and space on-site and 52% would have improved amenities. That tells a different story.
I gave it a try the other day using copilot.lua, and it made the whole editor unusably slow. It somehow caused syntax highlighting to "lag" for tens of seconds every time I made any sort of edit.
Maybe I need to be using the official plugin, and maybe I need to disable LSP based syntax highlighting, but that makes me wonder what it's doing.
What languages and LSPs are you using? I've never experienced this with Python (pyright), Go (gopls) or Rust (rust-analyzer). I'm using tree-sitter for syntax highlighting. The only special thing is that I'm using lazy.nvim (the plugin manager) to load copilot.lua on the InsertEnter event.
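For reference, the relevant bit of my config is just a lazy.nvim spec along these lines (paraphrased, options trimmed):

  -- plugins.lua: load copilot.lua only once insert mode is entered
  require("lazy").setup({
    {
      "zbirenbaum/copilot.lua",
      event = "InsertEnter",
      opts = {},
    },
  })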
I am using typescript with typescript-language-server. I'll give the lazy stuff a try, but I am not sure why this would make the editor more responsive outside of the initial load.
In my experience living in SF, these cars are much safer for pedestrians than the typical drivers in my neighborhood. Most human drivers around here don't even bother to stop at stop signs, and instead just slow slightly. With the Cruise and Waymo cars, I feel like the risk is a bit lower when I'm on my bike or walking.
I have witnessed a Cruise car stopping in the middle of the road when faced with an oncoming emergency vehicle, so I totally buy that they aren't ready for prime time yet.
Honestly, I'd prefer if we prioritized enforcing existing traffic laws for regular vehicles.
I live in "the outside lands" [1] so I see them all the time. I think they do a lot of testing out here. I can only think of one time that a Cruise acted erratically and put me on guard. But honestly they drive so gradually and predictably that I am confident I could have got out of the way. I don't recall a single sketchy moment with a Waymo, but I'm a Googler so maybe we just chalk that up as incentivized blindspot and render that as inadmissible in the court of HN.
On the other hand, every day I see human drivers needlessly endangering themselves and others. Every. Single. Day.
Maybe the equation changes with more robo-drivers and fewer human drivers. I would take the other side of that bet any day.
I moved here from Europe and I am absolutely shocked at how reckless the drivers are. In the old country I think I saw a car run a red light once in 20 years of driving. Here it seems I see it every other day.
> The only time you need to consider a client-server setup is: Where you have multiple physical machines accessing the same database server over a network. In this setup you have a shared database between multiple clients.
This caveat covers "most cases". If there's only a single machine, then any data stored is not durable.
Additionally, to my knowledge SQLite doesn't have a solution for durability other than asynchronous replication. Arguably, most applications can tolerate this, but I'd rather just use MySQL with semi-sync replication, and not have to think through all of the edge cases about data loss.
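(For context, semi-sync on the MySQL side is just a plugin plus a couple of globals; these are the MySQL 8.0.26+ source/replica names, older versions use the master/slave equivalents:)

  # on the source: COMMIT waits for at least one replica to ack the binlog write
  mysql -e "INSTALL PLUGIN rpl_semi_sync_source SONAME 'semisync_source.so';"
  mysql -e "SET GLOBAL rpl_semi_sync_source_enabled = 1;"
  # on each replica
  mysql -e "INSTALL PLUGIN rpl_semi_sync_replica SONAME 'semisync_replica.so';"
  mysql -e "SET GLOBAL rpl_semi_sync_replica_enabled = 1;"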
That's not what the parent means by durability, they mean having your data survive any one of your machines being instantly nuked from orbit at the most inconvenient possible time.
Just having sync replication is enough, doesn't have to be fancy like semi-sync.
'durability' already has a well-established, rigorously-defined meaning in this context, which is confusingly similar to pitr (point-in-time recovery) but definitely not the same thing
the downside of sync replication, as i understand it, is that although your data will survive any one of your machines being instantly nuked from orbit, your entire service will go down; semi-sync avoids this problem
But they’re using the other well-established meaning of durability a la how AWS and others describe their storage platforms. It’s pretty much the same thing but taken at whole system level. On that level an ACID database is as durable as the underlying storage medium which is sadly not very durable.
well, it's sort of arbitrary that the standard definition of durability requires your data to survive machine checks and kernel panics and power outages but not disk failures, isn't it
especially since nowadays in many data centers disk failures are far more common
The OP's point is that the single-process ACID semantics of SQLite don't provide a durability guarantee that includes replication.
Other databases have a commit level that makes sure logs have been shipped.
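In Postgres, for instance, that's synchronous_commit together with synchronous_standby_names; a sketch, with made-up standby names:

  # require at least one of the named standbys to confirm the WAL
  # flush before COMMIT returns
  psql -c "ALTER SYSTEM SET synchronous_standby_names = 'ANY 1 (replica1, replica2)';"
  psql -c "ALTER SYSTEM SET synchronous_commit = 'on';"
  psql -c "SELECT pg_reload_conf();"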
For me this is an edge case in just about everything except financial transactions (the performance penalty of distributed transactions is pretty high!) but it is correct to note it.
Sounds like IMS runs on Z system mainframes with redundant hot-swappable CPUs and memory. They pay IBM a lot of money for the illusion of a single reliable machine, when a different OS would manage it as a small cluster.
We economize by using racks of cheap, flaky commodity hardware, but we have to be ready for one computer to die by failing each application over to another.
Does this practically improve the situation? The odds of two servers breaking at the same time for the same reasons seem very high. I actually can't think of a single example where the secondary server would keep running.
Regression via a code or dependency update? Full disk? DNS is down? Too much load? All of these would bring down both servers in quick succession.
I guess something like a "once every 2 days" race condition could buy you some time if you had a 2nd server. But that's not a common error
Zero downtime upgrades, hardware fault, aws decides that specific instance needs to die. It also doesn't let you cheat statelessness very easily, so it's easier to scale horizontally.
Fair enough I guess. I don’t think you need two servers to do zero downtime upgrades. And the other issues are, imo, beyond the 0.99 uptime threshold that most services realistically have when you add in breakage due to upgrades.
I like your statelessness point. I suppose in your view it's better to have a concentrated stateful core with stateless servers, as opposed to just one stateful instance. Two instances mean you can't easily store foo in memory and hope the server doesn't die until foo is no longer needed there. The counterpoint is that the extra layer of indirection is 10x slower, and horizontal scaling won't be needed as much if you don't pay that price in the first place; but you are right, the temptation to store foo in memory would still be there. The thing is, if one machine can scale, putting foo in memory isn't actually bad. It's only when things don't scale that it's bad.
> I don’t think you need two servers to do zero downtime upgrades
Absolutely not, and I can't understand why I keep hearing this argument. Doing zero-downtime upgrades on a single server has been simple since basically forever: run another process on another port, change the config, reload the front balancer gracefully, and there you go.
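A rough sketch with nginx as the front balancer (app name, paths, ports and $OLD_APP_PID all made up):

  # start the new version next to the old one
  ./myapp --port 8081 &
  # repoint the upstream at the new port
  sed -i 's/127.0.0.1:8080/127.0.0.1:8081/' /etc/nginx/conf.d/myapp.conf
  # graceful reload: old workers finish in-flight requests, new ones take over
  nginx -s reload
  # once drained, stop the old process
  kill -TERM "$OLD_APP_PID"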
We use a 3-node MSSQL cluster, and it happens all the time that the primary gets into a bad state (100% CPU, high latency, etc.) and simply failing over to another instance fully recovers.
It could be bad hardware, it could be a bad query (left dangling/canceled on the old instance), could be bad statistics, locks, disk fragmentation, etc.
I’m with you, but you could also make the case that most small web service businesses still run a single Postgres instance with no redundancy, just backups. So you have a single point of failure. You can get quite decent uptime out of a single VPS.
This project comes to mind https://github.com/rqlite/rqlite but I've never used it, and I'm not sure it would still count as the "pure sqlite" the op advocated.
https://litestream.io/ does streaming replication to S3 (or a similar service). With this, you probably have better data durability than with a small database cluster.
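Setup is basically one long-running command per database (path and bucket made up):

  # continuously ship snapshots + WAL segments to S3
  litestream replicate /var/lib/app/db.sqlite s3://my-bucket/db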
My understanding is that it provides asynchronous replication, so you'd still lose some data if you lose a machine. This is documented here https://litestream.io/tips/#data-loss-window
I guess this is an interview level question.
1) Drain connections from your instance. Stop taking new connections and let all existing requests complete or time out. This could be done by removing it from a load balancer or DNS. This ensures your litestream backup is "up-to-date".
2) Bring up the new deployment; it restores via litestream (rough shell sketch below). When the restore is complete, register it with the load balancer (if you are using one) or DNS.
3) Delete the old instance.
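In shell terms, step 2 is roughly (path, bucket and service name made up):

  # pull down the latest snapshot and replay the WAL from S3
  litestream restore -o /var/lib/app/db.sqlite s3://my-bucket/db
  # then start the app and re-register it with the LB / DNS
  systemctl start myapp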
Yes... and all I see here is downtime. How do we do this without service failures? With a Postgres db, you can spin up and down EC2 instances to your heart's content. Every 1-4 years, you can upgrade your db by using a replica with no downtime.
Depends exactly what you mean by "durable". One machine with RAID10 can be pretty durable and solves the most common problems with disk issues; other risks can be managed too.
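e.g. Linux software RAID10 across four disks is a single mdadm invocation (device names assumed):

  # stripes across two mirrored pairs; survives one dead disk per pair
  mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde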
Ah, that brings back memories. Had 2 RAID 10 MySQL servers run for a decade without rebooting. One had an app db, the other a stats db, and the two replicated to each other.
Spinning disks and all, I was terrified to reboot them and have the boot disk fail (which was not on RAID).
The main disks failed once or twice, which slowed the servers down considerably until the RAID rebuild finished. Very nervous time.
Durable in the database context refers to durability of transactions, i.e. your database does not lose a record of committed transactions. A good example is an ATM withdrawal.
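In SQLite that guarantee hinges on fsync behavior, which is controlled per connection by a pragma (bank.db here is just the ATM example's hypothetical database):

  # FULL (the default) fsyncs before COMMIT returns, so a power cut
  # right after a successful COMMIT can't lose the transaction;
  # OFF trades that guarantee away for speed
  sqlite3 bank.db "PRAGMA synchronous = FULL;"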
I gave Helix a try the other day and really enjoyed the "feel" of things. I'm also generally impressed with their philosophy / approach, after having spent way too much time tuning my Neovim configuration. However, I ran into several blockers that keep it from being my daily driver, so I thought I'd share them:
1. It's generally janky whenever the language server is slow, in ways that Neovim is not. The most visible one of these is that completion does not take into account the characters that were typed between when the completion request started and when the completion was selected. For example, if I type "abc", and the LSP suggests "abcdef" after having only observed the "a" keystroke, Helix will end up with "abcdefbc" as the final output. This is annoying.
3. Helix doesn't detect the correct root for my language servers in my environment, and it doesn't seem to have any way to override the detected root. This means that I have to start the editor in the directory that I intend to use as the root. This gets really problematic when I'm working across several languages.
Hey, thanks for creating Helix. IMO it is a big improvement over Neovim, and it is so much easier to get started with. I was really sceptical at first, but it's now my main editor.
FYI your github link in your opencollective [0] profile seems to be wrong?
I've been using a lot of TypeScript lately. The IDE support in VS Code/tsserver is very complete, but it is NOT fast. As an example, it frequently takes multiple minutes to use find references. Rg or git grep with fzf are both hundreds of times faster, and are easier for me to navigate. (Obviously language-aware find references is much more powerful, but with carefully named variables it's acceptable to just use rg.) Sometimes even a simple "go to definition" results in a 45-second hang. (This is on a 2019 MacBook Pro.)
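(Concretely, the kind of thing I mean, with a made-up identifier:)

  # poor man's find-references: grep the symbol, fuzzy-filter the hits
  rg --line-number --column 'myVariable' | fzf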
I've also used IntelliJ IDEA with Java, and while it was much smoother than anything in the TypeScript ecosystem, it's definitely not faster than using grep.