I've been hit by that bug, although it only deletes mail AFAIK. There's a separate bug that completely corrupts the mail database on compaction, making Thunderbird lock up, including on every future launch.
It's a beautiful open source effort, but products where bugs like that languish for 10-20 years just aren't reliable. I need my mail client to be reliable.
Although in recent years it looks like it has turned from a bug report about one specific (never-resolved) issue into a more general troubleshooting session for data-loss issues.
Yes, FUD and long-held myths can be found anywhere. But speaking as a staff member and someone who has seen first-hand user reports, here is some straight shooting:
* there are rare cases of a profile either misplaced (exists but not correctly pointed to) or gone - it is something which I understand Firefox people are working on (Thunderbird uses the Firefox profile system)
* there are extremely rare reports where prefs.js is corrupted
* there are no compact failures in current versions - there are no open bug reports for recent versions, so it has been totally obliterated by a rewrite and subsequent fixes. Most user reports of compact failure are attributed to other causes of folder corruption
* folder corruption can occur as easily from external sources as from product bugs.
Also, beware of drawing broad conclusions about other users' experience from one's own. I have almost never experienced corruption - once in the last 10 years. But I am also using a Thunderbird profile that has gone through 5 different laptops and two different OSes, on daily builds, which is AMPLE opportunity to have had multiple catastrophic failures. But because I know other users' experiences, I consider myself lucky.
What I don't understand is why the AV1 pool isn't activating their MAD clause.
Part of the idea with AV1 was that, with the constituents also holding such a massive war chest of patents (plus big tech being richer than god), they would countersue and demolish anyone who tries to bully AV1 users. That would act as deterrence.
Where is all that might? Was it all just saber-rattling? Are they basically going to let the AVC / HEVC patent holders make fools of them?
You can set your ULA to something like "fddd:192:168::/48" and then on your vlan advertise a prefix hint of, say, "66". Now any device on that vlan will be addressable as "fddd:192:168:66::$host". For example, your gateway ('router') for that vlan would be "fddd:192:168:66::1".
If you want to get really wonky, you can script DHCPv6 to statically assign ULA IPv6 leases that match the IPv4 addresses, and expire them when the IPv4 lease expires, but as was said upthread, addressing hosts via IPv6 is the wrong way to go about it. On your lan, you really want to be using ".local" / ".lan" / ".home".
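If it helps, that layout can be sketched with Python's ipaddress module (the /48 prefix and the "66" vlan hint are just the example values from this comment, not anything standard):

```python
import ipaddress

# site-wide ULA prefix (example value from above)
site = ipaddress.ip_network("fddd:192:168::/48")

# the /64 for the vlan with prefix hint "66" (the fourth hex group)
vlan66 = ipaddress.ip_network("fddd:192:168:66::/64")
assert vlan66.subnet_of(site)  # it really is carved out of the /48

gateway = vlan66[1]            # host ::1 on that vlan
print(gateway)                 # fddd:192:168:66::1
```

The nice part is that the IPv4-ish octets ("192:168") survive verbatim inside the ULA, so the addresses stay human-guessable.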
> addressing hosts via IPv6 is the wrong way to go about it. On your lan, you really want to be doing ".local" / ".lan" / ".home".
.local is fine as long as all the daemons work correctly, but AFAIK there's no way to have SLAAC and put hosts in "normal" internal DNS, so .lan/.home/.internal are probably out.
> On your lan, you really want to be doing ".local" / ".lan" / ".home".
The "official" one is home.arpa, according to RFC 8375 [1]:
Users and devices within a home network (hereafter referred to as
"homenet") require devices and services to be identified by names
that are unique within the boundaries of the homenet [RFC7368]. The
naming mechanism needs to function without configuration from the
user. While it may be possible for a name to be delegated by an ISP,
homenets must also function in the absence of such a delegation.
This document reserves the name 'home.arpa.' to serve as the default
name for this purpose, with a scope limited to each individual
homenet.
It may be the most officially-recommended for home use, but .internal is also officially endorsed for "private-use applications" (deciding the semantics of these is left as an exercise to the reader): https://en.wikipedia.org/wiki/.internal
".home" and ".lan", along with a bunch of other historic TLDs, are on the reserved list and cannot be registered.
Call techy people pathologically lazy, but no one is going to switch to typing ".home.arpa" or ".internal". They should have stuck with the original proposal of making ".home" official instead of sticking ".arpa" behind it. That immediately doomed the RFC.
I do it by abusing the static SLAAC address. I have a set of weird VMs that are cloned from a reference image, so no fixed config is allowed. I should probably just have used DHCPv6, but I started by trying SLAAC, and the static addresses were stable enough for my purposes, so it stuck.
How does that work? I initially assumed you meant you just statically assigned machines to addresses, which I think would work courtesy of collision avoidance (and the massive address space), but I can't see how that would work for VMs. Are you just letting VMs pick an IP at random and then having them never change it, at which point you manually add them to DNS?
Pretty much. A given MAC address assigned in the VM config maps directly to a static SLAAC address (the ones they recommend you not use), and those pre-known SLAAC addresses are in DNS. Like I said, I should probably use DHCPv6, but it was a personal experiment in cloning a VM for a sandbox execution environment, and those SLAAC addresses were stable enough for that. Every time a VM gets cloned with the same MAC address, it ends up with the same IPv6 address. Works for me: don't have to faff around with DHCPv6, put it in DNS, time for a drink.
But the point is that that is the address you would put in DNS if you also wanted to use SLAAC. Most of the time, however, you will just set a manual address. And this was with OpenBSD, where when SLAAC is set up you get the SLAAC address plus a temporary address. I don't really know what Linux does. Might have to try now.
Clarification for others: with privacy extensions disabled, SLAAC'd IPv6 addresses are deterministically generated from the interface's MAC address (EUI-64). There's also an in-between: addresses that are stable per network, derived by hashing (RFC 7217).
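As a sketch of that deterministic mapping, here's a minimal EUI-64 derivation (per RFC 4291: flip the universal/local bit, splice ff:fe into the middle of the MAC). The MAC and prefix below are made-up example values:

```python
def eui64_interface_id(mac: str) -> str:
    """Derive the EUI-64 interface identifier that SLAAC uses
    when privacy extensions are disabled (RFC 4291, Appendix A)."""
    octets = [int(b, 16) for b in mac.split(":")]
    octets[0] ^= 0x02  # flip the universal/local bit of the first octet
    eui = octets[:3] + [0xFF, 0xFE] + octets[3:]  # insert ff:fe in the middle
    # pack pairs of bytes into the four 16-bit hex groups of the address
    return ":".join(f"{eui[i] << 8 | eui[i + 1]:x}" for i in range(0, 8, 2))

print(eui64_interface_id("00:11:22:33:44:55"))  # 211:22ff:fe33:4455
```

Append that to any /64 prefix (e.g. a ULA like fd00:1:2:3::/64) and you get the full address the host will SLAAC itself, which is why cloning a VM with the same MAC yields the same IPv6 address.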
I wonder if there are low-power Intel or AMD boards that accept DDR3. So many sticks of 2 / 4 / 8 GB DDR3 inside laptops going into recycling or landfills that would do perfectly fine for low-power purposes. Hell, performance for standard workloads scales with access latency, not bandwidth, and DDR3 sits nicely at CAS 8 at 1600 MHz and CAS 10 at 2133 MHz.
For a second I hoped you were gonna comment on how LLMs are going to rot our skillset and our brains, like some people already complaining they "have to think" when ChatGPT or Claude or Grok is down.
The other day I was doing some programming without an LSP, and I felt lost without it. I was very familiar with the APIs I was using, but I couldn't remember the method names off the top of my head, so I had to reference docs extensively. I am reliant on LSP-powered tab completions to be productive, and my "memorizing API methods" skill has atrophied. But I'm not worried about this having some kind of impact on my brain health because not having to memorize API methods leaves more room for other things.
It's possible some people offload too much to LLMs but personally, my brain is still doing a lot of work even when I'm "vibecoding".
Ironically this is one of my main use cases for LLMs
“Can you give me an example of how to read a video file using the Win32 API like it’s 2004?” - me trying to diagnose a windows game crashing under wine
Exactly. I feel this is the strongest use case. I can get personalized digests of documentation for exactly what I'm building.
On the other hand, there's people that generate tokens to feed into a token generator that generates tokens which feeds its tokens to two other token generators which both use the tokens to generate two different categories of tokens for different tasks so that their tokens can be used by a "manager" token generator which generates tokens to...
This really makes me think of A Deepness in the Sky by Vernor Vinge, a loose prequel to A Fire Upon the Deep and IMO actually the superior story. It's set in the far future of humanity.
In part of it, one group tries to take control of a huge ship from another group, partly by trying to bypass all the cybersecurity. But in those far-future days you don't interface with all the aeons of layered command protocols anymore; you just query an AI who does it for you. So this group has a few tech guys who attempt the bypass by using the old command protocols directly (in a way, the same thing as the iOS exploit that used a vulnerability in a PostScript font library from the 90s).
Imagine being used to LLM prompting + responses, and suddenly you have to deal with something like
sed '/^```/d;/^#/d;s/^[[:space:]]*//;/^$/d' | head -1); [[ $r ]]
and generally obtuse terminal output and man pages.
:)
(offtopic: name your variables; don't do "local x c r a;". Readability is king, and a few hundred thousand years from now some poor Qeng Ho fellow might thank his lucky stars you did.)
I'm glad you guys at least went with Cloudflare. LMArena went with Google's reCAPTCHA, which is plain evil. It'll often gaslight you and pretend you failed a captcha as simple as identifying fire hydrants. Another lovely trick is asking you to identify bridges or buses when in actuality it also wants you to identify viaducts or semi-trucks.
I'm not completely sure I would call Apple the accessibility king. Its UI gets worse with each release: modal dialogs with, at times, no keyboard options to make a choice in the window, etc.
Eh, no. In my experience working in web accessibility, most visually impaired individuals seem to prefer Windows, apparently because it has the most mature set of accessibility/screen-reader tools around, largely thanks to how long Windows has been around and how much of a requirement it is in enterprise environments.
As far as I know, accessibility has been built into macOS since the early days, and with great care. That then propagated to applications built for macOS and, later on, to iOS. iOS is rather magnificent for (visually) impaired people.
In contrast, Windows has had its accessibility features bolted on, and the best ones are third-party which makes it even more bolted-on. And then you have twenty different frameworks to make Windows applications, all with varying (but usually mediocre) levels of accessibility support built in.
Reminds me of what Frank Sobotka says in The Wire: "We used to make shit in this country, build shit here. Now all we do is put our hand in the next guy's pocket."