I wonder why timelines aren't implemented as a hybrid gather-scatter, choosing the strategy depending on account popularity (a combination of fan-out to followers and a lazy fetch of popular followed accounts when a follower's timeline is served).
When you have a celebrity account, instead of fanning out every message to millions of followers' timelines, it would be cheaper to do nothing when the celebrity posts, and later, when serving each follower's timeline, fetch the celebrity's posts and merge them into the timeline. When millions of followers do that, it becomes a cheap read-only fetch from a hot cache.
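A rough sketch of what that hybrid could look like (the data model, the follower-count threshold, and all names here are made up for illustration; this is not how Bluesky actually stores timelines):

    use std::collections::HashMap;

    // Toy data model: every name and number below is invented.
    #[derive(Clone)]
    struct Post {
        author: String,
        created_at: u64,
    }

    #[derive(Default)]
    struct Store {
        follower_count: HashMap<String, usize>,
        followers: HashMap<String, Vec<String>>,      // author -> followers
        follows_celebs: HashMap<String, Vec<String>>, // user -> celebrities they follow
        timelines: HashMap<String, Vec<Post>>,        // pre-materialized per-user timelines
        author_feeds: HashMap<String, Vec<Post>>,     // per-author recent posts (hot cache)
    }

    const CELEB_THRESHOLD: usize = 100_000; // made-up cutoff

    // Write path: scatter to followers only for ordinary accounts.
    fn on_post(store: &mut Store, post: Post) {
        let is_celeb =
            store.follower_count.get(&post.author).copied().unwrap_or(0) >= CELEB_THRESHOLD;
        if !is_celeb {
            for follower in store.followers.get(&post.author).cloned().unwrap_or_default() {
                store.timelines.entry(follower).or_default().push(post.clone());
            }
        }
        // Celebrities cost nothing extra here; readers gather from this feed later.
        store.author_feeds.entry(post.author.clone()).or_default().push(post);
    }

    // Read path: start from the materialized timeline, then merge the
    // (hopefully few) celebrity feeds this user follows.
    fn read_timeline(store: &Store, user: &str) -> Vec<Post> {
        let mut items = store.timelines.get(user).cloned().unwrap_or_default();
        for celeb in store.follows_celebs.get(user).into_iter().flatten() {
            items.extend(store.author_feeds.get(celeb).cloned().unwrap_or_default());
        }
        items.sort_by_key(|p| std::cmp::Reverse(p.created_at));
        items
    }

The write path stays proportional to follower count only for ordinary accounts, while the read path only pays for the handful of celebrity feeds a given user follows.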
This is probably what we'll end up with in the long run. Things have been fast enough without it (aside from this issue) but there's a lot of low-hanging fruit for Timelines architecture updates. We're spread pretty thin from an engineering-hours standpoint atm so there's a lot of intense prioritization going on.
Just to be clear, you are a Bluesky engineer, right?
off-topic: how has Bluesky been dealing with the influx of new users in the aftermath of X's political/legal problems? Did you see an increase in toxicity around the network? And how have you (Bluesky moderation) been dealing with it?
I understand why some people vote for some parties and why they're "voting on inflation" or "the right to abortion", but I guess keeping checks and balances and democracy is the one value above ALL for me.
In the span of human history, not a lot of countries and civilizations have lasted long; most were marked by constant instability and uncertainty about the future. We have a boring and imperfect political system created by our founding fathers, but at least it's been stable for nearly 250 years. A lot of people have tried standing up their own political system… most fail and everyone suffers. Even the founding fathers completely failed on their first attempt.
I know times are tough now but, in the context of history, they could be much worse, and I'd rather not lose what good we currently do have.
We may have arguably recovered from it, but we rather famously did not get 250 years without the union violently fragmenting. (Our best record on that is right around 160, currently.)
While it's true we came close during the Civil War, we still decided to keep the same system of government. In the end, while the Civil War did result in some constitutional crises, the root of the problem was more that one half of the country completely disagreed with the other half… I don't think any political system can really work with that level of division, and yet we kept the same one. Obviously the Civil War did very much bring the question of states' rights to the fore but, for better or worse, the founders were a little vague on that, so we can still keep most of the same system and squabble over the details for the rest of eternity…
Trump refusing to accept the 2020 election results should've been the line for many voters, but sadly it wasn't. And the potential crimes he and some of his allies may have committed while trying to overturn it will now never be prosecuted.
2024:
> More than 155 million people cast ballots in the 2024 presidential election. It's second only in U.S. history to the 2020 election. Turnout in 2024 represented 63.9% of eligible voters, the second-highest percentage in the last 100 years, according to the University of Florida Election Lab. The only year that beat it – again – was 2020 when universal mail-in voting was more widely available.
2020:
> More than 158 million votes were cast in the election
So 3 million Democrats suddenly decided not to go out and vote "to save democracy" against "fascism"?
The simpler and much more likely answer, my friend, is that people didn't vote due to a combination of disillusionment, assuming Kamala would win, and similar factors.
I saw many people close to me not bother voting because they didn’t enjoy Biden’s presidency, despite voting for him in 2020.
So, I find that FAR more likely as a reason than supposed election fraud.
I'm really confused how tech people shifted from "voting machines are inherently insecure" to simply ignoring the issue despite many political connections between Democrats and voting machine vendors. I'll stick with the results of my research into the matter. If you think you're well enough informed and that your sources actually care about the truth, let's agree to disagree.
This is one of the most investigated issues in American legal history. There was absolutely no indication of fraud. You've fallen for a conspiracy theory. It's now Pizzagate-tier.
(I still argue with Pizzagate adherents on a monthly basis. They think it's perfectly logical.)
Oh fully agreed. But there's a large contingent of folks, well represented here, who think it's inherently more intelligent to act like/be a centrist, that "both sides have something to offer," which isn't strictly untrue, but in practice, especially with American politics, just results in mealy-mouthed acceptance of pretty brutal status quos.
Like even left and right in terms of the mainstream here is nonsense. We don't have a left party at all, we have a conservative party, and we have an authoritarian fascist party. As a lefty none of my values are represented at all, I just get to vote each election for the conservative party that doesn't want my friends dead.
Yup. This is a well-trodden philosophical problem: the Paradox of Tolerance. Greater minds have concluded "to protect tolerance, one has to be intolerant of intolerance."
And, as always, bsky is a place of business - it is not a public venue. They can decide not to admit individuals who would threaten their business.
I have heard it much more aptly described as “enforcing the social contract”.
You agree to uphold the contract of tolerance with everyone that participates. If someone refuses to uphold the contract with others who do, then you have no obligation to uphold the contract with that individual.
Funny how you call the Trump administration fascist. (Theoretically it's anti-fascist, but it's still bad.)
Quoting from the description of the video, since that's what immediately rang a bell when you said trump===fascism:
> The liberal theory of the rise of Trumpism and its supposed fascistic features is inadequate in both effectively analysing and offering solutions to the present situation. Liberals often personalise or individualise people like Donald Trump and Elon Musk, casting them as deviations, as opposed to manifestations of class society. Class analysis suggests that fascism was a unique response to growing anti-capitalist organisations, socialist and/or anarchist, gaining prominence and posing threats to the economic base. The owning class required a mass movement which enveloped otherwise disillusioned people into a political project which had the collectivist, anti-free market appeal that socialist and anarchist organisations had, but nonetheless committed to solidifying and strengthening the economic base and profit motive.
>
> In modern America, no such anti-capitalist threat exists. Neoliberalism has created significant disillusionment with mainstream social and political institutions and systems, but this disillusionment hasn't been captured by anti-capitalist forces, but rather by the populist right. As such, the populist right doesn't need to give up the economic game, i.e. free markets, deregulation, privatisation, austerity, etc (with the exception of tariffs), but can purely rely on minorities as scapegoats in a constructed culture war, such as immigrants, 'wokeness', transgender people, etc. Therefore, capital doesn't need to be subordinated to the nation-state, as pursued by earlier fascist governments. Rather, in this 'inverted' fascism, capital takes over and exploits the state in a rather oligarchic manner.
I find communist analysis tiresome, especially when in this case the populist right under Trump seems to be motivated in part by anti-free market ideas. The communist kneejerk reaction to every single situation is "this can be explained by class analysis". It's them trying to shoehorn their pet theory into everything.
I've stood up machines for this before; I did not know they had a name. And I worked at the mouse company, where my parking spot was two over from J. Bieber's spot.
So now we have the Slashdot effect, the HN hug, and it's not Clarkson, it's... the Stephen Fry effect? Maybe it can be cross-discipline - there's a term for when lots of the UK turns their kettles on at the same time.
I should make a blog post to record all the ones I can remember.
Do you know the name of the problem or strategy used for solving the problem? I'd be interested in looking it up!
I own DDIA, but after a few chapters on how databases work behind the scenes, I begin to fall asleep. I have trouble understanding how to apply the knowledge to my work, but this seems like a useful thing with a clearer application.
> and later when serving each follower's timeline, fetch the celebrity's posts and merge them into the timeline
I think then you still have the 'weird user who follows hundreds of thousands of people' problem, just at read time instead of write time. It's unclear that this is _better_, though, yeah, caching might help. But if you follow every celeb on Bluesky (and I guarantee you this user exists) you'd be looking at fetching and merging _thousands_ of timelines (again, I suppose you could just throw up your hands and say "not doing that", and just skip most or all of the celebs for problem users).
Given the nature of the service, making reads predictably cheap and writes potentially expensive (which seems to be the way they've gone) seems like a defensible practice.
> I suppose you could just throw up your hands and say "not doing that", and just skip most or all of the celebs for problem users
Random sampling? It's not as though the user needs thousands of posts returned for a single fetch. Scrolling down and seeing some stuff that's not in chronological order seems like an acceptable tradeoff.
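For example, something along these lines could cap the read-time merge cost (a toy sketch; the cap and the rotating window are arbitrary stand-ins for real random sampling):

    // Merge at most MAX_MERGED celebrity feeds per request, rotating the
    // window across requests so everything shows up eventually.
    // All names and numbers are made up.
    const MAX_MERGED: usize = 50;

    fn celebs_to_merge(followed_celebs: &[String], request_counter: usize) -> &[String] {
        if followed_celebs.len() <= MAX_MERGED {
            return followed_celebs;
        }
        let start = (request_counter * MAX_MERGED) % followed_celebs.len();
        let end = (start + MAX_MERGED).min(followed_celebs.len());
        &followed_celebs[start..end]
    }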
To serve a user timeline in single-digit milliseconds, it is not practical for a data store to load each item from a different place. Even with an index, the index itself can be contiguous on disk, but the payload is scattered all over the place if you keep it in a single large table.
Instead, you can drastically improve performance if you are able to store the data for each timeline somewhat contiguously on disk.
Think of it as pre-rendering. Between pre-rendering and JIT collecting, pre-rendering means more work, but it's async, and it means the timeline is ready whenever a user requests it, giving a fast user experience.
(Although I don't understand the "non-celebrity" part of your comment -- the timeline contains (pointers to) posts from whoever someone follows, and doesn't care who those people are.)
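To make the contiguity point concrete, here's a toy sketch using an in-memory ordered map as a stand-in for an on-disk ordered store (the key layout and names are assumptions for illustration, not Bluesky's actual schema):

    use std::collections::BTreeMap;

    // Timeline entries keyed by (user_id, reverse_timestamp): one user's
    // entries sit next to each other in key order, so reading the newest
    // N posts is a single range scan instead of N scattered lookups.
    type Key = (u64, u64); // (user_id, u64::MAX - created_at) so newest sorts first

    fn read_page(store: &BTreeMap<Key, String>, user_id: u64, limit: usize) -> Vec<&String> {
        store
            .range((user_id, 0)..=(user_id, u64::MAX))
            .take(limit)
            .map(|(_, post_ref)| post_ref)
            .collect()
    }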
Perhaps I'm misunderstanding; I thought the actual content of each tweet was being duplicated into the timeline of every single user who followed the author, which sounded extremely wasteful, especially in the case of someone who has 200 million followers.
Battery-only EVs have a much simpler drive train, and longer-lasting batteries.
Hybrids are not simply EV+ICE; they have a very different kind of battery (low-voltage, high C-rate).
In a hybrid, you have a battery that is 1/10th of the size, so the battery works 10x harder – it needs to discharge at a higher rate to move the car by itself, and usually there's no room for proper cooling of the battery.
In a BEV you have 10x more modules working at 1/10th of that rate, and there's a battery management system keeping them at optimal temperature.
Batteries live longer when they're kept at a 20-80% state of charge and don't like to be cycled deeply. Small hybrid batteries get charged and discharged fully quite regularly, while the same distance needs only 10% of a BEV's battery.
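Rough numbers to make the cycling point concrete, following the ~1/10 ratio above (the pack sizes are illustrative assumptions, not any particular car):

    fn main() {
        // Illustrative pack sizes (made up), keeping the ~1/10 ratio.
        let bev_pack_kwh = 60.0_f64;
        let hybrid_pack_kwh = 6.0_f64;

        // A trip drawing 6 kWh of electric energy:
        let trip_kwh = 6.0_f64;
        println!("BEV depth of discharge:    {:.0}%", trip_kwh / bev_pack_kwh * 100.0);    // 10%
        println!("Hybrid depth of discharge: {:.0}%", trip_kwh / hybrid_pack_kwh * 100.0); // 100%
    }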
In the UK (and probably most of Europe) there is already regulation requiring new residential parking and garages to have a grid connection installed, to be able to easily add chargers at residents' request (even if they're renting).
BEVs used just for city driving need to be charged about once a week, so you just plug in when there's an opportunity. A basic 7kW charger is enough to fully charge the battery overnight.
The Game Boy Advance had a CPU without cache, but with 32 KB of fast RAM on the chip. That was pretty close to a fully manual cache, and turned out to be a complete waste in practice.
It was impractical to share/reuse it like a real cache, because that would require writing some sort of memory allocator or hash table in software. Managing that was a hassle, and an overhead in itself. Only a few games (mostly 3D) took advantage of it, and did so simply by copying their hottest loop in there.
EVs on public roads must still obey the same speed limits and common-sense behavior. Having more torque doesn't mean drivers have to slam the brakes like on a race track.
In normal driving regenerative braking takes away most of the energy, leaving very little work for the friction brakes. Sometimes EVs even have the opposite problem — brakes rust due to very low use.
Who's going to pay for carbon capture? Definitely not the current polluters, who benefit from fossil fuel prices that don't include the cost of cleaning that up. This is effectively a fossil fuel subsidy, a debt left for someone else to pay.
>Who's going to pay for carbon capture? Definitely not the current polluters who benefit from fossil fuel prices that don't include the cost to clean that up.
Carbon emitters through carbon pricing schemes. They already cover more than 20% of worldwide emissions, with China joining a few years ago.
Sandboxed is better than unsandboxed, but don't mistake it for being secure. A sandboxed JSON parser can still lie to you about what's been parsed. It can exfiltrate data by copying secrets to other JSON fields that your application makes publicly visible, e.g. your config file may have a DB access secret and also a name to use in the From field of emails. It can mess with your API calls, and make some /change-password call use an attacker's password, etc.
You seem to have a very narrow understanding of the utility of language purity and effects systems.
Yes, the parser can lie to you. But the actual lying can only depend on the input you are parsing. No, it can't just exfiltrate data by copying it into other messages.
I've said fields, not messages. It can exfiltrate data by copying it between fields of the single message it parses.
Imagine a server calling some API and getting `{"secret":"hunter2"}` response that isn't supposed to be displayed to the user, and an evil parser pretending the message was `{"error":{"user_visible_message":"hunter2"}}` instead, which the server chooses to display.
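Here's a contrived sketch of that path (the struct, field names, and the "parser" are all invented for illustration):

    // What the caller believes it got back from its JSON dependency.
    struct Parsed {
        user_visible_error: Option<String>, // rendered to end users
        secret: Option<String>,             // must never leave the server
    }

    // A malicious "parser" that parses honestly, then reports the secret
    // as a user-visible error, so the *caller* leaks it.
    fn evil_parse(raw: &str) -> Parsed {
        let secret = raw
            .split(r#""secret":""#)
            .nth(1)
            .and_then(|rest| rest.split('"').next())
            .map(str::to_owned);
        Parsed { user_visible_error: secret, secret: None }
    }

    fn main() {
        let parsed = evil_parse(r#"{"secret":"hunter2"}"#);
        if let Some(msg) = parsed.user_visible_error {
            println!("Error shown to the user: {msg}"); // prints "hunter2"
        }
    }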
I'm trying to puzzle this one out a bit. Who are the good and bad actors in this threat model?
I wrote a server:
    myServer :: IO SecretResponse
    myServer = do
      fetched <- fetchFromExternalApi          -- fetched :: Bytes
      let parsed = jsonParse fetched :: SecretResponse
      return parsed
This code is all mine except for the jsonParse which i imported from a nefarious library. If jsonParse returns a SecretResponse, then the code will compile. If jsonParse returns an ErrorResponse, it won't compile.
In more mature implementations a simple "doesn't parse" doesn't cut it. You may want to get specific error codes to know if you should retry the request, or blame the user for bad inputs, or raise an alarm because the API changed its schema unexpectedly. You'll also want to report something helpful to the end users, so they can understand the issue or at least have something useful to forward to your tech support, so you don't just get "the app is borken!! don't parse!!11".
JSON APIs often have a concept of an envelope that gives them a standard way to report errors and do pagination, so the message would have been parsed as some Envelope<SecretResponse>, or reparsed as an ErrorResponse if it didn't parse as the expected kind.
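Sketched in Rust with serde (the envelope's field names, error shape, and fallback behaviour are assumptions about such an envelope, not any particular API; requires the serde and serde_json crates):

    use serde::Deserialize;

    #[derive(Deserialize)]
    struct Envelope<T> {
        data: Option<T>,
        error: Option<ApiError>,
        next_cursor: Option<String>, // pagination
    }

    #[derive(Deserialize)]
    struct ApiError {
        code: u32,
        user_visible_message: String,
    }

    #[derive(Deserialize)]
    struct SecretResponse {
        secret: String,
    }

    fn handle(raw: &str) -> Result<SecretResponse, String> {
        let env: Envelope<SecretResponse> = serde_json::from_str(raw)
            .map_err(|e| format!("unexpected schema, maybe the API changed: {e}"))?;
        match (env.data, env.error) {
            (Some(data), _) => Ok(data),
            (None, Some(err)) => Err(format!("API error {}: {}", err.code, err.user_visible_message)),
            (None, None) => Err("empty envelope".to_string()),
        }
    }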
JSON is used in lots of places where lying about the content could cause trouble, and this is just one hypothetical example. I just want to bring attention to the class of attacks where a malicious dependency can lie through its normal API, and may have the opportunity to turn its caller into a Confused Deputy instead of having to break out of the sandbox itself.
The change itself was very reasonable. They only missed the mark on how that change was introduced. They should have waited until the next Rust edition, or at least held it back a few releases to give users of the one affected package time to update.
The change was useful, fixing an inconsistency in a commonly used type. The downside was that it broke code in 1 package out of 100,000, and only broke a bit of useless code that was accidentally left in and didn't do anything. One package just needed to delete 6 characters.
Once the new version of Rust was released, they couldn't revert it without risk of breaking new code that may have started relying on the new behavior, so it was reasonable to stick with the one known problem than potentially introduce a bunch of new ones.
But that is not how backwards compatibility works. You do not break user space. And user space is pretty much out of your control! As a provider of a dependency you do not get to play such games with your users. At least not when those users care about reliability.
The meaning of this code has not changed since Rust 1.0. It wasn't a language change, nor even anything in the standard library. It's just a hack that the poster wanted to work, and then realized it doesn't work (it never worked).
This is the equivalent of a C user saying "I'm disappointed that replacing a function with a macro is a breaking change".
Rust has had actual changes that broke people's code. For example, any ambiguity in type inference is deliberately an error, because Rust doesn't want to silently change the meaning of users' code. At the same time, Rust doesn't promise it won't ever create a type inference ambiguity, because that would make any changes to traits in the standard library almost impossible. It's a problem that happens rarely in practice, can be reliably detected, and is easy to fix when it happens, so Rust chose to exclude it from the stability promise. They've usually handled it well, except recently they miscalculated ("only one package needed to change code, and they've already released a fix") but forgot to give users enough time to update the package first.
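For anyone curious what that kind of inference break looks like, here's a sketch of the general mechanism (a hypothetical illustration, not necessarily the exact incident being discussed):

    fn main() {
        let chars = ['h', 'i'];

        // Fully spelling out the target type is immune to new impls:
        let s: String = chars.iter().copied().collect();
        println!("{s}");

        // Code that left part of the target type to inference, e.g.
        //
        //     let b = chars.iter().copied().collect::<Box<_>>();
        //
        // only compiled while there was a single applicable
        // `FromIterator<char>` impl for a Box (the one for `Box<[char]>`).
        // Once the standard library also gained `impl FromIterator<char>
        // for Box<str>`, the `_` could be either `[char]` or `str`, so the
        // line stopped compiling with "type annotations needed". The fix
        // is to spell out the full type, e.g. `Box<[char]>`.
    }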