Elections in most countries involve tens of thousands of volunteers who run polling stations and count votes.
That is a feature, not a problem to be solved. It means that there are tens of thousands of eyes that can spot things going wrong at every level.
Any effort to make voting simpler and more efficient reduces the number of people directly involved in the system. Efficiency is a problem even if the system is perfectly secure in a technological sense.
The fact that all SETI endeavors haven’t really found anything is actually a very valuable result, because it constrains “they’re everywhere, we just haven’t been looking” arguments quite a bit.
Even humanity’s (weak) radio emissions would be detectable from tens of light years away, and stronger emissions from much further. So the idea, liberally tossed around a few decades ago, that intelligent life is absolutely everywhere is pretty much on life support now.
>Even humanity’s (weak) radio emissions would be detectable from tens of light years away, and stronger emissions from much further.
That's not true. Non-directional radio transmissions (e.g. TV, broadcast radio) would not be distinguishable from cosmic background radiation at more than a light year or two away [0]. Highly directional radio emissions (e.g. the Arecibo message) an order of magnitude more powerful than the strongest transmitters on Earth would only be detectable out to approximately 1,000 light years [1], and only if the detector were perfectly aligned with the transmission at the exact time it arrived.
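A rough back-of-the-envelope version of that argument, if anyone wants to play with the numbers (the transmitter power, dish aperture, and receiver temperature below are all illustrative assumptions of mine, not values from [0] or [1]):

```python
import math

K_B = 1.380649e-23      # Boltzmann constant, J/K
LIGHT_YEAR = 9.4607e15  # metres

def received_power(eirp_watts, distance_ly, aperture_m2):
    """Power a dish collects from an isotropic source (inverse-square law)."""
    d_m = distance_ly * LIGHT_YEAR
    flux = eirp_watts / (4 * math.pi * d_m**2)  # W/m^2 at the receiver
    return flux * aperture_m2

# Assumed: 1 MW isotropic broadcast, Arecibo-class dish (~7e4 m^2 aperture),
# 20 K system temperature, 1 Hz detection bandwidth.
noise_floor = K_B * 20 * 1.0  # kTB, roughly 2.8e-22 W

for d in (1, 10, 100):
    signal = received_power(1e6, d, 7e4)
    print(f"{d:>3} ly: signal {signal:.1e} W, noise {noise_floor:.1e} W")
```

Even at 1 light year the broadcast sits below a cold receiver's 1 Hz noise floor, which is why only highly directional transmissions, which concentrate the same power into a tiny solid angle, buy any real range.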
This is my biggest issue with all of the messages we keep sending out to space. By the time a message gets to its destination, it will be basically indistinguishable from noise.
That depends. If there is "someone" within 20 light years advanced enough to detect our signals, we can establish communication and learn from each other. The 40-year round-trip time means we can only ask long-term questions, but just sending all of human knowledge, and having them send back theirs, could be a big leg up for both sides (though sorting through all the things we already know will be a big effort). They may have solved fusion while we are still 50 years away; meanwhile, we may have solved something else they are interested in but haven't cracked yet.
20 light years is about the farthest at which useful communication can be established. The farther out things are, the longer the round trip, and thus the more likely we have already figured things out by the time we get their answer. It would still be interesting to get a response, but our (and, we assume, their) civilization is moving too fast for much knowledge sharing: eventually one side assumes something is obvious that isn't, and that costs you another round trip. Watching an alien movie, no matter how far away they are, would be interesting (even if it turns out to be smell-based or something else we wouldn't think of).
There is no reason to think we will ever visit them, but we can do other things when they are close enough.
There are not many stars within 20 light years, though. The Fermi paradox doesn't exist at that distance; there are just not enough stars to expect to find life that close.
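A quick sanity check on the star count (the stellar density figure is an approximation I'm supplying: roughly 0.14 stars per cubic parsec in the solar neighbourhood, or about 0.004 per cubic light year):

```python
import math

STARS_PER_CUBIC_LY = 0.004  # assumed local stellar density

def stars_within(radius_ly):
    """Expected star count inside a sphere of the given radius."""
    return STARS_PER_CUBIC_LY * (4 / 3) * math.pi * radius_ly**3

print(round(stars_within(20)))   # ~134
print(round(stars_within(100)))  # ~16755
```

So on the order of a hundred stars within 20 light years, versus more than ten thousand within 100, which is why the Fermi paradox only bites at larger distances.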
Is there a reason we would need to coordinate on what to exchange rather than, say, beginning with encyclopedias and textbooks then moving to a constant stream of notable papers, news, discoveries, etc? What kind of bandwidth can you hit with a cooperating neighbour where improvements become civilisationally important? How many bytes (megabytes? Terabytes?) of meaningful new data does humanity produce per second? I suspect it's reasonably low.
Good question. My thought is similar to yours, but there is a lot of room for debate about what to send. They probably don't care about the Roman Empire like we do, but there are enough references in modern science that we'd need to send a summary just so they understand some things. We produce a lot of data, but most of it isn't meaningful.
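As a toy calculation of why bandwidth probably isn't the bottleneck (the corpus size and bitrates are my own made-up figures):

```python
SECONDS_PER_YEAR = 365.25 * 24 * 3600

def transmit_years(data_bytes, bits_per_second):
    """Years needed to send data_bytes at a sustained bitrate."""
    return data_bytes * 8 / bits_per_second / SECONDS_PER_YEAR

corpus = 25e9  # assumed: ~25 GB, roughly compressed-English-Wikipedia scale
for rate in (100, 1_000, 1_000_000):  # bits per second
    print(f"{rate:>9} bps: {transmit_years(corpus, rate):10.2f} years")
```

Even at a modest sustained kilobit per second, an encyclopedia-scale corpus goes through in well under a decade, far less than the 40-year round trip, so latency rather than bandwidth dominates the exchange.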
If this were a real situation, I wouldn't be the one asked. So take it with a grain of salt.
When you say perfectly aligned, what kind of precision are we talking about? If we aimed a receiver at a nearby star, would we be able to achieve this kind of precision?
You can actually count to 12 on the fingers of one hand. Use the thumb as a pointer; each of the other four fingers has three segments, so 3*4 = 12.
It's hard to actually count using more than 4 bits/hand though. The quickest methods that require the least dexterity are those that count the knuckles (which are actually used in some counting traditions, unlike binary finger-counting).
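A tiny sketch of both schemes, for concreteness (the function names and conventions are mine):

```python
def knuckle_position(n):
    """Duodecimal counting: map 1..12 to (finger, segment), both 0-indexed,
    with the thumb as pointer and 3 segments per remaining finger."""
    assert 1 <= n <= 12
    return (n - 1) // 3, (n - 1) % 3

def binary_fingers(n, fingers=5):
    """Binary counting: map 0..2**fingers - 1 to a raised(1)/lowered(0)
    pattern, least significant bit first."""
    assert 0 <= n < 2**fingers
    return [(n >> i) & 1 for i in range(fingers)]

print(knuckle_position(7))  # (2, 0): first segment of the third finger
print(binary_fingers(19))   # [1, 1, 0, 0, 1], since 19 = 0b10011
```

Knuckle counting gives 12 values per hand with only the thumb moving; binary gives 31 (or 15 if, as above, only 4 bits per hand are dexterously practical), at the cost of some very awkward finger positions.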
That’s where Bayesian reasoning comes into play: prior assumptions (e.g., that engineered reality is strongly biased towards simple patterns) can make one of these hypotheses much more likely than the other.
Deciding that they are both equally likely is also deciding on a prior.
Yes, "equally likely" is the minimal information prior which makes it best suited when you have no additional information. But it's not unlikely that have some sort of context you can use to decide on a better prior.
> The Turkey was an LLM. It predicted the future based entirely on the distribution of the past. It had no "understanding" of the purpose of the farmer.
But we already know that LLMs can do much better than that. See the famous “grokking” paper[1], which demonstrates that with sufficient training, a transformer can learn a deep generalization of its training data that isn’t just a probabilistic interpolation or extrapolation from previous inputs.
Many of the supposed “fundamental limitations” of LLMs have already been disproven in research. And this is a standard transformer architecture; it doesn’t even require any theoretical innovation.
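For context, the tasks in that paper were tiny algorithmic datasets, such as modular arithmetic, where generalization emerged long after training accuracy had saturated. A minimal sketch of that kind of dataset (the modulus and split fraction here are illustrative; the paper sweeps over a range of fractions):

```python
import random

def modular_addition_dataset(p=97, train_frac=0.5, seed=0):
    """All pairs (a, b) -> (a + b) mod p, shuffled and split."""
    examples = [((a, b), (a + b) % p) for a in range(p) for b in range(p)]
    random.Random(seed).shuffle(examples)
    cut = int(len(examples) * train_frac)
    return examples[:cut], examples[cut:]

train, test = modular_addition_dataset()
print(len(train), len(test))  # 4704 4705 (of 97*97 = 9409 pairs)
```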
I'm a believer that LLMs will keep getting better. But even today (which might or might not be "sufficient" training) they can easily run `rm -rf ~`.
Not that humans can't make these mistakes (in fact, I have nuked my own home directory before), but I don't think it's a problem that guardrails can currently solve. I'm looking for innovations (either model-wise or engineering-wise) that would do better than letting an agent run code until a goal is seemingly achieved.
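To make that concrete, this is roughly the kind of naive guardrail people bolt on today (entirely my sketch, and deliberately simplistic): it catches the literal `rm -rf ~`, but is trivially bypassed by aliases, `$HOME` expansion, or a script that assembles the command at runtime, which is why I doubt string-level guardrails are the answer.

```python
import re
import subprocess

# Deny-list patterns (illustrative, far from exhaustive).
DENY_PATTERNS = [
    r"\brm\s+-[A-Za-z]*r[A-Za-z]*f",  # recursive force delete, e.g. rm -rf
    r"\bmkfs\b",                      # reformatting a filesystem
    r">\s*/dev/sd",                   # raw writes to a block device
]

def run_agent_command(cmd):
    """Refuse commands matching the deny list; otherwise execute them."""
    for pattern in DENY_PATTERNS:
        if re.search(pattern, cmd):
            raise PermissionError(f"blocked by guardrail: {cmd!r}")
    subprocess.run(cmd, shell=True, check=True)

run_agent_command("echo hello")  # runs normally
try:
    run_agent_command("rm -rf ~")
except PermissionError as exc:
    print(exc)                   # blocked by guardrail: 'rm -rf ~'
```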
LLMs have surpassed being Turing machines? Turing machines now think?
LLMs are known quantities in that they are an algorithm! Humans are not. PLEASE at the very least grant that the jury is STILL out on what humans actually are in terms of their intelligence; that is, after all, what neuroscience is still figuring out.
Hassabis put forth a nice taxonomy of innovation: interpolation, extrapolation, and paradigm shifts.
AI is currently great at interpolation, and in some fields (like biology) there seems to be low-hanging fruit for this kind of connect-the-dots exercise. A human would still be considered smart for connecting these dots IMO.
AI clearly struggles with extrapolation, at least if the new datum is fully outside the training set.
And we will have AGI (if not ASI) if/when AI systems can reliably form new paradigms. It’s a high bar.
Maybe if Terence Tao had memorized the entire Internet (and pretty much all media), he would find that bits and pieces of the problem remind him of certain known solutions, and he'd be able to connect the dots himself.
But, I don't know. I tend to view these (reasoning) LLMs as alien minds and my intuition of what is perhaps happening under the hood is not good.
I just know that people have been using these LLMs as search engines (including Stephen Wolfram), browsing through what these LLMs perhaps know and have connected together.
Projects like Wikipedia never have meaningful competition, because the social dynamics invariably converge to a single platform eating everything else.
Wikipedia is already dead, they just don't know it yet. They'll get Stackoverflowed.
The LLMs have already guaranteed their zombie end. The HN crowd will be comically delusional about it right up to the point where Wikimedia struggles to keep the lights on and has to fire 90% of its staff. There is no scenario where that outcome is avoided; some prominent billionaire, likely a Sergey Brin type figure, will step in with a check as they get really desperate, but it won't change anything fundamental.
The LLMs will do to Wikipedia what Wikipedia & Co. did to the physical encyclopedia business.
You don't have to entirely wipe out Wikipedia's traffic base to collapse Wikimedia. They have no financial strength whatsoever; they burn through everything they take in. Their de facto collapse will be extremely rapid and is coming soon. Watch for the rumbles in 2026-2027.
Wikipedia is not even in the game you are describing here. Wikipedia does not need billions of users clicking on ads to convince investors in yet another seed round. They are an encyclopedia, and if fewer people visit, they will still be an encyclopedia. Their costs are probably very strongly correlated with their number of visitors.
This. I'm really bothered by the almost cruel glee with which a lot of people respond to SO's downfall. Yeah, the moderation was needlessly aggressive. But it was successful at creating a huge repository of text-based knowledge which benefited LLMs greatly. If SO is gone, where will this come from for future programming languages, libraries, and tools?
You talk about news here like it's some irrefutable ether LLMs can tap into. Also I'd think newspapers and scientific papers cover extremely little of what the average person uses an LLM to search for.
This always feels to me like an elephant in the room.
I’d love to read a knowledgeable roundup of current thought on this. I have a hard time understanding how, with the web becoming a morass of SEO and AI slop, with really no effort being put into keeping it accurate, we’ll be able to train future LLMs to the level we do today.
It’s indeed rapidly progressing feature-wise, but I have yet to see an explanation for how they intend to manage security once market adoption happens.
Ladybird is written in C++, which is memory-unsafe by default (unlike Rust, which is memory-safe by default). Firefox and Chrome also use C++, and each of them has 3-4 critical vulnerabilities related to memory safety per year, despite the massive resources Mozilla and Google have invested in security. I don’t understand how the Ladybird team could possibly hope to secure a C++ browser engine, given that even engineering giants have consistently failed to do so.
> Firefox and Chrome also use C++, and each of them has 3-4 critical vulnerabilities related to memory safety per year, despite the massive resources Mozilla and Google have invested in security.
And part of Firefox's and Chrome's security effort has been to use memory-safe languages in critical sections like file-format decoders. They're far too deeply invested in C++ to move away entirely in our lifetimes, but they are taking advantage of other languages where they feasibly can, so writing a new browser in pure C++ is a regression from what the big players are already doing.
I just checked out Servo, and like all browsers it has a VERY large footprint of dependencies (notably GStreamer/GObject, libpng/jpeg, PCRE). Considering browsers have quite decent process isolation (the whole browser process vs. heavily sandboxed renderer processes), I wonder how tangible the Rust advantage turns out to be.
I just looked at the top CVEs for Chrome in 2025. There are 5 which allow escaping the sandbox, and the top ones seem to be V8 bugs where the JIT is coaxed into generating exploitable code.
One seems to be a genuine use-after-free.
So I can echo what you wrote about the JS engine being most exploitable, but how is Rust supposed to help with generating memory-safe JITed code?
I know they have said that. But it feels a bit strange to me to continue developing in C++ if they will eventually have to rewrite everything in Swift anyway. Wouldn't it be better to switch languages sooner rather than later in that case?
Or maybe a rewrite doesn't have to take so much time if an AI does it. But then I also wonder: why not do it now rather than wait?
That is the plan, but they are stalled on that effort by difficulties getting Swift's memory model (reference counting) to play nicely with Ladybird's (garbage collection).
I think there was some work with the Swift team at Apple to fix this, but there haven't been any updates in months.
I know that that’s the plan, but I’ll believe it when I see it. Mozilla invented entire language features for Rust based on Servo’s needs. It’s doubtful whether a language like Swift, which is used mostly for high-level UI code, has what it takes to serve as the foundation of a browser engine.
I remember a time when entire discussion threads were swiftly culled from HN based on the magnitude of their political content.
These days, it’s pretty clear that the direction matters a lot more than the magnitude, and “flamebait” is only a problem when the flames blow a certain way.
The reason political discussion needs to be limited is exactly comments like these. Low-effort characterizations of mainstream politics as racist or fascist are purely inflammatory, and only going to further turn HN into Reddit-but-for-tech.