Hacker News | p-e-w's comments

Elections in most countries involve tens of thousands of volunteers for running ballot stations and counting votes.

That is a feature, not a problem to be solved. It means that there are tens of thousands of eyes that can spot things going wrong at every level.

Any effort to make voting simpler and more efficient reduces the number of people directly involved in the system. Efficiency is a problem even if the system is perfectly secure in a technological sense.


The fact that all SETI endeavors haven’t really found anything is actually a very valuable result, because it constrains “they’re everywhere, we just haven’t been looking” arguments quite a bit.

Even humanity’s (weak) radio emissions would be detectable from tens of light years away, and stronger emissions from much further. So the idea that intelligent life is absolutely everywhere that was liberally tossed around a few decades ago is pretty much on life support now.


>Even humanity’s (weak) radio emissions would be detectable from tens of light years away, and stronger emissions from much further.

That's not true. Non-directional radio transmissions (e.g. TV, broadcast radio) would not be distinguishable from cosmic background radiation beyond a light year or two [0]. Highly directional radio emissions (e.g. the Arecibo message) an order of magnitude more powerful than the strongest transmitters on Earth would only be detectable out to approximately 1,000 light years [1], and only if the detector were perfectly aligned with the transmission at the exact time it arrived.

[0] https://physics.stackexchange.com/a/245562

[1] https://arxiv.org/pdf/astro-ph/0610377.pdf
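As a rough sanity check on those numbers, the inverse-square falloff behind [0] can be sketched in a few lines. The 1 MW transmitter power is an illustrative assumption, and real detectability also depends on bandwidth, antenna gain, and receiver noise temperature, none of which this models.

```python
import math

LIGHT_YEAR_M = 9.4607e15  # metres per light year

def received_flux(power_watts: float, distance_ly: float) -> float:
    """Flux (W/m^2) at distance_ly from an isotropic transmitter
    radiating power_watts, by the inverse-square law."""
    r = distance_ly * LIGHT_YEAR_M
    return power_watts / (4 * math.pi * r ** 2)

# An (assumed) 1 MW isotropic broadcast, received at 2 light years:
flux_2ly = received_flux(1e6, 2)
```

Doubling the distance quarters the flux, which is why directional gain (as with the Arecibo transmission) matters so much.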


This is my biggest issue with all of the messages we keep sending out to space. By the time a message gets to its destination, it will be basically indistinguishable from noise.

That depends. If there is "someone" within 20 light years advanced enough to detect our signals, we can establish communication and learn from each other. The 40-year round-trip time means we can only ask long-term questions, but just sending all of human knowledge, and them returning theirs, could be a big leg up for both sides (though sorting out the things we already know will be a big effort). They may have solved fusion while we are still 50 years away; meanwhile we have solved something else that they are interested in but haven't solved yet.

20 light years is about the farthest useful communication can be established. The farther out things are, the longer the round trip, and thus the more likely we have already figured things out by the time we get their answer. It would still be interesting to get a response, but our (and, we assume, their) civilization is moving too fast for much knowledge sharing. Eventually with knowledge sharing you assume something is obvious that isn't, and so you get another round trip. Watching an alien movie, no matter how far away they are, would be interesting (even if it is smell-based or something else we don't think of).

There is no reason to think we will ever visit them, but we can do other things when they are close.

There are not many stars within 20 light years, though. The Fermi paradox doesn't exist at that distance; there are just not enough stars to expect to find life that close.


Is there a reason we would need to coordinate on what to exchange rather than, say, beginning with encyclopedias and textbooks then moving to a constant stream of notable papers, news, discoveries, etc? What kind of bandwidth can you hit with a cooperating neighbour where improvements become civilisationally important? How many bytes (megabytes? Terabytes?) of meaningful new data does humanity produce per second? I suspect it's reasonably low.

Good question. My thought is similar to yours, but there is a lot of room for debate on what to send. They probably don't care about the roman empire like we do - but there are enough references in modern science that we need to send a summary just so they understand some things. We produce a lot of data, but most of it isn't meaningful.

If this were a real situation I wouldn't be the one asked. So take it with a grain of salt.


Thanks, these rules of thumb are very useful.

When you say perfectly aligned, what kind of precision are we talking about? If we aimed a receiver at a nearby star, would we be able to achieve this kind of precision?


Probably due to the Great Filter: https://www.youtube.com/watch?v=UjtOGPJ0URM

or not:

"Dissolving the Fermi Paradox": https://arxiv.org/abs/1806.02404


Being able to count using fingers is more valuable than having one more prime factor.

You can actually count to 12 on your fingers using one hand. Use the thumb as a pointer, then for each of your other fingers you have three joints. So 3*4=12.

If you include the tip, you can do base 16.
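The thumb-as-pointer scheme maps cleanly onto arithmetic. A tiny sketch (the 1-based finger/segment numbering is my own convention):

```python
def joint_count(finger: int, segment: int, segments_per_finger: int = 3) -> int:
    """Value indicated when the thumb points at the given 1-based
    segment of the given 1-based non-thumb finger."""
    assert 1 <= finger <= 4 and 1 <= segment <= segments_per_finger
    return (finger - 1) * segments_per_finger + segment

# Three joints per finger counts 1..12:
assert joint_count(4, 3) == 12
# Including the fingertip as a fourth segment counts 1..16:
assert joint_count(4, 4, segments_per_finger=4) == 16
```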

Let’s go hexadecimal all the way.


No. Base 16 is only divisible by 1, 2, 4, and 8, while Base 12 is divisible by 1, 2, 3, 4, and 6. Of course, Base 10 is only divisible by 1, 2, and 5.

Switching from Base 10 to Base 12 would be difficult. Instead we should go back in time and ensure we evolve with 6 fingers on each hand and foot.
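The divisibility comparison is easy to verify mechanically (a throwaway sketch):

```python
def proper_divisors(n: int) -> list[int]:
    """Divisors of n smaller than n itself."""
    return [d for d in range(1, n) if n % d == 0]

assert proper_divisors(10) == [1, 2, 5]
assert proper_divisors(12) == [1, 2, 3, 4, 6]
assert proper_divisors(16) == [1, 2, 4, 8]
```

Base 12's extra divisor of 3 is what makes thirds and sixths come out as clean fractions.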


This is why men are superior to women, we can always count to one higher. (or two including the tip, as someone suggested with the fingers) :-p ducks

But all the techniques to multiply numbers with your fingers are more confusing in base 12.

https://www.wikihow.com/Multiply-With-Your-Hands

Those techniques can be useful. If you add toes, multiplying numbers up to 20 (like 16x18) is easy.
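One of the tricks on that page, multiplying numbers from 6 to 10 by touching fingers, is base-10-specific. Here is a sketch of why it works (the function name is mine):

```python
def finger_multiply(x: int, y: int) -> int:
    """The 6-through-10 finger trick: on each hand, raise (n - 5)
    fingers; the raised fingers count tens, and the folded fingers
    on the two hands multiply to give the ones."""
    assert 6 <= x <= 10 and 6 <= y <= 10
    raised_x, raised_y = x - 5, y - 5
    folded_x, folded_y = 5 - raised_x, 5 - raised_y
    return 10 * (raised_x + raised_y) + folded_x * folded_y

# The trick is exact for every pair in range:
assert all(finger_multiply(x, y) == x * y
           for x in range(6, 11) for y in range(6, 11))
```

Algebraically, 10(x + y - 10) + (10 - x)(10 - y) = xy, and the 10s in that identity are the base; a base-12 analogue would work for 7 through 12 on six-fingered hands.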


Or use a hand as a 5-bit integer, then you can count to 31 :)

It's hard to actually count using more than 4 bits/hand though. The quickest methods that require the least dexterity are those that count the knuckles (which are actually used in some counting traditions, unlike binary finger-counting).
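The 5-bit reading is just positional binary; a minimal sketch (the bit ordering is an arbitrary choice):

```python
def fingers_to_int(fingers: list[int]) -> int:
    """Read a hand as a 5-bit integer; fingers[0] (say, the thumb)
    is the least significant bit, 1 = raised, 0 = folded."""
    assert len(fingers) == 5 and all(b in (0, 1) for b in fingers)
    return sum(bit << i for i, bit in enumerate(fingers))

assert fingers_to_int([1, 1, 1, 1, 1]) == 31  # all fingers up
assert fingers_to_int([0, 1, 0, 0, 0]) == 2
```

Two hands extend this to 10 bits, i.e. 0 through 1023, at a considerable cost in dexterity.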

That’s where Bayesian reasoning comes into play, where there are prior assumptions (e.g., that engineered reality is strongly biased towards simple patterns) which make one of these hypotheses much more likely than the other.

yes, if you decide one of them is much more likely without reference to the data, then it will be much more likely :)

Deciding that they are both equally likely is also deciding on a prior.

Yes, "equally likely" is the minimal information prior which makes it best suited when you have no additional information. But it's not unlikely that have some sort of context you can use to decide on a better prior.


Well that would be extra information. Wherever you find the edge of your information, you will find the "problem of induction" as presented above.

> The Turkey was an LLM. It predicted the future based entirely on the distribution of the past. It had no "understanding" of the purpose of the farmer.

But we already know that LLMs can do much better than that. See the famous “grokking” paper[1], which demonstrates that with sufficient training, a transformer can learn a deep generalization of its training data that isn’t just a probabilistic interpolation or extrapolation from previous inputs.

Many of the supposed “fundamental limitations” of LLMs have already been disproven in research. And this is a standard transformer architecture; it doesn’t even require any theoretical innovation.

[1] https://arxiv.org/abs/2301.02679


I'm a believer that LLMs will keep getting better. But even today (which might or might not be "sufficient" training) they can easily run `rm -rf ~`.

Not that humans can't make these mistakes (in fact, I have nuked my home directory myself before), but I don't think it's a specific problem some guardrails can solve currently. I'm looking for innovations (either model-wise or engineering-wise) that'd do better than letting an agent run code until a goal is seemingly achieved.
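For illustration, the kind of guardrail that exists today is roughly a deny-list check before execution. This sketch (the command list and function name are mine) also shows why it isn't a real solution: it is trivially bypassed by wrapping the destructive command in a script or interpreter.

```python
import shlex

# Hypothetical, deliberately incomplete deny-list for illustration only.
DESTRUCTIVE = {"rm", "dd", "mkfs", "shutdown"}

def needs_human_review(command: str) -> bool:
    """Flag a shell command for confirmation if its first token is
    on the deny-list. Trivially bypassable; a sketch, not a fix."""
    tokens = shlex.split(command)
    return bool(tokens) and tokens[0] in DESTRUCTIVE

assert needs_human_review("rm -rf ~")
assert not needs_human_review("ls -la")
assert not needs_human_review("bash evil.sh")  # the bypass problem
```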


LLMs have surpassed being Turing machines? Turing machines now think?

LLMs are known quantities in that they are an algorithm! Humans are not. PLEASE at the very least grant that the jury is STILL out on what human intelligence actually is; that is, after all, what neuroscience is still figuring out.


Every time this topic comes up people compare the LLM to a search engine of some kind.

But as far as we know, the proof it wrote is original. Tao himself noted that it’s very different from the other proof (which was only found now).

That’s so far removed from a “search engine” that the term is essentially nonsense in this context.


Hassabis put forth a nice taxonomy of innovation: interpolation, extrapolation, and paradigm shifts.

AI is currently great at interpolation, and in some fields (like biology) there seems to be low-hanging fruit for this kind of connect-the-dots exercise. A human would still be considered smart for connecting these dots IMO.

AI clearly struggles with extrapolation, at least if the new datum is fully outside the training set.

And we will have AGI (if not ASI) if/when AI systems can reliably form new paradigms. It’s a high bar.


Maybe if Terence Tao had memorized the entire Internet (and pretty much all media), he would find bits and pieces of the problem reminding him of certain known solutions and be able to connect the dots himself.

But, I don't know. I tend to view these (reasoning) LLMs as alien minds and my intuition of what is perhaps happening under the hood is not good.

I just know that people have been using these LLMs as search engines (including Stephen Wolfram), browsing through what these LLMs perhaps know and have connected together.


Projects like Wikipedia never have meaningful competition, because the social dynamics invariably converge to a single platform eating everything else.

Wikipedia is already dead, they just don't know it yet. They'll get Stackoverflowed.

The LLMs have already guaranteed their zombie end. The HN crowd will be comically delusional about it right up to the point where Wikimedia struggles to keep the lights on and has to fire 90% of its staff. There is no scenario where that outcome is avoided (some prominent billionaire will step in with a check as they get really desperate, but it won't change anything fundamental, likely a Sergey Brin type figure).

The LLMs will do to Wikipedia, what Wikipedia & Co. did to the physical encyclopedia business.

You don't have to entirely wipe out Wikipedia's traffic base to collapse Wikimedia. They have no financial strength whatsoever; they burn everything they take in. Their de facto collapse will be extremely rapid and is coming soon. Watch for the rumbles in 2026-2027.


Wikipedia is not even in the game you are describing here. Wikipedia does not need billions of users clicking on ads to convince investors in yet another seed round. They are an encyclopedia, and if fewer people visit, they will still be an encyclopedia. Their costs are probably very strongly correlated with their number of visitors.

SO was supposed to be much the same, though. I guess you really do have to get directly funded by users for the model to work.

If we kill all the platforms where content for training LLMs comes from, what do LLMs train on?

This. I'm really bothered by the almost cruel glee with which a lot of people respond to SO's downfall. Yeah, the moderation was needlessly aggressive. But it was successful at creating a huge repository of text-based knowledge which benefited LLMs greatly. If SO is gone, where will this come from for future programming languages, libraries, and tools?

Newspapers, scientific papers and soon, real-world interactions.

News is the main feed of new data, and that can be an infinite incremental source of new information.


You talk about news here like it's some irrefutable ether LLMs can tap into. Also I'd think newspapers and scientific papers cover extremely little of what the average person uses an LLM to search for.

This always feels to me like an elephant in the room.

I’d love to read a knowledgeable roundup of current thought on this. I have a hard time understanding how, with the web becoming a morass of SEO and AI slop - with really no effort being put into keeping it accurate - we’ll be able to train future LLMs to the level we do today.


Most people went to SO because they had to for their job. Most people go to Wikipedia because they want to, for curiosity and learning.

LLMs will use Wikipedia the same way humans use it.

It’s indeed rapidly progressing feature-wise, but I have yet to see an explanation for how they intend to manage security once market adoption happens.

Ladybird is written in C++, which is memory-unsafe by default (unlike Rust, which is memory-safe by default). Firefox and Chrome also use C++, and each of them has 3-4 critical vulnerabilities related to memory safety per year, despite the massive resources Mozilla and Google have invested in security. I don’t understand how the Ladybird team could possibly hope to secure a C++ browser engine, given that even engineering giants have consistently failed to do so.


> Firefox and Chrome also use C++, and each of them has 3-4 critical vulnerabilities related to memory safety per year, despite the massive resources Mozilla and Google have invested in security.

And part of Firefox's/Chrome's security effort has been to use memory-safe languages in critical sections like file format decoders. They're far too deeply invested in C++ to move away entirely in our lifetimes, but they are taking advantage of other languages where they feasibly can, so writing a new browser in pure C++ is a regression from what the big players are already doing.


I just checked out Servo, and like all browsers it has a VERY large footprint of dependencies (notably GStreamer/GObject, libpng/jpeg, PCRE). Considering browsers have quite decent process isolation (the whole browser process vs. heavily sandboxed renderer processes), I wonder how tangible the Rust advantage turns out to be.

Browsers have had sandboxing for well over a decade, and the 3-4 catastrophic vulnerabilities per year happen in spite of that.

And most of them are in the browser code itself, not in dependencies. By far the biggest offender tends to be the JavaScript engine.


Are you sure?

I just looked at the top CVEs for Chrome in 2025. There are 5 which allow escaping the sandbox, and the top ones seem to be V8 bugs where the JIT is coaxed into generating exploitable code. One seems to be a genuine use-after-free.

So I can echo what you wrote about the JS engine being most exploitable, but how is Rust supposed to help with generating memory-safe JITed code?



Ladybird is going to use Swift.

I know they have said that. But it feels a bit strange to me to continue to develop in C++ then, if they eventually will have to rewrite everything in Swift. Wouldn't it be better to switch language sooner rather than later in that case?

Or maybe it doesn't have to take so much time to do a rewrite if an AI does it. But then I also wonder why not do it now, rather than wait.


That is the plan, but they are stalled on that effort by difficulties getting Swift's memory model (reference counting) to play nice with Ladybird's (garbage collection).

I think there was some work with the Swift team at Apple to fix this, but there haven't been any updates in months.


I would love it if you would provide a reference I could go look at


Thank you! I look forward to perusing these.

That is very good news!

I've used Swift a bunch for hobby projects, and the two things that suck about it are:

1. XCode

2. Compile times

I would assume if you're coming from C++ or Rust the compile time issues aren't really something you notice anyway :P


You don't strictly have to use Xcode to use swift, there's a good LSP for use in other editors.

That said, if you're using Swift to build an app, you're probably still going to want to use Xcode for building and debugging


Yea, I'm building iOS apps mostly, and some macOS apps, so definitely need to use XCode :/

I have a nice workflow going for the iOS apps I work on where I use neovim for all my editing, and Xcode for building and debugging.

If I remember correctly, the guy behind it used to work at Apple, maybe that has to do something with it?

perhaps they do not think Rust is the best option for Ladybird

I know that that’s the plan, but I’ll believe it when I see it. Mozilla invented entire language features for Rust based on Servo’s needs. It’s doubtful whether a language like Swift, which is used mostly for high-level UI code, has what it takes to serve as the foundation of a browser engine.

Swift's most notable use case is certainly making apps, but if I recall correctly, Apple has converted a good bit of their networking code to Swift.

It may not be the lowest of the low level, but it certainly is more flexible than meets the eye.


What technical demerits, specifically, make Swift a doubtful option for a browser?

I’m guessing the uncertainty about whether Servo will be meaningfully maintained 5 years from now is the main problem.

I remember a time when entire discussion threads were swiftly culled from HN based on the magnitude of their political content.

These days, it’s pretty clear that the direction matters a lot more than the magnitude, and “flamebait” is only a problem when the flames blow a certain way.


[flagged]


The reason political discussion needs to be limited is exactly for comments like these. Low effort characterizations of mainstream politics as racist or fascist is purely inflammatory, and only going to further turn HN into Reddit but for tech.
