It says that for a sufficiently large storage system, the information within will ultimately be limited by the surface area and not the volume. That you can indeed judge a book by its cover. For the sake of asymptotic analysis of galactic algorithms, one need only consider schemes for reading and writing information on the surface of a sphere. When it comes to "real hardware," this sort of analysis is inapplicable.
It's obviously about latency. How do you not see the latency aspect of it?
Latency is directly bound by the speed of light. A computer the size of the earth's orbit will be bound by a latency of about 8 minutes. A computer twice that size will be bound by a latency of about 16 minutes, but have 4x the maximum information storage capacity, since the storage bound scales with surface area, i.e. R^2.
The latency of the "book" scales directly with its size. Ever heard of ping? Does the universe you live in have an infinite speed of light, so that you don't see how R contributes to latency?
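A back-of-the-envelope sketch of that scaling (the constants are standard, the framing is mine): one-way light delay grows linearly with radius R, while the surface-area storage bound grows with R^2.

    # Rough sketch: latency scales with radius R, while the
    # surface-area bound on storage scales with R^2.
    C = 299_792_458          # speed of light, m/s
    AU = 1.495978707e11      # radius of earth's orbit, m

    def latency_minutes(radius_m):
        """One-way light delay across radius R, in minutes."""
        return radius_m / C / 60

    def relative_capacity(radius_m, baseline_m=AU):
        """Storage bound ~ surface area, so capacity grows as R^2."""
        return (radius_m / baseline_m) ** 2

    for r in (AU, 2 * AU):
        print(f"R = {r / AU:.0f} AU: ~{latency_minutes(r):.1f} min latency, "
              f"{relative_capacity(r):.0f}x capacity")

Running it gives roughly 8 minutes and 1x at 1 AU, roughly 17 minutes and 4x at 2 AU.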
And now with the advent of highly capable LLMs, we don't even need humans watching and listening. The data streams can be captured, analyzed, summarized, for any behavior, mention, suspicion, or hallucination of undesirable activity. In a population inured to masked agents snatching people off the street domestically and
semi-autonomous drone strikes abroad, our future doesn't look rosy.
This is the key realization which is missing from talks about AI dangers.
Total surveillance used to be impossible because the government needed people to spy on other people. They needed to find somebody willing and pay them.
Now it can be automated.
The war won't be humans vs an AI controlling robots. It'll be humans vs the government and rich people controlling AI controlling robots.
It's crossed my mind that a couple of typos of a certain class in a document have become a signal of authenticity. It's only a matter of time* before we see prompting or even manual editing adapt to falsify that signal.
* before this comment gets a single upvote, somebody will have vibe-coded this
> AFAIK nobody has figured out how to create such an equivalent.
I'm curious if anybody has even attempted it; if there's even training data for this. Compartmentalization is a natural aspect of cognition in social creatures. I've even known dogs not to demonstrate knowledge of a food supply until they think they're not being observed. As a working professional with children, I need to compartmentalize: my social life, sensitive IP knowledge, my kid's private information, knowledge my kid isn't developmentally ready for, my internal thoughts, information I've gained from disreputable sources, and more. Intelligence may be important, but this is wisdom -- something that doesn't seem to be a first-class consideration if dogs and toddlers are in the lead.
This makes me a bit sad. Over the years I've posted PRs to several, but not many, repos with a one-off fix, issue or improvement. It's a great opportunity to say hello and thanks to the maintainers.
Wild. I flew domestically about a week after 9/11 and forgot that I had my leatherman in my pocket until I got to security... and the xray operator didn't see it in my backpack.
Oh, that could have ended quite differently. I've had stuff that, on second thought, looked very much like explosive devices (little black boxes with a bunch of wires sticking out, internal pouch batteries) in my luggage on more than one occasion. I never so much as got a peep out of anything like that. But for some reason my elderly laptop is a real magnet for official attention, and there is absolutely nothing non-stock about that one.
Spend a few years handling data in arcane, one-off, and proprietary file formats conceived by "brilliant" programmers with strong CS backgrounds and you might reconsider the conclusion you've come to here.
This is a presentation problem, or possibly a lack of tooling problem.
A binary format with a tool that renders it to text works the same as a text format; if the rendering is lossless, you could even consume the text format rather than the binary.
A "text" format is built to be understandable, but that's not a requirement; you could write a text format that isn't descriptive, and you'd have just as much trouble understanding what 'A' means as you would understanding what 'C0' means for a binary format.
Undocumented formats are a pain, whether they're in text or binary.
It's a lack of tooling problem. Because if you're a bioinformatics researcher, you want to devote your time, money and energy towards bioinformatics. You don't want to spend weeks getting tooling written to handle an arcane file format, nor pay for that tooling, nor hire a "brilliant" programmer. That tooling needs to be written, packaged and maintained for perhaps dozens of programming languages.
Instead, you want to use the format that can be read and written by a rank novice with a single programming course under their belt, because that's what makes the field approachable by dewy-eyed undergrads eager to get their feet wet. Giving those folks an easy on-ramp is how you grow the field.
And then you want to compress that format with bog-standard compression algorithms, and you might get side-tracked investigating how to improve that process without exploding your bioinformatics-focused codebase. Which is an interesting show of curiosity, not a reason to insult a class of scientists.
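As a minimal sketch of that workflow (the file name and fields are made up): write the records as plain tab-separated text and lean on a bog-standard compressor like gzip, rather than inventing a bespoke binary format.

    # Minimal sketch: plain tab-separated text plus off-the-shelf gzip.
    import gzip

    records = [("chr1", 12345, "A"), ("chr1", 12360, "T")]

    with gzip.open("records.tsv.gz", "wt") as fh:
        for name, pos, base in records:
            fh.write(f"{name}\t{pos}\t{base}\n")

    # Any language with a gzip library can read it back just as easily.
    with gzip.open("records.tsv.gz", "rt") as fh:
        for line in fh:
            name, pos, base = line.rstrip("\n").split("\t")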
There's also a distribution problem. When people break history by introducing new file formats or updating old ones, that impedes archival and replication. And once you've got a handful of file formats, you've got the classic n+1 problem: no single format is optimal in all ways, so people keep inventing new formats to see what sticks, and an archivist has to maintain tooling with ever-increasing overhead. Here we see a clash of wisdom versus intellect, and if you're trying to foster a healthy field of research, wisdom wins the long game.
> Instead, you want to use the format that can be read and written by a rank novice with a single programming course under their belt, because that's what makes the field approachable by dewy-eyed undergrads eager to get their feet wet. Giving those folks an easy on-ramp is how you grow the field.
This is good, but you're implicitly saying there's a tradeoff made to achieve this, because otherwise we wouldn't be talking about alternatives. And an on-ramp for novices is good, but I don't see a step where the standards of the field move on from what novices can handle.