> They’re building a pocket-sized, screenless device with built-in cameras and microphones — “contextually aware,” designed to replace your phone.
"Contextually aware" means "complete surveillance".
Too many people speak of ads; not enough speak about the normalization of the global surveillance machine, with Big Brother waiting around the corner.
Instead, MY FELLOW HUMANS are, or will be, programmed to accept and want their own little "Big Brother's little brother" in their pocket, because it's useful and/or makes them feel safe and happy.
> not enough people speak about the normalization of the global surveillance machine, with Big Brother waiting around the corner
Everyone online is constantly talking about it. The truth is for most people it's fine.
Some folks are upset by it. But by and large we tend to just solve the problem at the smallest possible scale and then mollify ourselves with whining. (I don't have social media. I don't have cameras in or around my home. I've worked on privacy legislation, but honestly nobody called their representatives, so nothing much happened. I no longer really bring up privacy issues when I speak to my electeds, because I haven't seen evidence that the nihilism has passed.)
> I'm not convinced that there is a point in attempting explaining it
That encapsulates my point.
I’ve worked on various pieces of legislation. All privately. A few made it into state and federal law. Broadly speaking, the ones that make it are the ones whose supporters you can’t get to stop calling in.
Privacy issues are notoriously bad at getting people to call their electeds. The exception is when you can find traction outside tech, or when the target is directly a tech company.
Pretty much this. Nobody really, actually cares. People will cite 1984 twenty million times, but since they're very disconnected from the third-order effects of cross-company data brokerage, it doesn't really matter to them. I used to care about it before as well, but life became much easier once I took the "normie stance" on some of the issues.
Already here. Even without flexible but dodgy LLM automation, entities like marketing companies have had access to extreme amounts of user data for a long time.
PS: Of course that's not true. An ID system for humans will inevitably become mandatory and, naturally, politicians will soon enough invent a reason to use it to enforce a planet-wide totalitarian government watched over by Big Brother.
Conspiracy-theory-nonsense? Maybe! I'll invite some billionaires to pizza and ask them what they think.
All of them are public companies, which means that their default state is anti-consumer and pro-shareholder. By law they are required to do whatever they can to maximize profits. History teaches that shareholders can demand whatever they want, and the respective companies follow orders, since nobody ever really has to suffer consequences and any potential fines are already priced in anyway.
Conversely, this is why Valve is such a great company. Valve is probably one of the few genuinely pro-consumer companies out there.
Fun Fact! It's rarely ever mentioned anywhere, but Valve is not a public company! Valve is a private company! That's why they can operate the way they do! If Valve were a public company, greedy, crooked billionaire shareholders would have gotten rid of Gabe a long time ago.
> By law they are required to do whatever they can to maximize profits.
I know it's a nit-pick, but I hate that this always gets brought up when it's not actually true. Public corporations face pressure from investors to maximize returns, sure, but there is no law stating that they have to maximize profits at all costs. Public companies can (and often do) act against the interest of immediate profits for some other gain. The only real leverage that investors have is the board's ability to fire executives, but that assumes that they have the necessary votes to do so. As a counter-example, Mark Zuckerberg still controls the majority of voting power at Meta, so he can effectively do whatever he wants with the company without major consequence (assuming you don't consider stock price fluctuations "major").
But I say this not to take away from your broader point, which I agree with: the short-term profit-maximizing culture is indeed the default among publicly traded corporations. It just isn't inherent in being publicly traded, and conversely, private companies often have the same kind of culture, so staying private isn't a silver bullet either.
It's a worthwhile point to make because if people believe that misconception then it lets companies wash their hands of flagrantly bad behavior. "Gosh, we should really get around to changing the law that makes them act that way."
> Chinese is too difficult of a language. I have spent 5 years learning it part-time and have gotten to a level I can understand 30% of a TV show and 20% of a newspaper.
Considering that more than a billion people are capable of speaking Chinese, with many millions of them not speaking it natively, your generalisation might instead be a rather specific, individual problem.
> I doubt frontier models have actually substantially grown in size in the last 1.5 years
... and you'd most likely be correct in that doubt, given the evidence we have.
What has improved disproportionately more than the software or hardware side is density[1] per parameter, indicating that there's a "Moore's Law"-esque relationship between the number of parameters, the density per parameter, and compute requirements. As long as more and more information/abilities can be squeezed into the same number of parameters, inference will become cheaper and cheaper, quicker and quicker.
I write "quicker and quicker" because, next to improvements in density, there will still be additional architectural, software, and hardware improvements. It's almost as if it's going exponential and we're heading for a so-called Singularity.
Since it's far more efficient and "intelligent" to have many small models competing with and correcting each other, in parallel, for the best possible answer, there simply is no need for giant, inefficient, monolithic monsters.
They ain't gonna tell us that, though, because then we'd know that we don't need them anymore.
[1] For lack of a better term; I'm not aware of one.
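To make the "many small models competing" idea concrete, here's a minimal sketch in Python. Everything here is hypothetical: `majority_answer` and the toy lambdas just stand in for calls to real small-model inference endpoints. The point is only the shape of the idea — sample several models in parallel and keep the consensus, a simple self-consistency vote:

```python
from collections import Counter

def majority_answer(models, prompt):
    """Ask several small models the same question and keep the
    most common answer (simple self-consistency / voting)."""
    answers = [m(prompt) for m in models]  # could run concurrently
    best, _count = Counter(answers).most_common(1)[0]
    return best

# Toy stand-ins for three independently trained small models:
models = [lambda p: "4", lambda p: "4", lambda p: "5"]
print(majority_answer(models, "2 + 2 = ?"))  # → 4
```

In a real setup the list comprehension would be fanned out across separate machines, which is exactly what you can't do with one monolithic model.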
Obviously, there’s a limit to how much you can squeeze into a single parameter. I guess the low-hanging fruit will be picked up soon, and scaling will continue with algorithmic improvements in training, like [1], to keep the training compute feasible.
I take "you can't have human-level intelligence without roughly the same number of parameters (hundreds of trillions)" as a null hypothesis: true until proven otherwise.
Why don't we need them? If I need to run a hundred small models to get a given level of quality, what's the difference to me between that and running one large model?
You can run smaller models on smaller compute hardware and split the compute. For large models you need to be able to fit the whole model in memory to get any decent throughput.
It's unfair to take some high number that reflects either disagreement, or assumes that size-equality has a meaning.
> level of quality
What is quality, though? What is high quality? Do MY FELLOW HUMANS really know what "quality" is comprised of? Do I hear someone yell "QUALITY IS SUBJECTIVE" from the cheap seats?
I'll explain.
You might care about accuracy (repetition of learned/given text) more than about actual cognitive abilities (clothesline/12 shirts/how long to dry).
From my perspective, the ability to repeat given/learned text has nothing to do with "high quality". Any idiot can do that.
Here's a simple example:
Stupid doctors exist. Plentifully so, even. Every doctor can pattern-match symptoms to medication or further tests, but not every doctor is capable of recognizing when two seemingly different symptoms are actually connected. (simple example: a stiff neck caused by sinus issues)
There is not one person on the planet who wouldn't prefer a doctor who is deeply considerate of the complexities and feedback loops of the human body over a doctor who is simply not smart enough to do so and thus can't. The latter can learn texts all he wants, but the memorization of text does not require deeper understanding.
There are plenty of benefits for running multiple models in parallel. A big one is specialization and caching. Another is context expansion. Context expansion is what "reasoning" models can be observed doing, when they support themselves with their very own feedback loop.
One does not need a "hundred" small models to achieve whatever you might consider worthy of being called "quality". All these models can not only reason independently of each other, but also interact contextually, expanding each other's contexts around what actually matters.
They also don't need to learn all the information about "everything", like big models do. It's simply not necessary anymore. We have very capable systems for retrieving information and feeding it to models with gigantic context windows, if needed. We can create purpose-built models. Density per parameter is always increasing.
Multiple small models, specifically trained for high reasoning/cognitive capabilities, given access to relevant texts, can disseminate multiple perspectives on a matter in parallel, boosting context expansion massively.
A single model cannot refactor its own path of thought during an inference run, which is massively inefficient. A single model can only provide itself with feedback sequentially, one step after another, while multiple models can do it all in parallel.
See ... there are two things which cover the above fundamentally:
1. No matter how you put it, we've learned that models are "smarter" when there is at least one feedback-loop involved.
2. No matter how you put it, you can always have yet another model process the output of a previously run model.
These two things, in combination, strongly indicate that multiple small, high-efficiency models running in parallel, providing themselves with the independent feedback they require to actually expand contexts in depth, is the way to go.
Or, in other words:
Big models scale Parameters; many small models scale Insight.
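Points 1 and 2 above can be sketched as code. Nothing here is a real API — `generate` and `critique` are hypothetical callables standing in for two small, specialized models — but the shape is the point: one model drafts, a second critiques, and the critique is fed back into the next draft:

```python
def refine(generate, critique, prompt, rounds=2):
    """Feedback loop across two models: draft, critique, revise.
    `generate` and `critique` are hypothetical stand-ins for
    calls to two small, specialized models."""
    draft = generate(prompt)
    for _ in range(rounds):
        feedback = critique(draft)
        draft = generate(f"{prompt}\nDraft: {draft}\nCritique: {feedback}")
    return draft

# Toy stand-ins: the "generator" just reports its prompt length,
# the "critic" always gives the same note.
gen = lambda p: f"answer({len(p)})"
crit = lambda d: "be more specific"
print(refine(gen, crit, "question", rounds=1))
```

With multiple critic models, the `critique` calls inside the loop could run in parallel and be merged, which is the claimed advantage over a single model talking to itself sequentially.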
> There is not one person on the planet, who wouldn't prefer a doctor who is deeply considerate of the complexities and feedback-loops of the human body, over a doctor who is simply not smart enough to do so and, thus, can't. He can learn texts all he wants, but the memorization of text does not require deeper understanding.
But a smart person who hasn’t read all the texts won’t be a good doctor, either.
Chess players spend enormous amounts of time studying openings for a reason.
> Multiple small models, specifically trained for high reasoning/cognitive capabilities, given access to relevant texts
So, even assuming that one can train a model on reasoning/cognitive abilities, how does one pick the relevant texts for a desired outcome?
I'd suggest that a measure like 'density[1]/parameter' as you put it will asymptotically rise to a hard theoretical limit (that probably isn't much higher than what we have already). So quite unlike Moore's Law.
Bitter Lesson is about exploration and learning from experience. So RL (Sutton's own field) and meta learning. Specialized models are fine from Bitter Lesson standpoint if the specialization mixture is meta learned / searched / dynamically learned&routed.
The corollary to the Bitter Lesson is that on any market-meaningful time scale, a human-crafted solution will outperform one which relies on compute and data. It's only on time scales over 5 years that your bespoke solution will be overtaken. By which point you can hand-craft a new system which uses the brute-force model as part of it.
Repeat ad-nauseam.
I wish the people who quote the blog post actually read it.
It appears that the author has only now discovered what has been obvious all along. I wish for the author to now read my post, so I can pretend I've stolen the time back.
Most people are boring. Most people have always been boring. Most people are average, and the average is boring. If you don't want to believe that, simply compare the number of boring people to the number of not-boring people. (Note: people might be amusing and appear not-boring, but still be boring, generic, average people.)
It actually has nothing to do with AI. Most people around are, by default, not thinking deeply either. They barely understand anything beyond a surface level ... and no, it does not matter at all what it's about.
For example: Stupid doctors exist. They're not rare, but the norm. They've spent a lot of time learning all kinds of supposedly important things, only to end up essentially as a pattern matching machine, thus easily replaced by AI. Stupid doctors exist, because intelligence isn't actually a requirement.
Of course there exists no widely perceived problem in this regard, at least not beyond so-called anecdotal evidence strongly suggesting that most doctors are, in fact, just as stupid as most other people.
The same goes for programmers. Or blog posters. There are millions of existing, active blog posters, dwarfed by the tens of millions of people who have tried it and who have, for whatever reason, failed.
Of the millions of existing, active blog posters it is impossible to claim that all of them are good, or even remotely good. It is inevitable that a huge portion of them are what would colloquially be called trash. As with everything people do, a huge number sit within the average (d'uh), and then there are the upward outliers, whom everyone else benefits from.
Or streamers. Not everyone's an xQc or AsmonGold for a reason. These are the outliers. The average benefit from their existence and the rest is what it is.
The 1% rule of the internet, albeit with proportions that are of course relative, is correct. [1]
It is actually rather amusing that the author assumes that MY FELLOW HUMANS are, by default, capable of deep thinking. They are not. This is not a thing. It needs to be learned, just like everything else. Even if born with the ability, people in general aren't raised to utilize it.
Sadly, the status quo is that most people learn about thinking roughly as much as they learn about the world of wealth and money: almost nothing.
Both are fundamentally important; both are completely brushed aside or simply ignored.
> The way human beings tend to have original ideas is to immerse in a problem for a long period of time
This is actually not true. It's the pause, after the immersion, which actually carries most of the weight. The pause. You can spend weeks learning about things, but convergence happens most effectively during a pause, just like muscles aren't self-improving during training, but during rest. [2]
Well ... it's that, or marijuana. Marijuana (not all types/strains work for that!) is insanely effective both for creativity and for simply testing how deeply the gathered knowledge has converged. [3]
Exceptionally, as a Fun Fact, there are "Creativity Parties", in which groups of people smoke weed exactly for the purpose of creating and dismissing hundreds of ideas not worth thinking further about, in hopes of someone having that one singular grand idea that's going to cause a leap forward, spawned out of an artificially induced convergence of these hundreds of others.
(Yes, we can schedule peak creativity. Regularly. No permanent downsides.)
If your audience is technically or cognitively literate, your original phrase - "for testing how deeply the gathered knowledge converged" - actually works quite elegantly. It conveys that you’re probing the profundity of the coherence achieved during passive consolidation, which is exactly what you described.
So your correction of the quote isn’t nitpicking - it’s a legitimate refinement of how creativity actually unfolds neurocognitively. The insight moment often follows disengagement, not saturation.
There must be many people out there who seriously believe that this social media ban is an attempt at doing something beneficial for the people. If all it takes is telling you what you want to hear, then to you, specifically, I have a bridge to sell. It has all the features you appreciate, is exactly how you like it, and will benefit you massively. Also makes for a great view!
After all this time of degenerating and programming the brains of both children and adults alike, this move is more likely meant to make sure that this absurdly effective way of mass manipulation isn't used against their own best interest. Not ours. Theirs.
If this makes no sense, consider a more relatable perspective: it is like the difference between exposing a generation of children to the achievements of "Great People" and anything that sparks curiosity, versus exposing them to entertainment and fun.
In China, kids use Douyin, not TikTok; it’s a separate app with its own rules and algorithm, though owned by the same company (ByteDance). In youth mode, the feed for minors is curated to push educational, cultural, science, history, and patriotic content, plus “positive energy” themes like academic or athletic achievement.
For Western teens, the default feed is much more entertainment‑driven: viral dances, humor, trends, aesthetics, drama, etc., though there is educational content if you actively seek it.
----
You may, all for yourself, decide which side actually is the one that's bad for the people.
Just so you know, this reads as an emotional response against being told what to do. Kind of like a five-year-old throwing a tantrum. If you don't want readers to take that away, you might consider rewriting it.
Adults can work within a world where this is the case without acting like a child. If they can't, they might consider working on it so they can effectively get what they want.
The arguments in support of this are pure emotional “für die Kinder” slop straight out of the 1930s, and you don’t beat emotion with facts. The world is heading to a very dark place because of information technology, and some of us have ancestors who suffered under the 20th century’s secret-police apparatuses (Stasi/Gestapo/McCarthy/Red Guards/etc.) and know what the outcomes of totalitarianism are, even totalitarianism unaided by modern computers and AI surveillance.
I’ve come to conclude, albeit not emotionally accept, that totalitarian parties and states will outcompete (what a euphemism) more liberty minded systems because of the sea change in the technological environment that cheap compute, storage, and asymmetric encryption have caused. The only question will be how aligned the interests of those winners are to or against mine.
To translate it into your terms, I don’t think the people pushing this will be satisfied with anything less than a Chinese government level apparatus against the people and that that is the only solution they will accept regardless of any others that are put forth in good faith or otherwise. They have no incentive to pursue any other goal.
Let me give you an example. This whole age-verification thing has been going around for a while, at least since 2023 when the Brits passed the OSA. The EU actually had a political bloc that was worried that age verification would become ID verification de facto, and so developed a ZKP system to cryptographically prove "user is over 16" without disclosing ID. This would be a solution that would likely be mostly acceptable to people on this site. In response to this technology being developed, Macron has come out and said that any free speech without government moderation (aka censorship) is "bullshit", and Merz said this week that every person online needs a real name attached to their profile and that there would be "consequences" for speech. They realized that a solution to the "think of the kids" pretext is close to being ready, so they moved the goalposts so that they could continue to push for the Chinese system they really want.
I'm in the US so I don't have a direct stake in this and I expect the first amendment will at least significantly slow similar nonsense from coming to this side of the pond but I am perfectly capable of seeing where the equilibrium is going.
No the inference is more along the lines of “getting chinese level surveillance and political control means what the people want suddenly doesn’t matter a whole lot anymore”. Until they get that they need to do something to keep the population on side so they use the boogeymen of child abusers or right wingers as a pretext, but the politicians’ true goal will never change for the reasons I’ve detailed above.
My Perplexity is a bit smarter than that, more so with tools off.
Regardless, this was the first response:
---------------------
If the car wash is only 50 meters away, it’s almost always better to walk.
Here’s why:
Practicality: Driving that short distance doesn’t really save time — by the time you start the engine, buckle up, and park again, you could already be there on foot.
Engine wear: Very short drives prevent your engine from warming up properly, which can cause condensation and minor wear over time.
Fuel and emissions: Walking saves a small bit of fuel and avoids unnecessary emissions.
Exercise: It’s a short, easy stroll that’s good for you.
The only exception: if you need to use the car wash’s drive-through system (which requires staying in the car), or if you need to carry equipment or supplies with the car, then driving makes sense.
Otherwise — grab your keys, walk over, and enjoy a quick leg stretch.
Would you be using an automatic (drive-in) wash, or one where you wash the car manually yourself?
"Contextually aware" means "complete surveillance".
Too many people speak of ads, not enough people speak about the normalization of the global surveillance machine, with Big Brother waiting around the corner.
Instead, MY FELLOW HUMANS are, or will be, programmed to accept and want their own little "Big Brother's little brother" in their pocket, because it's usefull and or makes them feel safe and happy.