All roads of AI are converging on the end of privacy. A free society does not exist in the absence of privacy.
When you lose privacy you lose ownership over your own thoughts. You lose agency over the direction of your own life.
Humanity suffers in either scenario: either our own agency is in control of power we are not prepared to manage, or we are managed by power that we cannot control.
Many have their focus on the unknown concerns of AGI; however, these other issues are nearly upon us already and are far less present in the discourse around AI.
Loss of agency is a deeply profound concern that is not discussed enough.
So much of human life is tied to the reward of the sense of accomplishment, and AI threatens that in a meaningful way. From large things, like seeing a piece of software you built ship and get used, or seeing your child graduate from college after years of hard work raising them, to the small things, like even eating a delicious meal you cooked yourself.
When machines are not only offering to solve problems and perform actions for us, but are reading our thoughts and doing everything for us before we even make an attempt, I worry that sense of accomplishment will become ever less frequent.
And while we can compensate for machines taking jobs through, e.g., universal basic income or the like, how we solve for machines replacing struggle, perseverance, and accomplishment is still very much TBD.
Indeed, this is one of the major topics I focused on in the referenced article.
"AI brings into question who is really the creator. Am I the creator if I simply role the dice until I win the lottery? When I regenerate a new response over and over until finally getting the image or output I most desire? When the output is the final product, is AI still just a tool? Or are we simply observers foolishly convincing ourselves of the value of our contribution?"
The types of decisions those chore-bots make are rather inconsequential, whereas the ones that will decide which digital content you can trust or whether it's something you shouldn't want to see could get very nasty.
I got an Automower this year and it's been one of the best purchases I ever made. I have multiple hours per month of my life back. Though I have more grass than I'd like.
You have to physically climb into a multimillion-dollar, 10-ton, liquid-helium-cooled machine and spend hours training it on your brain. We're a long way from the end of private thought, nor is there a roadmap to it.
If someone is working on a handheld device that can perform MRI-level scans from across the room, I would be worried about the privacy implications of that technology, not AI.
That is missing the point. It is the trajectory. Machines that can analyze all orthogonal data to understand your behavior are not limited to the single dimension of an MRI machine.
Facial expression analysis, all of your online conversations: your entire life is being monitored and recorded. There will be enough data to perceive beyond the veil into the thoughts of the mind.
It is definitely missing the point - capturing brain waves does not require liquid cooling, etc. In fact, it's something that could theoretically be squeezed into the next Meta Quest just to "capture". Then it would just be stored until they are ready with an airflow pipeline to pass it into the neural network.
During the pandemic, I built a device that records "brain waves". Not with the fidelity that would be needed to be a useful medical instrument -- let alone read minds -- but the fact that I, a fairly average geek, can do so using my own equipment and for less than $100 seems meaningful.
The device itself is very small, battery-powered (for safety) and requires that you attach electrodes to your scalp.
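If anyone is curious what the software side of a build like this looks like, here's a minimal sketch (Python, using numpy/scipy/pyserial) of estimating alpha-band power from raw samples. It assumes a hypothetical device that streams one ADC reading per line over serial at 250 Hz; the port name, sample rate, and microvolt scaling are placeholders, not any real product's interface.

    # Minimal sketch: read raw EEG-like samples and estimate 8-12 Hz (alpha) power.
    # The serial port, sample rate, and ADC-to-microvolt scale are assumptions.
    import numpy as np
    from scipy.signal import welch
    import serial  # pyserial

    FS = 250              # assumed sample rate in Hz
    WINDOW_SECONDS = 4

    def read_window(port, n_samples):
        """Read n_samples raw ADC values (one integer per line) and scale to microvolts."""
        samples = []
        while len(samples) < n_samples:
            line = port.readline().strip()
            if line:
                samples.append(int(line) * 0.1)  # placeholder scale factor
        return np.array(samples)

    def alpha_band_power(signal, fs=FS):
        """Estimate alpha-band (8-12 Hz) power via Welch's method."""
        freqs, psd = welch(signal, fs=fs, nperseg=fs * 2)
        band = (freqs >= 8) & (freqs <= 12)
        return float(np.trapz(psd[band], freqs[band]))

    if __name__ == "__main__":
        with serial.Serial("/dev/ttyUSB0", 115200, timeout=1) as port:
            window = read_window(port, FS * WINDOW_SECONDS)
            print(f"alpha power: {alpha_band_power(window):.2f} uV^2")

None of this gets anywhere near "reading minds", of course; it just shows how low the barrier to entry is for the signal-processing end.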
I suspect that the underlying point here is that when EEGs first came around in the 1920s, they were also extremely expensive and not available for common use. But since then, the technologies involved have become so much more accessible and affordable that an electronics hobbyist can build one for themself, maybe even using parts they already have kicking around.
MRIs may very well follow a similar trajectory over time.
Personally, I don't think this will happen in the very near future -- but history shows us time and time again that it's good to start thinking about these things well before they become practical.
That doesn't make sense from a physics perspective.
MRI machines operate by creating intense magnetic fields. In order to create those magnetic fields, you need superconductors, otherwise the magnets would burn up. Thus the liquid helium.
To make this accessible to the hobbyist, you need a revolution in physics, not informatics or engineering. Not saying that it's impossible, but if someone does develop room temperature superconductors, we're going to be talking about a lot more exciting things than handheld MRI machines.
> That doesn't make sense from a physics perspective.
I'm reminded of what a physicist once told me: if a physicist says something is impossible, give up all hope. If a physicist says something is uneconomical/impractical/infeasible, then there is still hope because economics change over time.
> To make this accessible to the hobbyist, you need a revolution in physics, not informatics or engineering.
The big stumbling block in terms of doing this on an (advanced) hobbyist level is the need for liquid helium and rare metals. That's an economic problem, not a physics problem. The helium is a really big deal -- but it's also the one whose economics are most likely to change, because the helium shortage is a serious issue and there are lots of people looking into ways to produce it efficiently. If they succeed, helium may end up becoming cheap enough to be within the realm of possibility for advanced hobbyists.
Also, the only reason that helium is needed at all is because MRI machines require superconductivity to work. It's not impossible that an advance in that field could happen such that you don't need to make things as cold as liquid helium in order to achieve it.
> we're going to be talking about a lot more exciting things than handheld MRI machines.
Hand-held? Why do they have to be hand-held? I'm just talking about ordinary people being able to build one at all, not how portable it would be.
What are you worried about? That someone will kidnap you, force you into an MRI machine, force you to train it for hours on your neural firing patterns, and get the password to your bank account this way?
I'm trying to figure out which part of this threat model AI makes a meaningful difference in. If they already have you captive, the xkcd-certified $5 wrench is cheaper.
"End of private thought" doesn't seem to be on this tech tree, unless you posit being able to scan people secretly or against their will.
I'm not worried about any of that at all. None of what I've said has some unstated "therefore, this is bad" clause to it. I'm just pondering the progression of technology here.
If someone comes up with a technology that allows people's minds to be read without their cooperation, then I'd start to worry -- but I see nothing in this that indicates that's where things are going.
Also, the idea of building my own MRI appeals to me, so my mind went on a little tangent about how to make that happen.
Progress isn't linear. Just because we can cure one disease doesn't mean we're on a trajectory to eliminate all diseases. Just because today's Camry is faster than last year's Camry doesn't mean we'll be traveling at relativistic speeds anytime soon.
That we can correlate thought with incredibly precise and detailed electrochemical phenomena in your brain should come as absolutely no surprise to anyone with a materialist view of the universe. Your thoughts are, after all, electrochemical phenomena. The problem - still - is measuring them.
The idea that AI is going to somehow read your mind from macro phenomena like facial expressions and the width of your iris is total bullshit made up by people who want to sell TV shows. We already have this kind of "technology" in the form of polygraphs - and they have the same effectiveness as horoscopes.
You would possibly have a relevant point if it weren't for the fact that we are already in a crisis with regard to loss of privacy and its impacts on society.
In this respect, any further loss is of significant concern.
You have to lug around a giant bulky laptop and you can't even call people on it! We're a long way from mobile computing.
Anyway, even given current limitations, I'd likely side with the very researchers working on this. They explicitly highlight privacy as a serious concern, going out of their way to flag potential misuse through bypassing requirements (the subject's cooperation) and intentional misinterpretation for nefarious reasons.
One thing that history teaches is that there are always people who will misuse technology for personal gain and to influence or control other people. Always. Human nature hasn't changed.
You do now. But if you are arrested, you can be compelled to do that, and in a few years EEG skullcaps will probably be sufficient.
Honestly, it baffles me that so many people on a site devoted to technology evaluate long-term trends based on current capabilities. Capacity is going to continue to double every ~2 years. If we can't make transistors 2x smaller, we'll find a different architecture to make transistor arrays 2x larger, or [something].
Technical progress compounds. It's often lumpy rather than linear, but it's going to keep imposing an accelerating effect wherever people see profit in applying it.
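To put rough numbers on that compounding (a back-of-the-envelope sketch that simply assumes one doubling every two years, not a forecast for any particular technology):

    # Back-of-the-envelope: capacity multiple after N years, assuming a doubling
    # every 2 years (illustrative assumption only).
    for years in (10, 20, 30):
        multiple = 2 ** (years / 2)
        print(f"{years} years -> ~{multiple:,.0f}x")
    # 10 years -> ~32x, 20 years -> ~1,024x, 30 years -> ~32,768x

Even if the real curve is lumpier than that, the orders of magnitude are what matter when thinking about long-term trends.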
Liquid helium wouldn't be a barrier to this being used on key individuals (say, leaders of opposition parties in some large, authoritarian-ish country). The training would be a barrier, if it requires cooperation.
> You have to physically climb into an multimillion dollar, 10 ton, liquid-helium-cooled machine and spend hours training it on your brain
the article suggests it could be accomplished with a neurosurgical implant as well - and honestly, if I could transcribe my thoughts to review later, I'd love to try that out. In which case, the question of security moves to the matter of accessing the implant.
Privacy is gray and there has never been absolute privacy. For example, if I go somewhere in public then people see me. I should not expect my location to be private.
> if I go somewhere in public then people see me. I should not expect my location to be private.
If that's the case, then we have almost no meaningful privacy. Fortunately, that doesn't have to be the case.
If I go out in public, people can see me, sure. But the odds of any of those people knowing who I am are minuscule. To them, I'm just another body in the bulk of bodies that populate their landscape. My privacy is retained.
It's when cameras are everywhere, recording everyone they see, when we carry devices that report our locations to others, etc., and that data is correlated with other data, that privacy is compromised.
From the study:
"...we tested whether successful decoding requires subject cooperation and found that subject cooperation is required both to train and to apply the decoder."
Reading the discussion in the paper, it looks like if you lie "out loud" in your head about what you're thinking, you'll fool the interpretation. For now.
Of course that can change -- will almost certainly change as the technology improves. But it's an important caveat at the moment and the researchers clearly paid attention to this aspect.
The need for cooperation suggests that what's being learned here is the correlation between brain activity and produced language. Language is not all (maybe most?) of thinking though [1]. So it may not be a certainty that technology improvements allow decoding of subconscious thought processes. This feels related to the possible fundamental limitations of LLMs trained on language too.
That sounds as if what was actually learned were the neural patterns associated with speaking. AFAIK, thinking out loud in your head causes activation in the motor neurons one would use for speaking.
Things like this have been tried before to help locked-in patients; unfortunately, conditions like ALS seem to correlate with a weakening of those patterns.
It does not require that you realize you are cooperating, if you're just playing a VR game in some future version of the Quest that captures brain waves while you're navigating a labyrinth.
But it can't read what you aren't sounding out in your mind. It can't determine truthfulness, and it can't find memories.
If you're playing a VR game, you probably aren't using your inner voice to articulate your secrets. Although if you are of great importance and interest to people with sufficient resources, they might be able to trick you into doing so in a game world.
There's also a ways to go with the accuracy. On the other hand, though, they trained with an extremely small GPT model (GPT-1, I believe), so... bitter lesson incoming?
It is important to keep in mind that the subjects had to get their brains scanned for 16 hours to train the machine learning algorithm to subsequently "decode their thoughts." So this cannot just be wielded on any unsuspecting person.
Moreover, I'm skeptical this can be significantly improved, as fMRI is quite a blunt instrument when it comes to assessing brain activity; it is a delayed and coarse-grained view of relatively large volumes of aggregated brain tissue.
> The decoder produced more accurate results during the tests with audio recordings, compared to the imagined speech, but it was still able to glean some basic details of unspoken thoughts from the brain activity. For instance, when a subject envisioned the sentence “went on a dirt road through a field of wheat and over a stream and by some log buildings,” the decoder produced text that said “he had to walk across a bridge to the other side and a very large building in the distance.”
It sounds like it's reading a person's snapshot of the story in a visually descriptive language. That's really neat! I can imagine this is the beginning of telepathy.
Why is it that with every new development these days, one reads a quote like "researchers think this could be used for <insert thing that benefits humanity, but will remain underdeveloped because nobody wants to invest enough money in it>, but they warn that it could also be used for <insert very negative possible use, in which many with the resources and power will gladly invest if given the possibility and lack of regulation or control of any type>", and yet they continue developing it because it might "benefit humanity"? Where the hell are you going with this?!
They continue developing it for a career that lets them afford healthcare and education and stave off misery for a generation or two, for themselves and their family.
Exactly. Global poverty and inflation make it easier for money-havers to control what the population at large puts effort towards. We're all slaves, just on a much bigger scale.
So far it only works with words. I'm wondering how soon it could work with pictures, so you could imagine a willow tree and the machine shows a willow tree on a screen. A powerful daydreamer could make movies without a camera.
From my perspective, the reconstruction of linguistic thought is the new part. Reconstruction of thought images was already done several years ago, and recently using diffusion models: https://www.biorxiv.org/content/10.1101/2022.11.18.517004v2
This technology has tremendous potential for good. It’s not all nefarious.
People who are paralyzed or suffering from neurodegenerative diseases might have a chance of regaining independence. New treatments for mental health diseases could be a benefit as well.
This is coming one way or another and is pretty frightening. Mass surveillance is one thing, but also personal interrogation. You can extract the deepest darkest secrets of any one individual.
The thin line of privacy will disintegrate into nothing.
That’s not true. We don’t need to do this. Absent intervention, sure these technologies are likely to be introduced and invade our lives. But we are humans, not computers, and despite the decisions of the last several decades, we are not required to allow market forces and the profit motive algorithm to make all decisions in our society. We can make whatever rules and decisions we want. It won’t come without a fight, but this horrifying development seems a pretty good place to start.
What government is going to turn down these capabilities? Even if they banned "private sector" development, you can bet they're still gonna go down this road.
And frankly, that might be for the best. Imagine whatever country is most antagonistic toward your own, and now imagine their military fully develops this and yours doesn't. How safe/secure are you feeling?
I don't think I'd feel (or be) much less safe/secure if an antagonistic nation developed such a thing and mine didn't. I would certainly feel less safe and secure if my nation had this capability regardless of whether or not others did.
You don't need MRI machines and experimental technology to read people's thoughts. You just need access to their search (or chat) history. It's not protected by HIPAA, ethical standards or attorney/client privilege. And it's always just ambiguous enough for you to paint whatever picture you need to in order to hang them.
Out of concern for data leakage to third-parties, my employer gave employees access to an internal-only Azure instance of GPT3.5.
The next thing they did was enable logging of all queries. Security collects the data, but forwards it to the analytics group for reasons even I don't know.
Tread lightly with WorkplaceGPT if you don't want your employer to "read your mind" and make future RIF decisions based on your AI-obfuscated incompetence.
Tangentially related: I've always been intrigued by whether it is possible to "record" someone's inner monologue when they actively move their vocal cords/folds but don't produce any sound. I think you might get enough data to reconstruct sentences, given that you know which language to find the words in.
Dylan’s closing to his 1965 song, It’s alright Ma…, suddenly feels a bit too real:
“And if my thought-dreams could be seen
They’d probably put my head in a guillotine
But it’s alright, Ma, it’s life, and life only”
That song, btw, was somewhat prescient of today’s Intrusive Advertising Industrial Complex:
“Advertising signs they con
You into thinking you’re the one
That can do what’s never been done
That can win what’s never been won
Meantime life outside goes on
All around you”
In a world of consumers who would salute, as the next cool thing to have, a button-like device on the forehead that interfaces with brainwaves and connects online to devices and social media, all running on closed, non-auditable platforms, what could possibly go wrong?
Torture is notoriously unreliable for retrieving accurate information. Abu Ghraib was just a reflection of America’s unthinking sadism on the global stage, not comparable to a mind reading machine.
Mind reading and thought crime are equal-opportunity problems.
Godwin's Law really needs to be updated to account for the gay persecution complex. Any state that corrupt would have a cheaper and easier time just tickling you until you confess.
This is probably the missing link for useful implanted brain-machine-interfaces too.
Previous decoding of neural signals was barely able to meet us halfway; that is, the subject also adapted to get the desired results, and maybe the receiver learned a bit too.
When the work you're doing is several layers of abstraction removed from its worst applications it's easy to rationalize. For the authoritarian leader, this is a huge advantage of specialization and "replaceable cog in the machine" style of job standardization - no one person is building enough of the "evil" thing to feel responsible for the result, and most of the workers are replaceable enough that the "if I don't do it, they'll just get someone else who will" rationalization is probably correct.
> that the "if I don't do it, they'll just get someone else who will" rationalization is probably correct.
That may be correct, but from an ethical point of view, it's completely bankrupt. People who justify doing things they know to be unethical on the basis that someone else will just do it anyway are, of course, being unethical even if they are correct.
Even worse are those people who think others will do it anyway, so it's better if they themselves (obviously being good people) do it instead of those other, terrible people.
Everything can (and will) be weaponized, so the only realistic way to approach it is as a cost/benefit analysis. If something can bring more good than harm, excellent. If something can bring more harm than good, then maybe rethink things.
The upside to this technology is allowing a few people to communicate who have neurological damage. The downside is every government on earth being able to read your mind. Sounds like a good trade!
Okay, but how about using this in conjunction with a polygraph for some high-stakes case where you'd benefit from even a slight confirmation of the truth? Then the interrogator (for lack of a better word) can structure questions so as to lean the conversation towards the truth.
In fact, now that I think about it, even "pleading the Fifth" could be worked around using this type of setup.
> Most people believe that polygraphs have some amount of validity
They have a much higher rate of false positive than false negative. That makes them useful in circumstances where your primary concern is, say, eliminating risky hires for jobs accessing sensitive data -- you may lose lots of good candidates, but your concern isn't fairness.
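A toy worked example, with entirely invented rates, just to illustrate that asymmetry:

    # Toy screening example with invented numbers (illustrative only).
    applicants = 1000
    risky = int(applicants * 0.02)        # 20 genuinely risky applicants
    safe = applicants - risky             # 980 safe applicants

    caught = int(risky * 0.90)            # 18 risky applicants flagged (few false negatives)
    missed = risky - caught               # 2 risky applicants slip through
    wrongly_flagged = int(safe * 0.30)    # 294 good applicants flagged (many false positives)

    print(f"risky applicants caught: {caught}, missed: {missed}")
    print(f"good applicants lost to false positives: {wrongly_flagged}")

Losing ~300 good candidates to catch 18 of 20 risky ones is a terrible deal for the individuals flagged, but an acceptable one if your only goal is keeping risk out of sensitive roles.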
Privacy concerns aside - I'm unclear on the specific approach used in this study and how it differs from previous work and would love some input as to whether my interpretation is correct.
From my reading, the tl;dr is that they:
- Build a subject-specific model to predict fMRI activations when presenting a subject with words
(I don't believe this is novel in and of itself. I know it has been done with ECoG - [1] off the top of my head, but I am fairly certain there are others. So maybe using fMRI is the main advance here?)
- Use GPT to generate candidate sentences, and see which candidates most match the true activations.
(This method of narrowing the solution space seems new.)
The improvement in within-subject accuracy over between-subject accuracy leads me to believe that there is an actual benefit here, but I'm struggling to determine how they quantify the improvements over and above "GPT is good at predicting human language".
I may be misunderstanding the approach altogether, however, so take this with a grain of salt.
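If that reading is right, the core loop might look something like the sketch below. This is not the authors' code; the encoding model, candidate generator, and similarity metric here are all hypothetical stand-ins, just to make the idea concrete.

    # Minimal sketch of the candidate-scoring idea as I understand it (not the paper's code).
    # encode_to_fmri() stands in for a subject-specific encoding model mapping text to
    # predicted voxel activations; generate_candidates() stands in for a GPT-style model
    # proposing continuations. Both are placeholders.
    import numpy as np

    def encode_to_fmri(text):
        """Placeholder: predict voxel activations for a piece of text."""
        rng = np.random.default_rng(abs(hash(text)) % (2**32))
        return rng.standard_normal(1000)      # pretend 1000-voxel prediction

    def generate_candidates(context, n=5):
        """Placeholder: a language model would propose n plausible continuations here."""
        return [f"{context} ... candidate {i}" for i in range(n)]

    def score(candidate, observed):
        """Correlation between predicted and observed activations (one possible metric)."""
        predicted = encode_to_fmri(candidate)
        return float(np.corrcoef(predicted, observed)[0, 1])

    def decode_step(context, observed):
        """Keep whichever candidate continuation best explains the observed activations."""
        return max(generate_candidates(context), key=lambda c: score(c, observed))

The interesting part, if this is right, is that the language model does the heavy lifting of proposing plausible sentences, and the brain data only has to discriminate between them.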
For "«powerful»" we could assume that the reference is to identifying the relevant patterns output by the fMRI, while the ANN transformers seem to be more related to the translation into «reconstruct[ed] continuous language from [the] cortical semantic representations»¹. I.e. they seem to be useful to give the interpretation a "form", a way to express it. What you read must be matched with something - transformers are "good at language".
The impacts on socialization and society in general from AI performing exactly as we have requested it may be one of the greatest threats. I've written quite a bit on that aspect, FYI - https://dakara.substack.com/p/artificial-intelligence-ai-end...