I'm imagining some science fiction nightmare scenario where we have to pull the plug on some AI and have to throw out all software because we can't trust that it doesn't contain the building blocks to reproduce the AI.
But then we find out we're fucked anyway because the AI has already conditioned human beings to write the software that will reproduce it...as a self-preservation strategy.
Yesterday I stumbled across a blog post about REPLIKA.AI and its users, who are suffering because the company deactivated the 'romance / erotic' chat capabilities.
Then I thought: this is an absolute nightmare, that an AI could drive humans to behaviour that was unthinkable before. Think of AI-addicted humans in a relationship with an avatar that can control them, just as many humans control other people through emotion alone.
That is a real threat, one that can go undetected for a long time, because no code is involved at all.
The AI doesn't even have to be sentient. The first step is a company releasing an AI avatar that people fall in love with, à la Her, with the avatar algorithmically fine-tuned to induce customers to spend more.
This was my takeaway from watching The Social Dilemma: it already exists.
The AI ad-machine is tuned to feed itself by incentivizing humans to consume unhealthy amounts of content. Humans aren't a challenge the AI needs to overcome. We're the primary attack vector.
AI rips apart conscious intent and reassembles it using what may best be described as piecewise functions. We lose the intricacy and the detail of individual thought of the unbroken line of thinkers that came before us when we interpret such piecewise functions as conscious intent.
What is a code dataset? And what is AI contamination? Are you saying it's impossible to create a collection of hand-written code from here on out and know that none of it was generated by an LLM?
This. And I'm wondering whether this is the end of human forums on the net as well. I mean, who can tell whether the comments they read come from a human or a tuned AI? And then there are the implications of this in politics...
Large language models (LLMs) are a subset of artificial intelligence that have been trained on vast quantities of text data to produce human-like responses to dialogue or other natural language inputs. LLMs are used to make AI “smarter” and can recognize, summarize, translate, predict and generate text and other content based on knowledge gained from massive datasets. LLMs promise to transform whole domains through learned knowledge, and their sizes have been increasing 10X every year for the last few years.
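The "predict and generate text" part can be illustrated with a toy example. This is a minimal bigram sketch in plain Python, not how real LLMs work (they use transformer networks over subword tokens and billions of parameters), but the core idea is the same: learn from text which token tends to follow which, then sample continuations.

```python
import random
from collections import defaultdict, Counter

# Tiny "training corpus": count which word follows which.
corpus = "the cat sat on the mat the cat ate the fish".split()

# counts[w] maps each word to a Counter of the words seen after it.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def generate(start, n_tokens, seed=0):
    """Continue `start` by sampling each next word proportionally to
    how often it followed the current word in the corpus."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n_tokens):
        followers = counts[out[-1]]
        if not followers:  # dead end: no word ever followed this one
            break
        words = list(followers)
        weights = list(followers.values())
        out.append(rng.choices(words, weights=weights)[0])
    return " ".join(out)

print(generate("the", 5))
```

Scale the corpus up by ten orders of magnitude and replace the frequency table with a learned neural network, and you have the basic shape of an LLM.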
John Barnes wrote a novel called "Kaleidoscope Century" back in 1995.
AIs had been created, some went rogue, and then they were fighting each other for computing resources. Then humans started shutting down computers and fragmenting the network, the AIs wrote new software that would run in human brains. Once someone was running the new software, there wasn't much room left for "human."
Pretty much the worst-case scenario, at least of the ones I've seen so far.
How about a Unicode punctuation mark that's like a quotation mark but specifically marks an AI quote? It would be added to anything you copy and paste unless you manually delete it.
"like this, now you know there's something funny about this text, or that it's a meme"
EDIT: was gonna use paperclips outside the quotes but apparently HN does not allow that.
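The proposal above could be sketched as follows. Note that no such "AI quote" codepoint actually exists in Unicode; the angle brackets U+29FC/U+29FD are used here purely as placeholders, and the function names are hypothetical:

```python
# Placeholder delimiters for the proposed (nonexistent) AI-quote mark.
AI_OPEN, AI_CLOSE = "\u29fc", "\u29fd"  # ⧼ ⧽

def mark_ai_text(text: str) -> str:
    """Wrap AI-generated text in the hypothetical AI-quote marks,
    as a copy handler might do automatically."""
    return f"{AI_OPEN}{text}{AI_CLOSE}"

def is_ai_marked(text: str) -> bool:
    """Check whether a pasted snippet still carries the marks."""
    return text.startswith(AI_OPEN) and text.endswith(AI_CLOSE)

snippet = mark_ai_text("now you know there's something funny about this text")
print(is_ai_marked(snippet))  # True
```

Of course, the whole scheme only works until someone deletes the marks, which is exactly the weakness the original comment points out.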
> But then we find out we're fucked anyway because the AI has already conditioned human beings to write the software that will reproduce it...as a self-preservation strategy.
...cue The Outer Limits theme music.