I am convinced that Q, if it didn't start as one, was quickly taken over and turned into a Russian psyop. I expected the Mueller report to validate this (gross online astroturfing campaigns by state actors), but unless it is in the redacted parts, it is not there. Even casually investigating banned Twitter accounts used for Russian propaganda, though, you see a lot of clear connections with the "totally organic" Christian MAGA QAnon soccer-mom accounts.
That leaves me with mere speculation: I believe Q was initially conceived in case Trump lost the election. It was to be a group of useful idiots/unwitting agents protesting the "rigged" election, maybe even riled up to the point of taking up arms. Then, when Trump won, it was repurposed to sow disinformation about child sacrifice and child-porn rings run by elite Democrats. 9/11 saw the same thing: massive efforts by Russia and Middle Eastern countries to seed inside-job conspiracy theories, with the result that many Americans now believe 9/11 was an inside job and can't really trust their government anymore (social cohesion damaged).
I believe some of these unwitting agents are so impressionable and gullible that they can be made to act as "Manchurian candidates", doing the bidding of foreign intelligence agencies (spreading propaganda, muddying the waters with disinformation and conspiracy, picking up a gun and going out to free children from the basement of a pizza parlor). I believe these intel agencies are able to infiltrate grassroots movements and subvert their ideals to be hostile to their host countries. I believe these agencies are able to create fake online realities, where unwitting agents are made to believe they are part of something big and that everyone agrees with them, while they are being psychologically manipulated.
> I believe these intel agencies are able to infiltrate grassroots movements
Yeah, this happened, per Mueller. A very large BLM Facebook group that organized rallies is one example.
I previously thought most of the psyop efforts were lower-level IRA trolling, per the unredacted parts of the Mueller report.
Digging into some of the private sector threat intel reports around semi-attributed campaigns (see the link), there's evidence that more deliberate operations are occurring.
Most of the complex campaigns I read about seemed focused on border states between Great Powers, while the CONUS campaigns focused on sowing nonsense and chaos and mucking up comms channels. But I guess it's no large feat, beyond boldness, to deploy something with a more structured narrative inside the CONUS digital sphere of influence.
I mean "of course this is a thing that happens," but it's still a seriously interesting read on paper, it could easily be employed in the US, and it's pretty bold to do.
This started before the election, when Putin thought that Trump would lose, and the influence campaigns were designed to promote him as a detractor and divider, calling out the rigged elections and the deep-state swamp, with Roger Stone laundering Russian "opposition research", like he did with WikiLeaks, at that time a front for Russian intelligence.
4chan has been very involved with these intelligence ARGs since Project Chanology. Pedowood morphed into Pizzagate. The "Seth Rich murder" was pushed hard on 4chan to distract from the fact that the Russians hacked the Democrats. And it works: instead of being insulted and angry that a foreign nation-state so pollutes the brains of your country and riles up the population to the level of riots and extreme polarization, some like to think that Hillary had Seth Rich assassinated for leaking. With so much smoke, who knows what to think? All the while you see artificial KONY2012-style meme campaigns accusing Mueller of being a deep-state fixer, synced across Twitter, Facebook, and 4chan.
> anything trained on internet data is kinda doomed to poison itself on the high ratio of garbage floating around here?
Low-quality noise cancels out and leaves the high-quality signal. In the limit, the internet offers the true sequence probabilities for compression of natural text.
You can also put more weight on authoritative data sources, such as Wikipedia and StackOverflow, but even with uniform weighting it is possible to sequence-complete prime numbers, despite the many, many pages online filled with random numbers.
GPT-3 is trained on a filtered version of Common Crawl, enhanced with authoritative datasets, such as Books1, WebText, and Wikipedia-en. Moderation is done automatically, with a toxicity classifier/toggle. If GPT-n becomes good enough to be accepted in authoritative datasets, then it is perfectly fine training data, a form of semi-supervised learning.
Bias is going to be a double-edged sword: I believe it will be impossible to prescribe common sense, or to sanitize common sense to remove, say, gender bias, and still be able to understand a sexist joke about female programmers or male nurses. We want an AI to be human, but we don't want it to associate CEOs with white males with dark hair wearing suits. Those goals conflict.
You can get a glimpse by scrolling websites like https://www.darpa.mil/opencatalog?ppl=view200&sort=title&ocF... [.mil] and looking at DARPA- and Office of Naval Research-sponsored ML/AI research. The military has been deeply involved with ML/AI research since the field's inception, and it is near impossible to avoid first- or second-degree involvement if you are active in ML/AI.
The military wants: automated chat agents/web users that can be sent to dark-web markets and hacker IRC channels and report back intelligence; common-sense inference from security and drone footage (predict who the killer is when watching a movie); author deanonymization and cross-device tracking; global-scale 99.9%+ accurate face detection.
The Dutch Intelligence Agency organizes a yearly competition with difficult codes to crack. [1] It is rare for someone to answer all questions correctly. The answers require logic, creativity, common sense, linguistics, causal inference, spatial reasoning, expertise, analysis, and systematic thinking. I bet the military would be mighty interested in an automated problem solver for that. And mighty scared some other country gets there first.
My cursory research, after sensing something was amiss or forced about the narrative, is that the situation is much the same as with the border detention centers: those cages were in use during the Obama era, but only went viral under Trump. The same thing is happening here: the same not-so-secret police were active during the Ferguson riots, but only now do we make a big stink out of it.
The police are not secret: they are from the Federal Bureau of Prisons (DoJ), called the Special Operations Response Team, specialized in disturbance/riot control and in assisting local police in case of emergency.
Don't play the semantics game: even rioters are protesters. It makes as much sense for a SORT to arrest innocent protesters as it makes for a taxi driver to swerve onto the pavement and hit people.
No, low-background steel is mentioned in the replies. It's similar in the sense that GPT-3 generated text is going to contaminate the data we collect from now on.
Giving certain people who are highly visible on social media pre-public access to the model, and letting them cherry-pick their completions to post without the prompt or the number of tries, is a smart form of propaganda/hype building/PR management that we have come to expect from "GPT-2 is too dangerous to release" OpenAI.
Sometimes I forget that, while this model was created by scientists, and released with a scientific paper, it is essentially a for-profit business product, and such cheap tricks deserve harsh criticism.
> Sometimes I forget that, while this model was created by scientists, and released with a scientific paper, it is essentially a for-profit business product, and such cheap tricks deserve harsh criticism.
Sure, but this is akin to seeing bad science journalism and tarring the science itself with the same brush. GPT-3 still factually has certain properties, independently of anyone making grandiose assertions about those properties.
What those properties are, we can say only a little about—e.g. we know it’s capable of generating certain texts eventually, among an unbounded corpus of other texts it may have generated that were then human-discarded. But the fact that it can generate those texts at all—faster than brute-force, I mean—is an interesting fact on its own, worthy of scrutiny independent of whatever airier claims are being made.
It is certainly impressive, and I don't want to discard GPT-3. I'm just critiquing the (smart) release: make a select few feel special by giving them API access, and watch your product dominate the tech and news cycles for weeks. You'll have VC money in the bank before showing actual worth or business value.
Maybe a bit simplistic, but I view GPT as a Markov chain text generator, operating on word vectors instead of word tokens, and with a larger look-back. It's like a child copying a joke because she heard adults laughing about it, but she does not understand the punchline. You wouldn't say that child understands or even displays humor, despite substituting "horse" with "donkey" when retelling the joke.
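For reference, the classic word-token Markov chain generator being compared to here fits in a few lines of Python; the point of the comparison is that GPT differs mainly by operating in embedding space with a far longer context (a minimal sketch, not GPT's actual mechanism):

```python
import random
from collections import defaultdict

def build_chain(text, order=2):
    """Map each `order`-word context to the words observed to follow it."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        context = tuple(words[i:i + order])
        chain[context].append(words[i + order])
    return chain

def generate(chain, seed, length=20):
    """Extend the seed by repeatedly sampling a random observed successor."""
    context = tuple(seed)
    out = list(context)
    for _ in range(length):
        followers = chain.get(context)
        if not followers:
            break  # dead end: this context was never seen in training text
        out.append(random.choice(followers))
        context = tuple(out[-len(context):])
    return " ".join(out)
```

A chain like this can only reproduce n-grams it has literally seen; word vectors let a model generalize to contexts that are merely similar, which is where the "substituting horse with donkey" behavior comes from.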
I've spent ten hours playing with it over the last two days. It isn't perfect, and it feels short of the hype it's generating about itself, but it's an amazing leap nonetheless. It really seems to have an understanding of causality, biology, all sorts of fictional themes...
It isn't perfect. You frequently have to back it up and try again. Unless you make good use of the site's long-term memory function, it'll forget anything that happened over a page ago, and a lot of the time its idea of what should happen next doesn't match the plot I had in mind. I'm getting better at that.
However, as a writer myself, I can say that this is just as true for human writers. For every final draft you see, there are ten discarded ones, and a hundred that never made it to paper.
Viewed that way, GPT-3 is actually much better at the core part of writing than I am! It's more creative, it uses English better, it's better at matching the narration to the characters than I am...
It's just that this isn't enough. It's missing a full model of the world, and it doesn't know how to look at what it's written and decide if it matches its intent, or whether it'll break consistency or get in the way later.
It doesn't have an intent. It doesn't know about consistency.
But that's also true for that part of me.
GPT-3 isn't a human-level writer. What I've determined, however, is that it's a huge part of one, and it's more than good enough to fulfill the role of that part already. Now we just need the other nine tenths.
> it doesn't know how to look at what it's written and decide if it matches its intent, or whether it'll break consistency or get in the way later.
And we can build other models specifically for this. We don't need to add this stuff to GPT-3; GPT-3 can literally act as a part, a component. GPT-3 can serve the role in a larger model that "imagination" does in a human brain—being fed inputs; having corresponding outputs scavenged through by the rest of the model; and then being "fed back" with input that relates to the scavenged outputs.
One thing I'd be very curious to see tried, is to get a system consisting of GPT-3 as "writer", and some other (summarization?) model as "editor", to attempt to dramatize or adapt into prose fiction, a machine-readable sequence of events (e.g. a machinima recording of a stage-play enacted within an MMO game.)
We already have models that turn machine-readable sequences of events directly into prose; see e.g. baseball news reporting. Such models can work just as well in reverse, summarizing in-domain prose back into machine-readable facts.
So if you take such a prose-to-factual-assertions "reading comprehension" model, and feed it GPT-3's output; and then measure the distance between the set of events comprehended by the "reading comprehension" model from GPT-3's output, and the source data (which is also in the form of a set of factual assertions), then you can iterate GPT-3 — maybe even one additional line of prose at a time — to find a story that is a consistent adaptation of the source. In this sense, GPT-3 is acting as a programmer, and the "reading comprehension" model as a compiler — with the compiler reaching out and erasing any line that doesn't compile.
Of course, you're limited in this by the "reading level" of the reading-comprehension model. But this is also true of humans; you can't get out a literary classic if the writer's editor and alpha-readers were five-year-olds.
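The generate-and-check loop described above can be sketched as follows, where `gpt3_continue` (the "writer") and `extract_facts` (the "compiler") are hypothetical stand-ins for the two models:

```python
def adapt(source_facts, gpt3_continue, extract_facts, max_lines=100, tries=10):
    """Grow a story line by line, keeping only lines whose comprehended facts
    stay consistent with the source events; the "compiler" erases any line
    that doesn't "compile" against the source data."""
    story = []
    for _ in range(max_lines):
        for _ in range(tries):
            candidate = gpt3_continue(story)
            facts = extract_facts(story + [candidate])
            if facts <= source_facts:  # no contradictions with the source
                story.append(candidate)
                break
        else:
            break  # no consistent continuation found within the try budget
        if facts == source_facts:
            break  # every source event has been dramatized
    return story
```

The subset test is the crude part: a real system would need a softer distance between fact sets, since prose legitimately adds texture (weather, dialogue) that the source events never mention.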
The domain is play.aidungeon.io, and the GPT-3-based version is only available to sponsors right now.
After seeing that the domain name didn't work, I thought for a moment that your post was GPT-3 output (imaginary URLs are a good GPT-2 tell), but some research shows that there actually is a GPT-3 version.
I don't care if you followed a two month course by Joel Grus on Fizzbuzz and graduated magna cum laude. I care if you can actually code Fizzbuzz.
Listing Coursera courses on your resume is like listing your Microsoft Office skills. It would only be noteworthy if, for some reason, you couldn't find your way around MS Office, or couldn't pick it up in a weekend.
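For the record, the FizzBuzz in question is this much code:

```python
def fizzbuzz(n):
    """Return the FizzBuzz sequence for 1..n as a list of strings."""
    out = []
    for i in range(1, n + 1):
        if i % 15 == 0:       # divisible by both 3 and 5
            out.append("FizzBuzz")
        elif i % 3 == 0:
            out.append("Fizz")
        elif i % 5 == 0:
            out.append("Buzz")
        else:
            out.append(str(i))
    return out
```

Which is exactly the point: it tests whether you can translate a trivial spec into working code at all, not whether you've sat through a course about it.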
> Missing parentheses in call to 'print'. Did you mean print(1, 2, 3)?
Yes, Python, I meant exactly that! I've never seen this error message be wrong. Now that you know what I mean, please fix it automatically; I know you can do that. Heck, throw a single warning if you really want to enforce this. I can ignore warnings that I don't care about.
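Part of the frustration is that this is raised at compile time, before any user code runs, so there is no hook to demote it to a warning. You can see the message CPython produces (exact wording varies a bit between versions) without a separate file:

```python
# Python 2 style print is a SyntaxError in Python 3, caught at compile time.
try:
    compile("print 1, 2, 3", "<example>", "exec")
except SyntaxError as e:
    print(e.msg)  # "Missing parentheses in call to 'print'. ..."
```

Since the parser has already reconstructed the intended call to build the hint, auto-fixing it really is mechanically possible, which is what makes the hard error feel gratuitous.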
I'm not sure whether to take this as an in-joke (knowing the reference is part of the fun) or poor attribution (credits should be given where they're due).
Since at least 2011, the Dutch have tested the wastewater of cities for traces of drugs, such as cocaine and XTC, and used this to inform public policy.
As early as 2009 it was known that coronaviruses can survive and remain infectious in sewage water for up to two weeks.
Early COVID wastewater research mostly focused on wastewater and stool as a possible infection vector, and on March 3rd the WHO suggested that COVID patients use their own toilet, to avoid spread through aerosolization during flushing, and that chlorination efforts in sewage systems be increased.
Since the first outbreak of SARS-CoV, when it was found that apartment plumbing can be an infection vector, people in Hong Kong have been advised to close the toilet lid before flushing.
Well, I was taught to do it in one motion: one hand closes the lid while the other pushes the flusher, so it is closed by the time things start spraying around.