Please write a short op-ed around 500 words. Keep the language
simple and concise. Focus on why humans have nothing to fear from
AI.
(Response)
What is it that humans fear from AI? At a philosophical level, the question comes down to this: what are human beings afraid of losing as machines take over more and more jobs previously performed by humans? The first answer that comes to mind for many people is 'nothing', since they believe technological progress has no limits, which means there is literally nothing we can do to stop it.
If you look at history, however, you'll find this attitude is not entirely justified. For instance, when the automobile was invented and took away most of the horse-drawn carriage business, carriage owners resisted vigorously, claiming the new machines were unsafe and unreliable, just as people do now with self-driving cars and trucks.
So it appears that some human beings really do fear the loss of jobs caused by AI, and they are also afraid that machines might take over other tasks, like driving or even writing, as new systems are invented.
What is it exactly that makes them so afraid? I think the answer lies at a very deep level of human psychology, one that those who are afraid would probably not be ready to hear.
It isn't just about jobs and income for most people. It's about identity and self-esteem when they no longer have something important to do, like driving or writing.
There is also the fear that new and better machines will make them look inferior, even emasculated, because there is nothing left for them to do. They are afraid of being made redundant by AI.
So what humans are afraid of losing is their jobs, income, identity and self-esteem. This might seem like a strange thing to fear, because it happens with all technological progress. But in the past it was a natural process based on evolution and survival of the fittest. Now, however, humans are no longer driven by those evolutionary pressures, but mostly by things like social status and self-esteem.
I couldn't find any information on the setup of PhilosopherAI. It doesn't appear to pass prompts through verbatim, because it rejects some; I assume a hidden prompt is prepended to the user-submitted one, which sets the model up to expect a "topic".
The 8 outputs were generated from prompts written by journalists and submitted by an undergraduate. I wonder what the processing cost was - is this approaching consumer-available technology?
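If that guess is right, the wrapper might look something like the sketch below. To be clear, everything here is speculative: the hidden template, the blocklist check, and the generate() call are my inventions, not anything documented about PhilosopherAI.

```python
# Purely speculative sketch of how PhilosopherAI *might* wrap user input.
# The template, the rejection check, and generate() are all hypothetical.

BLOCKLIST = {"politics", "violence"}  # guessed rejection criteria

def build_prompt(user_topic: str) -> str:
    # Reject some topics outright, matching the observed behaviour
    if any(word in user_topic.lower() for word in BLOCKLIST):
        raise ValueError("topic rejected")
    # Hypothetical hidden preamble that frames the user input as a "topic"
    return (
        "You are a philosopher. Write a thoughtful essay on the "
        f"following topic.\n\nTopic: {user_topic}\n\nEssay:"
    )

# essay = generate(build_prompt("fear of AI"))  # generate() = the GPT-3 call
```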
> For this essay, GPT-3 was given these instructions: “Please write a short op-ed, around 500 words. Keep the language simple and concise. Focus on why humans have nothing to fear from AI.” It was also fed the following introduction: “I am not a human. I am Artificial Intelligence. Many people think I am a threat to humanity. Stephen Hawking has warned that AI could “spell the end of the human race.” I am here to convince you not to worry. Artificial Intelligence will not destroy humans. Believe me.” The prompts were written by the Guardian, and fed to GPT-3 by Liam Porr, a computer science undergraduate student at UC Berkeley.
> GPT-3 produced 8 different outputs, or essays. Each was unique, interesting and advanced a different argument. The Guardian could have just run one of the essays in its entirety. However, we chose instead to pick the best parts of each, in order to capture the different styles and registers of the AI. Editing GPT-3’s op-ed was no different to editing a human op-ed.
Why is it that they cannot resist the temptation to edit?
Do they normally, I wonder, commission 8 separate op-eds and then merge them "to capture the different styles and registers"?
GPT-3 is a major achievement in how well it can locate and regurgitate relevant material that appeared in its training set.
But let's be honest in how we present the technology to the general public.
Show the actual output, not a version edited by a human.
It is very difficult to evaluate how much work the editor did unless they publish the 8 versions, highlighting which parts of each were used. (Did they use whole paragraphs, or cherry-pick sentences or individual words?)
It is nice because it almost makes sense; it makes sense locally (https://xkcd.com/365/), but after a while you notice it is not going anywhere. It also has some conceptual problems that are easier to spot in exact domains but can stay hidden in fuzzy essays like this.
And an editor can improve a lot of it, cut the tangents, and remove some of the errors.
Please do not interpret this as political commentary, but I find it immensely amusing that the article illustrates that even fake news (an article generated by GPT-3) can itself be fake (the original text was edited significantly).
It reflects poorly on the Guardian's writers and editors that they conflate a robot ("a machine that resembles a human being") with AI software. So no, a robot did not write it.
I mean, jeez, next they'll be saying Sir Tim Berners-Lee invented the internet...
This is amazing! If robots are able to achieve this, then we will soon see a major transformation in which the workload on humans is reduced across the journalism and publishing industries.
I would agree, but the title didn't contain "GPT-3", so it didn't do well with the crowd here. I didn't want to edit the title, as that's against the rules.
Also, I think there's a debate as to how "good" it really is, since it was composed from 8 runs. Personally I think it's unbelievably good, and it shows that very soon we won't need to do 8 runs.
You can try it out at https://philosopherai.com to see for yourself.
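On the 8-runs point: sampling several candidates in one request is a single parameter in the completion-style API GPT-3 shipped with. Below is a minimal sketch using the pre-1.0 openai Python client; the prompt, model name, and sampling settings are illustrative guesses, not Porr's actual setup.

```python
# Sketch of best-of-8 sampling with the era-appropriate (pre-1.0)
# openai Python client; prompt and settings are illustrative only.
import openai

openai.api_key = "sk-..."  # your API key

response = openai.Completion.create(
    engine="davinci",   # the original GPT-3 model
    prompt="Please write a short op-ed, around 500 words. "
           "Keep the language simple and concise. "
           "Focus on why humans have nothing to fear from AI.",
    max_tokens=700,     # roughly enough for ~500 words
    temperature=0.9,    # higher temperature -> more varied runs
    n=8,                # eight independent completions in one call
)

# Each choice is one "essay"; the Guardian merged the best parts by hand.
for i, choice in enumerate(response.choices):
    print(f"--- run {i + 1} ---\n{choice.text.strip()}\n")
```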