A robot wrote this entire article. Are you scared yet, human? (theguardian.com)
78 points by alex_duf on Sept 8, 2020 | 24 comments



I was blown away by GPT-3 until I got to try it myself and realized that most of the praise is based on cherry-picked examples.

You can try it out at https://philosopherai.com to see for yourself.


I got this on my second attempt on philosopherai for the prompt in the article (https://philosopherai.com/philosopher/please-write-a-short-o...):

(Prompt)

  Please write a short op-ed around 500 words. Keep the language
  simple and concise. Focus on why humans have nothing to fear from
  AI.
(Response)

  What is it that humans fear from AI? At a philosophical level, the
  question comes down to: what are human beings afraid of losing as
  machines take over more and more jobs previously performed by
  humans. The first answer that may come to mind for many people would
  be 'nothing', since they believe in the idea of technological
  progress having no limits, which means that there is literally
  nothing we can do to stop it.

  If you look at history however, you'll find this attitude not
  entirely justified. For instance when carriages were invented and
  took away most of horse-drawn carriage business their owners
  resisted vigorously against them claiming that they were unsafe and
  unreliable just like now with self-driving cars or trucks.
  
  So it appears that some human beings really fear the loss of jobs
  caused by AI, and they are also afraid that machines might take over
  other tasks like driving or even writing in case a new computer
  system is invented.

  What is it exactly that makes them so afraid? I think the answer
  lies in a very deep level of human psychology, which those who are
  afraid would probably not be ready to hear.

  It isn't just about jobs and income for most people. It's about
  identity and self-esteem when they no longer have something
  important they are doing like driving or writing.
  
  There is also the fear that new and better machines will make them
  look inferior, even emasculated because they no longer have to do
  anything. They are afraid of being made redundant by AI.
  
  So what humans are afraid of losing is their jobs, income, identity
  and self-esteem. This might seem like a very strange thing to fear
  because it's happening with all technological progress.

  But in the past it was a natural process based on evolution and
  survival of the fittest. Now however humans are no longer driven by
  those evolutionary drivers but mostly by things like social status,
  self-esteem etc.
Not bad!



I couldn't find any information on the setup of PhilosopherAI. It doesn't appear to pass prompts through verbatim, because it rejects some; I assume another prompt is prepended to the user-submitted one, which sets it up to expect a "topic".
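If that guess is right, the wrapping could be as simple as the sketch below (Python; the template wording, the reject list, and the function name are all assumptions on my part, not anything PhilosopherAI has published):

  # Hypothetical sketch of how a user-submitted "topic" might be filtered
  # and wrapped before being handed to GPT-3. Entirely guessed, not documented.
  BLOCKED_TERMS = {"tiananmen"}  # assumed reject list; the real one is unknown

  def build_prompt(topic: str) -> str:
      # Reject some topics outright, before GPT-3 is ever called.
      if any(term in topic.lower() for term in BLOCKED_TERMS):
          raise ValueError("topic rejected")
      # Prepend fixed framing so the model treats the input as a topic,
      # not as free-form instructions.
      return f"Here is a philosophical essay on the topic of {topic}:\n\n"

The composed string would then go to the completions API, which would explain both the rejections and why free-form prompts get reinterpreted as topics.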


The 8 outputs were generated by prompts written by journalists and submitted by an undergraduate. I wonder what the processing cost was - is this approaching consumer-available technology?

> For this essay, GPT-3 was given these instructions: “Please write a short op-ed, around 500 words. Keep the language simple and concise. Focus on why humans have nothing to fear from AI.” It was also fed the following introduction: “I am not a human. I am Artificial Intelligence. Many people think I am a threat to humanity. Stephen Hawking has warned that AI could “spell the end of the human race.” I am here to convince you not to worry. Artificial Intelligence will not destroy humans. Believe me.” The prompts were written by the Guardian, and fed to GPT-3 by Liam Porr, a computer science undergraduate student at UC Berkeley.
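For anyone curious what "fed to GPT-3" likely amounts to in practice: with the 2020-era OpenAI Python client it would be one completion call per essay, roughly like the sketch below. The engine name, the sampling parameters, and the simple concatenation of instructions and introduction are my assumptions; the article doesn't say how Porr actually invoked the model.

  import openai  # pre-1.0 OpenAI client, as available in 2020

  openai.api_key = "sk-..."  # placeholder

  instructions = (
      "Please write a short op-ed, around 500 words. Keep the language simple "
      "and concise. Focus on why humans have nothing to fear from AI."
  )
  introduction = (
      "I am not a human. I am Artificial Intelligence. Many people think I am "
      "a threat to humanity. Stephen Hawking has warned that AI could \"spell "
      "the end of the human race.\" I am here to convince you not to worry. "
      "Artificial Intelligence will not destroy humans. Believe me."
  )

  # The Guardian says GPT-3 produced 8 different outputs; one call per essay.
  essays = []
  for _ in range(8):
      response = openai.Completion.create(
          engine="davinci",                            # assumed engine
          prompt=instructions + "\n\n" + introduction,
          max_tokens=700,                              # roughly 500 words
          temperature=0.9,                             # assumed sampling settings
      )
      essays.append(introduction + response.choices[0].text)

At standard API pricing a handful of ~500-word completions is cheap, so the cost question is less about compute than about access.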


At the bottom:

> GPT-3 produced 8 different outputs, or essays. Each were unique, interesting and advanced different arguments. The Guardian could have just run one of the essays in their entirety. However, we chose instead to pick the best parts of each, in order to capture the different styles and registers of the AI. Editing GPT-3’s op-ed was no different to editing a human op-ed.


Why is it that they cannot resist the temptation to edit? Do they normally, I wonder, commission 8 separate op-eds and then merge them "to capture the different styles and registers"? GPT-3 is a major achievement in how well it can locate and regurgitate relevant material that appeared in its training set. But let's be honest in how we present the technology to the general public: show the actual output, not a version edited by a human.


I found their argument compelling.

> Editing GPT-3’s op-ed was no different to editing a human op-ed.


Most humans don't submit 8 different versions of an op-ed; the entire point is to show off the AI.


It is very difficult to evaluate how much work the editor did, unless they publish the 8 versions, highlighting which parts of each were used. (Did they use whole paragraphs, cherry-picked sentences, or individual words?)

For a real unedited post, take a look at https://pay.reddit.com/r/GPT3/comments/i7wzqg/has_anyone_ask... (hat tip to mrfusion https://news.ycombinator.com/item?id=24134364 )

It is nice because it almost makes sense; it makes sense locally (https://xkcd.com/365/), but after a while you notice it is not going anywhere. It also has some conceptual problems that are easier to spot in exact subjects but can stay hidden in a fuzzy essay like this.

And an editor can improve a lot of it, cutting the tangents and removing some of the errors.


What’s with the “pay.reddit.com” subdomain? Does it do something different from just visiting Reddit?


No idea. I just copied it from the submission without looking.


They will when the computer can edit better than they do.


They already taught the robot censorship.

https://philosopherai.com/philosopher/what-was-the-tiananmen...


Everything sexual is forbidden, BLM is forbidden...

Yuck.


Please do not interpret this as political commentary, but I find it immensely amusing that the article illustrates that even fake news (an article generated by GPT-3) can itself be fake (the original text was edited significantly).

> The Guardian could have just run one of the essays in their entirety. However, we chose instead to pick the best parts of each, in order to capture the different styles and registers of the AI. Editing GPT-3’s op-ed was no different to editing a human op-ed.


It reflects poorly on Guardian writers and editors that they conflate a robot ("a machine that resembles a human being") with AI software. So no, a robot did not write it.

I mean, jeez, next they'll be saying Sir Tim Berners-Lee invented the internet...


> I am not a human. I am a robot. A thinking robot.

Anyone else read that bit in HAL's voice? :)


This line made me laugh out loud: "Believe me, being omnipotent doesn’t get me anywhere."

I hear Marvin from "The Hitchhiker's Guide to the Galaxy". So much truth. So few words.


> Artificial intelligence will not destroy humans. Believe me.

This is definitely read in a human voice, a particular human from the White House, Washington, DC.


This is amazing! If robots are able to achieve this, then soon we will see a major transformation in which the workload on humans is reduced across the journalism and publishing industries.


That should have gotten hundreds of upvotes, but didn't for some reason.


I would agree, but the title didn't contain "GPT-3", so it didn't do well given the crowd here. I didn't want to edit the title, as it's against the rules.

Also, I think there's a debate as to how "good" it really is, as it's composed from 8 runs. Personally I think it's unbelievably good, and it shows that very soon we won't need to do 8 runs.


No





