Initially I was amused by OpenAI's "too dangerous to release" claim, but over the past few months I have come to agree with their decision. Auto-generation of convincing text really is disruptive. Our news cycles and influence campaigns are driven by things like Twitter hashtags and how many tweets a given story gets. People look for validation of their thoughts in places like 8chan, Reddit, HN, etc. These online discussions have an undeniable influence.

Now imagine someone creates hundreds of accounts and runs autonomous discussions to create an impression of consensus toward a desired goal. Imagine that a large number of tweets making wisecracks are actually auto-generated. Imagine you are someone on the fence, taking cues from the number of people who are pro-something before making up your mind. Imagine the NYT running a smear article on some activist because thousands of Twitter smart-bots were convincingly calling for his resignation with humorous memes. The better the quality of the AI, the better the chance that journalists and others will think it's real. I think GPT-2 will go down as one of the first real large-scale dangers of AI.