OpenAI built a text generator so good, it’s considered too dangerous to release

A storm is brewing over a new language model built by the non-profit artificial intelligence research company OpenAI, which says the system is so good at generating convincing, well-written text that it is worried about potential abuse.
That decision has angered some in the AI research community, who have accused the company of reneging on a promise not to close off its research.
OpenAI said its new natural language model, GPT-2, was trained to predict the next word across a 40-gigabyte sample of internet text. The result is a system that generates text that “adapts to the style and content of the conditioning text,” letting the user “generate realistic and coherent continuations about a topic of their choosing.” The model is a vast improvement on the first version, producing longer passages of text with far greater coherence.
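For readers curious what “conditioning text” means in practice, here is a minimal sketch of prompt-conditioned generation using the small, publicly released GPT-2 checkpoint via the open-source Hugging Face transformers library; the library, model size, and sampling settings are assumptions for illustration, not anything the article describes.

```python
# A minimal sketch, assuming the Hugging Face transformers library and the
# small public "gpt2" checkpoint (neither is named in the article).
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# The conditioning text: the model continues from this prompt.
prompt = "Recycling is good for the world."
inputs = tokenizer(prompt, return_tensors="pt")

# Generate a continuation one predicted next token at a time.
output = model.generate(
    **inputs,
    max_length=60,                         # total tokens, prompt included
    do_sample=True,                        # sample instead of greedy decoding
    top_k=40,                              # sample from the 40 likeliest tokens
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Because decoding samples from the model’s next-word distribution, each run produces a different continuation in the style of the prompt.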
But for every good application of the system, such as bots capable of better dialogue or improved speech recognition, the non-profit found several malicious ones, like generating fake news, impersonating people online, or automating abusive or spam comments on social media.
To wit: when GPT-2 was tasked with writing a response to the prompt, “Recycling is good for the world, no, you could not be more wrong,” the model produced a lengthy, coherent argument for exactly the opposite position.