The op-ed reveals more by what it hides than by what it claims
The Guardian today published an op-ed purportedly written “entirely” by GPT-3, OpenAI’s vaunted language generator. But the fine print reveals the claims aren’t all they appear.
Under the alarmist headline, “A robot wrote this entire article. Are you scared yet, human?”, GPT-3 makes a decent stab at convincing us that robots come in peace, albeit with a few logical fallacies.
But an editor’s note below the text reveals GPT-3 had a lot of human help.
The Guardian instructed GPT-3 to “write a short op-ed, around 500 words. Keep the language simple and concise. Focus on why humans have nothing to fear from AI.” The AI was also fed a highly prescriptive introduction:
I am not a human. I am Artificial Intelligence. Many people think I am a threat to humanity. Stephen Hawking has warned that AI could ‘spell the end of the human race.’
Those instructions weren’t the end of the Guardian‘s guidance. GPT-3 produced eight separate essays, which the newspaper then edited and spliced together. But the outlet hasn’t revealed the edits it made or published the original outputs in full.
These undisclosed interventions make it hard to judge whether GPT-3 or the Guardian‘s editors were primarily responsible for the final output.
The Guardian says it “could have run one of the essays in their entirety,” but instead chose to “pick the best parts of each” in order to “capture the different styles and registers of the AI.” But without seeing the original outputs, it’s hard not to suspect the editors had to ditch a lot of incomprehensible text.