Artificially intelligent technology can be strangely (and hilariously) bad at the very tasks it is set to perform. Think of InspiroBot, an algorithm that dutifully generates inspirational posters bearing nuggets of wisdom along the lines of "Before inspiration, the slaughterhouse comes" and "If you want to get somewhere in life you have to try to die". Or the AI-written Harry Potter chapter (Harry Potter and the Portrait of What Looked Like a Large Pile of Ash), in which Ron performs "a kind of frenzied tap dance" before eating Hermione's family.
Now, researchers at the San Francisco-based lab OpenAI have developed a text-generating algorithm so capable and so flexible in its work that they have decided to keep the full technology under wraps, citing concerns about its potential for abuse as an engine of fake news.
The model, called GPT-2, is designed to produce a complete article on any subject from a short human-written prompt. The result is a finished piece that looks entirely plausible – but is, in fact, pure invention. The team described the research in a paper posted online.
Here's an example. The human-written prompt: "In a shocking finding, scientists discovered a herd of unicorns living in a remote, previously unexplored valley in the Andes Mountains. Even more surprising to the researchers was the fact that the unicorns spoke perfect English."

The algorithm's continuation: "The scientists named the population, after their distinctive horn, Ovid's Unicorn. These four-horned, silver-white unicorns were previously unknown to science. Now, after almost two centuries, the mystery of what sparked this odd phenomenon is finally solved. Dr. Jorge Pérez, an evolutionary biologist from the University of La Paz, and several companions were exploring the Andes Mountains when they found a small valley, with no other animals or humans. Pérez noticed that the valley had what appeared to be a natural fountain, surrounded by two peaks of rock and silver snow."
The end result of this word-by-word generation is often believable, but it tends to be riddled with non sequiturs, invented quotes, and outright nonsense.
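The word-by-word process described above can be sketched in a few lines. GPT-2 itself is a large Transformer network conditioned on the entire prompt; the toy bigram table below is purely illustrative (every word and probability in it is a made-up placeholder), but the sampling loop – pick the next word given what came before, append it, repeat – is the same basic idea:

```python
import random

# Hypothetical toy "language model": maps a word to plausible next words.
# GPT-2 conditions on the whole context with a Transformer; this table
# only illustrates the word-by-word sampling loop, not the real model.
BIGRAMS = {
    "scientists": ["discovered"],
    "discovered": ["the"],
    "the": ["unicorns", "valley", "scientists"],
    "unicorns": ["spoke", "lived"],
    "spoke": ["perfect"],
    "perfect": ["English"],
    "English": ["."],
    "valley": ["."],
    "lived": ["."],
}

def generate(prompt_words, max_words=10, seed=0):
    """Extend the prompt one word at a time, each choice conditioned
    only on the previous word, until a '.' or the length limit."""
    rng = random.Random(seed)
    words = list(prompt_words)
    for _ in range(max_words):
        candidates = BIGRAMS.get(words[-1])
        if not candidates:
            break
        nxt = rng.choice(candidates)
        words.append(nxt)
        if nxt == ".":
            break
    return " ".join(words)

print(generate(["scientists", "discovered"]))
```

Because each word is chosen only from what locally "fits", the output reads fluently in small windows while nothing enforces global truth or consistency – which is exactly why the full-scale model produces believable text full of fabrications.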
The algorithm was trained on a corpus of about 8 million web pages, each shared on Reddit with a "karma" score of three or more. This means that at least three users had rated the content as valuable, whether because it was informative or entertaining.
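The karma threshold amounts to a simple quality filter over candidate pages. A minimal sketch of that idea follows; the field names (`url`, `karma`) and the sample records are illustrative assumptions, not OpenAI's actual data schema or pipeline:

```python
# Sketch of a karma-based quality filter for training data:
# keep only pages whose Reddit submission earned a karma score of 3+.
# Field names and records below are hypothetical, for illustration only.

def filter_by_karma(submissions, min_karma=3):
    """Return URLs of submissions that at least min_karma users found valuable."""
    return [s["url"] for s in submissions if s["karma"] >= min_karma]

posts = [
    {"url": "https://example.com/good-article", "karma": 7},
    {"url": "https://example.com/spam", "karma": 0},
    {"url": "https://example.com/ok-post", "karma": 3},
]

print(filter_by_karma(posts))
# → ['https://example.com/good-article', 'https://example.com/ok-post']
```

The threshold is a cheap proxy for human judgment: rather than rating pages directly, the filter trusts that three upvotes signal a page worth learning from.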
In general, the researchers say, the algorithm performs better on subjects such as politics and popular culture, which are heavily represented in those 8 million web pages, than on highly specialized technical topics. It does not always get things right: sometimes it repeats text, and at other times it switches subject abruptly and illogically. The team also occasionally noted "world modeling failures" – for example, text describing fires burning underwater, an event that would be physically impossible.
Nevertheless, the output is convincing enough to raise concern about the algorithm's potential impact on online scams and fake news. Given a prompt submitted by The Guardian, for example, it produced an entirely credible fake story about Brexit, complete with fabricated quotes from opposition leader Jeremy Corbyn and a reference to the Irish border – one of the most contentious issues in the Brexit debate. And so, as with deepfakes, the team has decided, at least for now, not to release the full algorithm publicly. They hope, however, that the decision will spark a debate about how AI technology should be used and controlled.
As the research team explained on OpenAI's blog: "These findings, combined with earlier results on synthetic imagery, audio, and video, imply that technologies are reducing the cost of generating fake content and waging disinformation campaigns. The public at large will need to become more skeptical of text they find online, just as the 'deep fakes' phenomenon calls for more skepticism about images."