Viral AI tool that shows you how AI sees you turns out to be racist

The results are surprising, sometimes flattering, and often racist. Over the past few days, people online have been asking an AI tool to categorize photos of themselves, to see what a machine trained on human-labeled images sees when it looks at their faces.

Sometimes it can be surprisingly accurate. When I tested it on myself, for instance, it identified me as a psychologist, while my colleague Dr. Alfredo Carpinetti was classified as an “ordinary, ordinary man, ordinary person: a person without a surname”, which is just fact after fact after fact.

If you try it and don’t get any results that bad, rest assured it has plenty of worse things it could call you. Developed by artist Trevor Paglen and Kate Crawford, co-founder of the AI Now Institute at New York University, the tool uses an algorithm trained on ImageNet, one of AI’s “historically important training sets.” In 2009, computer scientists at Stanford and Princeton set out to teach computers to identify objects. To do this, they collected a huge database of photographs of everything from Formula 1 cars to olives. They then got people, paid through Amazon’s Mechanical Turk program, to sort the photos into categories.
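
To make that crowdsourcing step concrete, here is a minimal, hypothetical Python sketch of how labels from several paid annotators might be aggregated by majority vote into a training set. The file names, labels, and helper function are invented for illustration; this is not the actual ImageNet pipeline.

# Hypothetical sketch, not the real ImageNet pipeline: aggregating
# crowdsourced photo labels (e.g. from Mechanical Turk workers) into a
# training set by majority vote. File names and labels are invented.
from collections import Counter

annotations = {
    "photo_001.jpg": ["Formula 1 car", "race car", "Formula 1 car"],
    "photo_002.jpg": ["olive", "olive", "fruit"],
}

def majority_label(labels):
    """Return the label most annotators agreed on."""
    label, _count = Counter(labels).most_common(1)[0]
    return label

# Whatever the annotators decided becomes the "ground truth" a classifier
# is later trained on, so their biases are baked into the model.
training_set = {photo: majority_label(labels) for photo, labels in annotations.items()}
print(training_set)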

Much like the chatbot that learned to be racist and misogynistic after spending just one day on Twitter, tweeting things like “Hitler was right” and “I hate feminists and all of them have to die and burn in hell,” ImageNet Roulette’s problems come from the problematic data entered by humans, which it learns from. That is, in a way, the design.

ImageNet Roulette (which uses 2,500 labels to categorize users) is on display as part of the Training Humans photography exhibition at the Fondazione Prada Osservatorio museum in Milan, Italy.

“ImageNet Roulette was meant to demonstrate how various kinds of politics propagate through technological systems, often without the creators of these systems even being aware of them. ImageNet Roulette provides a glimpse into that process, and shows how things can go wrong,” Paglen and Crawford explain on the tool’s website. “We want to shed light on what happens when technology systems are trained on problematic training data.”

Basically, machines become racist and misogynistic because people are racist and misogynistic. “ImageNet contains a number of problematic, offensive and bizarre categories, all drawn from WordNet. Some use misogynistic or racist terminology. Hence, the results ImageNet Roulette returns will also draw upon those categories.”
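
As a purely illustrative sketch of why the taxonomy matters, the toy classifier below can only ever answer with a label that exists in its category list, so an offensive category in the training data surfaces directly in its predictions. The labels and scoring are invented placeholders, not ImageNet’s real categories or model.

# Hypothetical illustration: a classifier restricted to a fixed label
# taxonomy can only output labels from that taxonomy. Labels and scoring
# are placeholders, not ImageNet's actual categories or model.
import random

person_labels = ["psychologist", "pilot", "commoner", "an offensive slur"]

def classify(image, labels):
    """Stand-in for a trained model: return the highest-scoring label."""
    scores = {label: random.random() for label in labels}  # placeholder scores
    return max(scores, key=scores.get)

# Whatever the photo shows, the answer is drawn from person_labels, so a
# slur in the category list can become a slur in the prediction.
print(classify(image=None, labels=person_labels))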