Adversarial Turtle? AI Image Recognition Flaw Is Troubling

Artificial intelligence is all the rage these days. Technology companies are hiring talent straight out of universities before students even finish their degrees, hoping to become frontrunners in what many see as the inevitable future of technology. However, these machines are not exactly faultless, as showcased by a flaw found in Google’s own AI system.

The image you’re looking at above is obviously a turtle, so why does Google’s AI register it as a gun? Researchers from MIT achieved this trick with something called adversarial images: images that have been purposely designed to fool image recognition software through the use of special patterns. These patterns confuse an AI system into declaring that it is seeing something completely different.
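To give a sense of how such a perturbation can be produced, here is a minimal, hypothetical sketch in the style of the well-known fast gradient sign method, assuming a pretrained PyTorch classifier. It is an illustration of the general idea, not the exact technique the MIT researchers used.

```python
# Sketch: nudge every pixel slightly in the direction that increases the
# classifier's loss, so the image looks unchanged to humans but not to the model.
import torch
import torch.nn.functional as F
import torchvision.models as models

model = models.resnet50(pretrained=True).eval()

def fgsm_adversarial(image, true_label, epsilon=0.01):
    """Return a slightly perturbed copy of `image` that raises the loss."""
    image = image.clone().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Step in the sign of the gradient, keeping pixel values in a valid range.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0, 1).detach()

# Hypothetical usage: a 1x3x224x224 image tensor and its correct class index.
# adv = fgsm_adversarial(turtle_image, torch.tensor([35]))
# print(model(adv).argmax())  # may no longer be the "turtle" class
```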

It’s alarming that Google’s image recognition AI can be tricked into believing a 3D-printed turtle is a rifle. Why? If artificial intelligence progresses to the level the industry expects, powering everything from self-driving cars to systems meant to protect human beings, an error of this kind could lead to severe consequences. An autonomous car, for example, relies on machine perception; if it misclassifies a sidewalk as part of the road, pedestrians could be seriously injured.

“In this work, we definitively show that adversarial examples pose a real threat in the physical world. We propose a general-purpose algorithm for reliably constructing adversarial examples robust over any chosen distribution of transformations, and we demonstrate the efficacy of this algorithm in both the 2D and 3D case,” MIT researchers stated. “We succeed in producing physical-world 3D adversarial objects that are robust over a large, realistic distribution of 3D viewpoints, proving that the algorithm produces adversarial three-dimensional objects that are adversarial in the physical world.”
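The “robust over any chosen distribution of transformations” idea the researchers describe can be sketched roughly as follows: rather than attacking a single view of the object, the optimization averages the loss over many randomly sampled transformations (rotation, scale, lighting), so the adversarial pattern keeps working when the physical object is viewed from different angles. The sketch below is an assumption-laden illustration of that concept; the names `random_transform` and `target_label` are hypothetical, and the real method operates on rendered 3D textures.

```python
# Sketch of the "expectation over transformations" idea: optimize the texture
# so the classifier outputs the target class on average across random views.
import torch
import torch.nn.functional as F
import torchvision.transforms as T

random_transform = T.Compose([
    T.RandomRotation(30),
    T.RandomResizedCrop(224, scale=(0.8, 1.0)),
    T.ColorJitter(brightness=0.3),
])

def eot_step(model, texture, target_label, n_samples=10, lr=0.01):
    """One optimization step pushing `texture` toward `target_label`
    on average across sampled transformations."""
    texture = texture.clone().requires_grad_(True)
    loss = 0.0
    for _ in range(n_samples):
        view = random_transform(texture)  # one random simulated viewpoint
        loss = loss + F.cross_entropy(model(view), target_label)
    (loss / n_samples).backward()
    return (texture - lr * texture.grad.sign()).clamp(0, 1).detach()
```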

Google and Facebook are fighting back, however. Both tech giants have released research of their own indicating they are studying adversarial image techniques like MIT’s in order to find ways of securing their AI systems.

Given the progress made in the field, society as a whole may be tempted to put its faith entirely in AI, but trusting AI over human eyes is a troubling thought to entertain in light of MIT’s study.

“This work shows that adversarial examples pose a practical concern to neural network-based image classifiers,” they concluded.
