Artificial intelligence is all the rage these days. Technology companies are hiring talent straight out of universities before students even finish their degrees, hoping to become frontrunners in what many see as the inevitable future of technology. These systems are not faultless, however, as a flaw found in Google's own AI demonstrates.
The image you're looking at above is obviously a turtle; so why does Google's AI register it as a gun? Researchers from MIT achieved the trick with something called adversarial images: images purposely designed to fool image-recognition software through the use of special patterns. Those patterns confuse the AI system into declaring that it's seeing something completely different.
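The core idea can be sketched in a few lines. Below is a minimal, hypothetical illustration using a toy linear "classifier" and the fast gradient sign method, a standard way of building adversarial examples; the weights, labels, and epsilon here are made up for demonstration and have nothing to do with Google's actual model, which is a deep neural network.

```python
import numpy as np

# Toy linear "classifier": a positive score means "turtle", negative means "rifle".
# (Entirely hypothetical weights; real attacks target deep networks.)
w = np.array([0.9, -0.4, 0.3, 0.2, -0.1, 0.6, -0.7, 0.5])

def predict(x):
    return "turtle" if w @ x > 0 else "rifle"

# An input the model confidently calls a turtle.
x = np.array([1.0, -0.5, 0.4, 0.3, -0.2, 0.7, -0.8, 0.6])

# Fast gradient sign method: nudge every "pixel" a little, in the direction
# that most decreases the correct class's score. For a linear model the
# gradient of the score with respect to the input is just w.
epsilon = 0.8
x_adv = x - epsilon * np.sign(w)

print(predict(x))      # classified as a turtle
print(predict(x_adv))  # the perturbed copy is classified as a rifle
```

Each individual change to the input is small and structured, which is why, on a real image, the perturbation can look like a harmless texture while still flipping the classifier's decision.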
It's alarming that Google's image-recognition AI can be tricked into believing a 3D-printed turtle is a rifle. Why? If artificial intelligence progresses to the level the industry expects, powering everything from self-driving cars to systems that protect human beings, an error like this could have severe consequences. An autonomous car, for example, relies on machine vision; if it mistakes the sidewalk for part of the road, pedestrians could suffer serious injuries.
“In this work, we definitively show that adversarial examples pose a real threat in the physical world. We propose a general-purpose algorithm for reliably constructing adversarial examples robust over any chosen distribution of transformations, and we demonstrate the efficacy of this algorithm in both the 2D and 3D case,” MIT researchers stated. “We succeed in producing physical-world 3D adversarial objects that are robust over a large, realistic distribution of 3D viewpoints, proving that the algorithm produces adversarial three-dimensional objects that are adversarial in the physical world.”
Google and Facebook are fighting back, however. Both tech giants have published their own research indicating that they're studying adversarial image techniques like MIT's in order to find ways of securing their AI systems.
Given the progress the field has made, society as a whole may be tempted to put its full faith in AI. But completely trusting AI over human eyes is a troubling thought to entertain in the context of MIT's study.
“This work shows that adversarial examples pose a practical concern to neural network-based image classifiers,” they concluded.
J/K... sorry I couldn't help myself.
Oh crap, my GunRobot design plans were deleted from Google Docs. Also they removed everything I've ever posted on YouTube with the words "stock" or "bump".
Interpretation of images is an issue for people, too.
I have to wonder how well this can work if you don't use camouflage, which is just borrowing from a technique nature uses to do basically the same thing (i.e. fool a visual classifier). It seems like a pretty big crutch, since you could basically just be on the lookout for anything camouflaged that seems out-of-place. I'll bet it wouldn't be hard to build a classifier for that, which you could then use to reject other objects found within the camouflaged region.