One pixel can be enough to fool AI image recognition

Posted on Monday, Oct 30 2017 @ 10:38 CET by Thomas De Maesschalck
A group of researchers from Kyushu University in Japan discovered that AI-based image recognition tools can be really dumb and easily exploited. By altering a relatively small number of pixels, in a way that would not be noticeable to a human observer, AI-based systems can be tricked into classifying a photo of a car as a dog, or a dog as a cat:
As explained in a paper, the researchers came up with the startling conclusion that a one-pixel attack worked on nearly three-quarters of standard training images.

Not only that, but the boffins didn't need to know anything about the inside of the DNN – as they put it, they only needed its “black box” output of probability labels to mount the attack.

The attack was based on a technique called “differential evolution” (DE), an optimisation method which in this case identified the best pixel or pixels to target (the paper tested attacks against one, three, and five pixels).
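To get a feel for how differential evolution drives such an attack, here is a minimal toy sketch. The "classifier" below is a made-up scoring function standing in for the real black-box DNN, and the population/mutation parameters are illustrative assumptions, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a black-box classifier: returns how
# confidently an 8x8 grayscale image is labelled "class A".
# The real attack only queries the model's probability outputs,
# just like we only call this function.
def class_a_confidence(img):
    weights = np.linspace(0, 1, img.size).reshape(img.shape)
    return float((img * weights).sum() / weights.sum())

def one_pixel_attack(img, fitness, pop_size=30, iters=50):
    """Differential evolution over candidates (row, col, value):
    each candidate changes ONE pixel; DE searches for the change
    that most lowers the classifier's confidence."""
    h, w = img.shape
    # Initial population: random pixel coordinates and values.
    pop = np.column_stack([
        rng.integers(0, h, pop_size),
        rng.integers(0, w, pop_size),
        rng.random(pop_size),
    ]).astype(float)

    def score(cand):
        r, c = int(cand[0]) % h, int(cand[1]) % w
        trial = img.copy()
        trial[r, c] = np.clip(cand[2], 0.0, 1.0)
        return fitness(trial)

    costs = np.array([score(c) for c in pop])
    for _ in range(iters):
        for i in range(pop_size):
            a, b, c = pop[rng.choice(pop_size, 3, replace=False)]
            mutant = a + 0.5 * (b - c)       # DE/rand/1 mutation
            m_cost = score(mutant)
            if m_cost < costs[i]:            # greedy selection
                pop[i], costs[i] = mutant, m_cost
    best = pop[np.argmin(costs)]
    r, c = int(best[0]) % h, int(best[1]) % w
    adv = img.copy()
    adv[r, c] = np.clip(best[2], 0.0, 1.0)
    return adv

img = np.full((8, 8), 0.5)
adv = one_pixel_attack(img, class_a_confidence)
print(class_a_confidence(img), class_a_confidence(adv))
```

The key property DE buys here is that it never needs gradients or model internals, only repeated confidence queries, which matches the "black box" setting the researchers describe.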

By changing five pixels in a 1,024-pixel photo, the researchers achieved a success rate of 87.3 percent. The big caveat here is that they used very small images. On a photo of 280,000 pixels, which is about 530 x 530 pixels, it would require the alteration of 273 pixels, which is still relatively few. Full details at The Register.
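The 273-pixel figure follows from keeping the same fraction of altered pixels as a one-pixel change in the small test images (which, assuming the paper used the standard CIFAR-10 set, are 32 x 32 pixels). A quick back-of-envelope check:

```python
# One altered pixel in a 32x32 image is a tiny fraction of the whole.
small = 32 * 32              # 1,024 pixels
fraction = 1 / small         # roughly 0.098% of the image

# Apply the same fraction to the ~280,000-pixel photo from the article.
large = 280_000
altered = round(fraction * large)
print(altered)
```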

About the Author

Thomas De Maesschalck

Thomas has been messing with computers since early childhood and firmly believes the Internet is the best thing since sliced bread. Enjoys playing with new tech, is fascinated by science, and passionate about financial markets. When not behind a computer, he can be found with running shoes on or lifting heavy weights in the weight room.
