A 3D-printed “turtle” could trick image recognition systems into thinking it was a rifle, according to a newly published study. Researchers from MIT and LabSix were exploring whether known ways of fooling systems with 2D images could also work with photographs of physical objects.
The study set out to expand on the existing practice of “adversarial images”, images deliberately crafted to expose weaknesses in recognition software. A previous study noted by the BBC found that a single changed pixel – in exactly the right place – was enough to cause a misidentification in 74 percent of cases. These included both a taxi and a stealth bomber being wrongly labelled as a dog.
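To give a flavour of what that kind of attack involves, the sketch below shows the crudest possible version: randomly probing single pixels until the model’s decision flips. It is not the code from either study – the reported attack used a far more directed search against a trained network – and the `classify` function here is just a toy stand-in.

```python
# Minimal sketch of a single-pixel attack via brute-force probing.
# "classify" is a toy placeholder; a real attack queries a trained classifier.
import numpy as np

def classify(image: np.ndarray) -> int:
    # Placeholder model: pretend the class flips if any channel is very bright.
    return int(image.max() > 0.9)

def one_pixel_attack(image, true_label, trials=1000):
    rng = np.random.default_rng(0)
    height, width, channels = image.shape
    for _ in range(trials):
        candidate = image.copy()
        y, x = rng.integers(height), rng.integers(width)
        candidate[y, x] = rng.random(channels)   # overwrite one pixel with random RGB
        if classify(candidate) != true_label:    # the model's decision flipped
            return candidate
    return None  # no single fooling pixel found within the trial budget

toy_image = np.full((32, 32, 3), 0.5)            # the toy model labels this class 0
adversarial = one_pixel_attack(toy_image, true_label=0)
```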
The technique exploits the way neural network-based systems attempt to replicate the human ability to recognize patterns and common characteristics in types of objects. In some cases these work a little like a flowchart, narrowing the range of possibilities with each decision. An adversarial image aims to throw the system off at one of these decision points, after which reaching the correct conclusion becomes impossible.
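In its standard 2D form, the idea can be written down very compactly. The sketch below shows the widely used “fast gradient sign” recipe: every pixel is nudged slightly in the direction that most increases the model’s error on the correct label. It assumes a generic differentiable PyTorch classifier called `model` and is an illustration only, not the researchers’ actual code.

```python
# Minimal fast-gradient-sign sketch: perturb each pixel slightly in the
# direction that pushes the classifier away from the correct answer.
import torch

def fgsm_adversarial(model, image, label, epsilon=0.01):
    # image: a (1, 3, H, W) tensor in [0, 1]; label: a (1,) tensor of the true class.
    image = image.clone().detach().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(image), label)
    loss.backward()
    # Step every pixel by +/- epsilon, whichever direction raises the loss.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0, 1).detach()
```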
The MIT researchers took the formula used to find the ideal “fool-inducing” adjustments to a 2D image and extended it to 3D modelling, in turn allowing them to print a physical “turtle” specifically designed to trick image recognition into seeing a rifle. They then took more than 100 photographs of the turtle from different angles and fed them into an image recognition system.
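The key change in moving from a flat image to a printable object is that the attack has to survive being viewed from many angles. Roughly speaking, instead of fooling the classifier on one fixed photo, the object’s texture is optimized against the average loss over many randomly sampled viewpoints and lighting conditions. The sketch below illustrates only that idea; `model`, `render` and `target_label` are hypothetical stand-ins, not the study’s actual pipeline.

```python
# Hedged sketch: optimize a texture perturbation so the classifier is fooled
# on average across many randomly rendered views of the object.
import torch

def attack_over_views(model, texture, render, target_label,
                      steps=500, samples=16, lr=0.01):
    delta = torch.zeros_like(texture, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        loss = 0.0
        for _ in range(samples):
            view = render(texture + delta)                  # random pose / lighting each call
            loss = loss + torch.nn.functional.cross_entropy(
                model(view), target_label)                  # pull predictions toward e.g. "rifle"
        (loss / samples).backward()                         # average over the sampled views
        opt.step()
        opt.zero_grad()
        delta.data.clamp_(-0.1, 0.1)                        # crude bound: keep it looking like a turtle
    return (texture + delta).clamp(0, 1).detach()
```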
The system misidentified the “turtle” as a rifle 82 percent of the time. In a further 16 percent of cases it misidentified it as something else, most commonly other types of gun. In only two percent of cases did it identify it as a turtle.
A similar test with a printed “baseball” saw the system identify it as an espresso in 59 percent of cases, and correctly as a baseball in just 10 percent of cases.