While you may be happy for social sites to identify what's in your photos, some people go to great lengths to ensure that the images they post online cannot be recognised for what they actually contain. This means automated systems scanning for specific types of image can be fooled into believing that an elephant is a bowl of guacamole. The BBC reported last year ( http://www.bbc.co.uk/news/technology-41845878 ) that changing even a single pixel could be enough to fool image classification software.
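To get a feel for just how small a single-pixel change is, here is a minimal sketch. A real attack searches for the specific pixel and colour that flips a classifier's decision; this hypothetical example only measures how little of the image such an attack touches (the image size and coordinates are arbitrary assumptions).

```python
import numpy as np

# Build a stand-in 224x224 RGB image (the size many classifiers expect).
rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(224, 224, 3), dtype=np.uint8)

# "Attack" it by altering exactly one pixel. Inverting the channel values
# guarantees the pixel differs from the original.
attacked = image.copy()
attacked[100, 100] = 255 - attacked[100, 100]

# Count how many pixels differ between the two images.
changed = int(np.any(image != attacked, axis=-1).sum())
fraction = changed / (224 * 224)
print(changed)              # 1
print(f"{fraction:.4%}")    # roughly 0.002% of the image
```

One pixel out of 50,176 is far below anything the human eye would notice, yet it can be enough to change a classifier's answer.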
Thanks to research by Aaditya Prakash, Nick Moran, Solomon Garber, Antonella DiLillo and James Storer of Brandeis University, a newly published paper (Protecting JPEG Images Against Adversarial Attacks, 2018) identifies new algorithms that could be used to protect images.
To give a visual, here’s the elephant example that the team worked on, among others.
To the human eye, the four pictures look extremely similar. Working from left to right: the team start with the original image; the second image has had the attack applied, and is classified as guacamole. The third image has first been compressed to JPEG and then classified; this correctly identifies the elephant, but with low confidence, also considering that it may be a chimpanzee. The final image uses the new Aug-MSROI method, which produces an image that is classified correctly with good confidence and can still be decoded by standard JPEG routines.
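The third step in that pipeline, re-compressing the image as JPEG before classifying it, can be sketched in a few lines with Pillow. Saving and re-decoding discards much of the high-frequency detail an adversarial perturbation relies on. The quality setting here is an illustrative assumption, and this plain round-trip does not reproduce the paper's Aug-MSROI method, which goes further than ordinary JPEG.

```python
from io import BytesIO

import numpy as np
from PIL import Image  # Pillow, a widely used imaging library


def jpeg_recompress(pixels: np.ndarray, quality: int = 75) -> np.ndarray:
    """Round-trip an RGB uint8 array through the JPEG codec in memory."""
    buffer = BytesIO()
    Image.fromarray(pixels).save(buffer, format="JPEG", quality=quality)
    buffer.seek(0)
    return np.asarray(Image.open(buffer).convert("RGB"))


# Demonstrate on a stand-in image; a defence would feed the result to
# the classifier instead of the raw (possibly attacked) upload.
rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
out = jpeg_recompress(img)
print(out.shape, out.dtype)  # same shape and dtype, perturbations smoothed
```

Because the output is an ordinary decoded JPEG, it slots into any existing classification pipeline unchanged, which is part of the appeal of compression-based defences.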
So if you were thinking that by overlaying or underlaying hidden images you could get the photo of your pet kitten identified as a bowl of chilli, think again: the likelihood of the classifier still seeing a purring kitten is high.