Pictures Fight Back: MIT’s PhotoGuard AI Image Protection

Pulling the Plug on AI Image Manipulation: Meet PhotoGuard

Believe it or not, the mighty DALL-E and Stable Diffusion were only the opening salvos in the AI artillery. As we wade deeper into the pool of generative AI, it’s evident that the race isn’t to the swift but to the innovative. As businesses scramble to offer more than their rivals, chatbots are flexing their muscles not just in creating images but in editing them. At the vanguard are the likes of Shutterstock and Adobe.

But here’s the dark underbelly — the amplified power of AI opens a Pandora’s box of potential misconduct. The unscrupulous could manipulate or even pilfer existing artwork online. Sure, watermarking techniques can throw a spanner in the works for some wrongdoers, but there’s a more formidable weapon in the arsenal now. Enter “PhotoGuard”, an innovation from the brains at MIT CSAIL.

AI-Proofing Art: The Genius Behind PhotoGuard

PhotoGuard operates on a devilishly simple principle: it meddles with specific pixels in a picture, throwing off AI’s comprehension of the image. To our humble human eyes, these disturbances are undetectable, but for machines, it’s a different story altogether.

The “encoder” method of infusing these anomalies zeroes in on the model’s latent representation of the protected image, the internal encoding from which the AI works out the position and color of every pixel. Simply put, it’s like smudging the AI’s math homework so the picture no longer adds up. The more demanding “diffusion” method takes the subterfuge a step further: it optimizes the perturbation so that, in the eyes of the AI, the image passes for an entirely different target image, throwing any attempted edits off the scent.

MIT doctoral student and lead author of the paper, Hadi Salman, elucidates, “The encoder attack tricks the model into believing that the input image is an entirely different image. The diffusion attack, on the other hand, nudges the diffusion model to make edits towards some target image.”
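To make the encoder attack less abstract, here is a minimal sketch of the idea in PyTorch: within a tiny, invisible pixel budget, nudge the image until the latent representation produced by a Stable Diffusion-style VAE encoder drifts toward that of an unrelated target image. The model name, loss, step size, and iteration count below are illustrative assumptions, not the exact recipe from the PhotoGuard paper.

```python
# Illustrative sketch of an encoder-style immunization attack (assumptions:
# a public Stable Diffusion VAE from Hugging Face diffusers, an L-infinity
# budget, and PGD-style updates; not the paper's exact settings).
import torch
import torch.nn.functional as F
from diffusers import AutoencoderKL

device = "cuda" if torch.cuda.is_available() else "cpu"
vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse").to(device).eval()
vae.requires_grad_(False)  # we only need gradients w.r.t. the perturbation

def immunize(image, target, eps=0.06, step_size=0.01, iters=100):
    """Add a barely visible perturbation that drags the image's latent
    representation toward the latent of an unrelated target image.

    image, target: float tensors in [0, 1] with shape (1, 3, H, W).
    eps: L-infinity budget that keeps the change imperceptible to humans.
    """
    image, target = image.to(device), target.to(device)
    with torch.no_grad():
        target_latent = vae.encode(target * 2 - 1).latent_dist.mean

    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(iters):
        adv = (image + delta).clamp(0, 1) * 2 - 1        # scale to [-1, 1]
        latent = vae.encode(adv).latent_dist.mean
        loss = F.mse_loss(latent, target_latent)         # match the decoy latent
        loss.backward()
        with torch.no_grad():
            delta -= step_size * delta.grad.sign()       # signed gradient step
            delta.clamp_(-eps, eps)                      # stay within the budget
            delta.grad.zero_()
    return (image + delta.detach()).clamp(0, 1)
```

The diffusion attack follows the same optimize-the-perturbation pattern, but it backpropagates through the full image-editing pipeline rather than just the encoder, which is why the researchers describe it as the more computationally demanding of the two.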

However, it’s not all sunshine and rainbows. There’s a risk that malicious actors could reverse-engineer the safeguard, possibly by adding digital noise to the protected image or by cropping or otherwise altering its layout.

A Call to Arms

Salman is adamant that model developers, social media platforms, and policymakers share a collective responsibility to guard against unauthorized image tampering. He opines, “Addressing this critical issue is essential today. And while I’m gratified to contribute to this solution, the journey towards practical protection is still a long one. Companies developing these models need to significantly invest in devising robust shields against the potential hazards posed by these AI tools.”

Source: https://arxiv.org/abs/2302.06588