MIT has created “PhotoGuard”, a tool that protects images from malicious AI editing


If you want to preserve the authorship and integrity of your photos online, you now have a chance to try a new technology from MIT: “PhotoGuard”, a method that prevents artificial intelligence (AI) models from editing your images.

PhotoGuard works by making imperceptible changes to individual pixels in an image, creating “interference” that machine-learning models pick up but that is invisible to the human eye. This “interference” disrupts the AI’s ability to understand what the photo depicts, making the image resistant to malicious editing.
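To make this concrete, here is a minimal sketch, assuming a PyTorch-style setup, of how such a bounded, imperceptible perturbation can be computed with projected gradient descent. This is not the official PhotoGuard code; the function name, the loss interface and the hyperparameters (eps, step_size, n_steps) are illustrative assumptions only.

```python
import torch

def immunise(image: torch.Tensor, loss_fn,
             eps: float = 8 / 255, step_size: float = 1 / 255,
             n_steps: int = 100) -> torch.Tensor:
    """Return `image` plus a perturbation bounded by `eps` in the
    L-infinity norm that minimises `loss_fn` over the perturbed image.
    (Illustrative sketch, not PhotoGuard's actual implementation.)"""
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(n_steps):
        loss = loss_fn(image + delta)
        loss.backward()
        with torch.no_grad():
            # Step against the gradient, then project back into the
            # eps-ball: the pixel change stays invisible to humans,
            # but its effect on the model does not.
            delta -= step_size * delta.grad.sign()
            delta.clamp_(-eps, eps)
            delta.grad.zero_()
    return (image + delta).clamp(0, 1).detach()
```

Keeping every pixel change within the small eps budget is what makes the perturbation imperceptible; the choice of loss_fn determines what the model is steered to “see” instead.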

PhotoGuard protects your images by mounting one of two kinds of attack against the editing model: the “encoder attack” and the “diffusion attack.” An encoder attack makes the AI think that your image is some other image (such as a grey or random image). A diffusion attack causes the AI to edit your image towards some target image (which could also be grey or random). In both cases, any attempt by the AI to alter your image produces an unrealistic, distorted result.
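As an illustration only, the encoder attack could be instantiated with the immunise sketch above by targeting the latent of a flat grey image. Here, encoder stands in for any differentiable image encoder; it is an assumption for the sketch, not PhotoGuard’s actual API.

```python
def encoder_attack(image: torch.Tensor, encoder) -> torch.Tensor:
    """Push the encoder's latent for `image` towards that of a flat grey
    image, so downstream editing models misread the photo's content.
    Builds on the `immunise` sketch above; `encoder` is a hypothetical
    differentiable image encoder."""
    grey = torch.full_like(image, 0.5)       # the target: a plain grey image
    with torch.no_grad():
        target_latent = encoder(grey)        # the latent the AI should "see"

    def loss_fn(perturbed):
        # Minimising this distance makes the model mistake the photo
        # for the grey target, derailing any subsequent edit.
        return torch.nn.functional.mse_loss(encoder(perturbed), target_latent)

    return immunise(image, loss_fn)
```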

PhotoGuard was developed by a team of researchers led by computer science professor Aleksander Madry at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL). They published their work last month and demonstrated how PhotoGuard can “immunise” photos against AI editing. They also provided an interactive demonstration and code for their method.

PhotoGuard could be useful for photographers, artists and any users who want to protect their images from unauthorised use and manipulation by AI. However, the method is not flawless: attackers can try to defeat the protection by adding digital noise to a protected image, or by cropping or flipping it.

“A collaborative approach involving model developers, social media platforms and policy makers is a strong defence against unauthorised image editing. Working on this pressing issue is paramount today,” MIT graduate student Hadi Salman, the paper’s lead author, said in a press release. “While I’m happy to contribute to this solution, there is a lot of work to be done to make this protection practical. The companies that develop these models must invest in building robust immunisations against the potential threats these AI tools pose.”
