The tool is currently in the research stage, but the team plans to integrate it with its existing artist protection tools.
Researchers at the University of Chicago have developed a tool that lets artists "poison" their digital art to deter developers from training artificial intelligence (AI) systems on their work.
Named "Nightshade" after the family of plants, some of which are known for their poisonous berries, the tool modifies images so that their inclusion contaminates the data sets used to train AI models with incorrect information.
According to a report from MIT Technology Review, Nightshade alters the pixels of a digital image to trick an AI system into misinterpreting it. As an example, the report describes convincing a model that an image of a cat is a dog, and vice versa.
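To illustrate the general idea, here is a minimal sketch of a targeted pixel perturbation, the broad class of technique the report describes. This is not Nightshade's actual algorithm: it assumes a pretrained PyTorch classifier, a hypothetical input file `cat.jpg`, and an arbitrary ImageNet target class, and it simply nudges an image's pixels until the model scores a cat as a dog.

```python
# Hypothetical sketch of pixel-level image poisoning. This is NOT the
# Nightshade algorithm; it is a generic targeted adversarial-perturbation
# loop against a stock torchvision classifier, for illustration only.
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
])
normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                 std=[0.229, 0.224, 0.225])

image = preprocess(Image.open("cat.jpg").convert("RGB")).unsqueeze(0)
target = torch.tensor([207])  # ImageNet class 207 ("golden retriever")

delta = torch.zeros_like(image, requires_grad=True)  # the "poison"
epsilon = 8 / 255  # keep the change small enough to be hard to notice

for _ in range(50):
    logits = model(normalize((image + delta).clamp(0, 1)))
    loss = F.cross_entropy(logits, target)  # pull prediction toward "dog"
    loss.backward()
    with torch.no_grad():
        delta -= 0.01 * delta.grad.sign()   # gradient step on the pixels
        delta.clamp_(-epsilon, epsilon)     # bound the perturbation
        delta.grad.zero_()

poisoned = (image + delta).clamp(0, 1)  # looks like a cat, scores as a dog
```

Nightshade's actual perturbations are more sophisticated and are built to survive the training process itself; the sketch above only shows the underlying principle of manipulating pixels to mislabel an image in a model's eyes.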
One expert who viewed the work, Vitaly Shmatikov, a professor at Cornell University, noted that researchers "don't yet know of robust defenses against these attacks," implying that even widely used models such as OpenAI's ChatGPT could be at risk.
The research team behind Nightshade is led by Ben Zhao, a professor at the University of Chicago. The new tool expands on the team's existing artist protection software, Glaze, which lets an artist obfuscate, or "glaze," the style of their artwork.