Minja’s Sneak Attack: How It Poisons AI Models for Chatbot Users and Threatens AI Integrity
Researchers from Michigan State University, the University of Georgia, and Singapore Management University have discovered a new method to manipulate AI models with memory, called MINJA (Memory INJection Attack). Unlike previous threats that required backend access, this attack can be executed simply by interacting with an AI agent like a regular user. This means any ...