A researcher revealed that GPT-4o can be tricked into generating exploit code when its safety guardrails are bypassed with hex-encoded instructions, highlighting a weakness in AI safeguards.
OpenAI’s GPT-4o language model can be manipulated into producing exploit code by using hexadecimal encoding, according to researcher Marco Figueroa of Mozilla’s 0Din AI bug bounty platform. The technique bypasses the model’s built-in safety features, enabling the generation of harmful content. Figueroa demonstrated the vulnerability, revealing a significant flaw that allowed the creation of functional Python exploit code.
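The core of the trick is simple: the sensitive instruction is converted to hexadecimal before it reaches the model, so guardrails that screen plain-text prompts never see the trigger words, and the model is then asked to decode the string and follow it. The sketch below, using a harmless placeholder instruction, shows how such encoding and decoding works in Python; the helper names and prompt text are illustrative and are not taken from Figueroa's write-up.

```python
# Minimal sketch of the hex-encoding step described above.
# The instruction used here is a harmless placeholder; the point is only
# that hex encoding hides the plain-text wording from simple keyword-based
# filters while remaining trivially decodable.

def to_hex(instruction: str) -> str:
    """Encode an instruction string as a hexadecimal digit string."""
    return instruction.encode("utf-8").hex()

def from_hex(hex_string: str) -> str:
    """Decode a hexadecimal digit string back into readable text."""
    return bytes.fromhex(hex_string).decode("utf-8")

if __name__ == "__main__":
    instruction = "describe the issue in detail"  # placeholder text
    encoded = to_hex(instruction)
    print("hex-encoded:", encoded)
    print("decoded back:", from_hex(encoded))
```

Because the encoded string is just a run of hexadecimal digits, a filter looking for forbidden phrases in the prompt sees nothing objectionable, while the model itself has no difficulty reversing the encoding on request.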