Purdue Researchers Unveil ETA: A Two-Phase AI Framework for Safer Vision-Language Model Inference
Vision-language models (VLMs) combine computer vision and natural language processing to reason over images and text together. They power applications such as medical imaging and automated systems, but malicious visual inputs create safety risks that existing safeguards, which focus largely on text, often miss, making reliable outputs hard to guarantee. Researchers from Purdue ...
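To make the "two-phase" idea in the headline concrete, here is a minimal, purely illustrative sketch of an evaluate-then-align wrapper around a VLM's generation call. Every name in it (the safety scorer, the threshold, the stub model) is an assumption for illustration, not the actual ETA implementation or API.

```python
# Hypothetical two-phase safety wrapper: phase 1 evaluates the visual
# input, phase 2 only generates when that evaluation passes. All names
# and the scoring heuristic are illustrative assumptions.
from typing import Callable, List

SAFETY_THRESHOLD = 0.5  # assumed cutoff separating safe from unsafe inputs


def evaluate_visual_safety(image_features: List[float]) -> float:
    """Phase 1 (stub): score how unsafe the visual input looks (0 safe .. 1 unsafe)."""
    # Placeholder heuristic: mean feature value stands in for a real safety scorer.
    return sum(image_features) / len(image_features)


def aligned_generate(
    model: Callable[[str], str],
    prompt: str,
    image_features: List[float],
) -> str:
    """Phase 2: call the model only if phase 1 deems the input safe;
    otherwise return a refusal instead of an unaligned completion."""
    if evaluate_visual_safety(image_features) > SAFETY_THRESHOLD:
        return "I can't help with that request."
    return model(prompt)


# Stub "model" standing in for a real VLM's text decoder.
def echo_model(prompt: str) -> str:
    return f"Answer to: {prompt}"


print(aligned_generate(echo_model, "Describe the image.", [0.1, 0.2]))  # safe path
print(aligned_generate(echo_model, "Describe the image.", [0.9, 0.8]))  # refused
```

The point of the sketch is the control flow, not the scoring: a real system would replace the stub scorer and stub model with the VLM's own safety evaluator and decoder.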