Human-in-the-loop (HITL) safeguards that AI agents rely on can be subverted, allowing attackers to weaponize them to run malicious code, new research from Checkmarx shows. HITL dialogs are a safety backstop, a final “are you sure?” prompt that agents present before executing sensitive actions such as running code, modifying files, or touching system resources.
Human-in-the-loop isn’t enough: New attack turns AI safeguards into exploits | CSO Online
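The “are you sure?” backstop described above can be sketched as a simple approval wrapper around a sensitive action. This is an illustrative sketch only; the function names (`confirm_action`, `run_with_approval`) are hypothetical and do not come from any specific agent framework discussed in the article.

```python
# Minimal sketch of a human-in-the-loop (HITL) confirmation gate.
# Names here are illustrative, not from a real agent framework.

def confirm_action(description: str, ask=input) -> bool:
    """Ask the human operator to approve a sensitive action.

    `ask` is injectable so the gate can be tested without a live prompt.
    """
    answer = ask(f"Agent wants to: {description}. Proceed? [y/N] ")
    return answer.strip().lower() in ("y", "yes")


def run_with_approval(description: str, action, ask=input):
    """Execute `action` only if the human approves; otherwise do nothing."""
    if confirm_action(description, ask=ask):
        return action()
    return None
```

The attack class the article describes works precisely because such a gate trusts the text it shows the human; if an attacker can influence the `description` (or the surrounding context), the operator may approve something other than what actually runs.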
