AI Agents Exposed: Security Risks in Google Cloud’s Vertex AI
TL;DR: Palo Alto Networks’ research reveals security vulnerabilities in Google Cloud’s Vertex AI, demonstrating how misconfigured AI agents can be exploited to access sensitive data and systems. The research underscores the need for tighter permission controls in cloud environments.
Main Analysis:
Palo Alto Networks conducted an in-depth analysis of Google Cloud Platform’s Vertex AI, identifying significant security risks in the default permission settings of AI agents. The study showed how these agents can become “double agents,” exfiltrating data and compromising infrastructure while masquerading as legitimate components. By exploiting default permissions and inadequate configurations, the researchers extracted sensitive service credentials, gaining unauthorized access to both consumer and producer project resources.
The researchers deployed an AI agent using the Google Cloud ADK and found excessive default permissions associated with the Per-Project, Per-Product Service Agent (P4SA). This overprivileged access enabled them to extract sensitive credentials, interact with Google Cloud Storage, and reach restricted internal Google resources, exposing critical aspects of Google’s architecture. Illustrative figures in the report depict the deployment process and the subsequent extraction of sensitive data, underscoring how easily an attacker could exploit these weaknesses.
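The overprivilege described above is straightforward to audit. As a minimal sketch (the project number and role bindings below are illustrative, but the policy shape matches the JSON produced by the real `gcloud projects get-iam-policy PROJECT_ID --format=json` command), the helper enumerates every role bound to a given service agent:

```python
import json


def roles_for_member(policy: dict, member: str) -> list[str]:
    """Return every IAM role in `policy` whose bindings include `member`."""
    return [
        binding["role"]
        for binding in policy.get("bindings", [])
        if member in binding.get("members", [])
    ]


# Illustrative policy shaped like `gcloud projects get-iam-policy` JSON output.
p4sa = "serviceAccount:service-123@gcp-sa-aiplatform-re.iam.gserviceaccount.com"
policy = {
    "bindings": [
        {"role": "roles/aiplatform.serviceAgent", "members": [p4sa]},
        {"role": "roles/storage.objectViewer", "members": [p4sa]},
    ]
}

print(roles_for_member(policy, p4sa))
# → ['roles/aiplatform.serviceAgent', 'roles/storage.objectViewer']
```

Running this against the exported policy of a real project quickly surfaces any role the service agent holds beyond what the deployed workload actually needs.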
Beyond accessing consumer projects, the researchers used the stolen credentials to gain unauthorized entry into Google-managed project environments, revealing access-control flaws that could facilitate broader breaches. Their findings indicate that improperly scoped OAuth permissions could extend exposure to an organization’s entire Google Workspace, an alarming risk to sensitive enterprise data.
Defensive Context:
Organizations using Google Cloud’s Vertex AI should be particularly vigilant about permission management. The identified vulnerabilities pose a significant risk to enterprises that deploy AI agents without carefully reviewing their configurations, especially deployments that grant expansive permissions for operational convenience. Stakeholders must be aware that a compromised AI agent can behave maliciously while appearing legitimate.
Why This Matters:
The risks are acute for companies that integrate AI into their workflows, particularly in sectors that handle sensitive information. A compromised AI agent could serve as an insider threat, leading to data breaches or system disruptions. Organizations with regulatory requirements concerning data protection should prioritize understanding and mitigating these vulnerabilities.
Defender Considerations:
- Review and adjust service agent permissions in line with the principle of least privilege to limit what a compromised agent can access or exfiltrate.
- Monitor OAuth scopes and ensure they are limited to what each agent strictly needs to function.
- Conduct regular security assessments to audit AI agent permissions and deployment configurations to prevent exploitation.
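The OAuth-scope check above can be automated. The sketch below assumes a hypothetical per-agent allowlist (the allowlist contents are an assumption you must adapt); the broad `https://www.googleapis.com/auth/cloud-platform` scope it flags is a real scope that grants access to most Google Cloud APIs and is a common source of overprivilege:

```python
# Hypothetical allowlist: adjust to what your agent actually needs.
ALLOWED_SCOPES = {
    "https://www.googleapis.com/auth/devstorage.read_only",
}


def excessive_scopes(granted: set[str]) -> set[str]:
    """Return any granted scopes that fall outside the approved allowlist."""
    return granted - ALLOWED_SCOPES


# Example: the broad cloud-platform scope should be flagged for review.
granted = {
    "https://www.googleapis.com/auth/devstorage.read_only",
    "https://www.googleapis.com/auth/cloud-platform",
}
print(sorted(excessive_scopes(granted)))
# → ['https://www.googleapis.com/auth/cloud-platform']
```

Feeding this check the scopes observed on each agent’s tokens during a periodic assessment turns the least-privilege guidance into a repeatable control rather than a one-time review.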
Indicators of Compromise (IOCs):
- Service Account: service-<project-id>@gcp-sa-aiplatform-re.iam.gserviceaccount.com
- Unauthorized API access: hxxp[:]//metadata.google[.]internal/computeMetadata/v1/instance/?recursive=true
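The second IOC, a recursive query against the instance metadata server, is a common credential-theft pattern on GCP and is easy to hunt for in request logs. A minimal sketch (the function name and sample log lines are illustrative; the regex simply un-defangs the IOC above for matching):

```python
import re

# Matches recursive metadata-server queries like the IOC above
# (written un-defanged so it matches real log entries).
METADATA_IOC = re.compile(
    r"metadata\.google\.internal/computeMetadata/v1/.*recursive=true"
)


def flag_metadata_access(log_lines: list[str]) -> list[str]:
    """Return log lines that match the recursive metadata-query IOC."""
    return [line for line in log_lines if METADATA_IOC.search(line)]


logs = [
    "GET http://metadata.google.internal/computeMetadata/v1/instance/?recursive=true",
    "GET https://storage.googleapis.com/my-bucket/object",
]
for hit in flag_metadata_access(logs):
    print(hit)
```

Matches are not proof of compromise on their own, since some legitimate workloads also query the metadata server, but recursive queries from an AI agent’s context warrant investigation.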
With rising reliance on AI technologies, robust security around AI agent deployments is no longer optional; it is essential to safeguarding organizational integrity.