Artificial intelligence systems are rapidly becoming integral to our world, but with increased reliance comes heightened risk. This tracker compiles incidents of AI-related breaches, leaks, and vulnerabilities to raise awareness and encourage proactive security measures. From model exploits to data leaks involving AI systems, stay informed about the latest risks and lessons learned in this evolving landscape.
This tracker is maintained as a resource for researchers, developers, and organizations to understand the security challenges associated with AI. Each entry includes details about the incident, its impact, and key takeaways. The information is sourced from verified reports and will be updated regularly.
If you want to add a known AI breach, leak, or vulnerability to our tracker, you can send us the information via our 'AI Breach Tracker Submission Form'. We'll validate and confirm the event before adding it to the database.
Chef's Kiss - AI Companion Site Breach
A platform offering AI-generated "girlfriends" suffered a significant data breach, exposing private user data, including personal conversations and fantasies. The attackers exploited a weak authentication mechanism, allowing them to access and exfiltrate sensitive data stored in the system. The breach raises concerns about the privacy implications of AI systems designed for intimate interactions and underscores the need for stricter safeguards in handling user-generated content.
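As a rough sketch of the missing control (the platform's stack and endpoint names are not public, so everything below is hypothetical), a conversation-history endpoint should only serve data tied to an authenticated session, never to a client-supplied identifier alone:

```python
# Hypothetical Flask endpoint: private conversations are returned only for the
# user id stored in a signed server-side session, never for an id the client picks.
from flask import Flask, session, jsonify, abort

app = Flask(__name__)
app.secret_key = "replace-with-a-long-random-secret"  # needed to sign session cookies

# Stand-in for the database of private conversations, keyed by user id.
CONVERSATIONS = {
    "user-123": ["conversation one...", "conversation two..."],
}

@app.route("/api/conversations")
def get_conversations():
    user_id = session.get("user_id")   # populated only by a real login flow
    if user_id is None:
        abort(401)                     # no authenticated session: refuse outright
    return jsonify(CONVERSATIONS.get(user_id, []))
```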
Mudler Time - AI Vulnerability
A timing attack vulnerability was discovered in Mudler’s LocalAI version 2.17.1. This exploit allows attackers to deduce sensitive information, such as valid API keys or passwords, by analyzing the server's response times during authentication attempts. The flaw stems from discrepancies in the time taken to validate incorrect versus correct credentials, enabling attackers to iteratively guess the correct inputs. If exploited, it could grant unauthorized access to LocalAI deployments that rely on these credentials to gate requests.
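The general defense is to compare secrets in constant time. The snippet below is illustrative only (it is not LocalAI's code); it contrasts a naive comparison, whose run time can depend on how many leading characters match, with a comparison designed to take the same time regardless of where the inputs differ:

```python
import hmac

VALID_API_KEY = b"sk-example-0123456789"   # placeholder secret for the demo

def check_key_naive(candidate: bytes) -> bool:
    # An ordinary equality check may bail out at the first differing byte,
    # which can leak, through response timing, how much of the key is correct.
    return candidate == VALID_API_KEY

def check_key_constant_time(candidate: bytes) -> bool:
    # hmac.compare_digest is designed so its run time does not depend on
    # where the two values differ, removing the timing signal.
    return hmac.compare_digest(candidate, VALID_API_KEY)
```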
IDOR in Lunary - AI Vulnerability
In version 1.3.2 of Lunary AI, an Insecure Direct Object Reference (IDOR) vulnerability (CVE-2024-7474) was identified, allowing unauthorized access to sensitive user data. By manipulating the id parameter in the request URL, attackers could view or delete external user accounts without authorization. This flaw resulted from inadequate validation of the id parameter, leaving external user information exposed. The issue has been categorized as critical, with a CVSS score of 9.1, underlining the potential for severe impact if exploited.
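IDOR bugs of this class come down to a missing ownership check: the server trusts the id in the URL instead of verifying that the caller may act on that object. The sketch below is not Lunary's code; the endpoint, token handling, and field names are invented for illustration:

```python
# Hypothetical FastAPI endpoint showing the authorization check an IDOR bug omits.
from fastapi import FastAPI, Header, HTTPException

app = FastAPI()

# Stand-in user store: external user id -> owning organization and profile data.
USERS = {"42": {"org_id": "org-a", "email": "alice@example.com"}}

def org_for_token(token: str) -> str:
    # Placeholder for real token validation; returns the caller's organization id.
    return "org-a" if token == "demo-token" else "org-b"

@app.delete("/users/{user_id}")
def delete_user(user_id: str, authorization: str = Header(...)):
    caller_org = org_for_token(authorization)
    user = USERS.get(user_id)
    if user is None:
        raise HTTPException(status_code=404, detail="not found")
    if user["org_id"] != caller_org:
        # The missing check: the id in the URL must belong to the caller's org.
        raise HTTPException(status_code=403, detail="forbidden")
    del USERS[user_id]
    return {"deleted": user_id}
```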
Ollama Drama - Multiple AI Vulnerabilities
A vulnerability in Ollama, detailed in Oligo Security's blog, highlighted the risks of exposing AI model prompts and context data during multi-model integrations. This issue arises when poorly configured models inadvertently leak sensitive user data through unencrypted channels or logging mechanisms. Specifically, in Ollama's case, data from prompt engineering experiments was accessible without adequate restrictions, enabling unauthorized users to extract sensitive information or manipulate outcomes. This incident underscores the need for robust access controls and encryption when deploying interconnected AI models.
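One practical mitigation, sketched below purely as an illustration (it is not part of Ollama, and the log format is assumed), is to scrub prompt text before it ever reaches application logs, so leaked or over-shared logs do not expose user prompts:

```python
# Assumes JSON-ish log lines containing a "prompt" field; the filter redacts its value.
import logging
import re

class PromptRedactingFilter(logging.Filter):
    PROMPT_PATTERN = re.compile(r'("prompt"\s*:\s*)"[^"]*"')

    def filter(self, record: logging.LogRecord) -> bool:
        # Rewrite the message in place, replacing the prompt value with a marker.
        record.msg = self.PROMPT_PATTERN.sub(r'\1"[REDACTED]"', str(record.msg))
        return True

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("model-gateway")
logger.addFilter(PromptRedactingFilter())

logger.info('{"model": "llama3", "prompt": "summarize my medical records"}')
# The message is logged with the prompt value replaced by "[REDACTED]".
```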
Syntax Error - AI Crypto Breach
A vulnerability in the Syntax smart contract platform, identified by Spectral Labs, allowed attackers to exploit logic errors in the code and execute unauthorized transactions. In response, the platform was forced to pause its operations, temporarily freezing users' assets. The attackers made off with $200K in crypto transactions before the pause. This incident highlights the critical importance of security audits in AI-driven blockchain platforms to prevent catastrophic misuse.
You Only Load Once - AI Supply Chain Attack
Hackers hijacked an Ultralytics AI model by injecting malicious payloads during a public model-sharing session. The compromised model was downloaded by unsuspecting developers, infecting thousands of systems with cryptomining malware. This breach underscores the danger of downloading unverified machine learning models and highlights the need for secure distribution channels in the AI ecosystem.
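A simple defensive habit when pulling third-party model weights is to pin and verify a published checksum before loading the file. The sketch below assumes a SHA-256 digest obtained through a channel you trust; the filename and digest are placeholders:

```python
# Verify a downloaded model artifact against a pinned SHA-256 digest before use.
import hashlib
from pathlib import Path

EXPECTED_SHA256 = "0000000000000000000000000000000000000000000000000000000000000000"  # placeholder

def verify_model(path: Path, expected_hex: str) -> None:
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):   # hash in 1 MiB chunks
            digest.update(chunk)
    if digest.hexdigest() != expected_hex:
        raise RuntimeError(f"Checksum mismatch for {path}; refusing to load it.")

# Example usage (hypothetical filename):
# verify_model(Path("yolov8n.pt"), EXPECTED_SHA256)
```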