We’ve talked before about how AI and LLMs are a “double-edged sword” in the cybersecurity space, bringing both powerful new security capabilities and new vulnerabilities. Today, we’re talking about the latter.
It is virtually impossible to go a day in 2025 without hearing about AI, and among AI models, OpenAI’s ChatGPT is the most popular and widely recognized name. People are using it for everything from writing emails to deciding what to make for lunch. But a small contingent of Internet users is intent on finding more sinister uses for the technology, and dedicated groups of security researchers are examining these platforms for new risks.
Security researcher Benjamin Flesch revealed that ChatGPT could be manipulated by bad actors to launch DDoS attacks on other platforms and sites.
The vulnerability lay in ChatGPT’s API, specifically in how it handled HTTP POST requests to https://chatgpt.com/backend-api/attributions.
Because the same page can be written as many slightly different hyperlinks, an attacker could feed the API a long list of URL variants that all point at one site, and the crawler, which made no attempt to deduplicate them or cap the list’s length, would go off and hit every single one.
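To make that concrete, here’s a rough sketch of what such a payload could look like. The `urls` field name and the exact shape of the body are assumptions for illustration; the point is that every entry is a trivially different spelling of the same target.

```python
import json

# Illustrative only: many trivially different URLs that all point at one site.
# The "urls" field name is an assumption for illustration; the endpoint itself
# (https://chatgpt.com/backend-api/attributions) is the one named above.
target = "https://victim.example"
payload = {"urls": [f"{target}/?page={i}" for i in range(5000)]}

# A single unauthenticated POST carrying this JSON body was reportedly
# enough to make the crawler attempt every variant.
print(f"{len(payload['urls'])} URLs, one host:", payload["urls"][0])
```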
Worse, ChatGPT’s crawler sent each request from a different IP address, proxied through Cloudflare. Because the traffic arrived from constantly shifting source addresses, IP-based defenses like WAF blocklists struggled to keep up with blocking. As Flesch explained:
"So one failed/blocked request would not prevent the ChatGPT bot from requesting the victim website again in the next millisecond."
In this way, ChatGPT could amplify a single API request into potentially thousands of requests directed at the same website, flooding the target in a DDoS (Distributed Denial of Service) attack. This is a particularly interesting case because it highlights a class of vulnerability that often gets less attention: abuse of API functionality, or BFLA (Broken Function Level Authorization) on the OWASP API Security Top 10, with a dose of Unrestricted Resource Consumption to complicate matters. Arguably, there is also an aspect of OWASP LLM10:2025 Unbounded Consumption in how the attack is triggered. Not only is the attack trivially easy to launch (a simple JSON-formatted list of URLs sent to an API endpoint that doesn’t require authentication), it also ticks a lot of the risk boxes around both AI and APIs.
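On the API side, the fix for the consumption angle is to bound the work a single call can trigger. A minimal sketch, assuming a JSON body with a `urls` list (the field name, the cap, and the auth check are all hypothetical):

```python
MAX_URLS = 10  # hypothetical cap; the real limit would be a product decision

def validate_attribution_request(body: dict, authenticated: bool) -> list[str]:
    """Bound the work a single API call can trigger. The "urls" field and
    the auth check are illustrative assumptions, not OpenAI's actual schema."""
    if not authenticated:
        raise PermissionError("authentication required")  # addresses the BFLA angle
    urls = body.get("urls")
    if not isinstance(urls, list) or len(urls) > MAX_URLS:
        raise ValueError(f"expected a list of at most {MAX_URLS} URLs")  # bounds consumption
    return urls

# Example: a 5,000-URL payload is rejected outright.
try:
    validate_attribution_request({"urls": ["https://victim.example"] * 5000}, authenticated=True)
except ValueError as err:
    print(err)
```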
In a year of record AI and API risk levels, a gap like this seems like an oversight to say the least, especially for a tech giant such as OpenAI. In fact, though, it’s typical of fast-growing technology companies, which often prioritize release schedules and new functionality over security.
Most comparable crawlers, such as Google’s, have safeguards in place to prevent executing large numbers of nearly identical requests. As Benjamin Flesch points out:
"Shouldn't it have recognized that victim.com/1 and victim.com/2 point to the same website victim.com and if the victim.com/1 request is failing, why would it send a request to victim.com/2 immediately afterwards?”
But with AI developing so quickly, many security teams are struggling to keep up. Bridging the gap between developers and security teams is a huge challenge in both AI and API security. And with API and AI attacks on the rise, vulnerabilities like these could open sites up to more risk than ever before.
Navigating AI and API security can be challenging, but FireTail aims to make it easier. Get started for free here today to see how it can work for you.