DeepSeek’s new model rode in on the heels of OpenAI’s ChatGPT and other similar, popular platforms. Immediately after its release, it began gaining traction and a worldwide user base. But despite this early success, a recent cyber attack has forced the company to temporarily close its doors to new users.
Artificial Intelligence is not a new concept, but the Large Language Models (LLMs) that power it are now developing beyond our wildest imaginations, making this technology the talk of 2025.
AI tools open up a whole host of possibilities and are accessible even to the average person. This makes them wildly popular for generating everything from emails to shopping lists to blog posts such as this one (A/N: this blog post was NOT generated using AI tools).
Following in the footsteps of existing models, DeepSeek’s new AI model was supposed to be China’s ticket into the “AI race.” But things didn’t exactly go as planned when a huge cyber attack hit the new system at the beginning of last week.
The attack is believed to have been a DDoS (Distributed Denial of Service) attack targeting DeepSeek’s API and web chat platform. The scale of the attack is not currently known, but DeepSeek has since blocked new users from creating accounts in order to mitigate the risks. Existing users, however, still have access for now.
Even before the cyber attack, researchers had identified vulnerabilities in DeepSeek’s model, chiefly that it could be jailbroken to produce malicious outputs, including ransomware, sensitive information, and more. This is almost identical to the recent OpenAI vulnerabilities that led to a cyberattack on the ChatGPT platform. Read our recent blog about this to learn more.
However, these vulnerabilities are not where the similarities between the platforms end.
Microsoft is currently investigating claims that DeepSeek used OpenAI’s ChatGPT to train its own R1 reasoning AI model. Security researchers believe DeepSeek may have exfiltrated large amounts of information from OpenAI’s API in the fall of 2024.
This would violate OpenAI’s terms of use, which state that you cannot use ChatGPT’s “output to develop models that compete with OpenAI.”
To make matters worse, security researchers have also found vulnerabilities and exposed data in DeepSeek’s own systems.
Concerns have also surfaced about data collection and data sharing, as explicitly defined in DeepSeek’s terms of service. This poses real privacy risks for many companies: their data is being sent offshore, potentially violating privacy guidelines and data sovereignty requirements.
Artificial Intelligence is completely revolutionizing the cyber world as we know it. As companies race to release AI models first, security often gets pushed to the back burner, and we have already seen the consequences of that at both OpenAI and DeepSeek.
More than ever before, it is vital to bridge the gap between developers and security teams so that products don’t get pushed to production before they have been adequately tested for security risks. If you or your team needs help with your AI security or APIs, FireTail has you covered. Get a free demo or try out our free tier for yourself, no strings attached.