Recently announced changes from OpenAI mean users will be able to call any API. That’s great for business and the economy, but a potential headache for security teams.
On Monday, November 6th, 2023, OpenAI hosted the company’s first ‘Developer Day’ in downtown San Francisco. From the start, the event was shaping up to be a bit of a victory lap. The company proudly highlighted the surge in ChatGPT's popularity, celebrating the fact that it now boasts over 100 million weekly users. Then came a flurry of exciting announcements aimed squarely at the assembled technologists. GPT-4 Turbo was a centerpiece, offering improved text analysis and image understanding at competitive pricing. Then it was on to Custom GPTs, which will allow users to craft personalized GPT versions effortlessly, with no code. This was followed by plans for a GPT Store so developers can profit from their creations. The introduction of the Assistants API enables developers to craft diverse agent-like experiences. Additional highlights on the day included the DALL-E 3 API, new text-to-speech APIs, and a Copyright Shield program safeguarding businesses from copyright claims. These announcements were enthusiastically welcomed by the developer crowd. For security teams, however, there was an implication just beneath the surface that might have raised some concerns: the massive expansion of API calling capabilities.
We’ve already looked at the complex connections between generative AI and API security. Our previous blog identified that AI looked set to become a double-edged sword when it comes to protecting APIs. The Custom GPTs and Assistants API announcements from OpenAI will give users the ability to call APIs easily and at scale. This is a major shift, and it underlines the dual impact AI is set to have on security. The expansion of API calling capabilities introduced with Custom GPTs and Assistants is nothing short of revolutionary. It will democratize access to APIs, further accelerating the proliferation of a technology that already accounts for more than 83% of web requests. However, for security teams, this development presents a potential problem: an unprecedented surge in risk, from attack methods like prompt injection, across the internet.
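To make the mechanics concrete, here is a minimal sketch, in Python with the OpenAI v1 SDK, of how an Assistant can be wired to call an arbitrary external API. The `lookup_record` function and the `api.example.com` endpoint are hypothetical placeholders; the point is the pattern, not the specifics: the model decides when a call should happen, and a few lines of glue code turn that decision into a live HTTP request.

```python
import json
import time

import requests
from openai import OpenAI  # OpenAI Python SDK v1.x

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Declare a function tool; the model fills in the arguments.
# `lookup_record` and its schema are hypothetical placeholders.
assistant = client.beta.assistants.create(
    name="API Caller",
    instructions="Use the lookup_record tool to answer questions.",
    model="gpt-4-1106-preview",
    tools=[{
        "type": "function",
        "function": {
            "name": "lookup_record",
            "description": "Fetch a record from an external API by id.",
            "parameters": {
                "type": "object",
                "properties": {"record_id": {"type": "string"}},
                "required": ["record_id"],
            },
        },
    }],
)

thread = client.beta.threads.create()
client.beta.threads.messages.create(
    thread_id=thread.id, role="user", content="Look up record 42."
)
run = client.beta.threads.runs.create(
    thread_id=thread.id, assistant_id=assistant.id
)

# Poll until the model asks us to execute its tool call.
while run.status in ("queued", "in_progress"):
    time.sleep(1)
    run = client.beta.threads.runs.retrieve(thread_id=thread.id, run_id=run.id)

if run.status == "requires_action":
    outputs = []
    for call in run.required_action.submit_tool_outputs.tool_calls:
        args = json.loads(call.function.arguments)
        # This glue code could target *any* HTTP endpoint on the internet.
        resp = requests.get("https://api.example.com/lookup",
                            params={"id": args["record_id"]})
        outputs.append({"tool_call_id": call.id, "output": resp.text})
    client.beta.threads.runs.submit_tool_outputs(
        thread_id=thread.id, run_id=run.id, tool_outputs=outputs
    )
```

Custom GPTs take this a step further with Actions: the glue code disappears entirely, and pasting an OpenAPI schema into the no-code builder is all it takes to point a GPT at an API.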
The exponential growth of AI, particularly Large Language Model (LLM) AI, has already fueled a surge in API consumption. AI's reliance on vast and diverse datasets for training necessitates seamless data integration, normally enabled by APIs. Furthermore, organizations integrating third-party AI models into their operations depend on APIs for user interaction with those models, increasing both the volume of API calls and the number of people using APIs for the first time. Now that OpenAI is giving users everywhere, of any technical ability, the power to call APIs from Custom GPTs and via the Assistants API, that growth curve is only going to get steeper.
This expansion of API calling capabilities is great news for businesses and the economy. It will allow more people than ever to create and innovate, bringing together different systems to build powerful solutions that will undoubtedly improve all of our lives. It will remove engineering bottlenecks associated with API development, deployment and management. On the face of it, this is revolutionary and widely beneficial. However, the security implications need to be understood.
APIs are already the number one attack surface. In 2021, IBM X-Force reported that more than two-thirds of breaches involved the exploitation of API vulnerabilities. The ability of AI to call APIs only exacerbates the problem, and now OpenAI has opened up that ability to everyone. Previously, attackers needed a level of knowledge, sophistication and perseverance to find, understand and exploit API vulnerabilities. Now everyone, everywhere, regardless of expertise, will have the ability to prod and probe APIs across the globe, at pace and at scale. This will be a game-changer for those charged with protecting APIs.
AI makes it far cheaper and more efficient to stage attacks, and the pool of people with the technical ability to breach an API has just grown exponentially. That means a host of hitherto unattacked sites and systems will start to see increased attempts, all day, every day, everywhere. And that’s in addition to the current norm: in our own testing labs, we see our APIs probed within 5 minutes of going online. The calculus attackers use, weighing the possibility of a payout against the time and cost of conducting an attack, has been turned on its head. Attacks are set to explode. Get ready.
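Defenders are not blind to this kind of activity, though. Indiscriminate probing tends to leave a recognizable trail: bursts of 401, 403 and 404 responses as a client guesses at endpoints and credentials. As a rough illustration, the sketch below scans a combined-format access log for that pattern; the file path and threshold are illustrative assumptions, not recommendations.

```python
from collections import Counter

# Count auth/not-found errors per client IP in a combined-format access log.
# Path and threshold are illustrative; tune both for your environment.
LOG_PATH = "access.log"
THRESHOLD = 20  # errors per IP before flagging it as a likely prober

errors_per_ip = Counter()
with open(LOG_PATH) as log:
    for line in log:
        parts = line.split()
        if len(parts) < 9:
            continue  # skip malformed lines
        ip, status = parts[0], parts[8]
        if status in ("401", "403", "404"):
            errors_per_ip[ip] += 1

for ip, count in errors_per_ip.most_common():
    if count >= THRESHOLD:
        print(f"possible API probing from {ip}: {count} auth/not-found errors")
```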
Before these announcements, API security should already have been high on the list of priorities for CISOs around the world. After them, it should be sitting squarely in the top spot. Every company now needs to significantly improve its API security posture, along with general awareness of security best practices for API usage. Time is of the essence, and it’s never too soon to make your APIs safe. On the bright side, these announcements from OpenAI are a great thing for the world. APIs are a real force for good, and making it easier for everyone to leverage them is a positive thing. The volume and frequency of API attacks will certainly ramp up, but the nature of those attacks is unlikely to change. There are already clear, practical steps you can take, and cost-effective tools you can use, to strengthen your defenses ahead of the coming onslaught.
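What do those practical steps look like? Two of the most basic are authenticating every request and rate limiting every client. The sketch below shows one way to do both in a FastAPI middleware; the framework choice, header name, key store and limits are all illustrative assumptions, and the in-memory counter would be swapped for a shared store such as Redis in any real deployment.

```python
import time
from collections import defaultdict

from fastapi import FastAPI, Request
from fastapi.responses import JSONResponse

app = FastAPI()

VALID_KEYS = {"demo-key"}               # illustrative; load from a secret store
WINDOW_SECONDS, MAX_REQUESTS = 60, 100  # illustrative limits

request_times: dict[str, list[float]] = defaultdict(list)

@app.middleware("http")
async def authenticate_and_rate_limit(request: Request, call_next):
    # Reject any request without a valid API key.
    key = request.headers.get("x-api-key")
    if key not in VALID_KEYS:
        return JSONResponse(status_code=401,
                            content={"detail": "invalid API key"})

    # Sliding-window rate limit per key.
    now = time.time()
    recent = [t for t in request_times[key] if now - t < WINDOW_SECONDS]
    if len(recent) >= MAX_REQUESTS:
        return JSONResponse(status_code=429,
                            content={"detail": "rate limit exceeded"})
    recent.append(now)
    request_times[key] = recent

    return await call_next(request)

@app.get("/items/{item_id}")
async def read_item(item_id: int):
    # Example protected endpoint.
    return {"item_id": item_id}
```

Basics like these raise the cost of the opportunistic, automated probing described above; continuous discovery, logging and monitoring of your API estate then close the loop.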
If you are looking for a highly automated, easy-to-integrate, easy-to-monitor, and easy-to-verify tool to bolster your API defenses, schedule a demo with FireTail.