The Complex Connections between Generative AI and API Security

Application Programming Interfaces (APIs) and Artificial Intelligence (AI) are two of the most important developments in tech of the last 10 years. While AI has grabbed the majority of headlines lately, APIs are the unsung heroes, the “connective tissue” that ensures seamless integration across our digital lives. But how will the more recent emergence of AI affect API security, and how do APIs affect the security of AI? In this post, we connect these two hot topics that make our online worlds work.

The Popularity of APIs with Developers and Attackers

APIs are the workhorses of the modern internet. They allow different programs to communicate seamlessly, powering the connections we take for granted across platforms. They facilitate the exchange of data, enabling digital innovation, streamlining processes, and enhancing user experiences. Over the last decade, with the widespread adoption of cloud and microservice-based architectures, APIs have become a cornerstone of modern software development. It is now estimated that more than 83% of all web requests are API calls. And as APIs rise in popularity among developers, it’s only natural that they have become an increasingly attractive target for attackers. The 2021 IBM X-Force Cloud Threat Landscape Report found that two-thirds of the cybersecurity incidents in its data sample involved APIs as a key attack surface in the breach.

The Importance of API Security

APIs, by design, serve as a gateway to your data. This makes them extremely attractive to attackers. APIs are also often the interface through which business functions can be invoked. That combination of data access and transactional capability makes them a doubly enticing target. Our research shows that API breach incidents are accelerating at a rate of 227% year-on-year, and the average volume of records exposed is close to 3 million per event. Securing your APIs is more important than ever, and security teams are already lagging behind. And now, with the emergence of AI, there’s a chance that gap will grow.

The Emergence of AI

AI is everywhere now. No longer confined to the realms of sci-fi or just a favorite buzzword of startups and VCs, AI is finally starting to deliver on its promise. There has been an explosion of AI-powered tools for writing, translation, imagery, coding, video, transcription, and much more. Projects like ChatGPT, Bard and Midjourney are now household names well beyond the tech sector. In fact, ChatGPT is estimated to have reached 100 million monthly active users in January 2023, just two months after launch, making it the fastest-growing web application of all time. AI, and particularly Large Language Model (LLM) AI, is set to impact every aspect of our lives, and API security is no exception.

The Exponential Use Case: AI Driving API Consumption

The massive popularity of AI is set to accelerate the exponential growth in API consumption that we have seen over the last decade. Here’s how:

  • Data Integration: AI relies on data. Lots of data. Data that comes from many different sources. AI systems thrive on diverse and vast datasets that enable them to learn, adapt, and make informed decisions. These datasets require seamless data integration, which these days happens via APIs. These connectors enable disparate systems and data sources to communicate, facilitating the flow of information essential for AI's learning processes. APIs serve as the “connective tissue” that powers AI platforms.
  • Third-Party Model Integration: Most organizations that want to harness the potential of AI won’t start from scratch. They will use existing platforms as a starting point, integrating third-party models into their own operations. Whether it’s for natural language processing, image processing, or predictive analytics, most companies will rely on external AI services to augment their applications. How will they integrate with these third-party platforms? You guessed it. APIs (a minimal sketch follows below).
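
To make that integration pattern concrete, here is a minimal sketch of an application calling a third-party AI service over its REST API. The endpoint, model name, and response shape are hypothetical placeholders, not any specific vendor’s API.

```python
import os
import requests

# Hypothetical third-party AI service; the endpoint, payload, and response
# shape below are placeholders, not a specific vendor's API.
AI_API_URL = "https://api.example-ai.com/v1/generate"
API_KEY = os.environ["AI_API_KEY"]  # keep credentials out of source code

def summarize(text: str) -> str:
    """Send text to an external AI model via its REST API and return the result."""
    response = requests.post(
        AI_API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"model": "example-model", "prompt": f"Summarize: {text}"},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["output"]

if __name__ == "__main__":
    print(summarize("APIs are the connective tissue of modern software."))
```

Every call like this is itself an API request crossing a trust boundary, which is exactly why AI adoption and API consumption grow together.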

So the first major impact that the explosive growth of AI will have on APIs is a further increase in consumption. API calls already account for the vast majority of all web requests, and that proportion will only increase as AI adoption grows. But what about API security?

The Double-Edged Sword: AI and API Security

As access to powerful AI tools becomes more universal, the impact on API security will cut both ways. On one hand, AI has the potential to enhance API security significantly. For example, it can play a key role in threat detection and prevention by identifying and reacting to complex threats in real time. The flip side, however, is that AI can be an effective aid to attackers. While most AI tools incorporate ethical controls against generating malicious code, these can be bypassed, and the tools can be used to churn out an almost infinite number of attempted attack permutations. Essentially, AI will bring both benefits and challenges when it comes to API security.

API Security Benefits of AI

Let’s start with the positives. AI's potential to bolster API security is undeniable. By leveraging AI, organizations can strengthen their API security measures in numerous ways. Here are some of the most obvious:

  • Threat Detection and Prevention: AI can identify and respond to complex threats in real-time, providing a proactive defense against potential breaches.
  • Anomaly Detection: AI's ability to detect unusual patterns aids in the rapid identification of attacks that may otherwise go unnoticed (see the sketch after this list).
  • Predictive Analysis: By analyzing historical data, AI can predict potential security breaches, enabling organizations to take preemptive action.
  • Incident Summarization: Most API security incidents involve a combination of multiple things that have “gone wrong,” and understanding the correlation and relationship between those things is challenging. AI can summarize the signals to tell the story.
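
As a minimal illustration of the anomaly-detection point, the sketch below trains an unsupervised model on known-good API traffic and flags a request pattern that deviates sharply from it. The features, sample data, and contamination setting are illustrative assumptions, not tuned values.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Illustrative per-client features derived from API access logs:
# [requests_per_minute, avg_payload_bytes, distinct_endpoints, error_rate]
normal_traffic = np.column_stack([
    rng.normal(12, 3, 500),       # steady request rate
    rng.normal(900, 150, 500),    # typical payload size
    rng.normal(3, 1, 500),        # a handful of endpoints per client
    rng.normal(0.01, 0.005, 500), # low error rate
])

# Train only on known-good traffic so deviations stand out.
detector = IsolationForest(contamination=0.05, random_state=0).fit(normal_traffic)

# A client suddenly hammering many endpoints with large, error-prone requests.
suspicious = np.array([[300, 50_000, 40, 0.35]])
print(detector.predict(suspicious))  # expected [-1]: flagged as anomalous
```

In practice the model would use far richer signals (identities, endpoints, geolocation, response codes), but the principle is the same: learn what normal looks like, then flag what doesn’t.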

AI Challenges for API Security

While AI can bolster API security, it simultaneously opens up new avenues for malicious actors. In the wrong hands, AI can be harnessed to exploit APIs. Here are some of the ways that AI may heighten risks to your API security.

  • AI-Enhanced Attacks: Attackers can use AI to increase productivity and automate the drudge work required to breach an API. Generative AI could be trained to work around defenses by rotating IP addresses, probing with malformed requests, or compromising credentials by detecting patterns and trying common combinations. It could also be used for credential stuffing, or for pacing requests slowly enough to stay hidden.
  • AI-Generated Attacks: An AI-generated attack primarily revolves around the use of AI techniques, particularly generative AI models, to create malicious content or actions. These attacks might involve AI-generated malware, malicious payloads crafted for specific API endpoints, content attacks, and AI-enhanced exploitation.

AI-Powered Adversaries

The thought of bad actors with access to powerful AI tools is enough to give any CISO pause. Let’s look at the type of scenario that might become much more common in an AI-enabled attack landscape.

Imagine a scenario where an attacker is targeting a financial institution's API to steal sensitive customer data, such as account details and transaction history. This attacker employs AI-based techniques to execute an adversarial attack for data exfiltration.

  • Initial Reconnaissance: The attacker begins by gathering information about the financial institution's API, including its endpoints, authentication mechanisms, and data structures.
  • AI-Powered Data Extraction: The attacker leverages AI techniques to create malicious queries that mimic legitimate API requests. These AI-generated queries are designed to bypass security measures and extract sensitive data.
  • Evasion of Detection: The AI model used by the attacker continually refines the queries to evade detection. It learns from responses received and adjusts the queries to appear less suspicious, mimicking the behavior of legitimate API requests.
  • Exfiltration of Sensitive Data: The AI-generated queries are executed against the API, targeting specific data endpoints. As the queries are designed to mimic legitimate requests, they pass through the API's security layers undetected.
  • Data Staging: The stolen data is temporarily stored within the attacker's infrastructure. The AI model can preprocess and structure the data for easier exfiltration, ensuring that it remains inconspicuous during data transfer.
  • Data Exfiltration: The attacker uses the AI model to exfiltrate the stolen data in small, inconspicuous portions over an extended period, avoiding any immediate security alarms based on the volume of data transferred (a detection sketch for this low-and-slow pattern follows this list).
  • Covering Tracks: To further evade detection, the attacker may use AI to alter log data, making it appear as if the exfiltration attempts were routine and legitimate API interactions.
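
The defensive counterpoint to this scenario: per-request checks alone will miss low-and-slow exfiltration, but tracking cumulative response volume per client over a longer window can surface it. The window length, threshold, and client identifier in the sketch below are illustrative assumptions, not recommended values.

```python
from collections import defaultdict, deque
import time

WINDOW_SECONDS = 24 * 60 * 60   # look at a full day of traffic, not a single request
VOLUME_THRESHOLD = 50_000_000   # illustrative cap on bytes returned per client per window

# Rolling record of (timestamp, response_bytes) per API client.
history = defaultdict(deque)

def record_response(client_id, response_bytes, now=None):
    """Log a response and return True if the client's windowed volume looks like exfiltration."""
    now = time.time() if now is None else now
    events = history[client_id]
    events.append((now, response_bytes))

    # Drop events that have aged out of the window.
    while events and now - events[0][0] > WINDOW_SECONDS:
        events.popleft()

    total_bytes = sum(size for _, size in events)
    return total_bytes > VOLUME_THRESHOLD

# Many small responses that individually look harmless add up over a day.
for hour in range(24):
    flagged = record_response("api-key-123", 3_000_000, now=hour * 3600.0)
print(flagged)  # True: the cumulative volume crossed the threshold
```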

Balancing Act: Harnessing AI for Secure APIs

To navigate this intricate landscape, organizations must adopt a balanced approach that harnesses AI for secure APIs:

  • Behavior Analysis: Utilizing AI to monitor and analyze user behavior helps identify and mitigate unusual activities that may indicate a security threat (a minimal sketch follows this list).
  • Real-time Response: AI's role in instant threat response minimizes potential damage by rapidly identifying and mitigating security breaches.
  • Continuous Learning: AI's capacity for learning and adapting to new attack patterns ensures that security measures remain effective in the face of evolving threats.
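
As a rough sketch of combining behavior analysis with real-time response, the hypothetical logic below scores each request against a per-user baseline and decides whether to allow, challenge, or block it. The scoring rule and thresholds are placeholders for a real model, not a production policy.

```python
from dataclasses import dataclass, field

@dataclass
class UserBaseline:
    """Running behavioral profile for one API consumer (illustrative features only)."""
    mean_requests_per_min: float = 10.0
    usual_endpoints: set = field(default_factory=lambda: {"/orders", "/profile"})

def score_request(baseline, requests_last_min, endpoint):
    """Crude anomaly score: rate deviation plus a penalty for never-before-seen endpoints."""
    rate_factor = requests_last_min / max(baseline.mean_requests_per_min, 1.0)
    novelty_penalty = 0.0 if endpoint in baseline.usual_endpoints else 2.0
    return rate_factor + novelty_penalty

def handle_request(baseline, requests_last_min, endpoint):
    """Real-time response: allow, challenge, or block based on the behavioral score."""
    score = score_request(baseline, requests_last_min, endpoint)
    if score > 10:
        return "block"      # clear abuse: reject the call and raise an alert
    if score > 4:
        return "challenge"  # suspicious: step-up authentication or rate-limit
    return "allow"

baseline = UserBaseline()
print(handle_request(baseline, 12, "/orders"))         # allow: close to the usual pattern
print(handle_request(baseline, 120, "/admin/export"))  # block: high rate on an unusual endpoint
```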

Embracing the Potential of AI for API Security

The interplay between AI and API security is a complex one that offers both promise and peril. While AI can strengthen API security measures, it also introduces new challenges and attack vectors. On the bright side, the underlying risks remain largely the same: AI will simply improve attackers’ ability to exploit already recognized vulnerabilities. But AI will also improve our capabilities when it comes to mitigating those risks and hardening our defenses.

If you already have a comprehensive API security strategy in place today, you are well placed to defend against the AI-enabled API attacks of tomorrow.

How FireTail can help

FireTail has engineered a hybrid approach to API security: an open-source library that protects APIs with inline API call evaluation and blocking, combined with cloud-based API security posture management, a centralized audit trail, and detection and response capabilities. FireTail is the only company offering these capabilities together, ultimately helping organizations eliminate API vulnerabilities from their applications and providing runtime API protection.

FireTail is headquartered in Washington, DC, with additional offices in Dublin, Ireland and Helsinki, Finland. FireTail is backed by leading investors, including Paladin Capital, Zscaler, General Advance and SecureOctane.

FireTail. API Security.

Import, Setup, Done.