Using AI in cyber security
May 4, 2023

The Double-Edged Sword of AI in Cybersecurity

Recently we penned a brief article on the use of Artificial Intelligence to bolster cyber defence strategies, illustrating how it can greatly accelerate and improve the efficiency of corporate security operations. However, this coin has a flip side: the very same technology can be exploited by malicious actors, who can turn AI into a tool for perpetrating attacks.

Let's delve into some potential scenarios where AI could go rogue and serve as an aid to malicious actors.

AI-Powered Phishing Attacks

A significant shift in this context is the ability to rapidly train an AI model on text samples from a real individual, so that it can mimic that person's writing style and make the content far more convincing to the target. This is particularly true of OpenAI's GPT models, which used the vast expanse of the internet as their foundational training data. As a result, these models are essentially pre-trained to write like many public figures who have a substantial amount of content online.

Additionally, an AI-powered phishing campaign can operate without constant human supervision. Once the communication process is automated, it can be scaled and deployed massively. What's even more concerning is that the AI could potentially keep a conversation going until it successfully 'phishes' the desired data out of the target.
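To make the style-mimicry idea concrete, here is a deliberately toy sketch: a first-order Markov chain trained on a short writing sample. The sample text is invented for illustration, and real LLMs are vastly more capable, but even this tiny model generates text whose word-to-word transitions match the source.

```python
import random
from collections import defaultdict

# Hypothetical writing sample standing in for a real person's text corpus.
SAMPLE = (
    "Our quarterly results exceeded expectations. Our team delivered "
    "on every milestone. Our customers remain our highest priority."
)

def build_markov_model(text, order=1):
    """Map each word sequence to the words that follow it in the sample."""
    words = text.split()
    model = defaultdict(list)
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        model[key].append(words[i + order])
    return model

def generate(model, length=12, seed=None):
    """Generate text whose word-to-word transitions mimic the sample's."""
    rng = random.Random(seed)
    key = rng.choice(list(model.keys()))
    out = list(key)
    for _ in range(length):
        choices = model.get(tuple(out[-len(key):]))
        if not choices:
            break
        out.append(rng.choice(choices))
    return " ".join(out)

model = build_markov_model(SAMPLE)
print(generate(model, seed=42))
```

Scale the training corpus up from one paragraph to years of someone's emails and blog posts, and the impersonation problem described above becomes obvious.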

Automated Hacking

While the topic of automated hacking with AI is broad enough to warrant its own dedicated blog post, let's provide a summarized overview here:

Numerous automated tools already exist, but many depend on a supervised decision-tree approach and frequently default to brute-force methods for exploitation. The introduction of AI into this domain could change the game, accelerating the process dramatically. With AI, decision-making is no longer static; the tool can adapt and pivot to another approach if one method proves nonviable. The same dynamic approach could also help 'guess' passwords far faster than a human could, and in a more nuanced manner that reduces the need for brute force.
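As a harmless illustration of pattern-guided guessing versus exhaustive brute force, the sketch below ranks candidates by common human habits (a base word, a recent year, a suffix) instead of enumerating every possible string. All base words, years, and the 'secret' password are hypothetical.

```python
import itertools

def candidates(base_words, years=("2023", "2024"), suffixes=("", "!", "123")):
    """Yield guesses ordered by pattern likelihood, not by brute force."""
    for word, year, suffix in itertools.product(base_words, years, suffixes):
        yield word + year + suffix
        yield word.capitalize() + year + suffix

def crack(check, base_words):
    """Try pattern-based guesses in order; return the first match, if any."""
    for guess in candidates(base_words):
        if check(guess):
            return guess
    return None

# Toy target: the "real" password follows a typical human pattern.
secret = "Spring2024!"
found = crack(lambda g: g == secret, ["spring", "summer", "autumn"])
print(found)  # Spring2024!
```

An exhaustive search over 11-character strings would take astronomically long; a model that has learned how people actually construct passwords needs only a handful of guesses.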

Moreover, AI can identify system vulnerabilities far more swiftly than traditional methods, enhancing the efficiency of defensive measures on one hand while efficiently finding a way in on the other. It is truly a double-edged sword.

Data Poisoning

Data poisoning presents a particularly fascinating concern. As an increasing number of systems depend on AI, and AI in turn relies on clean and accurate data, corruption of that data becomes a serious risk. There is a palpable threat that the training data, the bedrock of AI models, could contain deliberately incorrect or misleading elements that sabotage the learning process, with far-reaching security implications.

Consider a scenario where an AI is trained on code that, when deployed, inherently introduces a backdoor. This is akin to unwittingly inviting cybercriminals into the system. Or imagine the AI mishandling data, causing sensitive information to leak, or worse, creating authentication mechanisms that are intentionally flawed. Such threats invert the principle of secure-by-design systems: the manipulated AI, rather than building secure infrastructure, ends up crafting systems that are insecure by design, all while under the guise of legitimacy.

Data poisoning can also lead to a range of other consequences, such as misclassification, performance degradation, or harmful actions in reinforcement learning scenarios. It is therefore crucial for those in the field of cybersecurity to remain vigilant about the integrity of the training data used to shape our AI models and systems.
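To see how little poisoned data it takes, consider a toy keyword-count spam filter (all training samples here are invented for illustration). A handful of mislabelled emails slipped into the training set is enough to flip the classification of an obvious spam message:

```python
def train(emails):
    """Count word occurrences per label; the 'model' is just the counts."""
    counts = {"spam": {}, "ham": {}}
    for words, label in emails:
        for w in words:
            counts[label][w] = counts[label].get(w, 0) + 1
    return counts

def classify(counts, words):
    """Label as spam if spam-word evidence outweighs ham-word evidence."""
    score = sum(counts["spam"].get(w, 0) - counts["ham"].get(w, 0)
                for w in words)
    return "spam" if score > 0 else "ham"

clean_training = [
    (["free", "prize"], "spam"),
    (["winner", "free"], "spam"),
    (["meeting", "report"], "ham"),
    (["project", "meeting"], "ham"),
]
print(classify(train(clean_training), ["free", "prize", "winner"]))  # spam

# Poisoning: the attacker injects mislabelled samples into the training
# set, teaching the model that spam vocabulary is benign.
poison = [(["free", "prize", "winner"], "ham")] * 3
print(classify(train(clean_training + poison), ["free", "prize", "winner"]))  # ham
```

Three poisoned records against four legitimate ones is an exaggerated ratio, but the mechanism is the same one that threatens production models trained on data an attacker can influence.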

AI Adversarial Attacks and Inversion

Adversarial attacks represent a unique type of threat: AI is leveraged to generate inputs explicitly designed to deceive another AI system. Through adversarial machine learning, data instances are created that closely mimic legitimate data but are engineered to induce errors in the target system's operation. The deceptive data can be so well crafted that it tricks AI-based cybersecurity systems into overlooking or misclassifying malicious activity.

Inversion poses a related concern. If sufficient data about a model is obtained, whether through theft or by exploiting the system's vulnerabilities, the underlying learning model can be reconstructed. With this information, a malicious AI can predict how the targeted system will react under specific scenarios, and that predictive power lets an attacker devise strategies to bypass security measures and gain unauthorized access to protected data.
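As a minimal sketch of the adversarial-example idea, assume a linear 'malware detector' whose weights the attacker knows or has recovered (all numbers below are invented). For a linear model the gradient of the score is simply the weight vector, so a small FGSM-style nudge to each feature is enough to flip the verdict:

```python
# Hypothetical linear "malware detector": score > 0 means malicious.
W = [0.9, -0.4, 0.7]   # feature weights, assumed known to the attacker
B = -0.2

def score(x):
    return sum(wi * xi for wi, xi in zip(W, x)) + B

def is_malicious(x):
    return score(x) > 0

def sign(v):
    return (v > 0) - (v < 0)

def evade(x, eps):
    """FGSM-style step: move each feature against the model's gradient,
    which for a linear model is simply the weight vector W."""
    return [xi - eps * sign(wi) for xi, wi in zip(x, W)]

x = [1.0, 0.2, 0.8]          # a sample the detector correctly flags
x_adv = evade(x, eps=0.7)    # small per-feature perturbation
print(is_malicious(x), is_malicious(x_adv))  # True False
```

The perturbed sample is numerically close to the original, yet the detector now waves it through, which is exactly the misclassification risk described above.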

This tactic takes advantage of the fundamental principles of machine learning. Just as AI systems learn to recognize patterns and make predictions based on training data, a malicious AI can 'learn' the characteristics of the targeted AI system. It can then exploit this understanding to find weak points or blind spots in the system's security defences.
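Model extraction can likewise be sketched in a few lines. Assuming the attacker can only submit queries to a hidden linear model and observe its score, n + 1 well-chosen queries recover the parameters exactly; the hidden weights below are, of course, hypothetical.

```python
# Hypothetical black-box model the attacker can query but not inspect.
_SECRET_W = [0.9, -0.4, 0.7]
_SECRET_B = -0.2

def query(x):
    """The attacker's only access: submit an input, observe the score."""
    return sum(wi * xi for wi, xi in zip(_SECRET_W, x)) + _SECRET_B

def extract(n_features):
    """Recover a linear model from n + 1 queries."""
    b = query([0.0] * n_features)        # the zero vector reveals the bias
    w = []
    for i in range(n_features):
        e = [0.0] * n_features
        e[i] = 1.0
        w.append(query(e) - b)           # each unit vector reveals one weight
    return w, b

w, b = extract(3)
print(w, b)
```

Real models are nonlinear and rate-limited, so extraction in practice needs far more queries and yields only an approximation, but the principle is the same: enough observed input-output pairs let the attacker rebuild the defender's decision logic.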