Insights on Mitigation of AI Security Risks in Modern Businesses
Author: Ivan Shyshkou
Introduction
Artificial Intelligence (AI) is a groundbreaking technology that has become integral to many fields, enabling innovative solutions in software development, decision-making, and other business areas. However, the use of AI also introduces security risks. In this article, we analyze these risks and their impact on businesses and on the people who use AI. We also show how companies can protect themselves from these risks and keep their AI systems safe and secure.
AI Vulnerabilities and Threat Landscape
The adoption of AI across different areas has opened new attack vectors and exposed weaknesses in the applications and systems that use it. These weaknesses are real and can undermine the trust, reliability, and operation of AI systems, affecting both companies and individual users.
Here are some common examples of AI attacks:
- Input Attacks. These attacks manipulate the content fed into an AI system, altering its output to serve the attacker’s objectives. Because AI systems operate by receiving inputs, performing calculations, and returning outputs, tweaking the input can have disastrous consequences. Imagine altering a physical stop sign so that a self-driving car’s vision system reads it as a green light. What would happen to that car? (A minimal code sketch of this attack class follows the list.)
- Poisoning Attacks. These corrupt the data used to train an AI system, causing it to misinterpret information and behave erroneously. Such attacks target AI’s primary sustenance: data. Spoil the data, and you spoil the AI system.
- Risk of AI Theft. AI models may be stolen through various means, including network attacks, exploitation of existing vulnerabilities, and deceptive strategies. Attackers of all kinds, from lone hackers to corporate spies, can carry out such illicit activities. Once they obtain an AI model, they can modify it and use it for harmful purposes, increasing the overall social risks associated with AI.
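To make the input-attack idea concrete, here is a minimal sketch in Python of the Fast Gradient Sign Method (FGSM) against a toy logistic-regression classifier. The weights, input, and perturbation size are illustrative, not taken from any real system:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical trained binary classifier (illustrative weights).
w = np.array([1.5, -2.0, 0.5])
b = 0.1

x = np.array([0.4, -0.3, 0.2])  # a benign input
y = 1                           # its true label (class 1)

def predict(x):
    return sigmoid(w @ x + b)

# Gradient of the cross-entropy loss with respect to the input:
# dL/dx = (sigmoid(w.x + b) - y) * w
grad_x = (predict(x) - y) * w

# FGSM: nudge every feature in the direction that increases the loss most.
eps = 0.5
x_adv = x + eps * np.sign(grad_x)

print(f"original score:    {predict(x):.3f}")      # ~0.80 -> class 1
print(f"adversarial score: {predict(x_adv):.3f}")  # ~0.35 -> flipped to class 0
```

Because the perturbation is bounded per feature, an equivalent attack against a real image classifier can be small enough to escape human notice while still flipping the model’s decision.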
In addition, it is crucial not to overlook the security testing of web applications that either operate proprietary AI or call third-party APIs. In our testing practice, we have discovered vulnerabilities in such applications. In one case, a client’s application used the third-party OpenAI API to generate responses. We managed to bypass the limit on free generations, which allowed us to perform numerous generations every second. As a result, the client incurred service payment costs.
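The root cause in such cases is usually that limits are enforced only in the client. Below is a minimal sketch of server-side rate limiting with a token bucket; the class, capacity, and refill rate are illustrative assumptions, not the client’s actual fix:

```python
import time

class TokenBucket:
    """Server-side rate limiter: each user gets `capacity` requests,
    refilled at `rate` tokens per second. Enforced on the backend,
    it cannot be bypassed by tampering with the client UI."""

    def __init__(self, capacity=5, rate=0.1):
        self.capacity = capacity
        self.rate = rate
        self.buckets = {}  # user_id -> (tokens, last_refill_timestamp)

    def allow(self, user_id):
        tokens, last = self.buckets.get(user_id, (self.capacity, time.time()))
        now = time.time()
        tokens = min(self.capacity, tokens + (now - last) * self.rate)
        allowed = tokens >= 1
        self.buckets[user_id] = (tokens - 1 if allowed else tokens, now)
        return allowed

limiter = TokenBucket()
for i in range(7):
    print(i, limiter.allow("user-42"))  # calls 5 and 6 are rejected
```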
In another case, a user could view other users’ conversations with the AI, and the results of their requests, simply by cycling through chat IDs. It is therefore imperative to conduct regular security testing of web applications and to use DevSecOps solutions for AI to prevent such vulnerabilities and the resulting financial losses.
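The chat-ID issue is a classic insecure direct object reference (IDOR): the endpoint trusts the ID in the URL without checking ownership. Here is a minimal Flask sketch of the missing check; the routes, store, and user lookup are hypothetical:

```python
from flask import Flask, abort, jsonify

app = Flask(__name__)

# Hypothetical in-memory store: chat_id -> owner and messages.
CHATS = {
    "chat-001": {"owner": "alice", "messages": ["Hello, AI"]},
    "chat-002": {"owner": "bob", "messages": ["Draft my report"]},
}

def current_user():
    # Placeholder: a real app resolves the user from a session or token.
    return "alice"

@app.route("/chats/<chat_id>")
def get_chat(chat_id):
    chat = CHATS.get(chat_id)
    if chat is None:
        abort(404)
    # The ownership check: without it, cycling through chat IDs
    # exposes other users' conversations.
    if chat["owner"] != current_user():
        abort(403)
    return jsonify(chat["messages"])
```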
OWASP Machine Learning Security Top Ten List
Considering the topic, it is essential to mention the OWASP Machine Learning Security Top Ten list. An initiative of the nonprofit OWASP (the Open Web Application Security Project), the list is a valuable resource for developers working on machine learning security. It delineates the ten most prevalent security issues in machine learning systems, offering an overview of each vulnerability, its potential impact, and recommended preventive measures. This guide helps in understanding and addressing security challenges in machine learning systems, and it aligns with the general threat models discussed in this article.
For more detailed information, please refer to OWASP Machine Learning Security Top 10.
Here are the top five entries from the list:
- Input Manipulation Attack (ML01:2023): This attack type involves the intentional modification of input data with the aim of deceiving models. It leads to incorrect classifications and potentially allows attackers to bypass security measures or inflict damage on the system.
- Data Poisoning Attack (ML02:2023): In these attacks, assailants manipulate training data to provoke undesirable model behavior. The model then generates incorrect predictions and makes false decisions, leading to serious repercussions, including the compromise of sensitive information and system integrity.
- Model Inversion Attack (ML03:2023): This attack involves attackers gaining insights into the training data used by the model, potentially revealing sensitive information about the dataset and thus posing a significant risk to user privacy and data security.
- Membership Inference Attack (ML04:2023): In this attack, a hacker probes a trained model to determine whether a specific record was part of its training data. For example, a malicious actor can query a model trained on a dataset of financial records to find out whether a particular individual’s record was included. This allows the attacker to infer sensitive financial information, resulting in a loss of confidentiality and potential legal and reputational damage.
- Model Stealing Attack (ML05:2023): This attack occurs when an attacker, say a competitor, gains access to the model’s parameters in order to steal it. For instance, attackers might reverse engineer a company’s valuable machine learning model to recreate it and use it for their own purposes, causing significant financial and reputational loss to the original company. The impact is substantial, as the attack affects both the confidentiality of the data used to train the model and the reputation of the organization that developed it. (A minimal extraction sketch follows this list.)
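To illustrate ML05, here is a minimal sketch of query-based model extraction using scikit-learn. The “victim” stands in for a proprietary model exposed only through a prediction API, and all data is synthetic:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Victim: a proprietary model the attacker can only query, not inspect.
X_private = rng.normal(size=(500, 4))
y_private = (X_private[:, 0] + X_private[:, 1] > 0).astype(int)
victim = LogisticRegression().fit(X_private, y_private)

# Extraction: send synthetic queries, record the victim's answers,
# and train a surrogate on the (query, answer) pairs.
queries = rng.normal(size=(2000, 4))
answers = victim.predict(queries)
surrogate = DecisionTreeClassifier(max_depth=5).fit(queries, answers)

# The surrogate approximates the victim without access to its
# parameters or original training data.
X_test = rng.normal(size=(500, 4))
agreement = (surrogate.predict(X_test) == victim.predict(X_test)).mean()
print(f"surrogate agrees with victim on {agreement:.0%} of inputs")
```

High agreement on unseen inputs means the attacker has, in effect, recreated the model without ever touching its parameters.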
Securing AI: Measures and Strategies
To protect against the multifaceted threats to AI, it is essential to implement comprehensive security measures and strategies. These include close monitoring of AI services, regular checks for suspicious activity, and remediation of vulnerabilities in the code. To this end, you can use threat-modeling applications such as OWASP Threat Dragon and PYTM, as well as monitoring and log-management services such as Zabbix and Logstash.
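As an example of the threat-modeling step, here is a skeleton that follows PYTM’s documented pattern of declaring actors, servers, datastores, and dataflows in code. The elements are illustrative, not a complete model of any real system:

```python
from pytm import TM, Actor, Server, Datastore, Dataflow, Boundary

tm = TM("AI service threat model")
tm.description = "User-facing application that forwards prompts to an AI backend"

internet = Boundary("Internet")
backend = Boundary("Backend")

user = Actor("User")
user.inBoundary = internet

api = Server("AI API gateway")
api.inBoundary = backend

model_store = Datastore("Model and training data")
model_store.inBoundary = backend

Dataflow(user, api, "Prompt submission")
Dataflow(api, model_store, "Model lookup")

# Produces the report or data-flow diagram, depending on the CLI
# flags (e.g. --dfd, --report) passed when running the script.
tm.process()
```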
To prevent undesirable outcomes, it is crucial to ensure that input and output data are clean and validated. For this reason, it is recommended to implement SAST, DAST, IAST, RASP, and SCA tools such as Acunetix, OWASP ZAP, Burp Suite, and Black Duck, alongside incident-response services such as PagerDuty. Organizations should also train their staff on best practices for using AI and create security policies that ensure the secure use of this technology.
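Input validation can start with simple server-side checks applied before a prompt ever reaches the model. The following helper is a sketch; the limits and rules are placeholders that a real policy would refine:

```python
import re

MAX_PROMPT_CHARS = 2000
# Control characters (other than tab and newline) rarely belong in prompts.
CONTROL_CHARS = re.compile(r"[\x00-\x08\x0b\x0c\x0e-\x1f\x7f]")

def validate_prompt(raw: str) -> str:
    """Hypothetical pre-model gate: reject oversized, empty,
    or binary-looking input before it is billed or processed."""
    if not raw or not raw.strip():
        raise ValueError("empty prompt")
    if len(raw) > MAX_PROMPT_CHARS:
        raise ValueError("prompt too long")
    if CONTROL_CHARS.search(raw):
        raise ValueError("control characters not allowed")
    return raw.strip()
```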
Data security is another critical aspect of AI security. It is vital to store consolidated personal data in secure environments to prevent unauthorized access, and to implement data management strategies that store data without directly associating it with users. Preventing user data from entering the model’s training sets and limiting the volume and retention period of stored data to the minimum are also essential steps in mitigating data leaks. To this end, use tools for secrets management, such as Vault, and establish a secure development environment, for example through Cloudflare.
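One way to store data without directly associating it with users is keyed pseudonymization: replace the raw identifier with an HMAC whose key lives in a secrets manager such as Vault, never in the database. A minimal sketch, with a hypothetical environment variable standing in for the managed secret:

```python
import hashlib
import hmac
import os

# Hypothetical: in production the key is fetched from a secrets
# manager (e.g. Vault) rather than from the environment.
PSEUDONYM_KEY = os.environ["PSEUDONYM_KEY"].encode()

def pseudonymize(user_id: str) -> str:
    """Stable, keyed pseudonym: the same user always maps to the same
    token, but the mapping cannot be reversed without the key."""
    return hmac.new(PSEUDONYM_KEY, user_id.encode(), hashlib.sha256).hexdigest()

record = {"user": pseudonymize("alice@example.com"), "event": "prompt_submitted"}
```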
The quality of AI’s recommendations depends largely on the quality of the training data. If AI systems are trained on unreliable or biased data, they may produce incorrect recommendations that adversely affect various sectors. Organizations must actively focus on the quality of the data used for AI training, analyze it to identify errors and biases, and continuously update and audit AI algorithms. Quality-control mechanisms for AI outputs help detect and rectify erroneous decisions promptly.
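A basic data-quality audit can be automated before each training run. The sketch below uses pandas and assumes a hypothetical training_data.csv with a label column; the checks are a starting point, not an exhaustive audit:

```python
import pandas as pd

# Hypothetical training set with feature columns and a "label" column.
df = pd.read_csv("training_data.csv")

report = {
    "rows": len(df),
    "duplicate_rows": int(df.duplicated().sum()),
    "missing_values": int(df.isna().sum().sum()),
    # A heavily skewed label distribution is a common source of bias.
    "label_balance": df["label"].value_counts(normalize=True).to_dict(),
}
print(report)
```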
IBA Group’s Expertise in AI Security
IBA Group is always ready to help you keep your AI applications safe. Our skilled team excels not only in AI protection but also in a broad range of security services, including secure development support, security vulnerability testing, security risk assessment, employee security training, and more. Do not hesitate to contact us, and let’s team up to strengthen your AI projects and keep them safe and secure.