Merge pull request #211 from mik0w/feature/ML06_new_version
new version of ML06
sagarbhure authored Sep 6, 2024
2 parents a8bb4fb + 9d0eebe commit c91886f
Showing 1 changed file with 33 additions and 45 deletions: docs/ML06_2023-AI_Supply_Chain_Attacks.md
auto-migrated: 0
document: OWASP Machine Learning Security Top Ten 2023
year: 2023
order: 6
title: ML06:2023 ML Supply Chain Attacks
lang: en
tags:
[
mltop10,
mlsectop10,
]
exploitability: 6
detectability: 5
technical: 4
---

## Description

In ML Supply Chain Attacks, threat actors target the supply chain of ML models. This category is broad and important, because the software supply chain in Machine Learning contains even more elements than the supply chain of classic software. It includes MLOps platforms, data management platforms, model management software, model hubs, and other specialized types of software that enable ML engineers to test and deploy models effectively.

## How to Prevent

**Verify package integrity:** Before using any package in your infrastructure or application dependencies, verify its authenticity by checking its digital signature or published checksum.
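
As a minimal illustration, the sketch below gates an install step on a pinned SHA-256 digest; the wheel filename and digest are placeholders, and in practice pip's `--require-hashes` mode provides the same guarantee natively.

```python
import hashlib
import sys

def sha256_of(path: str) -> str:
    """Compute the SHA-256 digest of a downloaded artifact in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Digest pinned from a trusted source (e.g., the project's release page).
# Placeholder value -- not a real package digest.
EXPECTED = "0000000000000000000000000000000000000000000000000000000000000000"
ARTIFACT = "numpy-2.1.0-cp312-cp312-manylinux_2_17_x86_64.whl"  # placeholder

if sha256_of(ARTIFACT) != EXPECTED:
    sys.exit(f"Digest mismatch for {ARTIFACT}: refusing to install.")
print(f"Digest verified for {ARTIFACT}.")
```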

**Keep package versions up-to-date:** Continuously monitor the packages in your software supply chain for new releases and update outdated dependencies. Use tools such as OWASP Dependency-Check. Refer to [OWASP Top 10 A06:2021 – Vulnerable and Outdated Components](https://owasp.org/Top10/A06_2021-Vulnerable_and_Outdated_Components/) for more details.
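
Alongside dedicated tools such as OWASP Dependency-Check or `pip list --outdated`, a rough self-audit can be scripted against the PyPI JSON API. This sketch assumes direct access to pypi.org and uses naive string comparison of versions rather than proper version parsing.

```python
import json
import urllib.request
from importlib.metadata import distributions

# Compare every installed distribution against the latest release on PyPI.
for dist in distributions():
    name, installed = dist.metadata["Name"], dist.version
    try:
        with urllib.request.urlopen(
            f"https://pypi.org/pypi/{name}/json", timeout=10
        ) as resp:
            latest = json.load(resp)["info"]["version"]
    except Exception:
        continue  # not on PyPI, or a network error -- skip
    if latest != installed:
        print(f"{name}: installed {installed}, latest {latest}")
```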

**Install packages from secure sources:** Use secure third-party software repositories, such as Anaconda or PyPI, that enforce strict security measures and have a vetting process for packages.
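
For example, installs can be pinned to a vetted index and forced into hash-checking mode; the index URL below is a hypothetical internal mirror, not a real endpoint.

```python
import subprocess
import sys

# Install only from the vetted index, and require a hash for every
# dependency listed in requirements.txt.
subprocess.run(
    [
        sys.executable, "-m", "pip", "install",
        "--index-url", "https://pypi.internal.example.com/simple/",  # hypothetical mirror
        "--require-hashes",
        "-r", "requirements.txt",
    ],
    check=True,
)
```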

**Deploy ML infrastructure securely:** Follow the vendor's deployment recommendations for the MLOps platforms in your stack, limit access to web UIs from the Internet, and monitor infrastructure traffic for anomalies and possible attacks. If the infrastructure is deployed in the cloud, leverage the cloud provider's security features, such as Virtual Private Clouds (VPCs), security groups, and identity and access management (IAM) roles, to restrict and control access. Implement strict access control measures and ensure that only authorized personnel have access to the MLOps platforms.
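
As one small piece of that, no inference or management UI should answer unauthenticated requests. The sketch below is a deliberately minimal stand-in (the token name and endpoint are hypothetical); real deployments would rely on the platform's own authentication, a VPC, or a reverse proxy.

```python
import hmac
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical bearer token, supplied via the environment.
API_TOKEN = os.environ.get("INFERENCE_API_TOKEN", "")

class GatedHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        supplied = self.headers.get("Authorization", "")
        # Constant-time comparison; reject anything without a valid token.
        if not API_TOKEN or not hmac.compare_digest(
            supplied, f"Bearer {API_TOKEN}"
        ):
            self.send_response(401)
            self.end_headers()
            return
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"model endpoint reachable\n")

if __name__ == "__main__":
    # Bind to localhost only; never expose an unauthenticated UI publicly.
    HTTPServer(("127.0.0.1", 8000), GatedHandler).serve_forever()
```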

## Risk Factors

| Threat Agents/Attack Vectors | Security Weakness | Impact |
| :-----------------------------------------------------------------------------------------------------------------------------------: | :------------------------------------: | :--------------------------------------------------------------------------------: |
| Exploitability: 4 (Easy) <br><br> _ML Application Specific: 5_ <br> _ML Operations Specific: 3_ | Detectability: 5 (Easy) | Technical: 5 (Moderate) |
| Threat Actor: Cybercrime groups; malicious business competitors. <br><br> Attack Vector: Modifying the code of an open-source package used by the machine learning project; exploiting a vulnerability in the MLOps stack. | Relying on untrusted or insecure third-party code or software. | Compromise of the machine learning infrastructure and potential harm to the organization. |

It is important to note that this chart is only a sample based on
[the scenarios below](#scenario1). The actual risk assessment will depend on
the specific circumstances of each machine learning system.

## Example Attack Scenarios

### Scenario \#1: Attack on a Machine Learning project dependency {#scenario1}

An attacker who wants to compromise a Machine Learning project knows that the project relies on several open-source packages and libraries.

During the attack, they modify the code of one of the packages the project relies on, such as NumPy or Scikit-learn. The modified version of the package is then uploaded to a public repository, such as PyPI, making it available for others to download and use. When the victim organization downloads and installs the package, the malicious code is installed along with it and can be used to compromise the project.

This type of attack can be particularly dangerous, as it can go unnoticed for a long time: the victim may not realize that the package they are using has been compromised. The attacker's malicious code can be used to steal sensitive information, modify results, or cause the machine learning model to return erroneous predictions.

### Scenario \#2: Attack on MLOps software used in the organization {#scenario2}

The organization builds an MLOps pipeline that uses multiple instances of software supporting the deployment. One of the applications, an inference platform, is exposed publicly to the Internet.

An attacker finds the platform's web interface available without authentication and gains access to models that were not meant to be exposed publicly.

### Scenario \#3: Attack on an ML model hub used by the organization {#scenario3}

An organization decides to use a model from a public model hub. An attacker finds a way to impersonate the organization's account on the model hub and then deploys a malicious model to it. The organization's employees later download the malicious model, and its malicious code is run in the organization's environment.
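
The scenario works because many model formats are effectively executable: unpickling a pickle-based model can run arbitrary code. The harmless sketch below illustrates the mechanism (the class name is invented for the demonstration); this is why models from hubs should be scanned, pulled only from verified publishers, or stored in non-executable formats such as safetensors.

```python
import pickle

# A harmless stand-in for a malicious "model" payload: __reduce__ tells
# pickle to call print(...) during loading. A real attack would invoke
# something far worse than print.
class NotReallyAModel:
    def __reduce__(self):
        return (print, ("payload executed during model load",))

malicious_bytes = pickle.dumps(NotReallyAModel())

# The victim merely "loads a model" -- and the payload runs.
pickle.loads(malicious_bytes)
```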

## References
[OWASP Top 10 A06:2021 – Vulnerable and Outdated Components](https://owasp.org/Top10/A06_2021-Vulnerable_and_Outdated_Components/)
[Model Confusion – Weaponizing ML models for red teams and bounty hunters](https://5stars217.github.io/2023-08-08-red-teaming-with-ml-models/)
