RunGptLLM class in LlamaIndex has a command injection vulnerability

High severity GitHub Reviewed Published May 16, 2024 to the GitHub Advisory Database • Updated May 24, 2024

Package: llama-index (pip)
Affected versions: < 0.10.13
Patched versions: 0.10.13

Package: llama-index-llms-rungpt (pip)
Affected versions: < 0.1.3
Patched versions: 0.1.3

Description

A command injection vulnerability exists in the RunGptLLM class of the llama_index library, version 0.9.47, used by the RunGpt framework from JinaAI to connect to Large Language Models (LLMs). The vulnerability arises from the improper use of the eval function, allowing a malicious or compromised LLM hosting provider to execute arbitrary commands on the client's machine. Exploitation of this vulnerability could give a hosting provider full control over client machines. The issue was fixed in version 0.10.13.
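To illustrate the class of bug described above: passing an untrusted server response to Python's eval lets the server run arbitrary code on the client. The sketch below is hypothetical (the function names are illustrative, not the actual llama_index source); the safe variant parses the body strictly as JSON, which is in the spirit of the usual fix for this pattern.

```python
import json

def parse_response_unsafe(body: str):
    """Vulnerable pattern (illustrative): eval() on an untrusted
    HTTP response body. A malicious server could return e.g.
    "__import__('os').system('...')" and it would execute here."""
    return eval(body)  # noqa: S307 -- shown only to demonstrate the flaw

def parse_response_safe(body: str):
    """Safer pattern: strict JSON parsing cannot trigger code
    execution; non-JSON payloads raise json.JSONDecodeError."""
    return json.loads(body)

if __name__ == "__main__":
    print(parse_response_safe('{"message": "hello"}'))
```

With the safe parser, a payload like `"__import__('os').getcwd()"` is rejected as invalid JSON instead of being executed.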

References

Published by the National Vulnerability Database May 16, 2024
Published to the GitHub Advisory Database May 16, 2024
Reviewed May 16, 2024
Last updated May 24, 2024

Severity

High (8.8 / 10)

CVSS base metrics

Attack vector: Network
Attack complexity: Low
Privileges required: None
User interaction: Required
Scope: Unchanged
Confidentiality: High
Integrity: High
Availability: High
CVSS:3.0/AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H

Weaknesses

CVE ID

CVE-2024-4181

GHSA ID

GHSA-pw38-xv9x-h8ch

Source code
