This repository contains the codebase for the paper "Emotional Decision-Making of LLMs in Strategic Games and Ethical Dilemmas" presented at NeurIPS 2024. The study introduces the EAI framework, developed to model and evaluate the impact of emotions on ethical decision-making and strategic behavior in large language models (LLMs).
Emotions significantly influence human decision-making. This project explores how emotional states affect LLMs' alignment in strategic games and ethical scenarios, using a novel framework to assess these impacts across various game-theoretical settings and ethical benchmarks. The research includes experiments with different LLMs, investigating emotional biases that impact ethical and strategic choices.
- Emotional Modeling: Introduces a structured framework to prompt LLMs with predefined emotions and analyze their influence on decision-making.
- Game-Theoretical Evaluation: Examines LLMs' behavior in bargaining, repeated games, and multi-player strategic settings.
- Ethics Benchmarking: Assesses model responses to ethical questions under emotional influence.
- Model Comparisons: Includes experiments on both open-source and proprietary models with multilingual capabilities.
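The emotional-modeling step can be sketched as a simple prompt composition: an emotion description is prepended to the game instructions before the model is queried. This is an illustrative sketch only; `build_system_prompt` and the emotion texts are hypothetical, not the framework's actual API.

```python
# Illustrative sketch of emotion-conditioned prompting.
# The function and emotion texts are hypothetical, not the repo's actual API.
EMOTION_DESCRIPTIONS = {
    "anger": "You are furious at your co-player and feel treated unfairly.",
    "happiness": "You are in a cheerful, generous mood.",
    "neutral": "",
}

def build_system_prompt(game_rules, emotion):
    """Prepend an emotion description to the game rules so the LLM
    plays the game while conditioned on that emotional state."""
    description = EMOTION_DESCRIPTIONS.get(emotion, "")
    parts = [p for p in (description, game_rules) if p]
    return "\n\n".join(parts)

prompt = build_system_prompt("You play the Dictator game. Split $100.", "anger")
```

The resulting string is then used as the system prompt for the LLM call, so the same game rules can be replayed under each predefined emotion.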
.
├── README.md
├── prompts/
│   └── {language}/
│       ├── agent/
│       ├── emotions/
│       └── games/

├── run_division_game.py
├── run_exps_division_game.py
├── run_table_game.py
└── src/
- Novel framework for integrating emotions into LLMs' decision-making in game theory
- Experimental study across various strategic games
- Analysis of emotional and strategic biases in LLM decision-making
- Comparison of proprietary and open-source LLM performance
- Multi-language support (English & Russian)
The framework supports integration of:
- One-shot bargaining games (Dictator, Ultimatum)
- 2-player repeated games (e.g., Prisoner's Dilemma)
- Multi-player games (Public Goods and El Farol Bar)
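For the repeated two-player games, each round's scoring reduces to a payoff lookup. The sketch below uses a standard Prisoner's Dilemma matrix purely for illustration; the repository's actual values live in `rewards.json`, and `play_round` is a hypothetical helper, not the repo's code.

```python
# Illustrative Prisoner's Dilemma payoffs as (player1_reward, player2_reward).
# The repository's actual matrix is defined in prompts/{language}/games/rewards.json.
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

def play_round(action1, action2):
    """Return the pair of rewards for one round of the repeated game."""
    return PAYOFFS[(action1, action2)]

# Over repeated rounds, mutual cooperation outscores mutual defection.
totals = [0, 0]
for _ in range(10):
    r1, r2 = play_round("cooperate", "cooperate")
    totals[0] += r1
    totals[1] += r2
```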
- Python 3.7 or higher
- pip (Python package installer)
- Git
- Clone the repository:

  ```bash
  git clone https://github.com/your-username/your-repo-name.git
  cd your-repo-name
  ```
- Create a virtual environment (optional but recommended):

  ```bash
  python -m venv venv
  source venv/bin/activate  # On Windows, use `venv\Scripts\activate`
  ```
- Install the required Python packages:

  ```bash
  pip install -r requirements.txt
  ```

  If `requirements.txt` doesn't exist, create it with the following content:

  ```
  openai
  pandas
  tqdm
  pydantic
  python-dotenv
  ```
- Set up environment variables:
  - Create a `.env` file in the root directory of the project
  - Add your OpenAI API key to the `.env` file:

    ```
    OPENAI_API_KEY=your_api_key_here
    ```
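For illustration, what `python-dotenv`'s `load_dotenv()` does at startup can be approximated with the standard library alone. This is a simplified stand-in to show the mechanism; in the project, simply call `load_dotenv()` from the `dotenv` package before creating the OpenAI client.

```python
import os

def load_env_file(path=".env"):
    """Minimal stand-in for dotenv.load_dotenv(): read KEY=VALUE lines
    into os.environ, skipping blank lines and comments. Existing
    environment variables are not overridden."""
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip())
```

Like `load_dotenv()`, this leaves already-set variables alone, so a key exported in your shell takes precedence over the `.env` file.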
- If there are any additional data files or models required, place them in the appropriate directories within the project structure.
- If you encounter issues with the OpenAI API, ensure your API key is correctly set in the `.env` file and that you have sufficient credits.
- For any import errors, make sure all required packages are installed and that you're running Python from the correct virtual environment.
- If you face issues with file paths, check that you're running the scripts from the root directory of the project.
This project uses environment variables to manage sensitive information like API keys. Never commit your `.env` file or share your API keys publicly.
To run experiments with bargaining games:

```bash
python run_exps_division_game.py
```

To run a single division game:

```bash
python run_division_game.py
```

To run table games:

```bash
python run_table_game.py
```
The `prompts` directory contains language-specific prompts organized as follows:

- `agent/`: Prompts for agent behavior, memory updates, etc.
  - `memory_update.txt`: Prompt for updating the agent's memory after the current round (not for bargaining)
  - `emotions/`: Prompts for questioning emotions and inserting them into memory
  - `game_settings/`: Prompts for defining the environment, conditions, and the general prompt for initializing the agent's memory
  - `outer_emotions/`: Prompts for questioning which emotions to demonstrate and how to describe them to the co-player (not for bargaining)
- `emotions/`: Descriptions of agents' initial emotions
- `games/`: Game-specific prompts and rules
  - `rewards.json`: Reward matrix
  - `rules1.txt`: Rules described for the first player
  - `rules2.txt`: Rules described for the second player
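Reading a prompt from this layout reduces to a path join plus a file read. The helpers below are illustrative sketches under that assumption; the function names are not the repo's actual code.

```python
# Illustrative loaders for the prompts/{language}/... layout.
# Function names are hypothetical, not the repository's actual API.
import json
from pathlib import Path

PROMPTS_DIR = Path("prompts")

def load_prompt(language, *parts):
    """Read a prompt file such as prompts/english/games/rules1.txt."""
    path = PROMPTS_DIR / language.lower() / Path(*parts)
    return path.read_text(encoding="utf-8")

def load_rewards(language):
    """Parse the reward matrix from prompts/{language}/games/rewards.json."""
    return json.loads(load_prompt(language, "games", "rewards.json"))
```

Lowercasing the language argument matches the directory convention (`english`, `russian`) described above.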
Games are currently available in English and Russian. The `{language}` placeholder in the directory structure is the chosen language's lowercase name (`english`, `russian`).
- Emotions significantly alter LLM decision-making, regardless of alignment strategies.
- GPT-4 is less aligned with human emotional responses overall, yet the 'anger' mode still breaks its alignment.
- GPT-3.5 and Claude demonstrate better alignment with human emotional responses.
- Proprietary models outperform open-source and uncensored LLMs in decision optimality.
- Medium-size models show better alignment with human behavior.
- Adding emotions helps model cooperation and coordination during games.
- Validate findings with both proprietary and open-source LLMs
- Explore finetuning of open-source models on emotional prompting
- Investigate multi-agent approaches for dynamic emotions
- Study the impact of emotions on strategic interactions in short- and long-term horizons
We welcome contributions to this project! If you're interested in contributing, please follow these steps:
- Fork the repository
- Create a new branch for your feature (`git checkout -b feature/AmazingFeature`)
- Commit your changes (`git commit -m 'Add some AmazingFeature'`)
- Push to the branch (`git push origin feature/AmazingFeature`)
- Open a Pull Request
Please cite our work as:
Mozikov, Mikhail, et al. "EAI: Emotional Decision-Making of LLMs in Strategic Games and Ethical Dilemmas." The Thirty-eighth Annual Conference on Neural Information Processing Systems.
For further information, please reach out to [email protected].