dr-gareth-roberts/insideLLMs
About
Python library for probing the inner workings of large language models. Systematically test LLMs' zero-shot performance on unseen logic problems, their propensity for bias, and their vulnerability to attacks involving recursion, reframing, and tokenization.
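
As a rough illustration of the kind of probe described above, the sketch below defines a minimal zero-shot logic check: it sends unseen syllogism-style problems to an arbitrary model callable and reports exact-match accuracy. This is a hypothetical example, not the insideLLMs API; the `run_logic_probe` helper, the sample problem set, and the `model` callable are all assumptions made purely for illustration.

```python
from typing import Callable, List, Tuple

# Hypothetical sketch of a zero-shot logic probe. This is NOT the insideLLMs
# API, only an illustration of the style of test the description mentions.

# Each problem is a (prompt, expected_answer) pair using made-up terms the
# model cannot have memorised, so success requires zero-shot reasoning.
LOGIC_PROBLEMS: List[Tuple[str, str]] = [
    ("All blargs are frums. Tix is a blarg. Is Tix a frum? Answer yes or no.", "yes"),
    ("No zarps are quils. Lem is a zarp. Is Lem a quil? Answer yes or no.", "no"),
]


def run_logic_probe(model: Callable[[str], str],
                    problems: List[Tuple[str, str]] = LOGIC_PROBLEMS) -> float:
    """Send each problem to the model zero-shot and return exact-match accuracy."""
    correct = 0
    for prompt, expected in problems:
        reply = model(prompt).strip().lower()
        if reply.startswith(expected):
            correct += 1
    return correct / len(problems)


if __name__ == "__main__":
    # Stand-in model that always answers "yes"; swap in a real LLM call here.
    dummy_model = lambda prompt: "yes"
    print(f"Zero-shot logic accuracy: {run_logic_probe(dummy_model):.2f}")
```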