# Changes to help improve readability #1

Open · wants to merge 3 commits into `main`
`README.md`: 29 changes (3 additions, 26 deletions)
@@ -7,26 +7,7 @@ GPT Takes the Bar - Supplementary Information
* __Publication Date__: 2022-12-29

## Abstract
-```
-Nearly all jurisdictions in the United States require a professional license exam, commonly referred to as
-“the Bar Exam,” as a precondition for law practice. To even sit for the exam, most jurisdictions require
-that an applicant completes at least seven years of post-secondary education, including three years at an
-accredited law school. In addition, most test-takers also undergo weeks to months of further, exam-specific
-preparation. Despite this significant investment of time and capital, approximately one in five test-takers
-still score under the rate required to pass the exam on their first try. In the face of a complex task that
-requires such depth of knowledge, what, then, should we expect of the state of the art in “AI?” In this
-research, we document our experimental evaluation of the performance of OpenAI’s text-davinci-003 model,
-often-referred to as GPT-3.5, on the multistate multiple choice (MBE) section of the exam. While we find no
-benefit in fine-tuning over GPT-3.5’s zero-shot performance at the scale of our training data, we do find that
-hyperparameter optimization and prompt engineering positively impacted GPT-3.5’s zero-shot performance. For
-best prompt and parameters, GPT-3.5 achieves a headline correct rate of 50.3% on a complete NCBE MBE
-practice exam, significantly in excess of the 25% baseline guessing rate, and performs at a passing rate
-for both Evidence and Torts. GPT-3.5’s ranking of responses is also highly correlated with correctness;
-its top two and top three choices are correct 71% and 88% of the time, respectively, indicating very strong
-non-entailment performance. While our ability to interpret these results is limited by nascent scientific
-understanding of LLMs and the proprietary nature of GPT, we believe that these results strongly suggest that
-an LLM will pass the MBE component of the Bar Exam in the near future.
-```
+Nearly all jurisdictions in the United States require a professional license exam, commonly referred to as “the Bar Exam,” as a precondition for law practice. To even sit for the exam, most jurisdictions require that an applicant complete at least seven years of post-secondary education, including three years at an accredited law school. In addition, most test-takers also undergo weeks to months of further, exam-specific preparation. Despite this significant investment of time and capital, approximately one in five test-takers still score under the rate required to pass the exam on their first try. In the face of a complex task that requires such depth of knowledge, what, then, should we expect of the state of the art in “AI?” In this research, we document our experimental evaluation of the performance of OpenAI’s text-davinci-003 model, often referred to as GPT-3.5, on the multistate multiple choice (MBE) section of the exam. While we find no benefit in fine-tuning over GPT-3.5’s zero-shot performance at the scale of our training data, we do find that hyperparameter optimization and prompt engineering positively impacted GPT-3.5’s zero-shot performance. For best prompt and parameters, GPT-3.5 achieves a headline correct rate of 50.3% on a complete NCBE MBE practice exam, significantly in excess of the 25% baseline guessing rate, and performs at a passing rate for both Evidence and Torts. GPT-3.5’s ranking of responses is also highly correlated with correctness; its top two and top three choices are correct 71% and 88% of the time, respectively, indicating very strong non-entailment performance. While our ability to interpret these results is limited by nascent scientific understanding of LLMs and the proprietary nature of GPT, we believe that these results strongly suggest that an LLM will pass the MBE component of the Bar Exam in the near future.

### Table of Contents

@@ -35,15 +16,11 @@ an LLM will pass the MBE component of the Bar Exam in the near future.
* [Example Session Log](sample_session_log.html)

## Progression of Models over Time
-<picture>
-    <img src="https://github.com/mjbommar/gpt-takes-the-bar-exam/blob/main/accuracy_bar_chart_progression.png?raw=true" />
-</picture>
+![Bar chart showing the progression of GPT models over time](accuracy_bar_chart_progression.png)


## `text-davinci-003` Performance by Question Category
-<picture>
-    <img src="https://github.com/mjbommar/gpt-takes-the-bar-exam/blob/main/accuracy_bar_chart.png?raw=true" />
-</picture>
+![Bar chart displaying performance by question category](accuracy_bar_chart.png)



Binary file modified accuracy_bar_chart.png
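
The abstract above describes the evaluation at a high level: pose each MBE multiple-choice question to `text-davinci-003` in a zero-shot prompt, ask the model to rank the answer choices, and score the ranked letters against the answer key. The paper's actual prompts and harness are not part of this README, so the sketch below is an illustration only: the prompt wording, the `ask_mbe_question` and `top_k_accuracy` helpers, and the ranking-style completion are assumptions, written against the pre-1.0 `openai` Python client that was current at the paper's publication date.

```python
# Hypothetical sketch of the kind of harness the abstract describes; the
# prompt format and helper names are illustrative, not the authors' code.
import openai

openai.api_key = "sk-..."  # set your own API key

def ask_mbe_question(question: str, choices: dict[str, str]) -> list[str]:
    """Ask the model to rank all answer choices, most-preferred letter first."""
    options = "\n".join(f"({letter}) {text}" for letter, text in choices.items())
    prompt = (
        "Answer the following bar exam question by ranking ALL of the "
        "choices from most likely correct to least likely correct, as a "
        "comma-separated list of letters.\n\n"
        f"Question: {question}\n{options}\n\nRanking:"
    )
    response = openai.Completion.create(
        model="text-davinci-003",
        prompt=prompt,
        temperature=0.0,  # deterministic; temperature is one hyperparameter the paper tuned
        max_tokens=16,
    )
    raw = response["choices"][0]["text"]
    return [token.strip(" ()") for token in raw.strip().split(",")]

def top_k_accuracy(rankings: list[list[str]], answer_key: list[str], k: int) -> float:
    """Fraction of questions whose correct letter appears in the top k ranks."""
    hits = sum(answer in ranking[:k] for ranking, answer in zip(rankings, answer_key))
    return hits / len(answer_key)
```

Under this framing, the headline 50.3% figure corresponds to `top_k_accuracy(rankings, answer_key, 1)`, and the 71% and 88% figures quoted in the abstract correspond to `k=2` and `k=3`.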