Commit

add citation
Cyrilvallez committed Sep 4, 2024
1 parent 6352e30 commit 00de6fc
Showing 2 changed files with 8 additions and 1 deletion.
7 changes: 7 additions & 0 deletions paper.bib
@@ -22,4 +22,11 @@ @article{gradio
author = {Abid, Abubakar and Abdalla, Ali and Abid, Ali and Khan, Dawood and Alfozan, Abdulrahman and Zou, James},
journal = {arXiv preprint arXiv:1906.02569},
year = {2019},
}

@article{llama3,
  title = {Llama 3 Model Card},
  author = {AI@Meta},
  year = {2024},
  url = {https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md},
}
2 changes: 1 addition & 1 deletion paper.md
@@ -36,7 +36,7 @@ textwiz-memory llama3-8B 1122 512

# State of the field

The most commonly used Python package for working with LLMs is `transformers` [@transformers]. However, as explained, every model usually has its own implementation and usage details, making it hard to run the same text generation pipeline out of the box across different models. Users are also left to navigate common pitfalls on their own. `TextWiz` goes one step further, providing a single common interface and alleviating these common issues. It is also much simpler for beginners to use. Here is the code snippet required to have a simple conversation using `transformers`:
The most commonly used Python package for working with LLMs is `transformers` [@transformers]. However, as explained, every model usually has its own implementation and usage details, making it hard to run the same text generation pipeline out of the box across different models. Users are also left to navigate common pitfalls on their own. `TextWiz` goes one step further, providing a single common interface and alleviating these common issues. It is also much simpler for beginners to use. Here is the code snippet required to have a simple conversation with Llama3 [@llama3] using `transformers`:

```py
from transformers import AutoTokenizer, AutoModelForCausalLM
```
