Merge pull request #280 from stochasticai/glenn/mixtral
docs: update README.md
glennko committed Mar 31, 2024
2 parents afc00ba + 9e7cd2c · commit 6a0c18d
Showing 1 changed file with 37 additions and 36 deletions.
README.md: 37 additions & 36 deletions
@@ -2,7 +2,7 @@
<img src=".github/stochastic_logo_light.svg#gh-light-mode-only" width="250" alt="Stochastic.ai"/>
<img src=".github/stochastic_logo_dark.svg#gh-dark-mode-only" width="250" alt="Stochastic.ai"/>
</p>
-<h3 align="center">Build, customize and control your own personal LLMs</h3>
+<h3 align="center">Build, modify, and control your own personalized LLMs</h3>

<p align="center">
<a href="https://pypi.org/project/xturing/">
@@ -15,13 +15,14 @@
<img src="https://img.shields.io/badge/Chat-FFFFFF?logo=discord&style=for-the-badge"/>
</a>
</p>

<br>

___

-`xTuring` provides fast, efficient and simple fine-tuning of LLMs, such as LLaMA, GPT-J, Galactica, and more.
+`xTuring` provides fast, efficient, and simple fine-tuning of open-source LLMs, such as Mistral, LLaMA, GPT-J, and more.
By providing an easy-to-use interface for fine-tuning LLMs to your own data and application, xTuring makes it
-simple to build, customize and control LLMs. The entire process can be done inside your computer or in your
+simple to build, modify, and control LLMs. The entire process can be done inside your computer or in your
private cloud, ensuring data privacy and security.

With `xTuring` you can,
@@ -33,6 +34,38 @@ With `xTuring` you can,

<br>

+## ⚙️ Installation
+```bash
+pip install xturing
+```

+<br>

+## 🚀 Quickstart

+```python
+from xturing.datasets import InstructionDataset
+from xturing.models import BaseModel

+# Load the dataset
+instruction_dataset = InstructionDataset("./examples/models/llama/alpaca_data")

+# Initialize the model
+model = BaseModel.create("llama_lora")

+# Finetune the model
+model.finetune(dataset=instruction_dataset)

+# Perform inference
+output = model.generate(texts=["Why LLM models are becoming so important?"])

+print("Generated output by the model: {}".format(output))
+```

+You can find the data folder [here](examples/models/llama/alpaca_data).
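
After fine-tuning, you will usually want to persist the adapted weights. A minimal follow-on sketch, assuming an illustrative output directory; `model.save` writes the model to a local folder:

```python
# Persist the fine-tuned model to a local directory (path is illustrative)
model.save("./llama_lora_finetuned")
```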

+<br>

## 🌟 What's new?
We are excited to announce the latest enhancements to our `xTuring` library:
1. __`LLaMA 2` integration__ - You can use and fine-tune the _`LLaMA 2`_ model in different configurations: _off-the-shelf_, _off-the-shelf with INT8 precision_, _LoRA fine-tuning_, _LoRA fine-tuning with INT8 precision_, and _LoRA fine-tuning with INT4 precision_, using the `GenericModel` wrapper. Alternatively, you can use the `Llama2` class from `xturing.models` to test and fine-tune the model; a hedged sketch of these configurations follows the snippet below.
@@ -45,7 +78,7 @@ from xturing.models import BaseModel
model = BaseModel.create('llama2')

```
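
A minimal sketch of the five configurations named above. The `llama2_int8`, `llama2_lora`, `llama2_lora_int8`, and `llama2_lora_kbit` keys are assumptions that mirror xTuring's naming for its other LLaMA variants; they are not confirmed by the snippet above:

```python
from xturing.models import BaseModel

# Assumed model keys, following the naming pattern of the other LLaMA variants
model = BaseModel.create('llama2')            # off-the-shelf
model = BaseModel.create('llama2_int8')       # off-the-shelf with INT8 precision
model = BaseModel.create('llama2_lora')       # LoRA fine-tuning
model = BaseModel.create('llama2_lora_int8')  # LoRA fine-tuning with INT8 precision
model = BaseModel.create('llama2_lora_kbit')  # LoRA fine-tuning with INT4 precision
```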
-2. __`Evaluation`__ - Now you can evaluate any `Causal Language Model` on any dataset. The metrics currently supported is [`perplexity`](https://towardsdatascience.com/perplexity-in-language-models-87a196019a94).
+2. __`Evaluation`__ - Now you can evaluate any `Causal Language Model` on any dataset. The only metric currently supported is [`perplexity`](https://en.wikipedia.org/wiki/Perplexity).
```python
# Make the necessary imports
from xturing.datasets import InstructionDataset
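# The remaining lines of this example are collapsed in the diff above; what
# follows is a minimal completion sketch, not the commit's own code. The
# dataset path and the "gpt2" model key are assumptions reused from elsewhere
# in this README, with `model.evaluate` as the evaluation entry point this
# release note describes.
from xturing.models import BaseModel

# Load the dataset to evaluate on (path assumed)
dataset = InstructionDataset("./examples/models/llama/alpaca_data")

# Load the model to evaluate (model key assumed)
model = BaseModel.create("gpt2")

# Run the evaluation; the result is the model's perplexity on the dataset
perplexity = model.evaluate(dataset)
print(perplexity)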
@@ -118,38 +151,6 @@ For an extended insight, consider examining the [GenericModel working example](e

<br>

-## ⚙️ Installation
-```bash
-pip install xturing
-```

-<br>

-## 🚀 Quickstart

-```python
-from xturing.datasets import InstructionDataset
-from xturing.models import BaseModel

-# Load the dataset
-instruction_dataset = InstructionDataset("./alpaca_data")

-# Initialize the model
-model = BaseModel.create("llama_lora")

-# Finetune the model
-model.finetune(dataset=instruction_dataset)

-# Perform inference
-output = model.generate(texts=["Why LLM models are becoming so important?"])

-print("Generated output by the model: {}".format(output))
-```

-You can find the data folder [here](examples/models/llama/alpaca_data).

-<br>

## CLI playground
<img src=".github/cli-playground.gif" width="80%" style="margin: 0 1%;"/>
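
To launch the playground from a terminal, a hedged invocation sketch; the model-folder path is illustrative and assumes a model saved as in the Quickstart above:

```bash
# Start an interactive chat session with a locally saved model
xturing chat -m "./llama_lora_finetuned"
```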

