
Commit f5b298a

Merge branch 'main' of github.com:frankaging/pyvene into main

2 parents 9004868 + aa586a8

File tree: 1 file changed (+16, -18 lines)


README.md

Lines changed: 16 additions & 18 deletions
@@ -12,26 +12,10 @@
**Getting Started:** [<img align="center" src="https://colab.research.google.com/assets/colab-badge.svg" />](https://colab.research.google.com/github/frankaging/pyvene/blob/main/tutorials/basic_tutorials/Basic_Intervention.ipynb) [**_pyvene_ 101**]

## Installation
-Install with pip on stable releases,
```bash
pip install pyvene
```

-or with our dev repo directly,
-```bash
-pip install git+https://github.com/frankaging/pyvene.git
-```
-
-or you can clone our repo,
-```bash
-git clone https://github.com/frankaging/pyvene.git
-```
-and import to your project as,
-```python
-from pyvene import pyvene
-_, tokenizer, gpt2 = pyvene.create_gpt2()
-```
-
## _Wrap_ , _Intervene_ and _Share_
You can intervene with supported models as,
```python
@@ -93,7 +77,6 @@ We see interventions are knobs that can mount on models. And people can share th
| Intermediate | [**Intervene Your Local Models**](tutorials/basic_tutorials/Add_New_Model_Type.ipynb) | [<img align="center" src="https://colab.research.google.com/assets/colab-badge.svg" />](https://colab.research.google.com/github/frankaging/pyvene/blob/main/tutorials/basic_tutorials/Add_New_Model_Type.ipynb) | Illustrates how to run this library with your own models |
| Advanced | [**Trainable Interventions for Causal Abstraction**](tutorials/advanced_tutorials/DAS_Main_Introduction.ipynb) | [<img align="center" src="https://colab.research.google.com/assets/colab-badge.svg" />](https://colab.research.google.com/github/frankaging/pyvene/blob/main/tutorials/advanced_tutorials/DAS_Main_Introduction.ipynb) | Illustrates how to train an intervention to discover causal mechanisms of a neural model |

-
## Causal Abstraction: From Interventions to Gain Interpretability Insights
Basic interventions are fun but we cannot make any causal claim systematically. To gain actual interpretability insights, we want to measure the counterfactual behaviors of a model in a data-driven fashion. In other words, if the model responds systematically to your interventions, then you start to associate certain regions in the network with a high-level concept. We also call this alignment search process with model internals.

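The first hunk above cuts off just where the README's own intervention example begins ("You can intervene with supported models as"), and the prose in the hunk directly above describes associating regions of the network with high-level concepts through interventions. For orientation, here is a minimal sketch of what such an intervention can look like. Only `pyvene.create_gpt2()` appears in this diff; the `IntervenableConfig`, `RepresentationConfig`, `VanillaIntervention`, and `IntervenableModel` names and the `"sources->base"` call convention are taken from pyvene's tutorials and are assumptions about this revision, not part of the commit.

```python
# Hedged sketch (assumed API): only pyvene.create_gpt2() appears in this diff;
# the intervention classes and call signature below follow pyvene's tutorials
# and may differ slightly in this revision of the library.
import pyvene
from pyvene import (
    IntervenableConfig,
    IntervenableModel,
    RepresentationConfig,
    VanillaIntervention,
)

# Wrapped GPT-2 plus tokenizer, as shown in the README diff.
_, tokenizer, gpt2 = pyvene.create_gpt2()

# Describe where to intervene: the MLP output at layer 0, using a vanilla
# (activation-swap) intervention.
config = IntervenableConfig(
    representations=[RepresentationConfig(0, "mlp_output")],
    intervention_types=VanillaIntervention,
)
intervenable = IntervenableModel(config, gpt2)

# Run a base prompt while patching in activations collected from a source
# prompt at token position 3.
base = tokenizer("The capital of Spain is", return_tensors="pt")
source = tokenizer("The capital of Italy is", return_tensors="pt")
_, counterfactual_outputs = intervenable(
    base,
    sources=[source],
    unit_locations={"sources->base": 3},
)
```

If the counterfactual output tracks the source prompt, the intervened site is causally implicated in that behavior, which is the kind of systematic response the surrounding README text refers to.
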
@@ -152,12 +135,27 @@ intervenable.train(
```
where you need to pass in a trainable dataset, and your customized loss and metrics function. The trainable interventions can later be saved on to your disk. You can also use `intervenable.evaluate()` your interventions in terms of customized objectives.

-
## Contributing to This Library
Please see [our guidelines](CONTRIBUTING.md) about how to contribute to this repository.

*Pull requests, bug reports, and all other forms of contribution are welcomed and highly encouraged!* :octocat:

+### Other Ways of Installation
+
+**Method 2: Install from the Repo**
+```bash
+pip install git+https://github.com/frankaging/pyvene.git
+```
+
+**Method 3: Clone and Import**
+```bash
+git clone https://github.com/frankaging/pyvene.git
+```
+and in parallel folder, import to your project as,
+```python
+from pyvene import pyvene
+_, tokenizer, gpt2 = pyvene.create_gpt2()
+```

## Related Works in Discovering Causal Mechanism of LLMs
If you would like to read more works on this area, here is a list of papers that try to align or discover the causal mechanisms of LLMs.
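The last hunk's header context (`intervenable.train(`) and the sentence about passing a trainable dataset, loss, and metrics refer to pyvene's training entry point. Below is a schematic sketch of that call shape; the argument order and the `evaluate`/`save` usage are assumptions for illustration, not this commit's verified signatures, and `intervenable` is the model from the sketch earlier on this page.

```python
# Schematic only: `intervenable` is the IntervenableModel from the earlier
# sketch, and `counterfactual_dataset` is a hypothetical torch Dataset of
# (base, source, label) examples. Argument names and order are assumptions.
from torch.utils.data import DataLoader

def compute_loss(outputs, labels):
    """Customized loss over the counterfactual (intervened) outputs."""
    ...

def compute_metrics(eval_preds, eval_labels):
    """Customized metric, e.g. interchange-intervention accuracy."""
    ...

train_dataloader = DataLoader(counterfactual_dataset, batch_size=16)

intervenable.train(
    train_dataloader,  # the trainable (counterfactual) dataset
    compute_loss,      # your customized loss function
    compute_metrics,   # your customized metrics function
)
intervenable.evaluate(train_dataloader, compute_metrics)  # assess against customized objectives
intervenable.save("./trained_intervention")  # trained interventions can be saved to disk
```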

0 commit comments
