|
12 | 12 | **Getting Started:** [<img align="center" src="https://colab.research.google.com/assets/colab-badge.svg" />](https://colab.research.google.com/github/frankaging/pyvene/blob/main/tutorials/basic_tutorials/Basic_Intervention.ipynb) [**_pyvene_ 101**] |
13 | 13 |
|
14 | 14 | ## Installation |
15 | | -Install with pip on stable releases, |
16 | 15 | ```bash |
17 | 16 | pip install pyvene |
18 | 17 | ``` |
19 | 18 |
|
20 | | -or with our dev repo directly, |
21 | | -```bash |
22 | | -pip install git+https://github.com/frankaging/pyvene.git |
23 | | -``` |
24 | | - |
25 | | -or you can clone our repo, |
26 | | -```bash |
27 | | -git clone https://github.com/frankaging/pyvene.git |
28 | | -``` |
29 | | -and import to your project as, |
30 | | -```python |
31 | | -from pyvene import pyvene |
32 | | -_, tokenizer, gpt2 = pyvene.create_gpt2() |
33 | | -``` |
34 | | - |
35 | 19 | ## _Wrap_, _Intervene_ and _Share_
36 | 20 | You can intervene on supported models as follows:
37 | 21 | ```python |
@@ -93,7 +77,6 @@ We see interventions are knobs that can mount on models. And people can share th |
93 | 77 | | Intermediate | [**Intervene Your Local Models**](tutorials/basic_tutorials/Add_New_Model_Type.ipynb) | [<img align="center" src="https://colab.research.google.com/assets/colab-badge.svg" />](https://colab.research.google.com/github/frankaging/pyvene/blob/main/tutorials/basic_tutorials/Add_New_Model_Type.ipynb) | Illustrates how to run this library with your own models | |
94 | 78 | | Advanced | [**Trainable Interventions for Causal Abstraction**](tutorials/advanced_tutorials/DAS_Main_Introduction.ipynb) | [<img align="center" src="https://colab.research.google.com/assets/colab-badge.svg" />](https://colab.research.google.com/github/frankaging/pyvene/blob/main/tutorials/advanced_tutorials/DAS_Main_Introduction.ipynb) | Illustrates how to train an intervention to discover causal mechanisms of a neural model | |
95 | 79 |
|
96 | | - |
97 | 80 | ## Causal Abstraction: From Interventions to Interpretability Insights
98 | 81 | Basic interventions are fun, but they do not let us make causal claims systematically. To gain actual interpretability insights, we want to measure a model's counterfactual behaviors in a data-driven fashion. In other words, if the model responds systematically to your interventions, you can begin to associate certain regions in the network with a high-level concept. We also call this process alignment search over model internals.
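
To make this concrete, here is a minimal sketch of one alignment-search step: run the same intervention over many counterfactual examples and count how often the intervened model produces the expected label. The `counterfactual_dataset` triples and the assumption of a language-modeling head (so logits are available) are illustrative assumptions, not part of the library's API:

```python
import torch

# Illustrative sketch: `counterfactual_dataset` is a hypothetical list of
# (base, source, expected_token) triples; `intervenable_gpt2` and `tokenizer`
# follow the wrapping example above, assuming a model with an LM head.
correct = 0
for base, source, expected_token in counterfactual_dataset:
    _, intervened_outputs = intervenable_gpt2(
        tokenizer(base, return_tensors="pt"),
        [tokenizer(source, return_tensors="pt")],
        {"sources->base": 4},  # swap representations at the 4th token
    )
    predicted_id = intervened_outputs.logits[0, -1].argmax(dim=-1)
    correct += int(tokenizer.decode(predicted_id).strip() == expected_token)

# High interchange accuracy suggests the intervened site encodes the concept.
print("interchange accuracy:", correct / len(counterfactual_dataset))
```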
99 | 82 |
|
@@ -152,12 +135,27 @@ intervenable.train( |
152 | 135 | ``` |
153 | 136 | where you need to pass in a trainable dataset along with your customized loss and metrics functions. The trained interventions can later be saved to your disk. You can also use `intervenable.evaluate()` to evaluate your interventions against customized objectives.
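
For example, the loss and metrics hooks might look like the following. This is a minimal sketch under assumed shapes (final-token classification); the exact signatures depend on your dataloader, and the save path is just an example:

```python
import torch

def compute_loss(logits, labels):
    # cross-entropy on the final-token prediction (assumed task shape)
    return torch.nn.functional.cross_entropy(logits[:, -1], labels)

def compute_metrics(eval_preds, eval_labels):
    # fraction of counterfactual labels the intervened model recovers
    return {"accuracy": (eval_preds == eval_labels).float().mean().item()}

# persist the trained interventions (directory path is an example)
intervenable.save("./my_trained_intervention")
```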
154 | 137 |
|
155 | | - |
156 | 138 | ## Contributing to This Library |
157 | 139 | Please see [our guidelines](CONTRIBUTING.md) about how to contribute to this repository. |
158 | 140 |
|
159 | 141 | *Pull requests, bug reports, and all other forms of contribution are welcomed and highly encouraged!* :octocat: |
160 | 142 |
|
| 143 | +### Other Installation Options
| 144 | + |
| 145 | +**Method 2: Install from the Repo** |
| 146 | +```bash |
| 147 | +pip install git+https://github.com/frankaging/pyvene.git |
| 148 | +``` |
| 149 | + |
| 150 | +**Method 3: Clone and Import** |
| 151 | +```bash |
| 152 | +git clone https://github.com/frankaging/pyvene.git |
| 153 | +``` |
| 154 | +and then, from a folder alongside the clone, import it into your project:
| 155 | +```python |
| 156 | +from pyvene import pyvene |
| 157 | +_, tokenizer, gpt2 = pyvene.create_gpt2() |
| 158 | +``` |
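
The import above assumes your script runs from a folder alongside the clone. If not, one standard-library workaround is to put the clone on your Python path first (adjust the path for your layout):

```python
import sys
sys.path.insert(0, "/path/to/pyvene")  # the cloned repo's root folder

from pyvene import pyvene
_, tokenizer, gpt2 = pyvene.create_gpt2()
```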
161 | 159 |
|
162 | 160 | ## Related Works in Discovering Causal Mechanisms of LLMs
163 | 161 | If you would like to read more work in this area, here is a list of papers that aim to align or discover the causal mechanisms of LLMs.
|