## StarK <img src="./ironman.png" width="22" height="22" alt="stark" align=center/>

This repository contains code for the EMNLP 2022 paper [Sparse Teachers Can Be Dense with Knowledge](https://arxiv.org/abs/2210.03923).

**************************** **Updates** ****************************

<!-- Thanks for your interest in our repo! -->

* 10/19/22: We released our paper, code, and data. Check it out!

## Quick Links

 - [Overview](#overview)
 - [Getting Started](#getting-started)
   - [Requirements](#requirements)
   - [GLUE Data](#glue-data)
   - [Training & Evaluation](#training--evaluation)
 - [Bugs or Questions?](#bugs-or-questions)
 - [Citation](#citation)

## Overview

Recent advances in distilling pretrained language models have discovered that, besides the expressiveness of knowledge, student-friendliness should also be taken into consideration to realize a truly knowledgeable teacher. Based on a pilot study, we find that over-parameterized teachers can produce expressive yet student-unfriendly knowledge and are thus limited in overall knowledgeableness. To remove the parameters that cause student-unfriendliness, we propose a sparse teacher trick under the guidance of an overall knowledgeable score for each teacher parameter. The knowledgeable score is essentially an interpolation of the expressiveness and student-friendliness scores, and it aims to ensure that expressive parameters are retained while student-unfriendly ones are removed. Extensive experiments on the GLUE benchmark show that the proposed sparse teachers can be dense with knowledge and lead to students with compelling performance compared with a series of competitive baselines.
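
As a rough sketch (our notation, not necessarily the paper's exact form), the knowledgeable score of a teacher parameter can be read as a convex combination of the two scores, with a trade-off term λ that the scripts below expose as `--lam`:

```math
s_{\mathrm{know}} = \lambda \, s_{\mathrm{expr}} + (1 - \lambda) \, s_{\mathrm{fri}}
```

where `s_expr` denotes the expressiveness score and `s_fri` the student-friendliness score.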

## Getting Started

### Requirements

- PyTorch
- NumPy
- Transformers
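
The repo does not pin exact versions, so treat the following minimal setup as an assumption:

```bash
# Minimal environment sketch; versions are illustrative, not pinned by this repo.
pip install torch numpy transformers
```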

### GLUE Data

Get the GLUE data through this [link](https://github.com/nyu-mll/jiant/blob/master/scripts/download_glue_data.py) and put it in the corresponding directory. For example, the MRPC dataset should be placed in `datasets/mrpc`.
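
As a sketch, fetching MRPC could look like the following, assuming the linked downloader's usual `--data_dir`/`--tasks` flags and that the downloaded folder is renamed to the lowercase layout this repo expects:

```bash
# Grab the GLUE downloader and fetch MRPC (flags assumed from the linked script).
wget https://raw.githubusercontent.com/nyu-mll/jiant/master/scripts/download_glue_data.py
python download_glue_data.py --data_dir datasets --tasks MRPC
# This repo expects lowercase task directories, e.g., datasets/mrpc.
mv datasets/MRPC datasets/mrpc
```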

### Training & Evaluation

Training and evaluation are carried out through several scripts. We provide examples as follows.
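
The stages are typically chained; a plausible end-to-end pass on RTE, in the order the stages are described below, is:

```bash
# One possible end-to-end pipeline on RTE; each script is described below.
bash scripts/run_finetuning_rte.sh      # 1. finetune the dense teacher
bash scripts/run_pruning_rte.sh         # 2. prune the finetuned checkpoint
bash scripts/run_distillation_rte.sh    # 3. distill a trial student
bash scripts/run_sparsification_rte.sh  # 4. sparsify the teacher with the student's guidance
bash scripts/run_rewinding_rte.sh       # 5. rewind the student under the sparse teacher
```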

**Finetuning**

We provide an example of finetuning `bert-base-uncased` on RTE in `scripts/run_finetuning_rte.sh`. We explain some important arguments in the following; a sketch invocation follows the list:
* `--model_type`: Variant to use, which should be `ft` in this case.
* `--model_path`: Pretrained language model to start with, which should be `bert-base-uncased` in this case but can be any other you like.
* `--task_name`: Task to use, chosen from `rte`, `mrpc`, `stsb`, `sst2`, `qnli`, `qqp`, `mnli`, and `mnlimm`.
* `--data_type`: Input format to use, which defaults to `combined`.
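
A minimal sketch of the underlying invocation, assuming a hypothetical `run.py` entry point (check the provided script for the actual one; the flags are the documented ones above):

```bash
# Sketch only: "run.py" is an assumed entry-point name.
python run.py \
  --model_type ft \
  --model_path bert-base-uncased \
  --task_name rte \
  --data_type combined
```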

**Pruning**

We provide an example of pruning a finetuned checkpoint on RTE in `scripts/run_pruning_rte.sh`. The arguments there should be self-explanatory.
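
Running the provided script as-is should suffice:

```bash
# Prune the finetuned RTE checkpoint produced by the finetuning step.
bash scripts/run_pruning_rte.sh
```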

**Distillation**

We provide an example of distilling a finetuned teacher into a layer-dropped or parameter-pruned student on RTE in `scripts/run_distillation_rte.sh`. We explain some important arguments in the following; a sketch invocation follows the list:
* `--model_type`: Variant to use, which should be `kd` in this case.
* `--teacher_model_path`: Teacher model to use, which should be the path to the finetuned teacher checkpoint.
* `--student_model_path`: Student model to initialize, which should be the path to the pruned or finetuned teacher checkpoint, depending on how you would like to initialize the student.
* `--student_sparsity`: Student sparsity, which should be set (e.g., to 70) if you would like a parameter-pruned student and left blank otherwise.
* `--student_layer`: Student layer count, which should be set (e.g., to 4) if you would like a layer-dropped student.
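
For instance, distilling into a 4-layer (layer-dropped) student might look like the following; the entry-point name and checkpoint paths are assumptions, while the flags are the documented ones:

```bash
# Sketch: layer-dropped student initialized from the finetuned teacher.
# "run.py" and the paths are assumed.
python run.py \
  --model_type kd \
  --task_name rte \
  --teacher_model_path outputs/rte/finetuned_teacher \
  --student_model_path outputs/rte/finetuned_teacher \
  --student_layer 4
```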

**Teacher Sparsification**

We provide an example of sparsifying the teacher based on the student on RTE in `scripts/run_sparsification_rte.sh`. We explain some important arguments in the following; a sketch invocation follows the list:
* `--model_type`: Variant to use, which should be `kd` in this case.
* `--teacher_model_path`: Teacher model to use, which should be the path to the finetuned teacher checkpoint.
* `--student_model_path`: Student model to use, which should be the path to the distilled student checkpoint.
* `--student_sparsity`: Student sparsity, which should be set (e.g., to 70) if you would like a parameter-pruned student and left blank otherwise.
* `--student_layer`: Student layer count, which should be set (e.g., to 4) if you would like a layer-dropped student.
* `--lam`: The knowledgeableness trade-off term that balances expressiveness and student-friendliness.
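
A corresponding sketch, under the same assumptions as above (the `--lam` value is purely illustrative):

```bash
# Sketch: sparsify the teacher under the guidance of the distilled student.
# "run.py", the paths, and the lam value are assumptions.
python run.py \
  --model_type kd \
  --task_name rte \
  --teacher_model_path outputs/rte/finetuned_teacher \
  --student_model_path outputs/rte/distilled_student \
  --student_layer 4 \
  --lam 0.5
```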

**Rewinding**

We provide an example of rewinding the student on RTE in `scripts/run_rewinding_rte.sh`. We explain some important arguments in the following; a sketch invocation follows the list:
* `--model_type`: Variant to use, which should be `kd` in this case.
* `--teacher_model_path`: Teacher model to use, which should be the path to the sparsified teacher checkpoint.
* `--student_model_path`: Student model to initialize, which should be the path to the pruned or finetuned teacher checkpoint, depending on how you would like to initialize the student.
* `--student_sparsity`: Student sparsity, which should be set (e.g., to 70) if you would like a parameter-pruned student and left blank otherwise.
* `--student_layer`: Student layer count, which should be set (e.g., to 4) if you would like a layer-dropped student.
* `--lam`: The knowledgeableness trade-off term; here it is only used for folder names.
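
And a final sketch for rewinding, now pointing the teacher path at the sparsified checkpoint (same assumptions as above):

```bash
# Sketch: rewind the student, distilling from the sparsified teacher.
# "run.py" and the paths are assumptions; --lam only affects folder names here.
python run.py \
  --model_type kd \
  --task_name rte \
  --teacher_model_path outputs/rte/sparsified_teacher \
  --student_model_path outputs/rte/finetuned_teacher \
  --student_layer 4 \
  --lam 0.5
```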

## Bugs or Questions?

If you have any questions related to the code or the paper, feel free to email Chen (`[email protected]`). If you encounter any problems when using the code, or want to report a bug, you can open an issue. Please try to specify the problem with details so we can help you better and quicker!

## Citation

Please cite our paper if you use the code in your work:

```bibtex
@inproceedings{yang2022sparse,
  title={Sparse Teachers Can Be Dense with Knowledge},
  author={Yang, Yi and Zhang, Chen and Song, Dawei},
  booktitle={EMNLP},
  year={2022}
}
```