# GerPT2

German large and small versions of GPT2:

- https://huggingface.co/benjamin/gerpt2
- https://huggingface.co/benjamin/gerpt2-large

See the [GPT2 model card](https://huggingface.co/gpt2) for considerations on limitations and bias. See the [GPT2 documentation](https://huggingface.co/transformers/model_doc/gpt2.html) for details on GPT2.

## Comparison to [dbmdz/german-gpt2](https://huggingface.co/dbmdz/german-gpt2)

I evaluated GerPT2 and GerPT2-large against the other German GPT2, [dbmdz/german-gpt2](https://huggingface.co/dbmdz/german-gpt2), on the [CC-100](http://data.statmt.org/cc-100/) dataset and on the German Wikipedia:

| Model | CC-100 (PPL) | Wikipedia (PPL) |
|-------------------|--------------|-----------------|
| dbmdz/german-gpt2 | 49.47 | 62.92 |
| GerPT2 | 24.78 | 35.33 |
| GerPT2-large | __16.08__ | __23.26__ |

See the script `evaluate.py` in the [GerPT2 GitHub repository](https://github.com/bminixhofer/gerpt2) for the code.
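
A minimal sketch of how a perplexity number of this kind can be computed with `transformers` (the tiny placeholder corpus below is an assumption for illustration, not the actual evaluation data or the `evaluate.py` script):

```python
import math

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("benjamin/gerpt2-large")
model = AutoModelForCausalLM.from_pretrained("benjamin/gerpt2-large")
model.eval()

# placeholder corpus; the reported numbers use CC-100 and German Wikipedia text
texts = [
    "Berlin ist die Hauptstadt von Deutschland.",
    "Der Hund ist müde und schläft auf dem Sofa.",
]

total_loss, total_tokens = 0.0, 0
with torch.no_grad():
    for text in texts:
        input_ids = tokenizer(text, return_tensors="pt").input_ids
        # with labels == input_ids the model shifts internally and returns the mean LM loss
        loss = model(input_ids, labels=input_ids).loss
        n_predicted = input_ids.size(1) - 1  # loss is averaged over the shifted tokens
        total_loss += loss.item() * n_predicted
        total_tokens += n_predicted

print("PPL:", math.exp(total_loss / total_tokens))
```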

## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

tokenizer = AutoTokenizer.from_pretrained("benjamin/gerpt2-large")
model = AutoModelForCausalLM.from_pretrained("benjamin/gerpt2-large")

prompt = "<your prompt>"

pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)
print(pipe(prompt)[0]["generated_text"])
```

Also, two tricks might improve the generated text:

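A minimal sketch of such generation-time adjustments, reusing `model`, `tokenizer` and `prompt` from the usage example above; prepending the EOS token to the prompt and blocking it during generation via `bad_words_ids` are assumptions for illustration, not necessarily the two tricks meant here:

```python
import torch

# assumption: mark the start of a document by prepending the EOS token to the prompt
input_ids = torch.tensor(
    [tokenizer.eos_token_id] + tokenizer.encode(prompt)
).unsqueeze(0)

output = model.generate(
    input_ids,
    do_sample=True,  # sample instead of greedy decoding
    max_length=100,
    # assumption: discourage the model from ending the text too early
    bad_words_ids=[[tokenizer.eos_token_id]],
)[0]
print(tokenizer.decode(output))
```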

## Training details

GerPT2-large was trained on the entire German data (67GB) from the [CC-100 Corpus](http://data.statmt.org/cc-100/), with weights initialized from the [English GPT2-large model](https://huggingface.co/gpt2-large).
It was trained with (a code sketch of this setup follows the list):

- a batch size of 256
- a OneCycle learning rate schedule with a maximum learning rate of 5e-3
- AdamW with a weight decay of 0.01
- 2 epochs

Training took roughly 12 days on 8 TPUv3 cores.
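
A minimal sketch of this optimizer and schedule in PyTorch (the `total_steps` value is a placeholder; the actual training loop lives in `train.py`):

```python
import torch
from transformers import AutoModelForCausalLM

# weights start from the English GPT2-large checkpoint
model = AutoModelForCausalLM.from_pretrained("gpt2-large")

total_steps = 100_000  # placeholder; depends on corpus size, batch size and epochs

optimizer = torch.optim.AdamW(model.parameters(), weight_decay=0.01)
scheduler = torch.optim.lr_scheduler.OneCycleLR(
    optimizer, max_lr=5e-3, total_steps=total_steps
)
# scheduler.step() is then called once per optimizer step during training
```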

To train GerPT2-large, follow these steps. Scripts are located in the [GitHub repository](https://github.com/bminixhofer/gerpt2):

0. Download and unzip training data from http://data.statmt.org/cc-100/.
1. Train a tokenizer using `prepare/train_tokenizer.py`. As training data for the tokenizer I used a random subset of 5% of the CC-100 data.
This WTE initialization helped a lot in a trial run I did, although I wasn't able to do a full comparison due to budget and time constraints. To use this WTE matrix, pass it to the training script via `wte_path` (a rough sketch of building such a matrix follows these steps). Credit to [this blogpost](https://medium.com/@pierre_guillou/faster-than-training-from-scratch-fine-tuning-the-english-gpt-2-in-any-language-with-hugging-f2ec05c98787) for the idea of initializing GPT2 from English weights.

3. Tokenize the corpus using `prepare/tokenize_text.py`. This generates files for train and validation tokens in JSON Lines format.
4. Run the training script `train.py`! `run.sh` shows how this was executed for the full run with config `configs/tpu_large.json`.
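
A rough sketch of how such a WTE matrix might be built from the English embeddings using token-level translations like `mitarbeiter -> staff` (the translation dictionary, the random fallback, and the output path below are illustrative assumptions, not the actual `prepare` scripts):

```python
import numpy as np
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

english_model = AutoModelForCausalLM.from_pretrained("gpt2-large")
english_tokenizer = AutoTokenizer.from_pretrained("gpt2-large")
german_tokenizer = AutoTokenizer.from_pretrained("benjamin/gerpt2-large")

english_wte = english_model.get_input_embeddings().weight.detach()

# hypothetical German -> English token translations, e.g. from a bilingual dictionary
translations = {"mitarbeiter": "staff"}

wte = torch.empty(len(german_tokenizer), english_wte.size(1))
torch.nn.init.normal_(wte, std=0.02)  # random fallback for tokens without a translation

for german_token, german_id in german_tokenizer.get_vocab().items():
    english_token = translations.get(german_token.lstrip("Ġ").lower())
    if english_token is not None:
        # average the English embeddings of the translation's subword tokens
        english_ids = english_tokenizer.encode(" " + english_token)
        wte[german_id] = english_wte[english_ids].mean(dim=0)

np.save("wte.npy", wte.numpy())  # hypothetical path to pass via `wte_path`
```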

## License

GerPT2-large is licensed under the MIT License.

## Acknowledgements

Thanks to [Hugging Face](https://huggingface.co) for awesome tools and infrastructure.
Huge thanks to [Artus Krohn-Grimberghe](https://twitter.com/artuskg) at [LYTiQ](https://www.lytiq.de/) for making this possible by sponsoring the resources used for training.
