forked from EleutherAI/lm-evaluation-harness

Commit
This commit does not belong to any branch on this repository, and may belong to a fork outside of the repository.

Add Latxa paper evaluation tasks for Basque (EleutherAI#1654)

* add basqueglue
* add eus_exams
* add eus_proficiency
* add eus_reading
* add eus_trivia
* run pre-commit

Showing 85 changed files with 933 additions and 0 deletions.
@@ -0,0 +1,72 @@
# BasqueGLUE

### Paper

Title: `BasqueGLUE: A Natural Language Understanding Benchmark for Basque`

Abstract: `https://aclanthology.org/2022.lrec-1.172/`

Natural Language Understanding (NLU) technology has improved significantly over the last few years, and multitask benchmarks such as GLUE are key to evaluating this improvement in a robust and general way. These benchmarks take into account a wide and diverse set of NLU tasks that require some form of language understanding, beyond the detection of superficial, textual clues. However, they are costly to develop and language-dependent, and therefore they are only available for a small number of languages. In this paper, we present BasqueGLUE, the first NLU benchmark for Basque, a less-resourced language, which has been elaborated from previously existing datasets and following similar criteria to those used for the construction of GLUE and SuperGLUE. We also report the evaluation of two state-of-the-art language models for Basque on BasqueGLUE, thus providing a strong baseline to compare upon. BasqueGLUE is freely available under an open license.

Homepage: `https://github.com/orai-nlp/BasqueGLUE`

Title: `Latxa: An Open Language Model and Evaluation Suite for Basque`

Abstract: `https://arxiv.org/abs/2403.20266`

This paper presents the use of BasqueGLUE for evaluating the performance of decoder models in Basque.

Homepage: `https://github.com/hitz-zentroa/latxa`

### Citation

```
@InProceedings{urbizu2022basqueglue,
  author    = {Urbizu, Gorka and San Vicente, Iñaki and Saralegi, Xabier and Agerri, Rodrigo and Soroa, Aitor},
  title     = {BasqueGLUE: A Natural Language Understanding Benchmark for Basque},
  booktitle = {Proceedings of the Language Resources and Evaluation Conference},
  month     = {June},
  year      = {2022},
  address   = {Marseille, France},
  publisher = {European Language Resources Association},
  pages     = {1603--1612},
  url       = {https://aclanthology.org/2022.lrec-1.172}
}

@misc{etxaniz2024latxa,
  title={Latxa: An Open Language Model and Evaluation Suite for Basque},
  author={Julen Etxaniz and Oscar Sainz and Naiara Perez and Itziar Aldabe and German Rigau and Eneko Agirre and Aitor Ormazabal and Mikel Artetxe and Aitor Soroa},
  year={2024},
  eprint={2403.20266},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```

### Groups and Tasks

#### Groups

* `basque-glue`: First version of the implementation.

#### Tasks

* `bhtc_v2`: Topic classification of news extracts with 12 categories.
* `bec2016eu`: Sentiment analysis on tweets about the campaign for the 2016 Basque elections.
* `vaxx_stance`: Stance detection on tweets around the anti-vaccine movement.
* `qnlieu`: Q&A NLI as in [glue/qnli](../glue/qnli).
* `wiceu`: Word-in-Context as in [super_glue/wic](../super_glue/wic).
* `epec_koref_bin`: Coreference detection as in [super_glue/wsc](../super_glue/wsc).

### Checklist

For adding novel benchmarks/datasets to the library:
* [ ] Is the task an existing benchmark in the literature?
* [ ] Have you referenced the original paper that introduced the task?
* [ ] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?

If other tasks on this dataset are already supported:
* [ ] Is the "Main" variant of this task clearly denoted?
* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?
* [ ] Have you noted which, if any, published evaluation setups are matched by this variant?
@@ -0,0 +1,16 @@
group: basque-glue
task: bec2016eu
dataset_path: orai-nlp/basqueGLUE
dataset_name: bec
output_type: multiple_choice
validation_split: validation
test_split: test
doc_to_text: "Testua: {{text}}\nGaldera: Nolako jarrera agertzen du aurreko testuak?\nErantzuna:"
doc_to_target: label
doc_to_choice: ['negatiboa', 'neutrala', 'positiboa']
metric_list:
  - metric: f1
    aggregation: !function utils.micro_f1_score
    higher_is_better: true
metadata:
  - version: 1.0
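The `doc_to_text` field above is a Jinja2 template rendered once per example. As a rough illustration of what the model actually sees (using plain string substitution instead of Jinja, and an invented tweet), the rendered prompt looks like:

```python
# Sketch of how the bec2016eu prompt template expands for one example.
# The tweet text below is made up for illustration; the harness renders
# the template with Jinja2, but plain replace() suffices for one field.
template = (
    "Testua: {{text}}\n"
    "Galdera: Nolako jarrera agertzen du aurreko testuak?\n"
    "Erantzuna:"
)

doc = {"text": "Hauteskundeak irabazi ditugu!"}

prompt = template.replace("{{text}}", doc["text"])
print(prompt)
```

The model's log-likelihood is then scored for each of the three continuations in `doc_to_choice`.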
@@ -0,0 +1,16 @@
group: basque-glue
task: bhtc_v2
dataset_path: orai-nlp/basqueGLUE
dataset_name: bhtc
output_type: multiple_choice
validation_split: validation
test_split: test
doc_to_text: "Testua: {{text}}\nGaldera: Zein da aurreko testuaren gaia?\nErantzuna:"
doc_to_target: label
doc_to_choice: ['Ekonomia', 'Euskal Herria', 'Euskara', 'Gizartea', 'Historia', 'Ingurumena', 'Iritzia', 'Komunikazioa', 'Kultura', 'Nazioartea', 'Politika', 'Zientzia']
metric_list:
  - metric: f1
    aggregation: !function utils.micro_f1_score
    higher_is_better: true
metadata:
  - version: 1.0
@@ -0,0 +1,16 @@
group: basque-glue
task: epec_koref_bin
dataset_path: orai-nlp/basqueGLUE
dataset_name: coref
output_type: multiple_choice
validation_split: validation
test_split: test
doc_to_text: !function utils.coref_doc_to_text
doc_to_target: label
doc_to_choice: ['ez', 'bai']
metric_list:
  - metric: acc
    aggregation: mean
    higher_is_better: true
metadata:
  - version: 1.0
@@ -0,0 +1,16 @@
group: basque-glue
task: qnlieu
dataset_path: orai-nlp/basqueGLUE
dataset_name: qnli
output_type: multiple_choice
validation_split: validation
test_split: test
doc_to_text: "{{question}}\n{{sentence}}\nGaldera: aurreko galderari erantzuten al dio emandako testuak?\nErantzuna:"
doc_to_target: label
doc_to_choice: ['bai', 'ez']
metric_list:
  - metric: acc
    aggregation: mean
    higher_is_better: true
metadata:
  - version: 1.0
@@ -0,0 +1,78 @@
import html
import re

# NOTE: load_metric is deprecated in recent `datasets` releases;
# `evaluate.load("f1")` is the drop-in replacement.
from datasets import load_metric


def general_detokenize(string):
    string = re.sub(r"\s+([.,;:!?)])", r"\1", string)
    string = re.sub(r"(\s+|^)\(\s+([^)]+)\s+\)", r"\1(\2)", string)
    string = re.sub(r"(\s+|^)\[\s+([^)]+)\s+\]", r"\1[\2]", string)
    string = re.sub(r'(\s+|^)"\s+([^"]+)\s+"', r'\1"\2"', string)
    string = re.sub(r"(\s+|^)'\s+([^']+)\s+'", r"\1'\2'", string)
    return string


def process_doc(string):
    string = html.unescape(string)
    string = general_detokenize(string)
    return string


def process_wic_docs(dataset):
    def _helper(doc):
        # There are some encoding issues in this subset: the text was
        # decoded as Latin-1 instead of UTF-8, so reverse that here.
        doc["sentence1"] = (
            process_doc(doc["sentence1"]).encode("latin-1").decode("utf-8")
        )
        doc["sentence2"] = (
            process_doc(doc["sentence2"]).encode("latin-1").decode("utf-8")
        )
        return doc

    return dataset.map(_helper)
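The encode/decode round-trip in `process_wic_docs` reverses classic mojibake: UTF-8 bytes that were mistakenly decoded as Latin-1. A minimal standalone demonstration of why the trick works (the sample string is invented):

```python
# A string containing a non-ASCII character, as it should appear.
original = "esanahia eta ñabardura"

# Simulate the corruption: UTF-8 bytes decoded as Latin-1 turn "ñ"
# (bytes C3 B1) into the two characters "Ã±".
garbled = original.encode("utf-8").decode("latin-1")

# The fix used above: re-encode as Latin-1 (recovering the raw UTF-8
# bytes unchanged, since Latin-1 maps code points 0-255 to themselves),
# then decode those bytes correctly as UTF-8.
repaired = garbled.encode("latin-1").decode("utf-8")
print(repaired == original)  # True
```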


def coref_doc_to_text(x):
    def _span_in_context(span_index, span_text):
        # Mark the span by wrapping its first and last tokens in asterisks.
        span_start = span_index
        span_end = span_start + len(span_text.split(" ")) - 1
        tokens[span_start] = f"*{tokens[span_start]}"
        tokens[span_end] = f"{tokens[span_end]}*"

    tokens = x["text"].split(" ")
    _span_in_context(x["span1_index"], x["span1_text"])
    _span_in_context(
        x["span2_index"] - 1, x["span2_text"]
    )  # span1_index is 0-based but span2_index appears to be 1-based
    context = process_doc(" ".join(tokens))
    span_1 = process_doc(x["span1_text"])
    span_2 = process_doc(x["span2_text"])
    text = (
        f"Testua: {context}\n"
        + f'Galdera: Aurreko testuan, "*{span_1}*" eta "*{span_2}*" gauza bera dira?\n'
        + "Erantzuna:"
    )
    return text
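`coref_doc_to_text` marks the two candidate mentions with asterisks before asking whether they refer to the same thing. A self-contained sketch of just the span-marking step (sentence and indices invented for illustration):

```python
def mark_span(tokens, span_start, span_text):
    # Wrap a whitespace-tokenized span in asterisks, in place,
    # mirroring the _span_in_context helper above.
    span_end = span_start + len(span_text.split(" ")) - 1
    tokens[span_start] = f"*{tokens[span_start]}"
    tokens[span_end] = f"{tokens[span_end]}*"


tokens = "Jonek liburua irakurri du eta berak gustuko du".split(" ")
mark_span(tokens, 0, "Jonek")   # 0-based index of "Jonek"
mark_span(tokens, 5, "berak")   # 0-based index of "berak"
print(" ".join(tokens))
# *Jonek* liburua irakurri du eta *berak* gustuko du
```

Multi-token spans get one asterisk before their first token and one after their last.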


# Measure F1 as in the benchmark repo:
# https://github.com/orai-nlp/BasqueGLUE/blob/main/eval_basqueglue.py


def micro_f1_score(items):
    f1_metric = load_metric("f1")
    golds, preds = list(zip(*items))
    f1_score = f1_metric.compute(references=golds, predictions=preds, average="micro")[
        "f1"
    ]
    return f1_score


def vaxx_f1_score(items):
    # Macro-average F1 over the 'aurka' (0) and 'alde' (2) classes only,
    # skipping the neutral class, as in the benchmark repo.
    f1_metric = load_metric("f1")
    golds, preds = list(zip(*items))
    f1_class = f1_metric.compute(
        references=golds, predictions=preds, labels=[0, 2], average=None
    )["f1"]
    f1_score = sum(f1_class) / len(f1_class)
    return f1_score
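A useful sanity check for `micro_f1_score`: in single-label multiclass classification, where every example receives exactly one prediction, micro-averaged F1 reduces to plain accuracy (pooled precision and recall are both TP/N). A dependency-free sketch with invented labels:

```python
def micro_f1(golds, preds):
    # Micro-averaging pools TP/FP/FN over all classes. With exactly one
    # prediction per example, every miss is simultaneously one FP (for
    # the predicted class) and one FN (for the gold class), so pooled
    # precision == pooled recall == accuracy.
    tp = sum(g == p for g, p in zip(golds, preds))
    fp = fn = len(golds) - tp
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)


golds = [0, 1, 2, 2, 1]
preds = [0, 2, 2, 2, 1]
print(micro_f1(golds, preds))  # 0.8, same as accuracy (4/5 correct)
```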
@@ -0,0 +1,16 @@
group: basque-glue
task: vaxx_stance
dataset_path: orai-nlp/basqueGLUE
dataset_name: vaxx
output_type: multiple_choice
validation_split: validation
test_split: test
doc_to_text: "Testua: {{text}}\nGaldera: Nolako jarrera agertzen du aurreko testuak txertoei buruz?\nErantzuna:"
doc_to_target: label
doc_to_choice: ['aurka', 'neutrala', 'alde']
metric_list:
  - metric: f1
    aggregation: !function utils.vaxx_f1_score
    higher_is_better: true
metadata:
  - version: 1.0
@@ -0,0 +1,17 @@
group: basque-glue
task: wiceu
dataset_path: orai-nlp/basqueGLUE
dataset_name: wic
output_type: multiple_choice
validation_split: validation
test_split: test
process_docs: !function utils.process_wic_docs
doc_to_text: "1. esaldia: {{sentence1}}\n2. esaldia: {{sentence2}}\nGaldera: Aurreko bi esaldietan, \"{{word}}\" hitzak esanahi berdina du?\nErantzuna:"
doc_to_target: label
doc_to_choice: ['ez', 'bai']
metric_list:
  - metric: acc
    aggregation: mean
    higher_is_better: true
metadata:
  - version: 1.0
@@ -0,0 +1,49 @@
# EusExams

### Paper

Title: Latxa: An Open Language Model and Evaluation Suite for Basque

Abstract: https://arxiv.org/abs/2403.20266

EusExams is a collection of tests designed to prepare individuals for Public Service examinations conducted by several Basque institutions, including the public health system Osakidetza, the Basque Government, the City Councils of Bilbao and Gasteiz, and the University of the Basque Country (UPV/EHU). Within each of these groups, there are different exams for public positions, such as administrative and assistant roles. Each multiple-choice question contains 2 to 4 choices (3.90 on average) and one correct answer. The dataset is mostly parallel, with 16k questions in Basque and 18k in Spanish.

Homepage: https://github.com/hitz-zentroa/latxa

### Citation

```
@misc{etxaniz2024latxa,
  title={Latxa: An Open Language Model and Evaluation Suite for Basque},
  author={Julen Etxaniz and Oscar Sainz and Naiara Perez and Itziar Aldabe and German Rigau and Eneko Agirre and Aitor Ormazabal and Mikel Artetxe and Aitor Soroa},
  year={2024},
  eprint={2403.20266},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```

### Groups and Tasks

#### Groups

* `eus_exams_eu`: The Basque version of the exams.
* `eus_exams_es`: The Spanish version of the exams.

#### Tasks

Basque and Spanish versions of the exams are available as separate tasks starting with `eus_exams_eu` and `eus_exams_es` respectively.

### Checklist

For adding novel benchmarks/datasets to the library:
* [ ] Is the task an existing benchmark in the literature?
* [ ] Have you referenced the original paper that introduced the task?
* [ ] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?

If other tasks on this dataset are already supported:
* [ ] Is the "Main" variant of this task clearly denoted?
* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?
* [ ] Have you noted which, if any, published evaluation setups are matched by this variant?
@@ -0,0 +1,67 @@
import argparse
import json

import requests
import yaml


# Get the list of dataset configs from the Hugging Face datasets server.
response = requests.get(
    "https://datasets-server.huggingface.co/splits?dataset=HiTZ%2FEusExams", timeout=5
)
response_json = json.loads(response.text)
CONFIGS = [split["config"] for split in response_json["splits"]]


def gen_config_yamls(output_dir: str, overwrite: bool) -> None:
    """
    Generate a yaml file for each config.

    :param output_dir: The directory to output the files to.
    :param overwrite: Whether to overwrite files if they already exist.
    """
    err = []
    for config in CONFIGS:
        file_name = f"eus_exams_{config}.yaml"
        try:
            with open(f"{output_dir}/{file_name}", "w" if overwrite else "x") as f:
                f.write("# Generated by utils.py\n")
                yaml.dump(
                    {
                        "include": "eus_exams_es"
                        if "eus_exams_es" in config
                        else "eus_exams_eu",
                        "dataset_name": config,
                        "task": f"eus_exams_{config}",
                    },
                    f,
                )
        except FileExistsError:
            err.append(file_name)

    if len(err) > 0:
        raise FileExistsError(
            "Files were not created because they already exist (use --overwrite flag):"
            f" {', '.join(err)}"
        )


def main() -> None:
    """Parse CLI args and generate config-specific yaml files."""
    parser = argparse.ArgumentParser()
    parser.add_argument(
        "--overwrite",
        default=False,
        action="store_true",
        help="Overwrite files if they already exist",
    )
    parser.add_argument(
        "--output-dir", default=".", help="Directory to write yaml files to"
    )
    args = parser.parse_args()

    gen_config_yamls(output_dir=args.output_dir, overwrite=args.overwrite)


if __name__ == "__main__":
    main()
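The `"w" if overwrite else "x"` open mode is what makes the `--overwrite` flag work: mode `"x"` creates the file only if it does not already exist, raising `FileExistsError` otherwise, which `gen_config_yamls` collects and reports. A small self-contained demonstration (file name invented, written to a temporary directory):

```python
import os
import tempfile

with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, "eus_exams_demo.yaml")

    # First write succeeds: "x" creates the file because it is absent.
    with open(path, "x") as f:
        f.write("task: demo\n")

    # A second "x" open raises FileExistsError instead of clobbering.
    try:
        with open(path, "x") as f:
            f.write("task: demo\n")
        raised = False
    except FileExistsError:
        raised = True

    # "w" (the --overwrite path) replaces the contents unconditionally.
    with open(path, "w") as f:
        f.write("task: overwritten\n")

    with open(path) as f:
        final = f.read()
```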
@@ -0,0 +1,18 @@
dataset_path: HiTZ/EusExams
dataset_name: null
validation_split: null
test_split: test
fewshot_split: test
process_docs: !function utils.process_docs
output_type: multiple_choice
doc_to_choice: ["A", "B", "C", "D"]
doc_to_target: answer
metric_list:
  - metric: acc
    aggregation: mean
    higher_is_better: true
  - metric: acc_norm
    aggregation: mean
    higher_is_better: true
metadata:
  version: 0.0
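This config reports both `acc` and `acc_norm`. In the harness, `acc` picks the choice with the highest log-likelihood, while `acc_norm` first divides each log-likelihood by the choice's byte length; for this task's single-letter choices A–D the normalization is a no-op, but it matters whenever candidate answers differ in length. A toy sketch with invented log-likelihoods:

```python
# Two candidate answers of very different lengths, with made-up
# log-likelihoods. Longer strings accumulate more negative log-prob,
# which length normalization compensates for.
choices = ["Bilbao", "Gasteiz hiriburua da"]
loglik = [-8.0, -12.0]

# acc: argmax of raw log-likelihood.
acc_pick = max(range(len(choices)), key=lambda i: loglik[i])

# acc_norm: argmax of log-likelihood per byte of the answer string.
acc_norm_pick = max(
    range(len(choices)),
    key=lambda i: loglik[i] / len(choices[i].encode("utf-8")),
)
print(acc_pick, acc_norm_pick)  # 0 1
```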
@@ -0,0 +1,4 @@
include: eus_exams
group:
  - eus_exams_es
doc_to_text: "Pregunta: {{question}}\nA: {{candidates[0]}}\nB: {{candidates[1]}}\nC: {{candidates[2]}}\nD: {{candidates[3]}}\nRespuesta:"
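The `utils.py` script above writes one small per-config file that includes this base. Based on the keys the script dumps, a generated file would look roughly like this (`<config>` stands in for a real dataset config name fetched from the datasets server):

```yaml
# Generated by utils.py
include: eus_exams_es
dataset_name: <config>
task: eus_exams_<config>
```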