What is the version of seqeval I should use? #183

Michael-Evergreen opened this issue Mar 17, 2024 · 3 comments

Comments

@Michael-Evergreen commented Mar 17, 2024

Hello, thanks for the work!
I'm having trouble running your example to reproduce the checkpoint result using this command:
python examples/ner/evaluate_transformers_checkpoint.py data/ner_conll/en/test.txt studio-ousia/luke-large-finetuned-conll-2003 --cuda-device 0
It gave me this error:

File "/usr/local/lib/python3.10/dist-packages/seqeval/scheme.py", line 55, in __init__
    self.prefix = Prefixes[token[-1]] if suffix else Prefixes[token[0]]
KeyError: 'r'

I wonder what's wrong here. Possibly my version of seqeval doesn't match yours? It's not listed in your requirements.txt.
Also, could you give a fine-tuning example using Hugging Face? I'm aware of the example they give here https://github.com/huggingface/transformers/tree/main/examples/research_projects/luke but it performs quite badly on CoNLL-2003 (0.5 F1 score).

@ryokan0123 (Contributor)

You could try version 1.2.2, as listed in pyproject.toml:

seqeval = "1.2.2"
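
If you are installing with pip rather than Poetry, the equivalent pin would be:

pip install seqeval==1.2.2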

@Michael-Evergreen (Author)

Hi ryokan, I see it now, thanks for answering. As for the second part of my question: do you have any idea why the notebook only achieves an F1 of 0.5?

@ryokan0123 (Contributor)

I am not familiar with the implementation, but I suspect that format issues are causing the bad performance.

A common pitfall is that there are multiple tagging formats for NER, such as BIO, IOB1, IOB2...
If the model outputs, the evaluation data, and the evaluation script don't all use the same format, it can lead to unexpected results.
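
This is also the likely source of the KeyError above: seqeval's strict scheme parser reads the first character of each tag as its IOB prefix, so any tag that doesn't start with B, I, O, etc. fails the Prefixes lookup. A minimal sketch that reproduces the same failure mode, assuming seqeval 1.2.2 and a deliberately malformed tag:

from seqeval.metrics import f1_score
from seqeval.scheme import IOB1

# Tags are expected to look like "B-PER", "I-PER", or "O". A bare word
# such as "red" has no valid prefix, so scheme.py raises KeyError: 'r'
# on its first character, just like the traceback above.
f1_score([["red"]], [["red"]], mode="strict", scheme=IOB1)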

For example, our script assumes the iob1 format by default:

@click.option("--iob-scheme", type=str, default="iob1")
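
For reference, the original CoNLL-2003 distribution is tagged in IOB1, where a chunk may begin with an I- tag, while Hugging Face token-classification examples typically expect IOB2, where every chunk begins with B-. A minimal conversion sketch (plain Python; iob1_to_iob2 is a hypothetical helper, not part of this repo):

def iob1_to_iob2(tags):
    """Rewrite IOB1 tags as IOB2 so that every chunk starts with "B-"."""
    converted = []
    prev = "O"
    for tag in tags:
        # In IOB1 a chunk may open with "I-X"; in IOB2 it must open with "B-X".
        if tag.startswith("I-") and prev[2:] != tag[2:]:
            converted.append("B-" + tag[2:])
        else:
            converted.append(tag)
        prev = tag
    return converted

print(iob1_to_iob2(["I-PER", "I-PER", "O", "I-ORG"]))
# -> ['B-PER', 'I-PER', 'O', 'B-ORG']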
