
Found an edge case where flawed generation of RDF triplets can break the evaluation script #8

Open
zachares opened this issue Mar 8, 2023 · 0 comments


zachares commented Mar 8, 2023

In the file Evaluation_script_json.py, in the function evaluaterefcand() on line 224, the inputs are supposed to be RDF triples, i.e. after running the lines below the function header, newreference = reference.split(' | ') and newcandidate = candidate.split(' | '), both newreference and newcandidate should be lists of strings of length 3. If either of them is not, downstream functions can fail when they should instead give performance values of 0 for that example. I therefore propose changing the logic on line 229 to

    if len(newreference) != 3:
        newreference = ['', '', '']
    if len(newcandidate) != 3:
        newcandidate = ['', '', '']

and changing the for loop below it to for idx in range(3):. I would have liked to open a PR with these suggested changes so they are easier to review, but I cannot.
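To illustrate, here is a minimal sketch of the proposed guard in isolation. The helper name normalize_triple is hypothetical (it is not part of Evaluation_script_json.py); it just reproduces the split-and-fallback logic described above so the behavior on malformed generations is easy to check:

```python
def normalize_triple(text: str) -> list:
    """Split an RDF triple string on ' | '. If the split does not
    yield exactly 3 parts (subject, predicate, object), fall back
    to a blank triple so downstream scoring yields 0 for the
    example instead of crashing."""
    parts = text.split(' | ')
    if len(parts) != 3:
        parts = ['', '', '']
    return parts

# A well-formed triple passes through unchanged.
print(normalize_triple('Alan_Bean | occupation | Test_pilot'))
# A flawed generation missing the separators falls back to blanks.
print(normalize_triple('Alan_Bean occupation Test_pilot'))
```

With this guard in place, the loop that compares reference and candidate parts can safely iterate with for idx in range(3), since both lists are guaranteed to have exactly three elements.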
