Set DataLad-next 1.0.0b2 dependency, adjust accordingly #23

Merged (7 commits, May 5, 2023)
28 changes: 8 additions & 20 deletions datalad_redcap/export_form.py
@@ -6,16 +6,10 @@
from typing import (
List,
Optional,
Tuple,
)

from redcap.methods.records import Records

from datalad.distribution.dataset import (
Dataset,
require_dataset,
resolve_path,
)
from datalad.interface.common_opts import (
nosave_opt,
save_message_opt,
Comment on lines 13 to 15
Member

Not related to this PR, but I want to challenge the need for this feature.

Collaborator Author

The need for importing the parameters from common_opts? Or the need for having --message and --nosave?

My idea for --nosave was that currently the form (and also report) export can write one or multiple forms into one file. --nosave was supposed to allow repeating the command to write several forms into several files, and capturing them in a single, manual save afterwards. But we could later add a many-forms-to-separate-files variant of the export command, and make the problem go away. Or we could do away with --nosave right now (should --message stay?) - is that what you suggest?

Member

Without deeply thinking about it again, I'd say this command should either

  • always save
  • never save

If it always saves, it should support setting a message (see datalad/datalad#3316)

However, this creates complications. See datalad/datalad#3896 for an entrypoint to some. A few more keywords are: recursive saving of superdatasets, etc.

Just saving also does not necessarily indicate the origin of the saved information.

Without wanting to propose a full concept: it may just be better to never save for this command.

  • if people want to export multiple forms, they run multiple commands and save once at the end -- no need for more code
  • if people want to do things recursively, they combine this command with for_each_dataset, and save at the end or individually, as desired
  • if people want to have provenance capture of where that information is coming from, they combine any of the above with datalad run

Collaborator Author

Interesting - I'll probably take it to a separate issue to think about it a bit more. FTR, the current behavior when saving is to use the provided message or craft one based on the form names. And the command would refuse operation if a file is in a subdataset, if that matters.

Member

It matters in that an operation in a hierarchical dataset tree already requires more saving than the command can provide.

@@ -35,9 +29,9 @@
EnsurePath,
EnsureStr,
EnsureURL,
DatasetParameter,
)
from datalad_next.constraints.dataset import (
DatasetParameter,
EnsureDataset,
)
from datalad_next.utils import CredentialManager
@@ -115,6 +109,8 @@ class ExportForm(ValidatedInterface):
message=EnsureStr(),
save=EnsureBool(),
),
validate_defaults=("dataset",),
tailor_for_dataset=({"outfile": "dataset"}),
)

@staticmethod
@@ -131,18 +127,10 @@ def __call__(
save: bool = True,
):

# work with a dataset object
if dataset is None:
# https://github.com/datalad/datalad-next/issues/225
ds = require_dataset(None)
else:
ds = dataset.ds

# Sort out the path in context of the dataset
res_outfile = resolve_path(outfile, ds=ds)
ds = dataset.ds

# refuse to operate if target file is outside the dataset or not clean
ok_to_edit, unlock = check_ok_to_edit(res_outfile, ds)
ok_to_edit, unlock = check_ok_to_edit(outfile, ds)
if not ok_to_edit:
yield get_status_dict(
action="export_redcap_form",
@@ -185,8 +173,8 @@ def __call__(

# unlock the file if needed, and write contents
if unlock:
ds.unlock(res_outfile)
with open(res_outfile, "wt") as f:
ds.unlock(outfile)
mslw marked this conversation as resolved.
with open(outfile, "wt") as f:
f.write(response)

# save changes in the dataset
@@ -195,7 +183,7 @@ def __call__(
message=message
if message is not None
else _write_commit_message(forms),
path=res_outfile,
path=outfile,
)

# yield successful result if we made it to here
30 changes: 10 additions & 20 deletions datalad_redcap/export_project_xml.py
@@ -3,10 +3,6 @@

from redcap.methods.project_info import ProjectInfo

from datalad.distribution.dataset import (
require_dataset,
resolve_path,
)
from datalad.interface.common_opts import (
nosave_opt,
save_message_opt,
@@ -142,13 +138,15 @@ class ExportProjectXML(ValidatedInterface):
dict(
url=EnsureURL(required=["scheme", "netloc", "path"]),
outfile=EnsurePath(),
dataset=EnsureDataset(installed=True, purpose="export redcap report"),
dataset=EnsureDataset(installed=True, purpose="export REDCap project XML"),
credential=EnsureStr(),
metadata_only=EnsureBool(),
survey_fields=EnsureBool(),
message=EnsureStr(),
save=EnsureBool(),
Member

Same remark as above. I believe this is a non-feature.

),
validate_defaults=("dataset",),
tailor_for_dataset=({"outfile": "dataset"}),
)

@staticmethod
@@ -165,22 +163,14 @@ def __call__(
save: bool = True,
):

# work with a dataset object
if dataset is None:
# https://github.com/datalad/datalad-next/issues/225
ds = require_dataset(None)
else:
ds = dataset.ds

# sort out the path in context of the dataset
res_outfile = resolve_path(outfile, ds=ds)
ds = dataset.ds

# refuse to operate if target file is outside the dataset or not clean
ok_to_edit, unlock = check_ok_to_edit(res_outfile, ds)
ok_to_edit, unlock = check_ok_to_edit(outfile, ds)
if not ok_to_edit:
yield get_status_dict(
action="export_redcap_report",
path=res_outfile,
path=outfile,
status="error",
message=(
"Output file status is not clean or the file does not "
@@ -217,8 +207,8 @@ def __call__(

# unlock the file if needed, and write contents
if unlock:
ds.unlock(res_outfile)
with open(res_outfile, "wt") as f:
ds.unlock(outfile)
Member

See comment above.

with open(outfile, "wt") as f:
f.write(response)

# save changes in the dataset
@@ -231,13 +221,13 @@ def __call__(
metadata_only=metadata_only,
survey_fields=survey_fields,
),
path=res_outfile,
path=outfile,
)

# yield successful result if we made it to here
yield get_status_dict(
action="export_redcap_project_xml",
path=res_outfile,
path=outfile,
status="ok",
)

32 changes: 11 additions & 21 deletions datalad_redcap/export_report.py
@@ -3,10 +3,6 @@

from redcap.methods.reports import Reports

from datalad.distribution.dataset import (
require_dataset,
resolve_path,
)
from datalad.interface.common_opts import (
nosave_opt,
save_message_opt,
@@ -92,11 +88,13 @@ class ExportReport(ValidatedInterface):
url=EnsureURL(required=["scheme", "netloc", "path"]),
report=EnsureStr(),
outfile=EnsurePath(),
dataset=EnsureDataset(installed=True, purpose="export redcap report"),
dataset=EnsureDataset(installed=True, purpose="export REDCap report"),
credential=EnsureStr(),
message=EnsureStr(),
save=EnsureBool(),
Member

See comment above.

)
),
validate_defaults=("dataset",),
tailor_for_dataset=({"outfile": "dataset"}),
)

@staticmethod
@@ -112,22 +110,14 @@ def __call__(
save: bool = True,
):

# work with a dataset object
if dataset is None:
# https://github.com/datalad/datalad-next/issues/225
ds = require_dataset(None)
else:
ds = dataset.ds

# sort out the path in context of the dataset
res_outfile = resolve_path(outfile, ds=ds)
ds = dataset.ds

# refuse to operate if target file is outside the dataset or not clean
ok_to_edit, unlock = check_ok_to_edit(res_outfile, ds)
ok_to_edit, unlock = check_ok_to_edit(outfile, ds)
if not ok_to_edit:
yield get_status_dict(
action="export_redcap_report",
path=res_outfile,
path=outfile,
status="error",
message=(
"Output file status is not clean or the file does not "
@@ -163,20 +153,20 @@ def __call__(

# unlock the file if needed, and write contents
if unlock:
ds.unlock(res_outfile)
Member

See comment above.

with open(res_outfile, "wt") as f:
ds.unlock(outfile)
with open(outfile, "wt") as f:
f.write(response)

# save changes in the dataset
if save:
ds.save(
message=message if message is not None else "Export REDCap report",
path=res_outfile,
path=outfile,
)

# yield successful result if we made it to here
yield get_status_dict(
action="export_redcap_report",
path=res_outfile,
path=outfile,
status="ok",
)
7 changes: 4 additions & 3 deletions datalad_redcap/tests/test_parameters.py
@@ -13,6 +13,7 @@
from datalad.api import export_redcap_form
from datalad.distribution.dataset import Dataset

from datalad_next.constraints import ConstraintError
from datalad_next.utils import chpwd

CSV_CONTENT = "foo,bar,baz\nspam,spam,spam"
@@ -24,7 +25,7 @@ def test_url_rejected(tmp_path, credman_filled, api_url):
credname, _ = credman_filled.query(realm=api_url)[0]
ds = Dataset(tmp_path).create(result_renderer="disabled")

with pytest.raises(ValueError):
with pytest.raises(ConstraintError):
with patch(
"datalad_redcap.export_form.Records.export_records",
return_value=CSV_CONTENT,
@@ -42,7 +43,7 @@ def test_dataset_not_found(tmp_path, credman_filled, api_url):
"""Test that nonexistent dataset is rejected by validation"""

# explicit path that isn't a dataset
with pytest.raises(ValueError):
with pytest.raises(ConstraintError):
with patch(
"datalad_redcap.export_form.Records.export_records",
return_value=CSV_CONTENT,
@@ -56,7 +57,7 @@ def test_dataset_not_found(tmp_path, credman_filled, api_url):

# no path given, pwd is not a dataset
with chpwd(tmp_path, mkdir=True):
with pytest.raises(ValueError):
with pytest.raises(ConstraintError):
with patch(
"datalad_redcap.export_form.Records.export_records",
return_value=CSV_CONTENT,
10 changes: 5 additions & 5 deletions datalad_redcap/tests/test_query.py
@@ -3,14 +3,14 @@
from datalad.api import redcap_query
from datalad_next.tests.utils import (
assert_result_count,
with_credential,
)

JSON_CONTENT = {"foo": "bar"}


def test_redcap_query_has_result(credman_filled, api_url):
with patch("datalad_redcap.query.MyInstruments.export_instruments", return_value=JSON_CONTENT):
assert_result_count(
redcap_query(url=api_url, result_renderer="disabled"), 1
)
with patch(
"datalad_redcap.query.MyInstruments.export_instruments",
return_value=JSON_CONTENT,
):
assert_result_count(redcap_query(url=api_url, result_renderer="disabled"), 1)
4 changes: 2 additions & 2 deletions setup.cfg
@@ -16,8 +16,8 @@ project_urls =
[options]
python_requires = >= 3.8
install_requires =
datalad >= 0.18.1
datalad-next @ git+https://github.com/datalad/datalad-next.git@main
datalad >= 0.18.2
datalad-next >= 1.0.0b2
pycap >= 2.3.0
prettytable >= 3.6
packages = find_namespace: