Refine the organization of `get_file()`? #48

Open
wibeasley opened this issue Jan 11, 2020 · 7 comments

@wibeasley (Contributor) commented Jan 11, 2020

@adam3smith, @kuriwaki, @pdurbin, and anyone else,

Should get_file() be refactored into multiple child functions? It seems like we're asking it to do a lot of things, including

  • retrieve by file id or by file name
  • retrieve single file or multiple files
  • in #35, @kuriwaki has the cool suggestion of returning a specific file format (like Stata)
  • in #35, I hijacked @kuriwaki's thread to suggest also returning R's native data.frame or tibble.
  • in PR #47, @adam3smith has a cool solution of downloading multiple files as a single zip (also discussed in #46)

I like all these capabilities, and I want to discuss organizational ideas with people so the package structure is (a) easy for us to develop, test, & maintain, and (b) useful and natural for users to learn and incorporate.

One possible approach:
A foundational function retrieves the file(s) by id; it is the workhorse that actually fetches the file. A second function accepts the file name (not the id); it essentially wraps the first function after calling get_fileid(). Both of these functions deal with a single file at a time.
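
Roughly, as a sketch -- the signatures are hypothetical, the endpoint path is my reading of the Dataverse access API, and get_fileid() is the package's existing name-to-id helper:

```r
library(httr)

# Workhorse: fetch a single file as a raw vector, by its numeric id.
# (Real code would also pass the API token, honor format = "original", etc.)
get_file_by_id <- function(fileid, server = Sys.getenv("DATAVERSE_SERVER")) {
  response <- GET(paste0("https://", server, "/api/access/datafile/", fileid))
  stop_for_status(response)
  content(response, as = "raw")
}

# Thin wrapper: resolve the file name to an id, then delegate to the workhorse.
get_file_by_name <- function(filename, dataset, ...) {
  get_file_by_id(get_fileid(dataset, filename), ...)
}
```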

Another pair of functions deals with multiple files (one by name, one by id). But these return lists, not a single object. They're essentially lapply() loops around their respective single-file siblings described above.
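
Hypothetically again:

```r
# The plural variants always return a (named) list of raw vectors,
# one element per requested file.
get_files_by_id <- function(fileids, ...) {
  stats::setNames(lapply(fileids, get_file_by_id, ...), fileids)
}

get_files_by_name <- function(filenames, dataset, ...) {
  stats::setNames(lapply(filenames, get_file_by_name, dataset = dataset, ...),
                  filenames)
}
```

That way the output type is predictable regardless of how many files are requested.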

To avoid breaking the package interface, maybe the existing get_file() keeps its current interface (which ambiguously accepts either file names or ids, and returns either a single file or a list of files), but we soft-deprecate it and encourage new code to use the more explicit functions. The guts of the function move out into the four new functions.
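
As a sketch -- the real get_file() signature has more arguments, and the message wording is only a placeholder:

```r
get_file <- function(file, dataset = NULL, ...) {
  .Deprecated(msg = paste(
    "get_file() still works, but new code may prefer the explicit",
    "get_file_by_id() / get_file_by_name() functions."))
  fetch_one <- function(f) {
    if (is.numeric(f)) get_file_by_id(f, ...) else get_file_by_name(f, dataset, ...)
  }
  # Preserve today's behavior: one file comes back as a raw vector,
  # several files come back as a list of raw vectors.
  if (length(file) > 1L) lapply(file, fetch_one) else fetch_one(file)
}
```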

Maybe the function names are

  1. the existing get_file() with an unchanged interface
  2. get_file_by_id() (the workhorse)
  3. get_file_by_name()
  4. get_files_by_id()
  5. get_files_by_name()
  6. get_tibble_by_id()
  7. get_tibble_by_name()
  8. get_zip_by_id()
  9. get_zip_by_name()

I'm pretty sure it would be easier to write tests that isolate problems (see the sketch below). The documentation becomes more verbose, but probably more straightforward.
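
For instance, with testthat (placeholder file ids, using the hypothetical function names from above):

```r
library(testthat)

test_that("get_file_by_id() returns a raw vector", {
  file_content <- get_file_by_id(396)  # placeholder id
  expect_true(is.raw(file_content))
})

test_that("get_files_by_id() returns a list of raw vectors", {
  files <- get_files_by_id(c(396, 397))
  expect_length(files, 2)
  expect_true(all(vapply(files, is.raw, logical(1))))
})
```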

You guys have more experience with Dataverse than I do, and a better sense of the use cases. Would this reorganization help users? If not, maybe we still split it into multiple functions, but keep functions 2-5 private (unexported).

Maybe I'm making this unnecessarily tedious, but I'm thinking that these download functions are the ones R users call most, and they're certainly the ones new users call. So if they leave a bad impression, the package is less likely to be used.

@wibeasley self-assigned this Jan 11, 2020
@pdurbin (Member) commented Jan 11, 2020

To be honest, I don't have a strong opinion. I'd suggest maintaining backward compatibility if you can.

@adam3smith (Contributor) commented Jan 11, 2020

Yes, keeping get_file() for backward compatibility makes sense.
I like adding get_zip_by_id() and I like adding a separate function for by_name -- the logic in get_file() is indeed rather tortured for that.

On the other hand, I think keeping get_file* and get_files* together keeps things simple, especially if you agree with my re-write of the multi-file download to use sequential downloads of individual files.

get_zip_by_name() and get_files_by_name() don't make sense -- you can only request multiple files by id, I believe.

I don't have a view about the tibble/data-frame functions.

So my suggestion then would be to reduce your list to:

  1. the existing get_file() with an unchanged interface
  2. get_file_by_id() (the workhorse)
  3. get_file_by_name()
  4. get_tibble_by_id()
  5. get_tibble_by_name()
  6. get_zip_by_id()
Given my re-write o

@adam3smith (Contributor) commented Jan 11, 2020

More thoughts after sleeping on it:

  1. I've come around to a separate get_files* function so that the type of output is predictable: get_file* always produces a raw vector, get_files* always a list of raw vectors. Easy enough to implement.
  2. As opposed to what I wrote above, while the API doesn't implement it, we could of course implement get_files and get_zip by name if we wanted to -- I'm not sure there's demand for that, and I don't think we should encourage it, so I'm still inclined not to do those.
  3. What we should implement, however, is get_file_by_doi -- one of the principal reasons for implementing file-level DOIs was reproducibility workflows.
  4. I wonder if we could provide a write_files() function. It's a bit tricky to write the file with its right filename (especially when requesting the original), and it's especially tricky to get the folder structure written if you're requesting individual files, so this seems like it'd be a huge help. A rough sketch of what I mean follows this list.
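
Assuming dataset_files() reports a label and an optional directoryLabel for each file (as I believe the native API does), and borrowing the hypothetical get_file_by_id() from above:

```r
# Hypothetical write_files(): recreate each file, with its folder
# structure, underneath `path`.
write_files <- function(dataset, path = ".", ...) {
  for (f in dataset_files(dataset, ...)) {
    dir <- if (is.null(f$directoryLabel)) path else file.path(path, f$directoryLabel)
    dir.create(dir, recursive = TRUE, showWarnings = FALSE)
    writeBin(get_file_by_id(f$dataFile$id, ...), file.path(dir, f$label))
  }
  invisible(NULL)
}
```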
@kuriwaki (Member) commented Jan 11, 2020

Thanks for starting this - I like the general idea and would be happy to contribute.

I don't have set thoughts on specific names yet, but here are some general reactions:

  1. We can start with fewer functions rather than more.
  2. Personally, I had expected the existing get_dataset() to do what get_file() performs. Instead, get_dataset() gets metadata. Probably hard to change for backward compatibility, though.
  3. I think Jenny Bryan has a tweet pointing out that a common package abbreviation at the start of a class of functions is good for auto-complete. For example, stringr, purrr, and googlesheets function names are nice because if a user starts typing str_, map_, or gs_, the RStudio auto-complete lists all the relevant functions. Very useful when looking for a function whose exact name you don't remember.
@wibeasley (Contributor, Author) commented Jan 13, 2020

@pdurbin & @adam3smith

maintaining backward compatibility if you can.
...
keeping get_file() for backward compatibility makes sense.

Cool. Then let's keep this as a priority.

@adam3smith

I like adding get_zip_by_id() and I like adding a separate function for by_name -- the logic in get_file() is indeed rather tortured for that.

Thanks for the extra perspective.

...so that the type of output is predictable...

Cool.

...not sure there's demand for that, and I don't think we should encourage it, so I'm still inclined not to do those...

I think you're in agreement with @kuriwaki's later statement "We can start by fewer functions rather than more". Sounds good to me.

...get_file_by_doi()...

You're right. Good point. That would encourage/emphasize reproducibility & transparency.

...provide a write_files() function...

One that writes straight to disk, and isn't immediately available as a data.frame ...until you read from the disk? I wouldn't have thought that comes up a lot, but I'll believe you guys since you have a better feel for how people use this package than I do.

Would you be o.k. with a function name like download_files() instead of write_files()? Or maybe some third option is better? When I hear "write" in the context of database servers, I initially think of persisting something on the server's disk, rather than the client's. That's the convention I used in REDCapR's redcap_write(), and what DBI/odbc uses in dbWriteTable().


@kuriwaki

...I like the general idea and would be happy to contribute.

Sweet. Thanks for offering. I'd like us to start once we get some more test coverage. It sounds like the three of us roughly agree on (a) which important functions should be exposed initially, and (b) how the function guts could be reorganized.

...a common package abbreviation at the start of a class of functions is good for auto-complete...

I'm on the fence about this. I like Jenny's insight and advice almost all the time. But I don't like putting the package name in the function name, because I think almost all function calls should be qualified with the package -- just like in almost every other real programming language. That's what I urge my team to do in our practices.

However, I broke this rule ~7 years ago with the REDCapR package because I didn't believe many of the users could keep a function called write() straight without it.

Given that this package's functions aren't already named like that, I'm inclined not to add a prefix like 'dv_'. But I could be convinced otherwise if people felt strongly.

I had expected the existing get_dataset() to do what get_file() performs. Instead, get_dataset() gets metadata. Probably hard to change for backward compatibility, though.

I remember having the same reaction myself. After a conversation with @pdurbin two weeks ago, I added a paragraph to our OU team's documentation noting that a "dataset" isn't always a rectangle, like I've grown accustomed to in the R world.

Let's make sure the documentation for the two families of functions has a good "See Also" section that points to the other family. Any other ideas on how to make this clearer to users?
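
For the "See Also" idea, maybe something along these lines in each family's roxygen block (assuming roxygen2 markdown links, and the hypothetical names above):

```r
#' @seealso [get_dataset()] retrieves a dataset's *metadata*;
#'   [get_file_by_id()] and friends retrieve the file *contents*.
```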

@adam3smith (Contributor) commented Jan 13, 2020

I think we're almost at a consensus; nice.

One that writes straight to disk, and isn't immediately available as a data.frame ...until you read from the disk? I wouldn't have thought that comes up a lot, but I'll believe you guys since you have a better feel for how people use this package than I do.

I don't actually know how commonly this would be used, but here are my two use cases:

  1. We use this on my team to download the whole dataset for curation. (I've just published a curation package, https://github.com/QualitativeDataRepository/dvcurator, if you're curious; the README describes the workflow.)
  2. In a reproducibility package where the R script loads different files from a folder structure, I don't think there's an alternative to writing them to disk, is there?

Would you be o.k. with a function name like download_files() instead of write_files()? Or maybe some third option is better? When I hear "write" in the context of database servers, I initially think of persisting something on the server's disk, rather than the client's. That's the convention I used in REDCapR's redcap_write(), and what DBI/odbc uses in dbWriteTable().

I have no strong feelings about the function names, but your write vs. download distinction sounds plausible to me. utils::download.file() saves files from the internet and writes them to disk, so that, too, suggests "download" is the better term. download_dv_files() or download_dataset_files() would be longer but more distinctive; again, though, no strong feelings (and the rest of the package's functions don't have particularly distinctive names either).

@pdurbin (Member) commented Jan 13, 2020

  1. I've just published a curation package, https://github.com/QualitativeDataRepository/dvcurator, if you're curious; the README describes the workflow.

@adam3smith I'd be interested in a talk or a screencast about this. We could put it on DataverseTV. 😄

@wibeasley you should come to the Dataverse Community Meeting in June: https://projects.iq.harvard.edu/dcm2020 (and @adam3smith should come again). 😄
