
[REVIEW]: NiBetaSeries: task related correlations in fMRI #1295

Open · whedon opened this issue Mar 4, 2019 · 10 comments

@whedon (Collaborator) commented Mar 4, 2019

Submitting author: @jdkent (James Kent)
Repository: https://github.com/HBClab/NiBetaSeries
Version: v0.2.3
Editor: @arokem
Reviewer: @snastase
Archive: Pending

Status

Status badge code:

HTML: <a href="http://joss.theoj.org/papers/a0d2ec4e06309e9b1e21dc302c396290"><img src="http://joss.theoj.org/papers/a0d2ec4e06309e9b1e21dc302c396290/status.svg"></a>
Markdown: [![status](http://joss.theoj.org/papers/a0d2ec4e06309e9b1e21dc302c396290/status.svg)](http://joss.theoj.org/papers/a0d2ec4e06309e9b1e21dc302c396290)

Reviewers and authors:

Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)

Reviewer instructions & questions

@snastase, please carry out your review in this issue by updating the checklist below. If you cannot edit the checklist please:

  1. Make sure you're logged in to your GitHub account
  2. Be sure to accept the invite at this URL: https://github.com/openjournals/joss-reviews/invitations

The reviewer guidelines are available here: https://joss.theoj.org/about#reviewer_guidelines. Any questions/concerns please let @arokem know.

Please try to complete your review in the next two weeks.

Review checklist for @snastase

Conflict of interest

Code of Conduct

General checks

  • Repository: Is the source code for this software available at the repository URL?
  • License: Does the repository contain a plain-text LICENSE file with the contents of an OSI approved software license?
  • Version: Does the release version given match the GitHub release (v0.2.3)?
  • Authorship: Has the submitting author (@jdkent) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?

Functionality

  • Installation: Does installation proceed as outlined in the documentation?
  • Functionality: Have the functional claims of the software been confirmed?
  • Performance: If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.)

Documentation

  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • Installation instructions: Is there a clearly-stated list of dependencies? Ideally these should be handled with an automated package management solution.
  • Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems)?
  • Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
  • Automated tests: Are there automated tests or manual steps described so that the function of the software can be verified?
  • Community guidelines: Are there clear guidelines for third parties wishing to (1) contribute to the software, (2) report issues or problems with the software, and (3) seek support?

Software paper

  • Authors: Does the paper.md file include a list of authors with their affiliations?
  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • References: Do all archival references that should have a DOI list one (e.g., papers, datasets, software)?
@whedon (Collaborator, Author) commented Mar 4, 2019

Hello human, I'm @whedon, a robot that can help you with some common editorial tasks. @snastase it looks like you're currently assigned as the reviewer for this paper 🎉.

⭐️ Important ⭐️

If you haven't already, you should seriously consider unsubscribing from GitHub notifications for this (https://github.com/openjournals/joss-reviews) repository. As a reviewer, you're probably currently watching this repository, which means that with GitHub's default behaviour you will receive notifications (emails) for all reviews 😿

To fix this do the following two things:

  1. Set yourself as 'Not watching' at https://github.com/openjournals/joss-reviews

  2. You may also like to change your default settings for watching repositories in your GitHub profile: https://github.com/settings/notifications

For a list of things I can do to help you, just type:

@whedon commands
@whedon (Collaborator, Author) commented Mar 4, 2019

Attempting PDF compilation. Reticulating splines etc...
@whedon (Collaborator, Author) commented Mar 4, 2019

@arokem commented Mar 25, 2019

@snastase: have you had a chance to take a look?

@snastase (Collaborator) commented Mar 26, 2019

@arokem @jdkent Sorry for losing track of this! Working through it now...

@snastase (Collaborator) commented Mar 27, 2019

Okay, I think I have a grip on this now—nice contribution and great to see things like this wrapped up into BIDS Apps! To summarize, this project aims to compute beta-series correlations on BIDS-compliant data preprocessed using fMRIPrep. I have some general comments and then a laundry list of smaller-scale suggestions. For the really nit-picky stuff, I can make a PR if you don't mind. Also, bear in mind that I'm more neuroscientist than software developer per se, so apologies if any of these comments are way off the mark!

General comments:
First of all, the PDF and introductory documentation (betaseries.html) could be a little clearer and more concise. For example, it wasn't immediately obvious to me what the actual output of the tool is... can I get the actual series of estimated betas? Or only the inter-ROI beta-series correlation matrices? It might be nice to simply get the beta series and forget the correlations (e.g., for a downstream MVPA analysis). Is it absolutely necessary to provide an atlas to define ROIs? If I don't provide an atlas, will it compute beta-series correlations between all voxels (computationally intensive)? Basically, what I'm trying to say here is that it wasn't obvious what to expect in terms of input–output (namely output), which moving parts are necessary, and how much flexibility there is. I could figure these things out by trial and error, but it seems useful to lay this out a bit more explicitly in the documentation.

I'm a little unsure about the Jupyter Notebook-style tutorial walkthrough in the "How to run" documentation. If I were planning to run this, I'd likely be running it from the Linux command line (maybe via a scheduler like Slurm on my server), not invoking it via Python's subprocess. You jump through a bunch of hoops with Python just to download and modify the example data, and only one cell of the tutorial actually runs nibs. I think this material is useful, particularly for users to see how to modify idiosyncratic OpenNeuro datasets, but I'm not sure there's enough focus on the nibs invocation. An alternative approach would be to upload the minimal dataset, with all necessary modifications, to figshare or something similar and download that in the tutorial. This would avoid spending so much of the tutorial on data manipulation and free up a few more cells for describing the nibs command-line invocation and its various options.

This brings up the point that, if the preprocessed BIDS derivatives (e.g., in *_events.tsv) are fairly standard, should we expect nibs to be able to handle them internally? For example, you manually rename some columns and reassign "Yes"/"No" values to 1 and 0 to satisfy its assumptions. Another approach would be to build some optional arguments into the nibs CLI that allow the user to specify column names and acceptable value names (and map them to numerical values if need be). For example, when I run nibs, I might have a command-line argument specifying that conditions is the column name indicating trial types and that it should have three possible values (neutral, congruent, and incongruent), and another argument specifying the correct column and the mapping {'Yes': 1, 'No': 0}. I'm not necessarily saying this should be the way things are, just offering this as an alternative approach. I'm genuinely curious whether this would be feasible and whether it's better or worse in terms of software design.
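
The manual events-file edits described above amount to a small table transformation. As a hypothetical sketch of what such a column-name/value-mapping option might do internally (the column names "condition" and "correct" and the example rows are illustrative stand-ins, not the actual dataset's):

```python
import pandas as pd

# Illustrative events table; in practice this would be read from a
# BIDS *_events.tsv with pd.read_csv(path, sep="\t").
events = pd.DataFrame({
    "onset": [0.0, 2.5, 5.0],
    "duration": [1.0, 1.0, 1.0],
    "condition": ["neutral", "congruent", "incongruent"],
    "correct": ["Yes", "No", "Yes"],
})

# Rename the user-specified trial-type column to the BIDS-standard
# "trial_type", and map "Yes"/"No" responses to 1/0 — mirroring what a
# CLI option like the one proposed above might do internally.
events = events.rename(columns={"condition": "trial_type"})
events["correct"] = events["correct"].map({"Yes": 1, "No": 0})
```

Exposing the column name and the value mapping as arguments would let nibs absorb these idiosyncrasies instead of requiring users to pre-edit their events files.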

Specific comments and questions:

  • Would it be worth making a Singularity image for this? For example, I'm almost exclusively running fMRIPrep and MRIQC apps on a server via Singularity because I don't have installation privileges. I suppose the alternative is telling users to pip install nibetaseries in a conda environment or something along those lines.

  • Speaking of installation, if you're degenerate like me and still have a Python 2.7 installation on your machine, pip install nibetaseries will try to install it under 2.7 and break. Something like pip3 install nibetaseries or python3 -m pip install nibetaseries works more reliably. I would slightly expand the installation documentation page, specifying the required Python version, etc.

  • It might be worth pointing users to Binder (https://mybinder.org) for running the tutorial Jupyter Notebook interactively.

  • Do I understand correctly that, based on the atlas provided, beta series are computed per voxel and then averaged across voxels within each ROI, as opposed to averaging the time series across voxels and then computing the beta series? Is there a reason (or a reference) for taking this approach?

  • What are the recommendations for high-/low-pass filtering? Is there a precedent in the literature for any recommended values? In fact, the documentation mentions both low- and high-pass filtering, but I only see an option for supplying a low-pass filter in the usage documentation.

  • Some of the multi-word command-line arguments use "_" and some use "-"... I would just use underscores (e.g., --atlas_img instead of --atlas-img) for consistency with the other arguments (e.g., --session_label, --hrf_model).

  • Is this backward-compatible with older-style BIDS derivatives from fMRIPrep, e.g., filenames with and without the "desc-" entity?

  • One issue I've encountered with people running apps like fMRIPrep is confusion about the "work" directory; namely, whether it can be safely deleted, whether it should be deleted if re-running from scratch, etc. Would be good to make a note of this in the documentation.

  • In the documentation and PDF, I would make it a little more explicit that the "beta" is a colloquial term for the parameter estimates (or regression coefficients) in a GLM.

  • It should be made abundantly clear in the documentation that this is running the "LSS" version of the analysis. Are there future plans to allow for optionally running the "LSA" version?

  • The “How to run NiBetaSeries” section of the documentation unpacks strangely and doesn’t allow the user to scroll down through the headings; at first I thought the download links were the only thing there. Clicking on the subheadings in the table of contents, however, brings you to a separate page with the walkthrough. Is there a way to combine these so that the download links simply appear at the top of the same page as the walkthrough?

  • Under the "References" heading in the betaseries.html documentation, I would include the full reference text and DOI links.

  • Cite the paper for the OpenNeuro dataset you use in the tutorial documentation:
    Verstynen, T. D. (2014). The organization and dynamics of corticostriatal pathways link the medial orbitofrontal cortex to future behavioral responses. Journal of Neurophysiology, 112(10), 2457–2469. https://doi.org/10.1152/jn.00221.2014

  • I would cite Abdulrahman & Henson (2015) in the PDF.

  • I would also cite the BIDS Apps paper in the PDF as this is a BIDS App:
    Gorgolewski, K. J., Alfaro-Almagro, F., Auer, T., Bellec, P., Capotă, M., Chakravarty, M. M., ... & Poldrack, R. A. (2017). BIDS apps: Improving ease of use, accessibility, and reproducibility of neuroimaging data analysis methods. PLOS Computational Biology, 13(3), e1005209. https://doi.org/10.1371/journal.pcbi.1005209

  • In the PDF, update the fMRIPrep reference by Esteban et al. to the Nature Methods version:
    Esteban, O., Markiewicz, C., Blair, R. W., Moodie, C., Isik, A. I., Erramuzpe, A., Kent, J. D., Goncalves, M., DuPre, E., Snyder, M., Oya, H., Ghosh, S., Wright, J., Durnez, J., Poldrack, R., & Gorgolewski, K. J. (2019). FMRIPrep: a robust preprocessing pipeline for functional MRI. Nature Methods, 16, 111–116.

  • Code coverage is only 70%... might be worth trying to increase this.
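
The Python 2.7 installation pitfall from the list above can also be guarded against at runtime. This is purely an illustrative sketch of such a guard, not code from the nibetaseries package itself:

```python
import sys

def running_python3():
    """Return True when the interpreter is Python 3 or newer.

    NiBetaSeries targets Python 3, so a fail-fast guard like this
    (hypothetical; not part of the actual package) surfaces the problem
    immediately instead of letting pip install the package under 2.7.
    """
    return sys.version_info.major >= 3

if not running_python3():
    raise RuntimeError(
        "Python 3 is required; try: python3 -m pip install nibetaseries"
    )
```

A `python_requires=">=3"` constraint in the package metadata would achieve the same effect at install time, which is arguably the cleaner fix.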

@jdkent commented Apr 1, 2019

Thank you so much for the in-depth review @snastase! I will be working on addressing these comments via issues/pull requests this week.

@danielskatz commented May 20, 2019

👋 @jdkent - What's going on with this submission? Are you working on the comments? Or maybe you have worked on them already, and just need to tell us here?

@jdkent commented May 21, 2019

Hi @danielskatz, I am working through the comments still. I should have more time to dedicate to the project this week and the next.

@danielskatz commented May 21, 2019

Thanks for the update.
