Various DataFrame implementation updates for documentation clean-up; Addresses #372. #406

Merged: 6 commits into master on Jan 17, 2020

Conversation

ruebot (Member) commented Jan 16, 2020

GitHub issue(s): #372

What does this Pull Request do?

Various DataFrame implementation updates for documentation clean-up.

Adds archive_filename to .all()

Renames .all() column HttpStatus to http_status_code

Big README update to align with the documentation and remove duplication that was confusing and difficult to keep up-to-date and in sync. We can keep it all here in the README now.

See: archivesunleashed/aut-docs#39

How should this be tested?

ruebot added 2 commits Jan 16, 2020
- Looks like repos are forcing HTTPS to be used now:

  ```
  [WARNING] repository metadata for: 'artifact joda-time:joda-time' could not be retrieved from repository: maven due to an error: Failed to transfer file: http://repo.maven.apache.org/maven2/joda-time/joda-time/maven-metadata.xml. Return code is: 501 , ReasonPhrase:HTTPS Required.
  ```
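A minimal sketch of one way to address this, assuming builds go through a user-level `~/.m2/settings.xml` (the mirror `id` and `name` below are hypothetical), is to mirror Central over HTTPS:

```xml
<!-- Hypothetical ~/.m2/settings.xml fragment: route Central through HTTPS -->
<settings>
  <mirrors>
    <mirror>
      <id>central-https</id>
      <name>Maven Central over HTTPS</name>
      <url>https://repo.maven.apache.org/maven2</url>
      <mirrorOf>central</mirrorOf>
    </mirror>
  </mirrors>
</settings>
```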
codecov bot commented Jan 16, 2020

Codecov Report

Merging #406 into master will not change coverage.
The diff coverage is n/a.

@@           Coverage Diff           @@
##           master     #406   +/-   ##
=======================================
  Coverage   77.56%   77.56%           
=======================================
  Files          40       40           
  Lines        1542     1542           
  Branches      292      292           
=======================================
  Hits         1196     1196           
  Misses        218      218           
  Partials      128      128
@ruebot ruebot requested a review from ianmilligan1 Jan 17, 2020
@ruebot ruebot marked this pull request as ready for review Jan 17, 2020
ianmilligan1 (Member) left a comment

Looks great - ran through it and the commands are nice. Big suggestions from my end are (a) adding back in local build instructions; and (b) trying to make the Python commands easier to parse, as I know we've struggled in the past.

Overall this is really great - thanks @ruebot !

### Python

If you would like to use the Archives Unleashed Toolkit with PySpark and Jupyter Notebooks, you'll need to have a modern version of Python installed. We recommend using the [Anaconda Distribution](https://www.anaconda.com/distribution). This _should_ install Jupyter Notebook, as well as the PySpark bindings. If it doesn't, you can install either with `conda install` or `pip install`.
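For readers whose Anaconda install didn't pull these in, a sketch of the fallback (assumes `conda` or `pip` is on `PATH`; the install lines are commented out so nothing runs by accident):

```shell
# Either of these installs the pieces mentioned above (pick one):
#   conda install pyspark jupyter
#   pip install pyspark jupyter
# Sanity check that a package manager is actually available first:
[ -n "$(command -v conda || command -v pip || command -v pip3)" ] && echo "package manager found"
```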

ianmilligan1 (Member) commented Jan 17, 2020

I would really recommend keeping the local build instructions here - my rationale is (a) this is a GitHub repo, so it'd be nice for people to be able to build it themselves; (b) we refer to local builds throughout these docs anyway. I know there are worries about keeping this README.md too long, but I would almost prefer to see the local builds here and, if we're worried about length, move some of the notebooks stuff out to the docs.

I think a long README for a GitHub repo is ok, esp. if there's good heading structure.

Some slight adaptation of the following:

Build it Locally

Build it yourself as per the instructions below.

Clone the repo:

```shell
$ git clone http://github.com/archivesunleashed/aut.git
```

You can then build The Archives Unleashed Toolkit:

```shell
$ mvn clean install
```

For the impatient, to skip tests:

```shell
$ mvn clean install -DskipTests
```
README.md Outdated
```shell
tar -xvf spark-2.4.4-bin-hadoop2.7.tgz
```

#### Running Apache Spark Shell (Scala)

ianmilligan1 (Member) commented Jan 17, 2020

Consider removing this generic instruction? Apart from a sanity test to see that the spark shell is working, I don't know what we gain by getting people to launch it w/o the AUT package or jar, only to have to launch it again.

People who don't read the whole README will also get confused.


If you have Apache Spark ready to go, it's as easy as:
There are two options for loading the Archives Unleashed Toolkit. The advantages and disadvantages of either option will depend on your setup (single machine vs. cluster):

ianmilligan1 (Member) commented Jan 17, 2020

There are two options for loading the Archives Unleashed Toolkit. -> There are two options for loading the Archives Unleashed Toolkit: using packages to download the Archives Unleashed package, or building it yourself.

ruebot (Author, Member) commented Jan 17, 2020

No. I think this is misleading. That's not what packages does, and you can also download the uberjar. I'd prefer to stick with the existing language here.


### A little less easy
HEAD (built locally):

ianmilligan1 (Member) commented Jan 17, 2020

you could put the local build instructions here if you don't want them up above? (your call?)

README.md Outdated
HEAD (built locally):

```shell
$ spark-shell --jars /path/to/aut-0.18.2-SNAPSHOT-fatjar.jar
```

ianmilligan1 (Member) commented Jan 17, 2020

--jars /path/to/aut-0.18.2-SNAPSHOT-fatjar.jar
->
--jars /path/to/aut/target/aut-0.18.2-SNAPSHOT-fatjar.jar

ruebot (Author, Member) commented Jan 17, 2020

You can also download it and put it wherever you want.

ianmilligan1 (Member) commented Jan 17, 2020

Aren't we saying built locally here?

README.md Outdated

### Archives Unleashed Toolkit with PySpark

To run PySpark with the Archives Unleashed Toolkit loaded, you will need to provide PySpark with the Java/Scala package and the Python bindings. The Java/Scala package can be provided with `--packages` or `--jars` as described above. The Python bindings can be [downloaded](https://github.com/archivesunleashed/aut/releases/download/aut-0.18.1/aut-0.18.1.zip), or [built locally](#Introduction) (the zip file will be found in the `target` directory).

ianmilligan1 (Member) commented Jan 17, 2020

This is where the local build comes in handy. I would prefer to be able to build locally and just have all this stuff here, rather than futzing with downloads (all of us are different of course).

```shell
$ export PYSPARK_PYTHON=/path/to/python; export PYSPARK_DRIVER_PYTHON=/path/to/python; /path/to/spark/bin/pyspark --py-files aut-0.18.1.zip --packages "io.archivesunleashed:aut:0.18.1"
```

ianmilligan1 (Member) commented Jan 17, 2020

I think this command really throws people - @SamFritz might have had trouble with this too (not sure if she ever got it working). Maybe an example command? Or suggest they run `which python` to get their Python path?

(putting a lot of this in a README might be a bridge too far)

ianmilligan1 (Member) commented Jan 17, 2020

i.e. here's my command:

```shell
export PYSPARK_PYTHON=/Users/ianmilligan1/anaconda/bin/Python; export PYSPARK_DRIVER_PYTHON=/Users/ianmilligan1/anaconda/bin/Python; ./bin/pyspark --py-files /Users/ianmilligan1/dropbox/git/aut/target/aut.zip --packages "io.archivesunleashed:aut:0.18.1"
```

(the important thing, I think, is the `which python`, as finding a Python path can really throw users for a loop)
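Following that suggestion, a sketch that removes the guesswork (the `PYBIN` variable name is made up here; the commented-out pyspark line reuses the README's example paths):

```shell
# Locate a Python interpreter; falls back to python3 if `python` isn't on PATH
PYBIN="$(command -v python || command -v python3)"
export PYSPARK_PYTHON="$PYBIN"
export PYSPARK_DRIVER_PYTHON="$PYBIN"
echo "Using Python at: $PYSPARK_PYTHON"
# Then launch as in the README, e.g.:
#   /path/to/spark/bin/pyspark --py-files aut-0.18.1.zip --packages "io.archivesunleashed:aut:0.18.1"
```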


```shell
$ export PYSPARK_DRIVER_PYTHON=jupyter; export PYSPARK_DRIVER_PYTHON_OPTS=notebook; /path/to/spark/bin/pyspark --py-files aut-0.18.1.zip --packages "io.archivesunleashed:aut:0.18.1"
```

ianmilligan1 (Member) commented Jan 17, 2020

boom, works great, and I think it flows nicely in this order

### Even less easy
### Archives Unleashed Toolkit with Jupyter

ianmilligan1 (Member) commented Jan 17, 2020

add a note that they'll need to have Jupyter Notebooks installed? Just link out to https://jupyter.org/install?

ianmilligan1 merged commit 9277e68 into master Jan 17, 2020

3 checks passed:
- codecov/patch: Coverage not affected when comparing debf5d5...af280b7
- codecov/project: 77.56% remains the same compared to debf5d5
- continuous-integration/travis-ci/pr: The Travis CI build passed

ianmilligan1 deleted the issue-372 branch Jan 17, 2020