Commits on Aug 21, 2019
  1. [maven-release-plugin] prepare for next development iteration

    ruebot committed Aug 21, 2019
  2. Add binary extraction DataFrames to PySpark. (#350)

    ruebot authored and ianmilligan1 committed Aug 21, 2019
    * Add binary extraction DataFrames to PySpark.
    - Address #190
    - Address #259
    - Address #302
    - Address #303
    - Address #304
    - Address #305
    - Address #306
    - Address #307
    - Resolves #350 
    - Update README
  3. Update LICENSE and license headers. (#351)

    ruebot authored and ianmilligan1 committed Aug 21, 2019
    - Update LICENSE file to full Apache 2.0 license
    - Reconfigure license-maven-plugin
    - Update all license headers in Java and Scala files to include copyright year and project name
    - Move LICENSE_HEADER.txt to config
    - Update scalastyle config
Commits on Aug 18, 2019
  1. Add method for determining binary file extension. (#349)

    jrwiebe authored and ruebot committed Aug 18, 2019
    This PR implements the strategy described in the discussion of #343 to get an extension for a file described by a URL and a MIME type. It creates a GetExtensionMime object in the matchbox.
    
    This PR also removes most of the filtering by URL from the image, audio, video, presentation, spreadsheet, and word processor document extraction methods, since these were returning false positives. (CSV and TSV files are a special case, since Tika detects them as "text/plain" based on content.)
    
    Finally, I have inserted toLowerCase into the getUrl.endsWith() filter tests, which may bring in some more CSV and TSV files.
    
    * Adds method for getting a file extension from a MIME type.
    * Add getExtensions method to DetectMimeTypeTika.
    * Matchbox object to get extension of URL
    * Use GetExtensionMime for extraction methods; minor fixes.
    * Remove tika-parsers classifier
    * Remove most filtering by file extension from binary extraction methods; add CSV/TSV special cases.
    * Fix GetExtensionMime case where URL has no extension but a MIME type is detected
    * Insert `toLowerCase` into `getUrl.endsWith()` calls in io.archivesunleashed.packages; apply to `FilenameUtils.getExtension` in `GetExtensionMime`.
    * Remove filtering on URL for audio, video, and images.
    * Remove filtering on URL for images; add DF fields to image extraction
    * Remove saveImageToDisk and its test
    * Remove robots.txt check and extraneous imports
    * Close files so we don't get too many files open again.
    * Add GetExtensionMimeTest
    * Resolve #343
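    A minimal sketch of that strategy, assuming a GetExtensionMime(url, mimeType) call shape (the exact signature isn't shown in this log): prefer the lowercased extension found in the URL, and fall back to one derived from the detected MIME type via Tika's registry.

    ```scala
    import org.apache.commons.io.FilenameUtils
    import org.apache.tika.mime.MimeTypes

    // Sketch only; the real matchbox object may differ.
    object GetExtensionMimeSketch {
      def apply(url: String, mimeType: String): String = {
        // Prefer the extension present in the URL, lowercased.
        val urlExt = FilenameUtils.getExtension(url).toLowerCase
        if (urlExt.nonEmpty) urlExt
        else {
          // Fall back to Tika's registry, e.g. "application/pdf" -> "pdf",
          // covering URLs with no extension but a detected MIME type.
          MimeTypes.getDefaultMimeTypes.forName(mimeType).getExtension.stripPrefix(".")
        }
      }
    }
    ```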
Commits on Aug 17, 2019
  1. Add keep and discard by http status. (#347)

    ruebot authored and ianmilligan1 committed Aug 17, 2019
    - Add keep and discard by http status RecordLoader
    - Add tests
    - Clean up/add doc comments in RecordLoader
    - Resolve #315
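    A hedged usage sketch of the new filters (method names follow this commit's wording; the Set[String] signature is an assumption):

    ```scala
    import io.archivesunleashed._

    // sc is the SparkContext available in the Spark shell.
    // Keep only 200 OK responses:
    val ok = RecordLoader.loadArchives("/path/to/warcs/*.warc.gz", sc)
      .keepHttpStatus(Set("200"))

    // Or discard server errors:
    val noErrors = RecordLoader.loadArchives("/path/to/warcs/*.warc.gz", sc)
      .discardHttpStatus(Set("500"))
    ```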
Commits on Aug 16, 2019
  1. Add office document binary extraction. (#346)

    ruebot authored and ianmilligan1 committed Aug 16, 2019
    - Add Word Processor DF and binary extraction
    - Add Spreadsheets DF and binary extraction
    - Add Presentation Program DF and binary extraction
    - Add Text files DF and binary extraction
    - Add tests for new DF and binary extractions
    - Add test fixtures for new DF and binary extractions
    - Resolves #303
    - Resolves #304
    - Resolves #305
    - Use aut-resources repo to distribute our shaded tika-parsers 1.22
    - Close TikaInputStream
    - Add RDD filters on MimeTypeTika values
    - Add CodeCov configuration yaml
    - Includes work by @jrwiebe, see #346 for all commits before squash
Commits on Aug 14, 2019
  1. Use version of tika-parsers without a classifier. (#345)

    jrwiebe authored and ruebot committed Aug 14, 2019
    Ivy couldn't handle it, and specifying one for the custom tika-parsers artifact was unnecessary.
  2. Use Tika's detected MIME type instead of ArchiveRecord getMimeType. (#344)

    ruebot authored and ianmilligan1 committed Aug 14, 2019
    - Move audio, pdf, and video DF extraction to tuple map
    - Provide two MimeType columns; mime_type_web_server and mime_type_tika
    - Update tests
    - Resolves #342
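    For illustration, the two columns named above can be compared side by side (a sketch; extractPDFDetailsDF() is from #340, and the column names are taken from this commit):

    ```scala
    import io.archivesunleashed._
    import org.apache.spark.sql.functions.col

    val pdfs = RecordLoader.loadArchives("/path/to/warcs", sc)
      .extractPDFDetailsDF()
    // The server-reported type and Tika's content-based detection can disagree.
    pdfs.select(col("url"), col("mime_type_web_server"), col("mime_type_tika")).show()
    ```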
Commits on Aug 13, 2019
  1. Add audio & video binary extraction (#341)

    ruebot authored and ianmilligan1 committed Aug 13, 2019
    - Add Audio & Video binary extraction.
    - Add filename and extension columns to audio, pdf, and video DF
    - Pass binary bytes instead of string to DetectMimeTypeTika in DF (s/getContentString/getBinaryBytes)
    - Updates saveToDisk to use file extension from DF column
    - Adds tests for Audio, PDF, and Video DF extraction
    - Add test fixtures for Audio, PDF, and Video DF extraction
    - Rename SaveBytesTest to SaveImageBytesTest
    - Eliminate bytes->string->bytes conversion that was causing data loss in DetectMimeTypeTika
    - Update tika-parsers dep from JitPack
    - Remove tweet cruft
    - Resolves #306
    - Resolves #307
    - Includes work by @jrwiebe, see #341 for all commits before squash
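    A hedged sketch of the extraction plus the extension-aware saveToDisk in use (method and column names are taken from this log; the saveToDisk argument order is an assumption):

    ```scala
    import io.archivesunleashed._
    import io.archivesunleashed.df._
    import org.apache.spark.sql.functions.col

    val audio = RecordLoader.loadArchives("/path/to/warcs", sc)
      .extractAudioDetailsDF()
    // Write each row's bytes to a file, taking the file extension
    // from the DataFrame's extension column.
    audio.select(col("bytes"), col("extension"))
      .saveToDisk("bytes", "/tmp/audio", "extension")
    ```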
Commits on Aug 12, 2019
  1. Add PDF binary extraction. (#340)

    jrwiebe authored and ruebot committed Aug 12, 2019
    Introduces the new extractPDFDetailsDF() method and brings in changes to make our use of Tika's MIME type detection more efficient, as well as POM updates to use a shaded version of tika-parsers in order to eliminate a dependency version conflict that has long been troublesome.
    
    - Updates getImageBytes to getBinaryBytes
    - Refactor SaveImage class to more general SaveBytes, and saveToDisk to saveImageToDisk
    - Only instantiate Tika when the DetectMimeTypeTika singleton object is first referenced. See https://git.io/fj7g0.
    - Use TikaInputStream to enable container-aware detection. Until now we were only using the default Mime Magic detection. See https://tika.apache.org/1.22/detection.html#Container_Aware_Detection.
    - Added generic saveToDisk method to save a bytes column of a DataFrame to files
    - Updates tests
    - Resolves #302
    - Further addresses #308
    - Includes work by @ruebot, see #340 for all commits before squash
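    The two Tika changes above can be sketched as follows (a simplified sketch, not the actual DetectMimeTypeTika source):

    ```scala
    import java.io.ByteArrayInputStream
    import org.apache.tika.Tika
    import org.apache.tika.io.TikaInputStream

    object DetectMimeTypeTikaSketch {
      // Instantiated only on first reference, so jobs that never
      // detect MIME types don't pay Tika's startup cost.
      lazy val tika = new Tika()

      def apply(bytes: Array[Byte]): String = {
        // TikaInputStream enables container-aware detection (e.g. ZIP-based
        // office formats) instead of MIME magic on the leading bytes alone.
        val tis = TikaInputStream.get(new ByteArrayInputStream(bytes))
        try tika.detect(tis) finally tis.close()
      }
    }
    ```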
Commits on Aug 8, 2019
  1. More scalastyle work; addresses #196. (#339)

    ruebot authored and ianmilligan1 committed Aug 8, 2019
    - Remove all underscore imports, except shapeless
    - Address all scalastyle warnings
    - Update scalastyle config for magic numbers, and null (only used in tests)
Commits on Aug 7, 2019
  1. Replace computeHash with ComputeMD5; resolves #333. (#338)

    ruebot authored and jrwiebe committed Aug 7, 2019
    * Replace computeHash with ComputeMD5; resolves #333.
    
    * I suppose these are redundant.
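    For reference, an MD5 helper along the lines of the ComputeMD5 matchbox object is a few lines of standard library code (a sketch; the real object's signature may differ):

    ```scala
    import java.security.MessageDigest

    object ComputeMD5Sketch {
      // Hex-encoded MD5 digest of a record's bytes.
      def apply(bytes: Array[Byte]): String =
        MessageDigest.getInstance("MD5").digest(bytes).map("%02x".format(_)).mkString
    }
    ```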
Commits on Aug 6, 2019
  1. Make ArchiveRecord.getContentBytes consistent, #334 (#335)

    ianmilligan1 authored and ruebot committed Aug 6, 2019
  2. Update Tika to 1.22; address security alerts. (#337)

    ruebot authored and ianmilligan1 committed Aug 6, 2019
    - Update Tika to 1.22
    - pom.xml surgery to get aut to build again with --packages
Commits on Jul 31, 2019
  1. Update test coverage for data frames (#336).

    ruebot authored and ianmilligan1 committed Jul 31, 2019
    - This commit will fall under @ruebot, but @jrwiebe did the heavy lifting here; see #336 for his commits before they were squashed down.
    - Resolves #265
    - Resolves #263
    - Update Scaladocs
Commits on Jul 25, 2019
  1. Enable S3 access (#332)

    jrwiebe authored and ruebot committed Jul 25, 2019
    * Update POM to access data stored in Amazon S3, per #319
    * In RecordLoader detect FileSystem based on path.
    * Resolves #319
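    Detecting the FileSystem from the path can be done with Hadoop's own API, which resolves the implementation from the path's scheme; a minimal sketch (bucket name hypothetical):

    ```scala
    import org.apache.hadoop.conf.Configuration
    import org.apache.hadoop.fs.{FileSystem, Path}

    // s3a:// resolves to the S3A FileSystem, hdfs:// to HDFS, file:// to local.
    val path = new Path("s3a://example-bucket/warcs/")
    val fs: FileSystem = path.getFileSystem(new Configuration())
    ```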
Commits on Jul 23, 2019
  1. Updates to pom following 0e701b2 (#328)

    ruebot authored and ianmilligan1 committed Jul 23, 2019
    - Remove explicit Guava dependency (should have been removed in 0e701b2)
    - Update Scala to 2.11.12; aligns with Spark 2.4.3
    - Update Scala test
    - Update Shapeless
    - Update Scala lang parsers
    - Fix a typo in a test
Commits on Jul 18, 2019
  1. Python formatting, and gitignore additions. (#326)

    ruebot authored and ianmilligan1 committed Jul 18, 2019
    - Run black and isort on Python files.
    - Move Spark config to example file.
    - Update gitignore for 7a61f0e additions.
  2. Move data frame fields names to snake_case. (#327)

    ruebot authored and ianmilligan1 committed Jul 18, 2019
    - Resolves #229
Commits on Jul 17, 2019
  1. Update to Spark 2.4.3 and update Tika to 1.20. (#321)

    ruebot authored and ianmilligan1 committed Jul 17, 2019
    * Update to Spark 2.4.3 and update Tika to 1.20.
    
    - Resolves #295
    - Resolves #308
    - Resolves #286
    - Pulls in unfinished work by @jrwiebe and @borislin.
    
    * Add patched lang-detector
Commits on Jul 15, 2019
  1. Remove Tweet utils. (#323)

    ruebot authored and ianmilligan1 committed Jul 15, 2019
    - Resolves #322
    - Resolves #206
    - Resolves #194
Commits on Jul 8, 2019
  1. Test Java 8 & 11, and remove OracleJDK; resolves #324. (#325)

    ruebot authored and ianmilligan1 committed Jul 8, 2019
Commits on Jul 5, 2019
  1. Add image analysis and extraction w/TensorFlow (#318)

    h324yang authored and ruebot committed Jul 5, 2019
Commits on Apr 22, 2019
  1. Makes ArchiveRecordImpl serializable by removing non-serializable ARCRecord and WARCRecord variables; also removes unused headerResponseFormat variable. (#316)

    jrwiebe authored and ruebot committed Apr 22, 2019
Commits on Mar 23, 2019
  1. Resolve cobertura-maven-plugin class issue; resolves #313. (#314)

    ruebot authored and jrwiebe committed Mar 23, 2019
    - Exclude slf4j binding logback-classic (mojohaus/cobertura-maven-plugin#6 (comment))
Commits on Jan 31, 2019
  1. Log closing of ARC and WARC files, resolves #156 (#301)

    jrwiebe authored and ruebot committed Jan 31, 2019
    * Log opening and closing of archive files as per #156
    * Remove redundant log message. Spark already logs the file that is to be read when an executor computes an RDD.
Commits on Jan 24, 2019
  1. Delete saved image file; resolves #299 (#300)

    jrwiebe authored and ruebot committed Jan 24, 2019
Commits on Nov 28, 2018
  1. Remove Deprecated ExtractGraph app; resolves #291. (#293)

    greebie authored and ruebot committed Nov 28, 2018
    * Remove deprecated ExtractGraph.scala file.
    * Remove deprecated ExtractGraphTest.scala file.
  2. Add .getHttpStatus and .getArchiveFile to ArchiveRecordImpl class #198 & #164 (#292)

    greebie authored and ruebot committed Nov 28, 2018
    * Resolves #198
    * Resolves #164
    * Add getHttpStatus to ArchiveRecord class & trait
      - add .getHttpStatus to potential outputs
      - add tests for .getHttpStatus calls
      - improve ArchiveRecord testing overall.
    * Add .getArchiveFile feature to ArchiveRecordImpl.
      - add getArchiveFile to trait
      - add getArchiveFile for ArchiveRecordImpl
      - add tests for getArchiveFile.
    * Other code style fixes.
    * Include updates to tests.
Commits on Nov 22, 2018
  1. Update license headers for #208. (#290)

    ruebot authored and ianmilligan1 committed Nov 22, 2018
  2. Change Id generation for graphs from using hashes for urls to using .zipWithUniqueIds() (#289)

    greebie authored and ruebot committed Nov 22, 2018
    * Resolves #243 
    * Create GEXF with proper ids instead of hash to avoid collisions.
    * Add WriteGEXF files.
    * Add WriteGraph file and test.
    * Add test for Graphml output.
    * Add xml escaping for edges.
    * Add test case for non-escaped edges.
    * Add additional tests to cover for more potential cases of graphml and gexf files.
    * Coverage for null cases in urls.
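    The id strategy here is Spark's zipWithUniqueId() (singular "Id" in the RDD API): it assigns each node a unique Long, avoiding the collisions possible when hashing URLs. A minimal sketch:

    ```scala
    import org.apache.spark.rdd.RDD

    // sc is the SparkContext from the Spark shell. Hash-based ids can
    // collide and silently merge distinct nodes; unique ids cannot.
    val nodes: RDD[String] = sc.parallelize(Seq("a.example", "b.example"))
    val nodeIds: RDD[(String, Long)] = nodes.zipWithUniqueId()
    ```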