
Spark 3.0.0 + Hadoop 3.2 + Java 11 support. #385

Draft · wants to merge 40 commits into base: master
Conversation

@ruebot
Member

ruebot commented Nov 26, 2019

GitHub issue(s):

What does this Pull Request do?

This PR is #375 + Hadoop 3.2 support.

How should this be tested?

Same as #375, plus it should be tested against spark-3.0.0-preview-bin-hadoop3.2.

Additional Notes:

Same as #375

ruebot and others added 30 commits Jul 25, 2019
- Add copyMerge implementation in Scala (copyMerge is deprecated in Hadoop 3)
- Update NERCombinedJson to use new copyMerge implementation
- 40 days and nights wandering the desert of pom.xml
- Some hacks to get a successful build
- Definitely need to loop back and clean up a whole lot!
- Addresses #356
…talled 🤦, and a bunch more pom cleanup.
…ue-329
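The first commit notes that Hadoop 3 deprecates `FileUtil.copyMerge`, so this PR carries a Scala replacement (used by `NERCombinedJson`). The PR's actual implementation is not shown here; as an illustration only, the following is a hypothetical local-filesystem analogue in Java. All names are assumptions, and the real code targets Hadoop's `FileSystem` API rather than `java.nio` — but the core behavior is the same: concatenate a directory's part files, in name order, into a single output file.

```java
import java.io.IOException;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

// Hypothetical sketch: a local-filesystem analogue of Hadoop's deprecated
// FileUtil.copyMerge. Not the PR's code, which works against Hadoop's
// FileSystem API; this only illustrates the merge logic.
public class CopyMergeSketch {

    // Concatenate every regular file in srcDir into dstFile, sorted by
    // name so part-00000, part-00001, ... are appended in order.
    static void copyMerge(Path srcDir, Path dstFile) throws IOException {
        List<Path> parts;
        try (Stream<Path> listing = Files.list(srcDir)) {
            parts = listing.filter(Files::isRegularFile)
                           .sorted()
                           .collect(Collectors.toList());
        }
        try (OutputStream out = Files.newOutputStream(dstFile)) {
            for (Path part : parts) {
                Files.copy(part, out); // append this part's bytes
            }
        }
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("parts");
        Files.write(dir.resolve("part-00000"), "hello ".getBytes());
        Files.write(dir.resolve("part-00001"), "world".getBytes());
        Path merged = Files.createTempFile("merged", ".txt");
        copyMerge(dir, merged);
        System.out.println(new String(Files.readAllBytes(merged))); // prints "hello world"
    }
}
```

In the Hadoop version the same loop would run over `FileSystem.listStatus` and copy each `FSDataInputStream` into a single `FSDataOutputStream`, which is essentially what the removed `copyMerge` helper did internally.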
ruebot added 7 commits Jan 7, 2020
@ruebot ruebot added this to In Progress in DataFrames and PySpark Feb 5, 2020
@ruebot ruebot added this to In Progress in 1.0.0 Release of AUT Feb 5, 2020
ruebot added 3 commits Feb 6, 2020
@ruebot

Member Author

ruebot commented Apr 11, 2020

I believe we're going to continue to fail here until this is resolved. If that is resolved, it should also help out with our other hacks around making Tika work.

Projects
DataFrames and PySpark: In Progress
1.0.0 Release of AUT: In Progress
Linked issues

Successfully merging this pull request may close these issues.

None yet

2 participants