Java 11 support #356

Open
ruebot opened this issue Aug 27, 2019 · 1 comment
ruebot commented Aug 27, 2019

From the Apache Spark mailing list:

Hi, All.

Thanks to your many many contributions,
Apache Spark master branch starts to pass on JDK11 as of today.
(with `hadoop-3.2` profile: Apache Hadoop 3.2 and Hive 2.3.6)

    https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Test%20(Dashboard)/job/spark-master-test-maven-hadoop-3.2-jdk-11/326/
    (JDK11 is used for building and testing.)

We already verified all UTs (including PySpark/SparkR) before.

Please feel free to use JDK11 to build/test/run the `master` branch and
share your experience, including any issues. It will help the Apache Spark 3.0.0 release.

For the follow-ups, please follow https://issues.apache.org/jira/browse/SPARK-24417 .
The next step is `how to support JDK8/JDK11 together in a single artifact`.

Bests,
Dongjoon.
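Since the announcement assumes JDK 11 is used for both building and testing, a quick sanity check of the active runtime can save a failed build. The helper below is a hypothetical sketch (not from the issue): it parses a Java version string into its major version, handling both the pre-9 `1.x` scheme and the modern scheme. The `./build/mvn` invocation and `hadoop-3.2` profile come from Spark's standard build layout and the email above, respectively, and are shown only as a commented sketch.

```shell
# Hypothetical helper: extract the major Java version from a version string,
# e.g. "11.0.4" -> 11 and "1.8.0_222" -> 8, to confirm JDK 11 is active.
major_of() {
  v="$1"
  case "$v" in
    1.*) v="${v#1.}" ;;   # pre-9 scheme: "1.8.0_222" -> "8.0_222"
  esac
  echo "$v" | cut -d. -f1
}

major_of "11.0.4"     # -> 11
major_of "1.8.0_222"  # -> 8

# With JDK 11 active, build master with the hadoop-3.2 profile mentioned
# above (sketch only, not run here):
# ./build/mvn -Phadoop-3.2 -DskipTests clean package
```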

We'll align with Apache Spark here on Java 11 support. Once there's a stable Spark release with Java 11 support, I'll pivot to getting aut stable on Java 11.

ruebot added the Java label Aug 27, 2019
ruebot self-assigned this Aug 27, 2019
ruebot added a commit that referenced this issue Aug 31, 2019
ruebot commented Nov 7, 2019

Getting closer to Spark 3.0.0!

Hi all,

To enable wide-scale community testing of the upcoming Spark 3.0 release, the Apache Spark community has posted a preview release of Spark 3.0. This preview is not a stable release in terms of either API or functionality, but it is meant to give the community early access to try the code that will become Spark 3.0. If you would like to test the release, please download it, and send feedback using either the mailing lists or JIRA.

There are a lot of exciting new features added to Spark 3.0, including Dynamic Partition Pruning, Adaptive Query Execution, Accelerator-aware Scheduling, Data Source API with Catalog Supports, Vectorization in SparkR, support of Hadoop 3/JDK 11/Scala 2.12, and many more. For a full list of major features and changes in Spark 3.0.0-preview, please check the thread (http://apache-spark-developers-list.1001551.n3.nabble.com/Spark-3-0-preview-release-feature-list-and-major-changes-td28050.html).

We'd like to thank our contributors and users for their contributions and early feedback to this release. This release would not have been possible without you.

To download Spark 3.0.0-preview, head over to the download page: https://archive.apache.org/dist/spark/spark-3.0.0-preview

Thanks,

Xingbo