Spark Connect interpreter #5225
Open
dhama-shashank-meesho wants to merge 1 commit into apache:master from
Conversation
Adds a new spark-connect module that enables Apache Zeppelin to connect to remote Spark clusters via the Spark Connect gRPC protocol (Spark 3.5+). This eliminates the need for a local Spark installation on the Zeppelin host.

The module includes four interpreters:
- %spark-connect: SQL execution (default)
- %spark-connect.sql: SQL with optional concurrent scheduler
- %spark-connect.pyspark: PySpark via Py4j bridge
- %spark-connect.ipyspark: IPython variant of PySpark

Key design decisions:
- Per-user session quota enforcement to prevent resource exhaustion
- Per-notebook fair ReentrantLock for sequential query execution
- Dependency shading to avoid Netty/gRPC conflicts
- Token-based authentication and SSL support
- Python bridge via Py4j for a shared Java SparkSession

Includes comprehensive unit tests (SparkConnectUtilsTest) and integration tests (gated by the SPARK_CONNECT_TEST_REMOTE environment variable) for SparkConnectInterpreter, SparkConnectSqlInterpreter, and PySparkConnectInterpreter.
Member
@dhama-shashank-meesho I like this idea, but do we still need our own Python files? I didn't check the PR in detail yet, but I thought we could simplify this feature. WDYT?
Author
@jongyoul They exist to bridge Java↔Python via Py4j so that interpreters (e.g., the PySpark and SQL interpreters) can share the same Java SparkSession. Are you suggesting we should not use the .py files at all?
Member
@dhama-shashank-meesho Yes, I think so. I don't know them well yet, but it seems we wouldn't need to write our own if we can fully follow the new approach.
This PR introduces a new Spark Connect interpreter group for Apache Zeppelin, enabling notebooks to execute against an external Spark Connect server via the official SparkSession.remote() gRPC API (Spark 3.4+).
Unlike the classic Spark interpreter, no Spark driver or SparkContext runs inside the Zeppelin JVM/pod. The interpreter operates as a thin client, while all computation is executed remotely on the Spark cluster.
Key Features
Adds support for:
Remote Execution Model (gRPC)
Implements a remote-only execution model over Spark Connect gRPC, similar in deployment spirit to JDBC/Livy interpreters, but using Spark’s native Connect interface.
Session Management
Enforces per-user session quotas (as noted in the design decisions above) to prevent resource exhaustion on the Spark Connect server.
Notebook-Level Fair Locking
Introduces NotebookLockManager to enforce FIFO paragraph execution ordering and prevent concurrent conflicts within the same note.
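The locking scheme can be sketched as one fair `ReentrantLock` per note, held for the duration of a paragraph run. This is a minimal illustration of the idea, not the PR's actual `NotebookLockManager` API (method names here are assumptions):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantLock;

// Hypothetical sketch of a notebook-level lock manager: one fair lock per
// noteId, so paragraphs within the same note execute sequentially in FIFO
// order while paragraphs in different notes can run concurrently.
public class NotebookLockManager {
  private final Map<String, ReentrantLock> locks = new ConcurrentHashMap<>();

  public void lock(String noteId) {
    // Fair mode (constructor arg true) grants the lock in request order,
    // which yields FIFO paragraph execution within a note.
    locks.computeIfAbsent(noteId, id -> new ReentrantLock(true)).lock();
  }

  public void unlock(String noteId) {
    ReentrantLock lock = locks.get(noteId);
    if (lock != null && lock.isHeldByCurrentThread()) {
      lock.unlock();
    }
  }
}
```

In this sketch the interpreter would wrap each paragraph's execution in `lock(noteId)` / `unlock(noteId)` inside a try/finally block.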
Optional Concurrent SQL Scheduling
Supports optional concurrent SQL execution via zeppelin.spark.concurrentSQL using a ParallelScheduler, while still enforcing notebook-level locking.
Streaming Result Mode
Adds zeppelin.spark.connect.streamResults to stream large result sets via iterator-based output.
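The iterator-based output idea can be illustrated with a small sketch (not the PR's code, and no Spark dependency): rows are appended to the paragraph output one at a time from an iterator, rather than first collecting the full result set into memory.

```java
import java.util.Iterator;
import java.util.List;

// Illustrative sketch: with streaming enabled, each row is rendered and
// appended to the output as it arrives from the iterator, so memory use
// stays bounded by one row rather than the whole result set.
public class StreamResultsSketch {
  public static StringBuilder render(Iterator<List<Object>> rows, StringBuilder out) {
    while (rows.hasNext()) {
      List<Object> row = rows.next();
      for (int i = 0; i < row.size(); i++) {
        if (i > 0) out.append('\t');
        out.append(row.get(i));
      }
      out.append('\n'); // in real use, flush incrementally to the frontend
    }
    return out;
  }
}
```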
PySpark Support (No Extra Spark Session)
Provides PySpark support using a Python subprocess + Py4j bridge, reusing the same Java Spark Connect session (avoids creating a separate Spark session in Python).
Connection String Handling
Connection URIs are built via SparkConnectUtils.buildConnectionString() and support token + SSL options. Sensitive values (e.g., token, user_id) are redacted in logs.
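Spark Connect connection strings take the form `sc://host:port/;param=value;...`, with parameters such as `use_ssl`, `token`, and `user_id`. A minimal sketch of building and redacting such a URI (the helper below is a hypothetical stand-in, not the PR's `SparkConnectUtils`):

```java
import java.util.Map;

// Hypothetical sketch: build a Spark Connect URI of the form
//   sc://host:port/;use_ssl=true;token=...;user_id=...
// and produce a log-safe copy with sensitive parameter values redacted.
public class ConnectionStringSketch {

  public static String build(String host, int port, Map<String, String> params) {
    StringBuilder sb =
        new StringBuilder("sc://").append(host).append(':').append(port).append('/');
    for (Map.Entry<String, String> e : params.entrySet()) {
      sb.append(';').append(e.getKey()).append('=').append(e.getValue());
    }
    return sb.toString();
  }

  // Redact values of sensitive parameters (token, user_id) before logging.
  public static String redact(String uri) {
    return uri.replaceAll("(token|user_id)=[^;]*", "$1=***");
  }
}
```

Redacting at the logging boundary (rather than storing a redacted copy) keeps the real URI available for the gRPC client while ensuring secrets never reach log files.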
Cancellation Support
Stopping a paragraph triggers sparkSession.interruptAll(), cancelling active remote operations through Spark Connect.
Configuration Properties
Key configs include:
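For illustration, the two properties named above might appear in an interpreter setting as follows (only `zeppelin.spark.concurrentSQL` and `zeppelin.spark.connect.streamResults` are taken from this description; defaults shown are assumptions):

```properties
# Run SQL paragraphs concurrently via a ParallelScheduler (notebook-level
# locking still applies); assumed default: sequential execution
zeppelin.spark.concurrentSQL=false

# Stream large result sets via iterator-based output instead of
# collecting them fully before rendering
zeppelin.spark.connect.streamResults=true
```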