
Spark Connect interpreter #5225

Open

dhama-shashank-meesho wants to merge 1 commit into apache:master from Meesho:feature/spark-connect-interpreter

Conversation

@dhama-shashank-meesho

This PR introduces a new Spark Connect interpreter group for Apache Zeppelin, enabling notebooks to execute against an external Spark Connect server via the official SparkSession.remote() gRPC API (Spark 3.4+).

Unlike the classic Spark interpreter, no Spark driver or SparkContext runs inside the Zeppelin JVM/pod. The interpreter operates as a thin client, while all computation is executed remotely on the Spark cluster.


Key Features

  1. New Interpreter Bindings
    Adds support for:
  • %spark-connect (SparkConnectInterpreter)
  • %spark-connect.sql (SparkConnectSqlInterpreter)
  • %spark-connect.pyspark (PySparkConnectInterpreter)
  • %spark-connect.ipyspark (IPySparkConnectInterpreter)
  2. Remote Execution Model (gRPC)
    Implements a remote-only execution model over Spark Connect gRPC, similar in deployment spirit to the JDBC/Livy interpreters, but using Spark’s native Connect interface.

  3. Session Management

  • Supports per-user session limiting via zeppelin.spark.connect.maxSessionsPerUser
  • open() is idempotent and safely reuses existing sessions
  • Spark sessions are created using SparkSession.builder().remote("sc://...")
  • close() cleanly terminates the gRPC session and releases user slot counters

  4. Notebook-Level Fair Locking
    Introduces NotebookLockManager to enforce FIFO paragraph execution ordering and prevent concurrent conflicts within the same note.

  5. Optional Concurrent SQL Scheduling
    Supports optional concurrent SQL execution via zeppelin.spark.concurrentSQL using a ParallelScheduler, while still enforcing notebook-level locking.

  6. Streaming Result Mode
    Adds zeppelin.spark.connect.streamResults to stream large result sets via iterator-based output.

  7. PySpark Support (No Extra Spark Session)
    Provides PySpark support using a Python subprocess + Py4j bridge, reusing the same Java Spark Connect session (avoids creating a separate Spark session in Python).
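To illustrate the fair-locking idea, here is a minimal sketch of a per-note FIFO lock. This is not the PR's NotebookLockManager (which is Java, likely built on a fair ReentrantLock); the class shape and method names below are assumptions for illustration. The key property it demonstrates is that waiters within one note are served strictly in arrival order:

```python
import threading
from collections import deque


class NotebookLockManager:
    """Sketch of a per-note FIFO lock (hypothetical names, not the PR's code).

    Each note id maps to a fair lock: waiters are served in arrival order,
    so paragraphs of one note run sequentially, first-come first-served.
    """

    def __init__(self):
        self._guard = threading.Lock()
        self._locks = {}  # note_id -> {"cond", "queue", "held"}

    def _state(self, note_id):
        with self._guard:
            return self._locks.setdefault(
                note_id,
                {"cond": threading.Condition(), "queue": deque(), "held": False},
            )

    def acquire(self, note_id):
        st = self._state(note_id)
        ticket = object()  # unique marker recording arrival order
        with st["cond"]:
            st["queue"].append(ticket)
            # Proceed only when the lock is free AND we are at the head
            # of the queue -- this is what makes the lock FIFO-fair.
            while st["held"] or st["queue"][0] is not ticket:
                st["cond"].wait()
            st["queue"].popleft()
            st["held"] = True

    def release(self, note_id):
        st = self._state(note_id)
        with st["cond"]:
            st["held"] = False
            st["cond"].notify_all()
```

A plain mutex would also serialize paragraphs, but without the ticket queue a long-waiting paragraph can be overtaken by a newer one; the explicit queue is what gives the FIFO ordering the PR describes.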


Connection String Handling

Connection URIs are built via SparkConnectUtils.buildConnectionString() and support token + SSL options. Sensitive values (e.g., token, user_id) are redacted in logs.
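A rough sketch of what such a builder and log redaction could look like, in Python for brevity (the PR's SparkConnectUtils is Java; the function signatures and the `sc://host:port/;key=value` parameter layout follow the Spark Connect connection-string convention, everything else here is assumed):

```python
from urllib.parse import quote

# Parameter names treated as sensitive, per the PR description.
SENSITIVE_KEYS = {"token", "user_id"}


def build_connection_string(host, port=15002, use_ssl=False, **params):
    """Build a Spark Connect URI: sc://host:port/;key=value;key=value."""
    if use_ssl:
        params.setdefault("use_ssl", "true")
    suffix = "".join(f";{k}={quote(str(v))}" for k, v in params.items())
    return f"sc://{host}:{port}/{suffix}"


def redact(uri):
    """Mask sensitive parameter values before the URI reaches a log line."""
    parts = uri.split(";")
    out = [parts[0]]
    for p in parts[1:]:
        k, _, _v = p.partition("=")
        out.append(f"{k}=***" if k in SENSITIVE_KEYS else p)
    return ";".join(out)
```

The point of the split between the two functions is that the real, unredacted URI is only ever handed to the gRPC client, while every log statement goes through `redact()`.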


Cancellation Support

Stopping a paragraph triggers sparkSession.interruptAll(), cancelling active remote operations through Spark Connect.


Configuration Properties

Key configs include:

  • spark.remote
  • spark.connect.token
  • spark.connect.use_ssl
  • zeppelin.spark.connect.maxSessionsPerUser
  • zeppelin.spark.connect.streamResults
  • zeppelin.spark.concurrentSQL
  • zeppelin.spark.concurrentSQL.max
  • spark.connect.grpc.maxMessageSize
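For reference, a hypothetical interpreter-setting snippet combining these properties might look as follows (endpoint, token, and limits are placeholder values only):

```properties
# Illustrative values -- substitute your own endpoint and credentials.
spark.remote=sc://spark-connect.example.com:15002
spark.connect.token=<your-token>
spark.connect.use_ssl=true
zeppelin.spark.connect.maxSessionsPerUser=2
zeppelin.spark.connect.streamResults=true
zeppelin.spark.concurrentSQL=true
zeppelin.spark.concurrentSQL.max=5
spark.connect.grpc.maxMessageSize=134217728
```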

Adds a new spark-connect module that enables Apache Zeppelin to connect to
remote Spark clusters via the Spark Connect gRPC protocol (Spark 3.5+).
This eliminates the need for a local Spark installation on the Zeppelin host.

The module includes four interpreters:
- %spark-connect: SQL execution (default)
- %spark-connect.sql: SQL with optional concurrent scheduler
- %spark-connect.pyspark: PySpark via Py4j bridge
- %spark-connect.ipyspark: IPython variant of PySpark

Key design decisions:
- Per-user session quota enforcement to prevent resource exhaustion
- Per-notebook fair ReentrantLock for sequential query execution
- Dependency shading to avoid Netty/gRPC conflicts
- Token-based authentication and SSL support
- Python bridge via Py4j for shared Java SparkSession
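The per-user quota can be sketched as a small counter guarded by a lock; Python is used here for brevity (the PR's implementation is Java, and the class/method names below are assumptions, not its API). `try_acquire` is called from open() before building a session, `release` from close():

```python
import threading
from collections import defaultdict


class SessionQuota:
    """Sketch of per-user session limiting, as configured by
    zeppelin.spark.connect.maxSessionsPerUser (hypothetical names)."""

    def __init__(self, max_per_user):
        self._max = max_per_user
        self._lock = threading.Lock()
        self._counts = defaultdict(int)

    def try_acquire(self, user):
        # Atomically claim a slot; refuse when the user is at the limit,
        # so open() can fail fast instead of exhausting cluster resources.
        with self._lock:
            if self._counts[user] >= self._max:
                return False
            self._counts[user] += 1
            return True

    def release(self, user):
        # Free the slot when close() tears the gRPC session down.
        with self._lock:
            if self._counts[user] > 0:
                self._counts[user] -= 1
```

Doing the check-and-increment under one lock matters: two paragraphs opening sessions concurrently must not both pass a naive `count < max` check.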

Includes comprehensive unit tests (SparkConnectUtilsTest) and integration
tests (gated by SPARK_CONNECT_TEST_REMOTE environment variable) for
SparkConnectInterpreter, SparkConnectSqlInterpreter, and
PySparkConnectInterpreter.
@jongyoul
Member

jongyoul commented May 4, 2026

@dhama-shashank-meesho I like this idea, but do we still need our own Python files? I didn't check the PR in detail yet, but I thought we could simplify this feature. WDYT?

@dhama-shashank-meesho
Author

@jongyoul They exist to bridge Java↔Python via Py4j so that the Python and SQL interpreters can share the same Java SparkSession. Are you suggesting we avoid the .py files entirely?

@jongyoul
Member

jongyoul commented May 5, 2026

@dhama-shashank-meesho Yes, that's what I thought. I haven't looked at them closely yet, but it seems we shouldn't need to write our own Python files if we can fully adopt the new approach.

