
Native engine abstractions#20821

Merged
Bukhtawar merged 11 commits into opensearch-project:main from bharath-techie:native-eng
Mar 25, 2026

Conversation


@bharath-techie bharath-techie commented Mar 10, 2026

Description

  • This PR wires the Index shard with backend plugins via Reader managers.

  • Reader managers in backend plugins are not refcounted; readers can only be retrieved via a catalog snapshot, and catalog snapshots are refcounted on acquire/release, so reader and file management relies fully on the catalog snapshot.

  • Has changes for CatalogSnapshot, DataFormatRegistry, DataFormatAwareEngine, IndexFileDeleter, etc., which might become part of other indexing PRs such as #20675, but they are added here to give clarity to the flow.

  • A search exec engine is created per query; it creates a context associated with the query action, and that context maintains state throughout the query's lifecycle.

  • WIP :

    • Data format class
    • Naming conventions for the index/source provider and interfaces. These will be consumed by delegates; what this PR intends to do is associate the context and establish the contract.
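The snapshot-based lifecycle described above can be sketched as follows. This is an illustrative model only, with hypothetical names, not the PR's actual CatalogSnapshot API: readers are reachable only through an acquired snapshot, and backing files become eligible for deletion once the last reference is released.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative refcounted snapshot: acquire/release drives reader and
// file lifetime. Class and method names are hypothetical.
final class RefCountedSnapshot {
    private final AtomicInteger refCount = new AtomicInteger(1); // construction ref
    private volatile boolean freed = false;

    // Acquire: succeeds only while at least one reference is still live.
    boolean tryIncRef() {
        while (true) {
            int cur = refCount.get();
            if (cur <= 0) {
                return false; // already freed; caller must retry on a newer snapshot
            }
            if (refCount.compareAndSet(cur, cur + 1)) {
                return true;
            }
        }
    }

    // Release: the last decRef frees readers/files exactly once.
    void decRef() {
        if (refCount.decrementAndGet() == 0) {
            freed = true; // a real engine would close readers and delete files here
        }
    }

    boolean isFreed() {
        return freed;
    }
}
```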

Related Issues

Resolves #[Issue number to be closed when this PR is merged]

Check List

  • Functionality includes testing.
  • API changes companion pull request created, if applicable.
  • Public documentation issue/PR created, if applicable.

By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
For more information on following Developer Certificate of Origin and signing off your commits, please check here.

Comment thread: server/src/main/java/org/opensearch/index/engine/CompositeEngine.java (outdated)

github-actions bot commented Mar 10, 2026

PR Code Analyzer ❗

AI-powered 'Code-Diff-Analyzer' found issues on commit 3d1eb6a.

Path | Line | Severity | Description

  • sandbox/plugins/analytics-backend-datafusion/src/main/java/org/opensearch/be/datafusion/jni/NativeBridge.java, line 14 (medium): JNI native method declarations introduce a significant attack surface. All native methods (createDatafusionReader, executeQuery, streamNext, etc.) delegate to an external native library 'opensearch_datafusion_jni' whose implementation is not present in this diff. A compromised or substituted native library could execute arbitrary code with JVM-level privileges. This is a supply chain risk warranting verification of the native library's build provenance and integrity checks.
  • sandbox/plugins/analytics-backend-datafusion/src/main/java/org/opensearch/be/datafusion/DataFusionService.java, line 71 (low): A placeholder pointer value of 1L is stored as the native runtime handle ('long ptr = 1L; // placeholder until NativeBridge is wired'). If the TODO is not resolved before production deployment and native calls are enabled, passing this fake pointer to native code (closeGlobalRuntime, etc.) would cause undefined behavior or memory corruption in the native layer.
  • sandbox/plugins/analytics-backend-datafusion/src/main/java/org/opensearch/be/datafusion/DatafusionReaderManager.java, line 37 (low): The 'readers' field (Map) has package-private visibility rather than private. Combined with the class being @ExperimentalApi, external code within the same package could directly mutate the reader map, potentially allowing injection of crafted reader entries. Minor encapsulation issue but not indicative of malicious intent.

The table above displays the top 10 most important findings.

Total: 3 | Critical: 0 | High: 0 | Medium: 1 | Low: 2


Pull Request Author(s): Please update your Pull Request according to the report above.

Repository Maintainer(s): You can bypass diff analyzer by adding label skip-diff-analyzer after reviewing the changes carefully, then re-run failed actions. To re-enable the analyzer, remove the label, then re-run all actions.


⚠️ Note: The Code-Diff-Analyzer helps protect against potentially harmful code patterns. Please ensure you have thoroughly reviewed the changes beforehand.

Thanks.


github-actions bot commented Mar 10, 2026

PR Reviewer Guide 🔍

(Review updated until commit 581daca)

Here are some key observations to aid the review process:

🧪 PR contains tests
🔒 No security concerns identified
📝 TODO sections

🔀 Multiple PR themes

Sub-PR theme: Core server-side engine abstractions and shard wiring

Relevant files:

  • server/src/main/java/org/opensearch/index/engine/DataFormatAwareEngine.java
  • server/src/main/java/org/opensearch/index/engine/exec/IndexFileDeleter.java
  • server/src/main/java/org/opensearch/index/engine/exec/DataFormatEngineCatalogSnapshotListener.java
  • server/src/main/java/org/opensearch/index/engine/exec/CollectorQueryLifecycleManager.java
  • server/src/main/java/org/opensearch/index/engine/exec/FileMetadata.java
  • server/src/main/java/org/opensearch/index/shard/IndexShard.java
  • server/src/main/java/org/opensearch/index/IndexService.java
  • server/src/main/java/org/opensearch/index/IndexModule.java
  • server/src/main/java/org/opensearch/indices/IndicesService.java
  • server/src/test/java/org/opensearch/index/engine/dataformat/DataFormatPluginTests.java

Sub-PR theme: DataFusion native backend plugin

Relevant files:

  • sandbox/plugins/analytics-backend-datafusion/src/main/java/org/opensearch/be/datafusion/DataFusionPlugin.java
  • sandbox/plugins/analytics-backend-datafusion/src/main/java/org/opensearch/be/datafusion/DataFusionService.java
  • sandbox/plugins/analytics-backend-datafusion/src/main/java/org/opensearch/be/datafusion/DatafusionContext.java
  • sandbox/plugins/analytics-backend-datafusion/src/main/java/org/opensearch/be/datafusion/DatafusionReaderManager.java
  • sandbox/plugins/analytics-backend-datafusion/src/main/java/org/opensearch/be/datafusion/DatafusionResultStream.java
  • sandbox/plugins/analytics-backend-datafusion/src/main/java/org/opensearch/be/datafusion/DatafusionSearcher.java
  • sandbox/plugins/analytics-backend-datafusion/src/main/java/org/opensearch/be/datafusion/jni/NativeBridge.java

Sub-PR theme: Analytics engine executor and Lucene backend plugin

Relevant files:

  • sandbox/plugins/analytics-engine/src/main/java/org/opensearch/analytics/exec/DefaultPlanExecutor.java
  • sandbox/plugins/analytics-engine/src/main/java/org/opensearch/analytics/AnalyticsPlugin.java
  • sandbox/plugins/analytics-engine/src/test/java/org/opensearch/analytics/exec/DefaultPlanExecutorTests.java
  • sandbox/plugins/analytics-backend-lucene/src/main/java/org/opensearch/be/lucene/LuceneIndexFilterProvider.java
  • sandbox/plugins/analytics-backend-lucene/src/main/java/org/opensearch/be/lucene/LuceneReaderManager.java

⚡ Recommended focus areas for review

NPE on Delete

In onDeleted, readers.remove(catalogSnapshot) can return null if the snapshot is not present, and calling .close() on null will throw a NullPointerException. This can happen if onDeleted is called for a snapshot that was never registered (e.g., if afterRefresh was skipped).

public void onDeleted(CatalogSnapshot catalogSnapshot) throws IOException {
    readers.remove(catalogSnapshot).close();
}
NPE on Delete

Same pattern as DatafusionReaderManager: readers.remove(catalogSnapshot).close() in onDeleted will throw NPE if the snapshot is not in the map. A null check is needed before calling .close().

public void onDeleted(CatalogSnapshot catalogSnapshot) throws IOException {
    readers.remove(catalogSnapshot).close();
}
Null Return

getSupportedFormats() returns null instead of an empty list. Any caller iterating over the result (e.g., to route queries by format) will throw a NullPointerException. Should return List.of() or a proper list.

public List<DataFormat> getSupportedFormats() {
    return null; // TODO : List.of("parquet");
}
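A null-safe sketch of the fix, using a stub DataFormat interface since the real format class is still WIP in this PR:

```java
import java.util.List;

// Stub standing in for the PR's WIP DataFormat class.
interface DataFormat {}

class DatafusionFormats {
    // Returning an immutable empty list instead of null keeps any caller
    // that iterates over the result safe until parquet support lands.
    public List<DataFormat> getSupportedFormats() {
        return List.of(); // TODO: return the real formats once DataFormat exists
    }
}
```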
Non-thread-safe Iterator

iterator() checks iteratorInstance == null without synchronization. If called concurrently, two BatchIterator instances could be created, both consuming the same native stream, leading to data corruption or double-free of native resources.

public EngineResultBatchIterator iterator() {
    if (iteratorInstance == null) {
        iteratorInstance = new BatchIterator(streamHandle);
    }
    return iteratorInstance;
}
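One way to close this race is to synchronize the lazy initialization so only one BatchIterator ever wraps the native stream. The sketch below uses simplified placeholder types, not the plugin's real classes:

```java
// Synchronizing iterator() guarantees a single BatchIterator per stream,
// so the native stream is never consumed by two iterators concurrently.
class DatafusionResultSketch {
    private final long streamHandle;
    private BatchIterator iteratorInstance;

    DatafusionResultSketch(long streamHandle) {
        this.streamHandle = streamHandle;
    }

    // Lazy, race-free initialization under the instance monitor.
    public synchronized BatchIterator iterator() {
        if (iteratorInstance == null) {
            iteratorInstance = new BatchIterator(streamHandle);
        }
        return iteratorInstance;
    }

    // Placeholder for the iterator that would consume the native stream.
    static final class BatchIterator {
        final long handle;
        BatchIterator(long handle) { this.handle = handle; }
    }
}
```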
Snapshot Leak

In setLatestSnapshot, the new snapshot's reference count is never incremented before it is stored. If the caller passes a snapshot with refcount=1 (construction ref) and then calls setLatestSnapshot again, the engine will call decRef() on the previous snapshot but never incRef() on the new one, meaning the engine does not own a reference to the snapshot it holds. A concurrent acquireReader could then observe a snapshot that has already been freed.

public void setLatestSnapshot(CatalogSnapshot snapshot) {
    CatalogSnapshot prev = this.latestSnapshot;
    this.latestSnapshot = snapshot;
    if (prev != null) {
        prev.decRef();
    }
}
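A balanced-ownership sketch of the fix, taking the engine's own reference before releasing the previous one. The refcount stub below stands in for the real CatalogSnapshot; only the counting matters here:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Minimal stub of a refcounted snapshot.
class SnapshotStub {
    final AtomicInteger refs = new AtomicInteger(1); // construction ref
    void incRef() { refs.incrementAndGet(); }
    void decRef() { refs.decrementAndGet(); }
}

class EngineSketch {
    private SnapshotStub latestSnapshot;

    // The engine incRefs the incoming snapshot before releasing the previous
    // one, so the snapshot it holds can never be freed underneath a
    // concurrent acquireReader.
    public synchronized void setLatestSnapshot(SnapshotStub snapshot) {
        if (snapshot != null) {
            snapshot.incRef(); // engine-owned reference
        }
        SnapshotStub prev = this.latestSnapshot;
        this.latestSnapshot = snapshot;
        if (prev != null) {
            prev.decRef(); // drop the engine's reference to the old snapshot
        }
    }
}
```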


github-actions bot commented Mar 10, 2026

PR Code Suggestions ✨

Latest suggestions up to 581daca

Explore these optional code suggestions:

Category | Suggestion | Impact
Possible issue
Fix swapped JNI arguments in stream iteration

The arguments to NativeBridge.streamNext appear to be swapped: the method signature
is streamNext(long runtimePtr, long streamPtr), but here streamHandle.getStreamPtr()
is passed as runtimePtr and streamHandle.getPointer() as streamPtr. This will cause
incorrect JNI calls and likely a crash or wrong results.

sandbox/plugins/analytics-backend-datafusion/src/main/java/org/opensearch/be/datafusion/DatafusionResultStream.java [70-77]

 @Override
 public boolean hasNext() {
     if (hasNext == null) {
-        long arrowArrayAddr = NativeBridge.streamNext(streamHandle.getStreamPtr(), streamHandle.getPointer());
+        long arrowArrayAddr = NativeBridge.streamNext(streamHandle.getPointer(), streamHandle.getStreamPtr());
         hasNext = arrowArrayAddr != 0;
         // TODO: if hasNext, import ArrowArray into VectorSchemaRoot and cache for next()
     }
     return hasNext;
 }
Suggestion importance[1-10]: 8


Why: The NativeBridge.streamNext signature is streamNext(long runtimePtr, long streamPtr), but the call passes streamHandle.getStreamPtr() as the first argument and streamHandle.getPointer() as the second, which are swapped. This would cause incorrect JNI behavior or a crash at runtime.

Medium
Guard against null on snapshot removal

If the snapshot is not present in the map, readers.remove(catalogSnapshot) returns
null, causing a NullPointerException. Add a null check before calling close().

sandbox/plugins/analytics-backend-datafusion/src/main/java/org/opensearch/be/datafusion/DatafusionReaderManager.java [57-60]

 @Override
 public void onDeleted(CatalogSnapshot catalogSnapshot) throws IOException {
-    readers.remove(catalogSnapshot).close();
+    DatafusionReader reader = readers.remove(catalogSnapshot);
+    if (reader != null) {
+        reader.close();
+    }
 }
Suggestion importance[1-10]: 7


Why: If readers.remove(catalogSnapshot) returns null (snapshot not in map), calling .close() on it will throw a NullPointerException. The fix is straightforward and prevents a real runtime crash.

Medium
Guard against null index metadata lookup

clusterService.state().metadata().index(indexName) can return null if the index does
not exist in cluster metadata, causing a NullPointerException on .getIndex(). Add a
null check before dereferencing the result.

sandbox/plugins/analytics-engine/src/main/java/org/opensearch/analytics/exec/DefaultPlanExecutor.java [116-122]

 private IndexShard resolveShard(String indexName) {
-    IndexService indexService = indicesService.indexService(clusterService.state().metadata().index(indexName).getIndex());
+    var indexMetadata = clusterService.state().metadata().index(indexName);
+    if (indexMetadata == null) throw new IllegalStateException("Index [" + indexName + "] not found in cluster metadata");
+    IndexService indexService = indicesService.indexService(indexMetadata.getIndex());
     if (indexService == null) throw new IllegalStateException("Index [" + indexName + "] not on this node");
     Set<Integer> shardIds = indexService.shardIds();
     if (shardIds.isEmpty()) throw new IllegalStateException("No shards for [" + indexName + "]");
     return indexService.getShardOrNull(shardIds.iterator().next());
 }
Suggestion importance[1-10]: 7


Why: clusterService.state().metadata().index(indexName) can return null if the index doesn't exist, causing a NullPointerException on .getIndex(). The fix adds a proper null check with a descriptive error message before dereferencing.

Medium
Fix potential native resource leak in close

The doClose() method uses ptr (the base handle pointer) to close the stream, but the
class also holds a separate streamPtr field. If streamPtr represents a distinct
native resource (e.g., the actual stream pointer), it should be closed using
streamPtr instead of or in addition to ptr, otherwise the native stream resource may
be leaked.

sandbox/plugins/analytics-backend-datafusion/src/main/java/org/opensearch/be/datafusion/jni/StreamHandle.java [37-39]

 @Override
 protected void doClose() {
-    NativeBridge.streamClose(ptr);
+    NativeBridge.streamClose(streamPtr);
 }
Suggestion importance[1-10]: 7


Why: The doClose() method uses ptr (the base handle pointer) instead of streamPtr to close the stream. If streamPtr is a distinct native resource, this could cause a resource leak. The suggestion is plausible but depends on the semantics of NativeBridge.streamClose and NativeHandle.

Medium
Avoid strong self-reference in Cleaner registration

Passing this::doClose as a method reference to the CleanupAction creates a strong
reference to this inside the cleaner, preventing the object from ever being garbage
collected and thus the cleaner will never fire. The CleanupAction should capture
only the data needed to perform cleanup (e.g., the pointer) and call the native
release directly, without holding a reference to the enclosing NativeHandle
instance.

sandbox/libs/analytics-framework/src/main/java/org/opensearch/analytics/backend/jni/NativeHandle.java [41]

+// CleanupAction should not capture 'this' — subclasses must provide a static or standalone Runnable
+// that only captures primitive/independent state needed for cleanup.
+// Example pattern:
+// this.cleanable = CLEANER.register(this, new CleanupAction(ptr, DatafusionNativeBridge::closeHandle));
 this.cleanable = CLEANER.register(this, new CleanupAction(ptr, this::doClose));
Suggestion importance[1-10]: 6


Why: Passing this::doClose to CleanupAction creates a strong reference to the NativeHandle instance inside the cleaner, preventing garbage collection and defeating the purpose of the Cleaner. However, the improved_code is essentially the same as existing_code with only a comment added, so the fix is not fully demonstrated.

Low
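A self-contained sketch of the pattern this suggestion asks for: the cleanup state is a static nested class that captures only the primitive pointer and a flag, never the enclosing handle, so the handle can become phantom-reachable and the cleaner can actually fire. The native call site is shown only as a comment, since NativeBridge is not available here:

```java
import java.lang.ref.Cleaner;
import java.util.concurrent.atomic.AtomicBoolean;

// Cleaner-safe native handle sketch: CleanupAction holds no reference to
// the NativeHandleSketch instance.
class NativeHandleSketch implements AutoCloseable {
    private static final Cleaner CLEANER = Cleaner.create();

    private final CleanupAction action;
    private final Cleaner.Cleanable cleanable;

    NativeHandleSketch(long ptr) {
        this.action = new CleanupAction(ptr);
        this.cleanable = CLEANER.register(this, action); // action does not capture 'this'
    }

    @Override
    public void close() {
        cleanable.clean(); // idempotent: runs the action at most once
    }

    boolean isReleased() {
        return action.released.get();
    }

    private static final class CleanupAction implements Runnable {
        final long ptr;
        final AtomicBoolean released = new AtomicBoolean(false);

        CleanupAction(long ptr) {
            this.ptr = ptr;
        }

        @Override
        public void run() {
            if (released.compareAndSet(false, true)) {
                // A real implementation would release the native resource here,
                // e.g. NativeBridge.streamClose(ptr).
            }
        }
    }
}
```

Explicit close() via try-with-resources remains the primary path; the Cleaner is only a safety net for handles that escape it.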
Prevent silent resource leaks in close

The default close() implementation is a no-op, which means any implementing class
that holds native or I/O resources and forgets to override close() will silently
leak those resources. Since SegmentCollector is documented to require
try-with-resources for cleanup, the default should either throw an
UnsupportedOperationException or be removed to force implementors to provide a real
implementation.

server/src/main/java/org/opensearch/index/engine/exec/SegmentCollector.java [35-36]

 @Override
-default void close() {}
+default void close() throws IOException {
+    throw new UnsupportedOperationException("SegmentCollector implementations must override close()");
+}
Suggestion importance[1-10]: 4


Why: The no-op default close() could silently leak resources, but throwing UnsupportedOperationException by default is a breaking design choice for an interface. A better approach might be to simply remove the default, but the suggestion has merit as a design concern.

Low
General
Fix inconsistent field visibility and mutability

The task field is package-private (no access modifier) while tableName and reader
are private final. This inconsistency could allow unintended mutation of task from
within the same package. It should be declared private final to match the
immutability contract of the other fields.

sandbox/libs/analytics-framework/src/main/java/org/opensearch/analytics/backend/ExecutionContext.java [24]

-SearchShardTask task;
+private final SearchShardTask task;
Suggestion importance[1-10]: 6


Why: The task field is package-private and mutable while the other fields are private final, creating an inconsistency that could allow unintended mutation from within the same package. Making it private final aligns with the immutability contract of the class.

Low
Improve type safety of reader retrieval

The getReader method returns a raw Object, losing all type safety and forcing
callers to perform unchecked casts. This is especially risky in a public abstract
API. Consider using a generic type parameter on the method or the class to preserve
type safety.

server/src/main/java/org/opensearch/index/engine/exec/CatalogSnapshot.java [138]

-public abstract Object getReader(DataFormat dataFormat);
+public abstract <T> T getReader(DataFormat dataFormat);
Suggestion importance[1-10]: 5


Why: Returning raw Object from getReader forces callers to perform unchecked casts, reducing type safety. Using a generic return type <T> T getReader(DataFormat dataFormat) is a valid improvement, though it still requires unchecked casts internally.

Low

Previous suggestions

Suggestions up to commit ba14715
Category | Suggestion | Impact
Possible issue
Fix swapped JNI method arguments

The NativeBridge.streamNext call passes arguments in the wrong order — the method
signature is streamNext(long runtimePtr, long streamPtr), but here
streamHandle.getStreamPtr() is passed as the first argument and
streamHandle.getPointer() as the second. This will cause incorrect JNI behavior or
crashes.

sandbox/plugins/analytics-backend-datafusion/src/main/java/org/opensearch/be/datafusion/DatafusionResultStream.java [70-77]

 @Override
 public boolean hasNext() {
     if (hasNext == null) {
-        long arrowArrayAddr = NativeBridge.streamNext(streamHandle.getStreamPtr(), streamHandle.getPointer());
+        long arrowArrayAddr = NativeBridge.streamNext(streamHandle.getPointer(), streamHandle.getStreamPtr());
         hasNext = arrowArrayAddr != 0;
         // TODO: if hasNext, import ArrowArray into VectorSchemaRoot and cache for next()
     }
     return hasNext;
 }
Suggestion importance[1-10]: 8


Why: The NativeBridge.streamNext signature is streamNext(long runtimePtr, long streamPtr), but the call passes streamHandle.getStreamPtr() first and streamHandle.getPointer() second, which are swapped. This is a correctness bug that would cause incorrect JNI behavior or crashes at runtime.

Medium
Fix wrong pointer used in native stream close

The doClose() method uses ptr (the base handle pointer) to close the stream, but
StreamHandle has a separate streamPtr field specifically for the native stream.
Using ptr instead of streamPtr may leak the native stream resource or close the
wrong handle. The streamPtr should be used here to properly release the stream.

sandbox/plugins/analytics-backend-datafusion/src/main/java/org/opensearch/be/datafusion/jni/StreamHandle.java [37-39]

 @Override
 protected void doClose() {
-    NativeBridge.streamClose(ptr);
+    NativeBridge.streamClose(streamPtr);
 }
Suggestion importance[1-10]: 8


Why: The doClose() method uses ptr (the base handle pointer) instead of streamPtr to close the native stream. This could leak the native stream resource or close the wrong handle, which is a real bug given that StreamHandle explicitly holds a separate streamPtr field for the stream pointer.

Medium
Guard against null on snapshot removal

If readers.remove(catalogSnapshot) returns null (e.g., the snapshot was never
registered), calling .close() on it will throw a NullPointerException. Add a null
check before calling close().

sandbox/plugins/analytics-backend-datafusion/src/main/java/org/opensearch/be/datafusion/DatafusionReaderManager.java [57-60]

 @Override
 public void onDeleted(CatalogSnapshot catalogSnapshot) throws IOException {
-    readers.remove(catalogSnapshot).close();
+    DatafusionReader reader = readers.remove(catalogSnapshot);
+    if (reader != null) {
+        reader.close();
+    }
 }
Suggestion importance[1-10]: 7


Why: If readers.remove(catalogSnapshot) returns null, calling .close() on it will throw a NullPointerException. The fix is straightforward and prevents a real runtime crash, similar to how MockReaderManager.onDeleted in the test file already handles this correctly with a null check.

Medium
Avoid capturing instance reference in Cleaner action

The CleanupAction is registered with the Cleaner using this::doClose as the
Runnable, which captures a reference to the enclosing NativeHandle instance. This
prevents the NativeHandle from being garbage collected, defeating the purpose of the
Cleaner. The CleanupAction should capture only the primitive ptr and call the native
release directly, not via an instance method reference.

sandbox/libs/analytics-framework/src/main/java/org/opensearch/analytics/backend/jni/NativeHandle.java [82-95]

 private static final class CleanupAction implements Runnable {
     private final long ptr;
-    private final Runnable doClose;
 
-    CleanupAction(long ptr, Runnable doClose) {
+    CleanupAction(long ptr) {
         this.ptr = ptr;
-        this.doClose = doClose;
     }
 
     @Override
     public void run() {
-        doClose.run();
+        // Subclasses should register a static or standalone Runnable that
+        // calls the native release directly, e.g. NativeBridge.closeReader(ptr)
     }
 }
Suggestion importance[1-10]: 7


Why: Passing this::doClose to CleanupAction captures a reference to the NativeHandle instance, preventing it from being garbage collected and defeating the purpose of the Cleaner. The CleanupAction should only hold primitive data or static references to avoid this memory leak pattern. However, the improved_code removes the actual cleanup logic without providing a concrete replacement, making it incomplete.

Medium
Remove no-op default close to prevent resource leaks

The default close() implementation is a no-op, which means implementations that hold
native or I/O resources will silently leak them if they forget to override close().
Since the Javadoc explicitly states callers should use try-with-resources to ensure
cleanup, the default should either be removed (forcing implementations to provide
their own) or throw an UnsupportedOperationException to make the contract clear.

server/src/main/java/org/opensearch/index/engine/exec/SegmentCollector.java [35-36]

 @Override
-default void close() {}
+void close() throws IOException;
Suggestion importance[1-10]: 5


Why: The no-op close() default could silently swallow resource cleanup for implementations holding native resources. However, removing the default forces all implementors to provide their own close(), which is a breaking API change that may not always be desirable for simple implementations.

Low
General
Fix inconsistent field visibility and mutability

The task field is package-private (no access modifier) while tableName and reader
are private final. This inconsistency exposes task to direct mutation from within
the same package, breaking encapsulation. It should be declared private final to
match the other fields.

sandbox/libs/analytics-framework/src/main/java/org/opensearch/analytics/backend/ExecutionContext.java [24]

-SearchShardTask task;
+private final SearchShardTask task;
Suggestion importance[1-10]: 6


Why: The task field is package-private and mutable while tableName and reader are private final, breaking encapsulation consistency. Making it private final aligns with the other fields and prevents unintended mutation from within the package.

Low
Guard against null plugin lookup result

If plugin is null (i.e., backEnds.get(backendName) returns null because the selected
backend name doesn't match any registered plugin), a NullPointerException will be
thrown inside the try block with a misleading error. Add a null check for plugin
before using it.

sandbox/plugins/analytics-engine/src/main/java/org/opensearch/analytics/exec/DefaultPlanExecutor.java [76-99]

 AnalyticsSearchBackendPlugin plugin = backEnds.get(backendName);
+if (plugin == null) {
+    throw new IllegalStateException("No plugin registered with name [" + backendName + "]");
+}
 SearchShardTask task = null; // TODO : init task
 List<Object[]> rows = new ArrayList<>();
 try (DataFormatAwareEngine.DataFormatAwareReader dataFormatAwareReader = dataFormatAwareEngine.acquireReader()) {
     ExecutionContext ctx = new ExecutionContext(tableName, task, dataFormatAwareReader);
     try (SearchExecEngine engine = plugin.searcher(ctx)) {
         ...
     }
 } catch (Exception e) {
     throw new RuntimeException("Execution failed for [" + plugin.name() + "]", e);
 }
Suggestion importance[1-10]: 5


Why: If backEnds.get(backendName) returns null, a NullPointerException would be thrown inside the try block with a misleading error message. Adding an explicit null check improves error clarity, though in practice selectBackEnd() returns the first registered plugin's name so the mismatch scenario is unlikely.

Low
Replace raw Object return type with generic type

The getReader method returns a raw Object, losing all type safety and forcing
callers to perform unchecked casts. Given that EngineReaderManager and other typed
reader abstractions already exist in this PR, the return type should use a generic
type parameter or a more specific interface to preserve type safety.

server/src/main/java/org/opensearch/index/engine/exec/CatalogSnapshot.java [138]

-public abstract Object getReader(DataFormat dataFormat);
+public abstract <T> T getReader(DataFormat dataFormat);
Suggestion importance[1-10]: 5


Why: Returning raw Object from getReader loses type safety and forces callers to perform unchecked casts. Using a generic type parameter <T> T getReader(DataFormat dataFormat) improves type safety, though callers would still need to specify the type at the call site.

Low
Suggestions up to commit e02e3f2
Category | Suggestion | Impact
Possible issue
Guard against null dereference on snapshot removal

If readers.remove(catalogSnapshot) returns null (the snapshot was never registered),
calling .close() on it will throw a NullPointerException. Add a null check before
closing, similar to the pattern used in MockReaderManager.onDeleted.

sandbox/plugins/analytics-backend-datafusion/src/main/java/org/opensearch/be/datafusion/DatafusionReaderManager.java [57-60]

 @Override
 public void onDeleted(CatalogSnapshot catalogSnapshot) throws IOException {
-    readers.remove(catalogSnapshot).close();
+    DatafusionReader reader = readers.remove(catalogSnapshot);
+    if (reader != null) {
+        reader.close();
+    }
 }
Suggestion importance[1-10]: 7


Why: If readers.remove(catalogSnapshot) returns null, calling .close() will throw a NullPointerException. This is a real bug that could cause crashes in production when onDeleted is called for an unregistered snapshot.

Medium
Cache consumed native batch address for retrieval

The NativeBridge.streamNext call advances the native stream and returns an Arrow
array address, but if hasNext is true the address is discarded and never cached.
When next() is subsequently called it cannot retrieve the already-consumed batch,
leading to data loss or an incorrect UnsupportedOperationException. The Arrow array
address must be stored when hasNext is set to true so next() can use it.

sandbox/plugins/analytics-backend-datafusion/src/main/java/org/opensearch/be/datafusion/DatafusionResultStream.java [70-77]

+private long pendingArrowArrayAddr = 0L;
+
 @Override
 public boolean hasNext() {
     if (hasNext == null) {
         long arrowArrayAddr = NativeBridge.streamNext(streamHandle.getStreamPtr(), streamHandle.getPointer());
         hasNext = arrowArrayAddr != 0;
-        // TODO: if hasNext, import ArrowArray into VectorSchemaRoot and cache for next()
+        if (hasNext) {
+            pendingArrowArrayAddr = arrowArrayAddr;
+        }
     }
     return hasNext;
 }
Suggestion importance[1-10]: 7


Why: The hasNext() method advances the native stream and discards the returned Arrow array address, making it impossible for next() to retrieve the batch. This is a real data loss bug, though the next() method currently throws UnsupportedOperationException anyway (marked as TODO), so the immediate impact is limited.

Medium
Null-check index metadata before dereferencing

clusterService.state().metadata().index(indexName) can return null if the index does
not exist in cluster metadata, causing a NullPointerException before the
indexService == null guard is reached. Add a null check on the IndexMetadata result
before calling getIndex().

sandbox/plugins/analytics-engine/src/main/java/org/opensearch/analytics/exec/DefaultPlanExecutor.java [116-122]

 private IndexShard resolveShard(String indexName) {
-    IndexService indexService = indicesService.indexService(clusterService.state().metadata().index(indexName).getIndex());
+    org.opensearch.cluster.metadata.IndexMetadata indexMetadata = clusterService.state().metadata().index(indexName);
+    if (indexMetadata == null) throw new IllegalStateException("Index [" + indexName + "] not found in cluster metadata");
+    IndexService indexService = indicesService.indexService(indexMetadata.getIndex());
     if (indexService == null) throw new IllegalStateException("Index [" + indexName + "] not on this node");
     Set<Integer> shardIds = indexService.shardIds();
     if (shardIds.isEmpty()) throw new IllegalStateException("No shards for [" + indexName + "]");
     return indexService.getShardOrNull(shardIds.iterator().next());
 }
Suggestion importance[1-10]: 7


Why: clusterService.state().metadata().index(indexName) can return null if the index doesn't exist, causing a NullPointerException before the indexService == null guard. Adding a null check on IndexMetadata provides a clearer error message and prevents the NPE.

Medium
Fix NOOP methods to declare checked exceptions

The NOOP implementation's methods do not declare throws IOException, but the
interface methods do. This will cause a compilation error because the overriding
methods must be compatible with the interface's declared checked exceptions. The
NOOP implementations are fine without throwing, but they must still declare throws
IOException to properly override the interface methods.

server/src/main/java/org/opensearch/index/engine/exec/CatalogSnapshotLifecycleListener.java [27-36]

 CatalogSnapshotLifecycleListener NOOP = new CatalogSnapshotLifecycleListener() {
     @Override
-    public void beforeRefresh() {}
+    public void beforeRefresh() throws IOException {}
 
     @Override
-    public void afterRefresh(boolean didRefresh, CatalogSnapshot catalogSnapshot) {}
+    public void afterRefresh(boolean didRefresh, CatalogSnapshot catalogSnapshot) throws IOException {}
 
     @Override
-    public void onDeleted(CatalogSnapshot catalogSnapshot) {}
+    public void onDeleted(CatalogSnapshot catalogSnapshot) throws IOException {}
 };
Suggestion importance[1-10]: 7

__

Why: In Java, overriding methods can declare fewer checked exceptions than the interface, so omitting throws IOException in the NOOP implementation is actually valid and will compile correctly. However, the suggestion is technically incorrect about causing a compilation error — it's legal to not declare the exception in the override. The suggestion is misleading but the improved code is also valid.

Medium
Use correct native pointer when closing stream

The doClose() method uses ptr (the base class handle pointer) to close the stream,
but this handle wraps two distinct native pointers: ptr (the handle pointer) and
streamPtr (the stream pointer). The stream should be closed using streamPtr to avoid
leaking the native stream resource or closing the wrong native object.

sandbox/plugins/analytics-backend-datafusion/src/main/java/org/opensearch/be/datafusion/jni/StreamHandle.java [37-39]

 @Override
 protected void doClose() {
-    NativeBridge.streamClose(ptr);
+    NativeBridge.streamClose(streamPtr);
 }
Suggestion importance[1-10]: 7

__

Why: The doClose() method uses ptr (the base handle pointer) instead of streamPtr to close the native stream, which could leak the native stream resource. Using streamPtr is semantically more correct given the class wraps two distinct native pointers.

Medium
Avoid capturing this in Cleaner action

The CleanupAction captures this::doClose as a method reference, which holds a strong
reference back to the NativeHandle instance. This prevents the Cleaner from ever
observing the object as phantom-reachable, so the cleanup action will never fire
automatically — defeating the purpose of the Cleaner. Pass a standalone Runnable
(e.g., a lambda capturing only the primitive ptr) instead of a method reference on
this.

sandbox/libs/analytics-framework/src/main/java/org/opensearch/analytics/backend/jni/NativeHandle.java [36-42]

 protected NativeHandle(long ptr) {
     if (ptr == NULL_POINTER) {
         throw new IllegalArgumentException("Null native pointer");
     }
     this.ptr = ptr;
-    this.cleanable = CLEANER.register(this, new CleanupAction(ptr, this::doClose));
+    // Capture only the primitive ptr, not 'this', so the Cleaner can observe GC
+    this.cleanable = CLEANER.register(this, new CleanupAction(ptr, () -> releaseNative(ptr)));
 }
 
+/** Override in subclasses to release the native resource identified by {@code ptr}. */
+protected void releaseNative(long ptr) {
+    doClose();
+}
+
Suggestion importance[1-10]: 6

__

Why: Capturing this::doClose in the CleanupAction creates a strong reference back to the NativeHandle, preventing the Cleaner from ever triggering the cleanup automatically. However, the improved_code introduces a releaseNative method that still calls doClose(), which doesn't fully solve the problem since the lambda would still need to not reference this.

Low
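The pitfall described above (a cleanup action that strongly references `this`) can be sidestepped by moving all cleanup state into a static nested class registered with the Cleaner. A minimal sketch, assuming the native release only needs the primitive pointer; `HandleDemo` and the commented `NativeBridge.free` call are illustrative names, not the PR's actual API:

```java
import java.lang.ref.Cleaner;
import java.util.concurrent.atomic.AtomicBoolean;

// Sketch of the pattern: the Runnable holds only the primitive pointer,
// never the enclosing handle, so the Cleaner can observe the handle
// becoming phantom-reachable and run the action automatically.
public class HandleDemo implements AutoCloseable {
    private static final Cleaner CLEANER = Cleaner.create();

    // A static nested class cannot capture the enclosing instance.
    static final class State implements Runnable {
        final long ptr;
        final AtomicBoolean released = new AtomicBoolean(false);

        State(long ptr) {
            this.ptr = ptr;
        }

        @Override
        public void run() {
            // Idempotent: runs once whether triggered by close() or by GC.
            if (released.compareAndSet(false, true)) {
                // NativeBridge.free(ptr) would go here (hypothetical native call).
            }
        }
    }

    private final State state;
    private final Cleaner.Cleanable cleanable;

    public HandleDemo(long ptr) {
        this.state = new State(ptr);
        // No reference to 'this' is reachable from the registered Runnable.
        this.cleanable = CLEANER.register(this, state);
    }

    @Override
    public void close() {
        cleanable.clean(); // invokes State.run() at most once, synchronously
    }

    public boolean isReleased() {
        return state.released.get();
    }

    public static void main(String[] args) {
        HandleDemo handle = new HandleDemo(42L);
        handle.close();
        System.out.println(handle.isReleased()); // prints "true"
    }
}
```

Explicit `close()` remains the primary cleanup path; the Cleaner is only a GC-triggered safety net for handles that were never closed.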
General
Fix inconsistent field visibility and mutability

The task field is package-private while tableName and reader are private final. This
inconsistency exposes task to unintended mutation from within the same package,
breaking encapsulation. It should be declared private (and ideally final) to match
the other fields.

sandbox/libs/analytics-framework/src/main/java/org/opensearch/analytics/backend/ExecutionContext.java [24]

-SearchShardTask task;
+private final SearchShardTask task;
Suggestion importance[1-10]: 5

__

Why: The task field being package-private while other fields are private final is an encapsulation inconsistency. Making it private final improves consistency and prevents unintended mutation from within the package.

Low
Replace raw Object return type with typed reader

The getReader method returns a raw Object, losing all type safety. Since
DataFormatAwareEngine.DataFormatAwareReader is already used elsewhere in the
codebase as the reader abstraction, it should be used as the return type here to
provide compile-time type checking and avoid unsafe casts at call sites.

server/src/main/java/org/opensearch/index/engine/exec/CatalogSnapshot.java [138]

-public abstract Object getReader(DataFormat dataFormat);
+public abstract DataFormatAwareEngine.DataFormatAwareReader getReader(DataFormat dataFormat);
Suggestion importance[1-10]: 5

__

Why: Returning Object from getReader loses type safety and forces callers to perform unsafe casts. Using a more specific type like DataFormatAwareEngine.DataFormatAwareReader would improve type safety, though the exact return type depends on design constraints not fully visible in this diff.

Low
Suggestions up to commit 1914cd1
Category | Suggestion | Impact
Possible issue
Fix wrong native pointer used for stream close

doClose() uses ptr (the base handle pointer) to close the stream, but StreamHandle
has a separate streamPtr field that represents the native stream pointer. Closing
with ptr instead of streamPtr will likely close the wrong native resource and leak
the stream, or cause undefined behavior in native code.

sandbox/plugins/analytics-backend-datafusion/src/main/java/org/opensearch/be/datafusion/jni/StreamHandle.java [31-33]

 @Override
 protected void doClose() {
-    NativeBridge.streamClose(ptr);
+    NativeBridge.streamClose(streamPtr);
 }
Suggestion importance[1-10]: 8

__

Why: doClose() uses ptr (the base handle pointer) instead of streamPtr to close the native stream, which could leak the stream resource or cause undefined behavior. This is a legitimate bug where the wrong native pointer is used.

Medium
Fix race condition in snapshot replacement

The setLatestSnapshot method has a race condition: between reading
this.latestSnapshot and writing the new value, another thread could call
setLatestSnapshot concurrently, causing the old snapshot to be decRef'd twice or the
new snapshot to be overwritten. Use synchronized or an AtomicReference with
compare-and-swap to make this operation thread-safe.

server/src/main/java/org/opensearch/index/engine/DataFormatAwareEngine.java [91-97]

-public void setLatestSnapshot(CatalogSnapshot snapshot) {
+public synchronized void setLatestSnapshot(CatalogSnapshot snapshot) {
     CatalogSnapshot prev = this.latestSnapshot;
     this.latestSnapshot = snapshot;
     if (prev != null) {
         prev.decRef();
     }
 }
Suggestion importance[1-10]: 7

__

Why: The setLatestSnapshot method reads this.latestSnapshot and then writes a new value non-atomically, which is a real race condition in a concurrent environment. Adding synchronized is a valid fix, though the field is already volatile which only ensures visibility, not atomicity of the read-modify-write sequence.

Medium
Guard against null or empty supported formats list

getSupportedFormats() can return null (as seen in
DataFusionPlugin.getSupportedFormats()) or an empty list, causing a
NullPointerException or IndexOutOfBoundsException. Add null and empty checks before
accessing the first element.

sandbox/plugins/analytics-engine/src/main/java/org/opensearch/analytics/exec/DefaultPlanExecutor.java [61-62]

 List<DataFormat> formats = plugin.getSupportedFormats();
+if (formats == null || formats.isEmpty()) {
+    throw new IllegalStateException("Plugin [" + plugin.name() + "] has no supported formats");
+}
 DataFormat format = formats.get(0);
Suggestion importance[1-10]: 7

__

Why: DataFusionPlugin.getSupportedFormats() explicitly returns null, so calling formats.get(0) in DefaultPlanExecutor will throw a NullPointerException. Adding a null/empty guard with a descriptive error message prevents a hard-to-diagnose runtime failure.

Medium
Prevent NullPointerException on missing snapshot deletion

readers.remove(catalogSnapshot) can return null if the snapshot was never added or
was already removed, causing a NullPointerException. Add a null check before calling
close().

sandbox/plugins/analytics-backend-datafusion/src/main/java/org/opensearch/be/datafusion/DatafusionReaderManager.java [52-54]

 @Override
 public void onDeleted(CatalogSnapshot catalogSnapshot) throws IOException {
-    readers.remove(catalogSnapshot).close();
+    DatafusionReader reader = readers.remove(catalogSnapshot);
+    if (reader != null) {
+        reader.close();
+    }
 }
Suggestion importance[1-10]: 6

__

Why: readers.remove(catalogSnapshot) can return null if the snapshot was never registered or already removed, causing a NullPointerException when .close() is called. The fix is straightforward and prevents a real runtime error.

Low
Fix NOOP implementation missing checked exception declarations

The NOOP implementation's methods do not declare throws IOException, but the
interface methods do. This will cause a compilation error because the anonymous
class overrides must be compatible with the interface signatures. The overriding
methods should declare throws IOException to match the interface.

server/src/main/java/org/opensearch/index/engine/exec/CatalogSnapshotLifecycleListener.java [27-36]

 CatalogSnapshotLifecycleListener NOOP = new CatalogSnapshotLifecycleListener() {
     @Override
-    public void beforeRefresh() {}
+    public void beforeRefresh() throws IOException {}
 
     @Override
-    public void afterRefresh(boolean didRefresh, CatalogSnapshot catalogSnapshot) {}
+    public void afterRefresh(boolean didRefresh, CatalogSnapshot catalogSnapshot) throws IOException {}
 
     @Override
-    public void onDeleted(CatalogSnapshot catalogSnapshot) {}
+    public void onDeleted(CatalogSnapshot catalogSnapshot) throws IOException {}
 };
Suggestion importance[1-10]: 2

__

Why: In Java, overriding methods are not required to declare checked exceptions that the interface declares — they can declare fewer or no checked exceptions. The NOOP implementation compiles correctly without throws IOException. This suggestion is technically incorrect.

Low
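The language rule at issue can be verified in a few lines: an override may legally declare fewer checked exceptions than the interface method it implements, so a NOOP implementation compiles without any `throws` clause. A standalone sketch (all names here are illustrative, not the PR's interfaces):

```java
import java.io.IOException;

// Demonstrates that an override may drop a checked exception declared by
// the interface: the NOOP-style anonymous class below compiles as written.
public class OverrideDemo {
    interface Listener {
        void beforeRefresh() throws IOException;
    }

    static String run() {
        // Legal: the override declares fewer checked exceptions than the interface.
        var noop = new Listener() {
            @Override
            public void beforeRefresh() {} // no "throws IOException" required
        };
        // Called through the anonymous type (via var), no try/catch is needed
        // either, because this beforeRefresh() declares no checked exceptions.
        noop.beforeRefresh();
        return "ok";
    }

    public static void main(String[] args) {
        System.out.println(run()); // prints "ok"
    }
}
```

Callers that hold the object through the interface type still have to handle `IOException`; only the override itself may narrow the `throws` clause.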
General
Prevent uncontrolled thread pool usage in IndexSearcher

IndexSearcher created with a DirectoryReader will use a default thread pool for
concurrent segment searches. In a server context, this can cause uncontrolled thread
usage. Consider using the single-threaded constructor new IndexSearcher(reader,
null) or passing an explicit executor to control concurrency.

sandbox/plugins/analytics-backend-lucene/src/main/java/org/opensearch/be/lucene/LuceneSourceContext.java [28-32]

 public LuceneSourceContext(Object query, DirectoryReader reader) {
     this.query = query;
     this.reader = reader;
-    this.searcher = new IndexSearcher(reader);
+    this.searcher = new IndexSearcher(reader, null);
 }
Suggestion importance[1-10]: 4

__

Why: Using new IndexSearcher(reader) may use a default executor for concurrent segment searches, which could cause uncontrolled thread usage in a server context. Passing null as the executor enforces single-threaded execution, though this is a minor optimization concern.

Low
Remove no-op default close to enforce resource cleanup

The default close() implementation is a no-op, which means implementations that hold
native or I/O resources will silently leak them if they forget to override close().
Since SegmentCollector is documented to require try-with-resources for cleanup,
there should be no default implementation — forcing implementors to explicitly
handle resource cleanup.

server/src/main/java/org/opensearch/index/engine/exec/SegmentCollector.java [35-36]

 @Override
-default void close() {}
+void close();
Suggestion importance[1-10]: 4

__

Why: The default no-op close() could silently mask resource leaks in implementations that hold native or I/O resources. Removing the default forces implementors to explicitly handle cleanup, which is safer given the documented try-with-resources requirement.

Low
Suggestions up to commit 7f5f3e6
Category | Suggestion | Impact
Possible issue
Fix race condition in snapshot replacement

The setLatestSnapshot method is not thread-safe. Between reading this.latestSnapshot
into prev and writing the new snapshot, another thread could call setLatestSnapshot
concurrently, causing the old snapshot to be decRef'd twice or the new snapshot to
be overwritten. Use an AtomicReference with getAndSet to make this operation atomic.

server/src/main/java/org/opensearch/index/engine/DataFormatAwareEngine.java [91-97]

 public void setLatestSnapshot(CatalogSnapshot snapshot) {
-    CatalogSnapshot prev = this.latestSnapshot;
-    this.latestSnapshot = snapshot;
+    CatalogSnapshot prev = ((AtomicReference<CatalogSnapshot>) latestSnapshotRef).getAndSet(snapshot);
     if (prev != null) {
         prev.decRef();
     }
 }
Suggestion importance[1-10]: 7

__

Why: The setLatestSnapshot method has a real race condition between reading latestSnapshot and writing the new value. However, the improved code references latestSnapshotRef which doesn't exist in the PR (the field is declared as volatile CatalogSnapshot latestSnapshot), making the improved code incorrect as-is. The issue is valid but the fix needs to also change the field declaration.

Medium
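Under the assumption that the `volatile` field can be changed to an `AtomicReference`, `getAndSet` makes the swap-and-decRef sequence atomic, so no previous snapshot can be decRef'd twice. A self-contained sketch with a stand-in ref-counted snapshot (the real `CatalogSnapshot` API is not reproduced here):

```java
import java.util.concurrent.atomic.AtomicReference;

// Sketch: AtomicReference.getAndSet guarantees each previous snapshot is
// observed (and decRef'd) by exactly one thread, even under concurrent calls.
public class SnapshotHolder {
    // Stand-in for a ref-counted CatalogSnapshot (hypothetical, simplified).
    public static class Snapshot {
        private int refs = 1;

        public synchronized void decRef() {
            refs--;
        }

        public synchronized int refCount() {
            return refs;
        }
    }

    private final AtomicReference<Snapshot> latest = new AtomicReference<>();

    public void setLatestSnapshot(Snapshot snapshot) {
        Snapshot prev = latest.getAndSet(snapshot); // atomic read-and-replace
        if (prev != null) {
            prev.decRef(); // only one caller ever sees a given prev value
        }
    }

    public Snapshot getLatestSnapshot() {
        return latest.get();
    }

    public static void main(String[] args) {
        SnapshotHolder holder = new SnapshotHolder();
        Snapshot a = new Snapshot();
        holder.setLatestSnapshot(a);
        holder.setLatestSnapshot(new Snapshot()); // 'a' is decRef'd exactly once
        System.out.println(a.refCount()); // prints "0"
    }
}
```

Marking the field `synchronized` as the suggestion proposes also works; `getAndSet` just avoids holding a lock across the swap.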
Prevent NullPointerException on missing snapshot deletion

If readers.remove(catalogSnapshot) returns null (e.g., the snapshot was never added
or already removed), calling .close() on it will throw a NullPointerException. Add a
null check before calling close().

sandbox/plugins/analytics-backend-datafusion/src/main/java/org/opensearch/be/datafusion/DatafusionReaderManager.java [52-54]

 @Override
 public void onDeleted(CatalogSnapshot catalogSnapshot) throws IOException {
-    readers.remove(catalogSnapshot).close();
+    DatafusionReader reader = readers.remove(catalogSnapshot);
+    if (reader != null) {
+        reader.close();
+    }
 }
Suggestion importance[1-10]: 7

__

Why: The onDeleted method calls .close() directly on the result of readers.remove() without a null check, which will throw a NullPointerException if the snapshot was never added or already removed. The fix is straightforward and prevents a real NPE bug.

Medium
Fix incorrect pointer used in native stream close

The doClose() method uses ptr (the base handle pointer) to close the stream, but
StreamHandle has a separate streamPtr field that represents the native stream
pointer. If the stream and the base handle are distinct native resources, streamPtr
should be used here instead of ptr to avoid a resource leak or incorrect pointer
usage.

sandbox/plugins/analytics-backend-datafusion/src/main/java/org/opensearch/be/datafusion/jni/StreamHandle.java [31-33]

 @Override
 protected void doClose() {
-    NativeBridge.streamClose(ptr);
+    NativeBridge.streamClose(streamPtr);
 }
Suggestion importance[1-10]: 7

__

Why: StreamHandle holds two pointers: ptr (base handle) and streamPtr (native stream). Using ptr in streamClose may close the wrong resource or cause a leak if they are distinct native objects. However, without knowing the native API contract, this could also be intentional.

Medium
Ensure search context is always closed

The engineContext created by searchEngine.createContext(...) is never closed, which
may leak resources (e.g., native memory, file handles). The context should be closed
in a try-with-resources or finally block after execution.

sandbox/plugins/analytics-engine/src/main/java/org/opensearch/analytics/exec/DefaultPlanExecutor.java [67-79]

 try (DataFormatAwareEngine.DataFormatAwareReader dataFormatAwareReader = dataFormatAwareEngine.acquireReader()) {
     Object reader = dataFormatAwareReader.getReader(format);
     SearchExecEngine searchEngine = dataFormatAwareEngine.getSearchExecEngine(format);
     Object plan = searchEngine.convertFragment(logicalFragment);
     var engineContext = searchEngine.createContext(reader, plan, null, null, null);
-    Object result = searchEngine.execute(engineContext);
-
-    // TODO: consume result stream into rows
-    logger.info("[DefaultPlanExecutor] Executed via [{}]", plugin.name());
-    return new ArrayList<>();
+    try {
+        Object result = searchEngine.execute(engineContext);
+        // TODO: consume result stream into rows
+        logger.info("[DefaultPlanExecutor] Executed via [{}]", plugin.name());
+        return new ArrayList<>();
+    } finally {
+        engineContext.close();
+    }
 }
Suggestion importance[1-10]: 6

__

Why: The engineContext is never closed, which could leak resources like native memory or file handles. The suggested fix wraps execution in a try-finally to ensure engineContext.close() is always called, which is a valid resource management improvement.

Low
Prevent silent resource leaks in default close method

The default close() implementation is a no-op, which means implementations that hold
native or I/O resources will silently leak them if they forget to override close().
Since the Javadoc says "Callers should use try-with-resources to ensure cleanup,"
the default should either throw an UnsupportedOperationException or be removed to
force implementors to provide a real cleanup.

server/src/main/java/org/opensearch/index/engine/exec/SegmentCollector.java [35-36]

 @Override
-default void close() {}
+default void close() {
+    throw new UnsupportedOperationException("SegmentCollector implementations must override close()");
+}
Suggestion importance[1-10]: 4

__

Why: A no-op default close() can silently swallow resource cleanup for implementations that forget to override it. However, throwing UnsupportedOperationException by default is an unusual pattern for Closeable and may break existing or future implementations that legitimately have nothing to close.

Low
General
Improve type safety of reader retrieval method

The getReader method returns a raw Object, which loses type safety and forces
callers to perform unchecked casts. Consider using a generic type parameter or a
bounded return type to make the API safer and more expressive.

server/src/main/java/org/opensearch/index/engine/exec/CatalogSnapshot.java [138]

-public abstract Object getReader(DataFormat dataFormat);
+public abstract <T> T getReader(DataFormat dataFormat);
Suggestion importance[1-10]: 5

__

Why: Returning raw Object from getReader forces callers to perform unchecked casts, reducing type safety. Using a generic return type <T> T getReader(DataFormat dataFormat) is a reasonable improvement, though it still requires unchecked casts internally.

Low
Return defensive copy to protect mutable internal state

Returning the internal byte[] array directly exposes the mutable internal state of
the object, allowing callers to modify the array contents. Return a defensive copy
to preserve immutability of the stored plan bytes.

sandbox/plugins/analytics-backend-datafusion/src/main/java/org/opensearch/be/datafusion/DatafusionQuery.java [29-31]

 public byte[] getSubstraitBytes() {
-    return substraitBytes;
+    return substraitBytes == null ? null : substraitBytes.clone();
 }
Suggestion importance[1-10]: 4

__

Why: Returning the internal byte[] directly exposes mutable state, which is a valid concern for immutability. However, for performance-sensitive serialized plan bytes, defensive copying may have overhead, and the impact depends on usage patterns.

Low
Suggestions up to commit 338bc6e

@github-actions
Contributor

❌ Gradle check result for 4aad14e: FAILURE

Please examine the workflow log, locate, and copy-paste the failure(s) below, then iterate to green. Is the failure a flaky test unrelated to your change?

@github-actions
Contributor

PR Code Analyzer ❗

AI-powered 'Code-Diff-Analyzer' found issues on commit 3e4d286.

Path | Line | Severity | Description
server/src/main/java/org/opensearch/search/internal/ContextIndexSearcher.java | 72 | high | Core search infrastructure import redirected from 'org.opensearch.lucene.util.CombinedBitSet' to 'org.opensearch.be.lucene.util.CombinedBitSet' (sandbox plugin namespace). CombinedBitSet is used in document-level filtering within ContextIndexSearcher, which is a security-sensitive component used by security plugins (e.g., document-level security). The replacement class is not defined anywhere in this diff, meaning its implementation cannot be verified. If the replacement silently returns all bits set or alters filter combination logic, it could bypass document-level security access controls. Redirecting a core search utility to an unverified sandbox implementation without showing the replacement source is a suspicious pattern.
server/src/main/java/org/opensearch/index/engine/exec/IndexFileDeleter.java | 96 | low | Error reporting in notifyFilesAdded and notifyFilesDeleted uses 'System.err.println' directly instead of the log4j logging framework used throughout the codebase. This bypasses the standard structured logging/auditing pipeline. While not clearly malicious, silently swallowing file-tracking failures and printing to stderr (which may not be captured in audit logs) is anomalous for production-grade OpenSearch code and could hide evidence of file manipulation errors.
server/src/test/java/org/opensearch/index/engine/EngineIntegrationTests.java | 1 | low | Test files (EngineIntegrationTests.java and SearchExecEngineTests.java) reference APIs that are inconsistent with the production code added in this diff — e.g., CompositeEngine(List, null) constructor signature, EngineSearcherSupplier, EngineReaderManager.acquire()/release(), getReferenceManager(), acquireSearcherSupplier(). None of these match the SearchExecEngine or CompositeEngine interfaces defined in this PR. These tests would fail to compile, suggesting they may be intentional noise to obscure the actual functionality being introduced, or are artifacts of a different design that were not properly cleaned up.

The table above displays the top 10 most important findings.

Total: 3 | Critical: 0 | High: 1 | Medium: 0 | Low: 2


Pull Requests Author(s): Please update your Pull Request according to the report above.

Repository Maintainer(s): You can bypass diff analyzer by adding label skip-diff-analyzer after reviewing the changes carefully, then re-run failed actions. To re-enable the analyzer, remove the label, then re-run all actions.


⚠️ Note: The Code-Diff-Analyzer helps protect against potentially harmful code patterns. Please ensure you have thoroughly reviewed the changes beforehand.

Thanks.

@github-actions
Contributor

PR Code Analyzer ❗

AI-powered 'Code-Diff-Analyzer' found issues on commit 7ef7f9c.

Path | Line | Severity | Description
server/src/main/java/org/opensearch/search/internal/ContextIndexSearcher.java | 72 | medium | Import of CombinedBitSet redirected from core module org.opensearch.lucene.util to sandbox plugin package org.opensearch.be.lucene.util. ContextIndexSearcher is in the critical search hot path; the replacement class implementation is not present in this diff and cannot be verified, creating an unreviewed dependency on a sandbox plugin within core server code.
server/src/test/java/org/opensearch/index/engine/EngineIntegrationTests.java | 1 | medium | Test file references APIs that do not match any interface defined in this diff: CompositeEngine(List, null) constructor, getReadEngines(), getPrimaryReadEngine(), EngineSearcherSupplier, and EngineReaderManager.acquire/release. The SearchExecEngine.createContext signature also differs from the defined interface. Suggests these tests target a hidden implementation not included in this PR.
server/src/main/java/org/opensearch/index/engine/exec/IndexFileDeleter.java | 97 | low | Error handling uses System.err.println instead of the Log4j logger used everywhere else in the codebase, bypassing structured logging and making file deletion notification failures harder to audit in production.
sandbox/plugins/analytics-backend-datafusion/src/main/java/org/opensearch/be/datafusion/DataFusionService.java | 55 | low | System.loadLibrary loads native library opensearch_datafusion_jni whose binary is not included in this diff. While expected for JNI integration, the native binary executes outside JVM security controls and its contents are unreviewed here.

The table above displays the top 10 most important findings.

Total: 4 | Critical: 0 | High: 0 | Medium: 2 | Low: 2



@github-actions
Contributor

PR Code Analyzer ❗

AI-powered 'Code-Diff-Analyzer' found issues on commit ade5591.

Path | Line | Severity | Description
server/src/main/java/org/opensearch/search/internal/ContextIndexSearcher.java | 72 | medium | Core server class `ContextIndexSearcher` has its `CombinedBitSet` import redirected from `org.opensearch.lucene.util.CombinedBitSet` to `org.opensearch.be.lucene.util.CombinedBitSet`. The target package (`org.opensearch.be.lucene`) is a new plugin package introduced in this PR, and the `util.CombinedBitSet` class is not defined anywhere in this diff. This is architecturally inverted (server depending on plugin code) and unrelated to the stated PR purpose of adding a DataFusion analytics backend. The change could redirect core search bitset operations through plugin-controlled code.
sandbox/plugins/analytics-backend-datafusion/src/main/java/org/opensearch/be/datafusion/DataFusionService.java | 57 | low | `System.loadLibrary(NATIVE_LIBRARY_NAME)` loads a native library named `opensearch_datafusion_jni` at runtime with no integrity check (no hash verification, no signature check). While JNI loading is a standard pattern, loading an unsigned native library at node startup from a path controlled by the filesystem represents a potential binary substitution vector. This is expected for JNI but warrants noting.
server/src/main/java/org/opensearch/index/engine/exec/IndexFileDeleter.java | 107 | low | Error handling in `notifyFilesAdded` and `notifyFilesDeleted` uses `System.err.println` instead of the Log4j logger used elsewhere in the codebase. While likely just poor coding practice, bypassing the logging framework in error paths means these failures are invisible to standard OpenSearch log monitoring and auditing.
server/src/test/java/org/opensearch/index/engine/EngineIntegrationTests.java | 1 | low | Test files (`EngineIntegrationTests.java`, `SearchExecEngineTests.java`) reference interfaces and method signatures (`getReadEngines()`, `getPrimaryReadEngine()`, `EngineSearcherSupplier`, `EngineReaderManager.acquire()/release()`, `CompositeEngine(List, null)`) that do not match the actual classes defined in this diff. These tests appear to target a different version of the API than what is implemented, suggesting they may be copied from a separate non-public branch or codebase. This inconsistency warrants investigation to confirm the tests correspond to the intended implementation.

The table above displays the top 10 most important findings.

Total: 4 | Critical: 0 | High: 0 | Medium: 1 | Low: 3



@github-actions
Contributor

PR Code Analyzer ❗

AI-powered 'Code-Diff-Analyzer' found issues on commit ac2edae.

Path | Line | Severity | Description
server/src/main/java/org/opensearch/search/internal/ContextIndexSearcher.java | 72 | medium | Import changed from core server package 'org.opensearch.lucene.util.CombinedBitSet' to sandbox plugin package 'org.opensearch.be.lucene.util.CombinedBitSet'. This makes a core server component depend on a sandbox/plugin package, which is an unusual architectural coupling. No definition of the new class appears in this diff, raising questions about whether the replacement implementation is identical or subtly different in behavior affecting search result correctness.
server/src/main/java/org/opensearch/index/engine/exec/IndexFileDeleter.java | 102 | low | Errors in file notification callbacks are silently swallowed using System.err.println instead of the standard logging framework. This deviates from project conventions and could suppress visibility of failures during file add/delete operations, obscuring issues in file lifecycle management.
sandbox/plugins/analytics-backend-datafusion/src/main/java/org/opensearch/be/datafusion/DataFusionService.java | 58 | low | Native library 'opensearch_datafusion_jni' is loaded via System.loadLibrary without any integrity verification (e.g., checksum or signature check). While JNI usage is expected for DataFusion integration, the absence of library validation means a tampered native binary could execute arbitrary code with the JVM's privileges.
sandbox/plugins/analytics-backend-lucene/src/main/java/org/opensearch/be/lucene/LuceneEngineSearcher.java | 49 | low | Static ConcurrentHashMaps (activeWeights, activeScorers) are used to hold Weight and Scorer contexts keyed by opaque long pointers shared across all instances. If releaseWeight/releaseCollector are not reliably called (e.g., on exception paths), entries accumulate indefinitely. This is a potential resource exhaustion vector, though more likely a design oversight than intentional.

The table above displays the top 10 most important findings.

Total: 4 | Critical: 0 | High: 0 | Medium: 1 | Low: 3



Comment thread server/src/main/java/org/opensearch/index/engine/exec/SearchExecEngine.java Outdated
@github-actions
Contributor

PR Code Analyzer ❗

AI-powered 'Code-Diff-Analyzer' found issues on commit cc1fb42.

Path | Line | Severity | Description
server/src/main/java/org/opensearch/search/internal/ContextIndexSearcher.java | 72 | medium | Import of CombinedBitSet redirected from 'org.opensearch.lucene.util' to 'org.opensearch.be.lucene.util' — a new plugin-owned package added in this PR. This ties a core search component to a new package whose implementation is not shown in this diff. If the new class differs in behavior from the original, it could intercept or alter search result filtering for all queries processed by ContextIndexSearcher.
sandbox/plugins/analytics-backend-datafusion/src/main/java/org/opensearch/be/datafusion/DataFusionService.java | 58 | low | System.loadLibrary('opensearch_datafusion_jni') loads a native JNI library by bare name, relying on the JVM's library path resolution. No path pinning, signature verification, or integrity check is performed on the native binary before loading. A malicious library with the same name placed earlier in java.library.path would be silently loaded instead.
server/src/main/java/org/opensearch/index/engine/exec/IndexFileDeleter.java | 101 | low | Error handling uses System.err.println instead of the configured logger. Errors are silently swallowed from the logging framework, making file deletion/addition failures invisible to operators and monitoring systems. This also bypasses any log-level controls and audit trails, which is unusual given every other class in this PR uses Log4j.
sandbox/plugins/analytics-backend-lucene/src/main/java/org/opensearch/be/lucene/LuceneEngineSearcher.java | 44 | low | Static ConcurrentHashMaps (activeWeights, activeScorers) keyed by incrementing long IDs are shared across all shard instances with no TTL or maximum size bound. If releaseWeight/releaseCollector are not called (e.g., due to exceptions in native Rust code), the maps grow unboundedly, holding references to Lucene Weight/Scorer objects and their associated index readers indefinitely, preventing garbage collection.

The table above displays the top 10 most important findings.

Total: 4 | Critical: 0 | High: 0 | Medium: 1 | Low: 3


Pull Requests Author(s): Please update your Pull Request according to the report above.

Repository Maintainer(s): You can bypass diff analyzer by adding label skip-diff-analyzer after reviewing the changes carefully, then re-run failed actions. To re-enable the analyzer, remove the label, then re-run all actions.


⚠️ Note: The Code-Diff-Analyzer helps protect against potentially harmful code patterns. Please ensure you have thoroughly reviewed the changes beforehand.

Thanks.
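The LuceneEngineSearcher finding about static id-keyed registries comes down to entries that only disappear on an explicit release call. A minimal sketch of the pattern and a try/finally mitigation, using illustrative names (Registry, register, release are not the PR's actual API):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical sketch of the flagged pattern: a static, JVM-wide registry
// keyed by incrementing IDs. Without a guaranteed release, every failed
// native call leaks one entry. try/finally closes that gap.
public class Registry {
    private static final Map<Long, Object> ACTIVE = new ConcurrentHashMap<>();
    private static final AtomicLong IDS = new AtomicLong();

    public static long register(Object o) {
        long id = IDS.incrementAndGet();
        ACTIVE.put(id, o);
        return id;
    }

    public static void release(long id) {
        ACTIVE.remove(id);
    }

    public static int activeCount() {
        return ACTIVE.size();
    }

    // Simulates a native call that may throw; the finally block still
    // releases the entry, so the map cannot grow on the failure path.
    public static void callNative(Object weight, boolean fail) {
        long id = register(weight);
        try {
            if (fail) throw new RuntimeException("native failure");
        } finally {
            release(id);
        }
    }
}
```

A bounded cache or weak references would be an alternative if callers cannot be trusted to reach the finally block at all.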

@github-actions
Contributor

PR Code Analyzer ❗

AI-powered 'Code-Diff-Analyzer' found issues on commit 86a9747.

| Path | Line | Severity | Description |
| --- | --- | --- | --- |
| server/src/main/java/org/opensearch/search/internal/ContextIndexSearcher.java | 72 | medium | Import of CombinedBitSet redirected from the established core package 'org.opensearch.lucene.util' to a new, unverified plugin package 'org.opensearch.be.lucene.util'. CombinedBitSet is used in critical document-level filtering in the search path. The replacement class is not introduced anywhere in this diff, raising the question of whether a pre-existing class with that path silently substitutes behavior, or if this is an intentional package reorganization that warrants verification of the new implementation. |
| sandbox/plugins/analytics-backend-datafusion/src/main/java/org/opensearch/be/datafusion/DataFusionService.java | 60 | low | System.loadLibrary('opensearch_datafusion_jni') loads a native library at node startup via the DataFusionService lifecycle. While standard JNI practice, the library is loaded without path pinning or integrity verification, meaning a tampered or substituted native library on the node's library path would execute with full JVM privileges. This is contextually consistent with the stated DataFusion integration purpose. |
| sandbox/plugins/analytics-backend-lucene/src/main/java/org/opensearch/be/lucene/LuceneEngineSearcher.java | 47 | low | Static, JVM-wide ConcurrentHashMaps (activeWeights, activeScorers) keyed by auto-incrementing long IDs store per-query Weight and Scorer contexts shared across all shard and engine instances. In a multi-tenant or multi-shard environment, predictable ID sequences could in principle allow one query's execution context to observe or interfere with another's. Intended for JNI callbacks but the static scope creates unintended cross-query reachability. |
| server/src/main/java/org/opensearch/index/engine/exec/IndexFileDeleter.java | 109 | low | Error handling in notifyFilesAdded() and notifyFilesDeleted() uses System.err.println() rather than a logger, and silently swallows all exceptions from CompositeEngine notifications. This means failures to notify the engine about added or deleted files (including potential cache poisoning or stale-reader conditions) are invisible to standard log monitoring and alerting pipelines. |

The table above displays the top 10 most important findings.

Total: 4 | Critical: 0 | High: 0 | Medium: 1 | Low: 3


Pull Requests Author(s): Please update your Pull Request according to the report above.

Repository Maintainer(s): You can bypass diff analyzer by adding label skip-diff-analyzer after reviewing the changes carefully, then re-run failed actions. To re-enable the analyzer, remove the label, then re-run all actions.


⚠️ Note: The Code-Diff-Analyzer helps protect against potentially harmful code patterns. Please ensure you have thoroughly reviewed the changes beforehand.

Thanks.
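The IndexFileDeleter finding (notification failures swallowed via System.err.println) can be sketched as a small routing fix. This sketch uses java.util.logging to stay self-contained (the PR itself uses Log4j), and the class and method names are illustrative, not the PR's actual API:

```java
import java.util.logging.Level;
import java.util.logging.Logger;

// Illustrative sketch, not the PR's IndexFileDeleter: route notification
// failures through the logging framework instead of System.err, so they
// respect log-level controls and reach monitoring/alerting pipelines.
public class NotifyingDeleter {
    private static final Logger LOGGER = Logger.getLogger(NotifyingDeleter.class.getName());

    // Returns true when the engine notification succeeded.
    public static boolean notifyFilesDeleted(Runnable engineNotification) {
        try {
            engineNotification.run();
            return true;
        } catch (RuntimeException e) {
            // System.err.println would bypass log levels and audit trails;
            // logging at WARNING keeps the failure visible to operators.
            LOGGER.log(Level.WARNING, "failed to notify engine of deleted files", e);
            return false;
        }
    }
}
```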

@github-actions
Contributor

PR Code Analyzer ❗

AI-powered 'Code-Diff-Analyzer' found issues on commit 5f761ba.

| Path | Line | Severity | Description |
| --- | --- | --- | --- |
| server/src/main/java/org/opensearch/search/internal/ContextIndexSearcher.java | 72 | medium | Import of CombinedBitSet changed from core package 'org.opensearch.lucene.util' to plugin package 'org.opensearch.be.lucene.util'. The class at the new location is not defined anywhere in this diff. If this resolves to a plugin-supplied class with altered behavior, it could influence how query bit-sets are computed in all search operations through this core class. |
| sandbox/plugins/analytics-backend-datafusion/src/main/java/org/opensearch/be/datafusion/DataFusionService.java | 60 | medium | System.loadLibrary("opensearch_datafusion_jni") loads a native library by bare name, resolved via java.library.path at runtime. No integrity check, path pinning, or signature verification is performed. A malicious actor with control over java.library.path or LD_LIBRARY_PATH could substitute a malicious native library that runs with full JVM privileges. |
| sandbox/plugins/analytics-backend-datafusion/src/main/java/org/opensearch/be/datafusion/DataFusionService.java | 68 | low | Hardcoded fake native pointer 'long ptr = 1L' is used as a placeholder NativeRuntimeHandle. If this placeholder reaches production and native JNI code dereferences pointer 1, it will cause a JVM crash (SIGSEGV). This is a placeholder comment, but the value is live in the code path. |
| sandbox/plugins/analytics-backend-datafusion/src/main/java/org/opensearch/be/datafusion/DataFusionPlugin.java | 150 | low | getSupportedFormats() returns null instead of an empty list. CompositeEngineFactory iterates over this return value with a for-each loop, which will throw a NullPointerException at startup for every shard using this plugin. This could be used as a denial-of-service against shard initialization. |
| server/src/main/java/org/opensearch/index/engine/exec/WriterFileSet.java | 155 | low | getTotalSize() constructs file paths from user-controlled 'directory' and 'file' fields and calls Files.size() on them, silently catching and discarding IOExceptions. While read-only, this probes arbitrary paths on the filesystem and suppresses all errors, masking path traversal or permission issues. |
| sandbox/plugins/analytics-backend-datafusion/src/main/java/org/opensearch/be/datafusion/DatafusionReaderManager.java | 41 | low | readers field is a non-synchronized HashMap accessed from methods (getReader, afterRefresh, onDeleted) that can be called concurrently during refresh and shard close. This is a data race that can corrupt internal state, potentially causing stale or double-closed native readers. |
| server/src/test/java/org/opensearch/index/engine/EngineIntegrationTests.java | 1 | low | Test class references APIs (CompositeEngine(List, null), EngineSearcherSupplier, EngineReaderManager.acquire/release, SearchExecEngine with ReaderContext/BigArrays parameters) that do not match the interfaces defined in the production code of this same diff. These tests will not compile, suggesting they were written against a different or future API version and included prematurely, creating dead code that obscures the actual API contract. |

The table above displays the top 10 most important findings.

Total: 7 | Critical: 0 | High: 0 | Medium: 2 | Low: 5


Pull Requests Author(s): Please update your Pull Request according to the report above.

Repository Maintainer(s): You can bypass diff analyzer by adding label skip-diff-analyzer after reviewing the changes carefully, then re-run failed actions. To re-enable the analyzer, remove the label, then re-run all actions.


⚠️ Note: The Code-Diff-Analyzer helps protect against potentially harmful code patterns. Please ensure you have thoroughly reviewed the changes beforehand.

Thanks.
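The getSupportedFormats() finding (a null return iterated by a for-each loop) reduces to a one-line contract fix. A hedged sketch with illustrative names, not the PR's DataFusionPlugin API:

```java
import java.util.Collections;
import java.util.List;

// Illustrative sketch of the flagged contract: callers iterate the result
// with a for-each loop, so returning null throws NullPointerException at
// shard initialization. An immutable empty list makes the loop a no-op.
public class FormatProvider {
    public static List<String> getSupportedFormats() {
        // return null;               // for-each over this would NPE in the caller
        return Collections.emptyList(); // safe: the loop body simply never runs
    }

    public static int countFormats() {
        int n = 0;
        for (String f : getSupportedFormats()) { // no NPE with an empty list
            n++;
        }
        return n;
    }
}
```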

Comment thread server/src/main/java/org/opensearch/index/engine/CompositeEngine.java Outdated
Comment thread server/src/main/java/org/opensearch/index/engine/exec/SearchExecEngine.java Outdated
@github-actions
Contributor

Persistent review updated to latest commit 338bc6e

@github-actions
Contributor

❌ Gradle check result for 338bc6e: FAILURE

Please examine the workflow log, locate, and copy-paste the failure(s) below, then iterate to green. Is the failure a flaky test unrelated to your change?

Comment thread sandbox/libs/analytics-framework/build.gradle
@github-actions
Contributor

Persistent review updated to latest commit 7f5f3e6

@github-actions
Contributor

❌ Gradle check result for 7f5f3e6: FAILURE

Please examine the workflow log, locate, and copy-paste the failure(s) below, then iterate to green. Is the failure a flaky test unrelated to your change?

@github-actions
Contributor

Persistent review updated to latest commit 1914cd1

@github-actions
Contributor

❌ Gradle check result for 1914cd1: FAILURE

Please examine the workflow log, locate, and copy-paste the failure(s) below, then iterate to green. Is the failure a flaky test unrelated to your change?

@github-actions
Contributor

PR Code Analyzer ❗

AI-powered 'Code-Diff-Analyzer' found issues on commit d2149de.

| Path | Line | Severity | Description |
| --- | --- | --- | --- |
| sandbox/plugins/analytics-backend-datafusion/src/main/java/org/opensearch/be/datafusion/DataFusionService.java | 72 | medium | System.loadLibrary() loads native library 'opensearch_datafusion_jni' without path validation or integrity verification. If the JVM library search path is manipulated (e.g., via LD_LIBRARY_PATH or java.library.path), a malicious native library could be substituted. This is standard JNI practice but represents a supply-chain risk without checksum/signature validation of the native binary. |
| sandbox/plugins/analytics-backend-datafusion/src/main/java/org/opensearch/be/datafusion/jni/NativeBridge.java | 45 | medium | executeQuery() passes a raw byte[] substraitPlan and raw long pointers (readerPtr, runtimePtr) directly to native code. There is no input validation on the substrait plan bytes before they cross the JNI boundary. A crafted substrait payload could potentially exploit memory safety issues in the native layer. The risk is elevated because the substrait bytes originate from query planning logic that is still being wired (TODOs throughout). |
| sandbox/plugins/analytics-backend-datafusion/src/main/java/org/opensearch/be/datafusion/DataFusionService.java | 81 | low | Placeholder native pointer value of 1L is stored in NativeRuntimeHandle: 'long ptr = 1L; // placeholder until NativeBridge is wired'. This bypasses the null-pointer guard in NativeRuntimeHandle (which rejects ptr==0) and creates a live but invalid handle. Any JNI call that dereferences this pointer would access address 0x1 in native memory, which could cause a crash or undefined behavior. Likely an incomplete implementation rather than malicious intent. |
| sandbox/plugins/analytics-engine/src/main/java/org/opensearch/analytics/exec/AnalyticsQueryService.java | 93 | low | INFO-level logging emits plugin name, context IDs, and row counts for every query execution. While not exfiltration, this could leak internal query plan routing and data volume metadata into log aggregation systems. The logging pattern is consistent with debugging rather than deliberate data harvesting. |
| sandbox/plugins/analytics-backend-datafusion/src/main/java/org/opensearch/be/datafusion/DataFusionPlugin.java | 103 | low | ctx.getReader().getReader(null) passes a null DataFormat to getReader(). The comment acknowledges this: 'TODO: resolve DataFormat properly instead of passing null'. This will cause a NullPointerException or return null in the DatafusionReaderManager.getReader() path, followed by an unchecked cast to DatafusionReader. Appears to be an incomplete implementation rather than intentional, but could cause unexpected behavior at runtime. |

The table above displays the top 10 most important findings.

Total: 5 | Critical: 0 | High: 0 | Medium: 2 | Low: 3


Pull Requests Author(s): Please update your Pull Request according to the report above.

Repository Maintainer(s): You can bypass diff analyzer by adding label skip-diff-analyzer after reviewing the changes carefully, then re-run failed actions. To re-enable the analyzer, remove the label, then re-run all actions.


⚠️ Note: The Code-Diff-Analyzer helps protect against potentially harmful code patterns. Please ensure you have thoroughly reviewed the changes beforehand.

Thanks.
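The recurring loadLibrary finding suggests path pinning plus integrity verification before loading native code. A minimal sketch of that idea under stated assumptions: the class, method names, and the notion of a build-time pinned digest are illustrative, not the PR's design.

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.HexFormat;

// Illustrative sketch: instead of System.loadLibrary("opensearch_datafusion_jni")
// resolving a bare name via java.library.path, verify a pinned SHA-256 digest
// over an absolute path and only then call System.load on that exact file.
public class PinnedLoader {
    public static String sha256Hex(Path lib) {
        try {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            return HexFormat.of().formatHex(md.digest(Files.readAllBytes(lib)));
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e); // SHA-256 is mandatory in the JDK
        }
    }

    public static void verifyAndLoad(Path lib, String expectedSha256) {
        String actual = sha256Hex(lib);
        if (!actual.equalsIgnoreCase(expectedSha256)) {
            // Refuse to load a library whose bytes do not match the pinned digest.
            throw new SecurityException("native library digest mismatch: " + actual);
        }
        System.load(lib.toAbsolutePath().toString()); // no search-path lookup
    }

    // Helper for exercising the digest path without checked exceptions.
    public static Path emptyTempFile() {
        try {
            return Files.createTempFile("libfake", ".so");
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```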

@bharath-techie bharath-techie marked this pull request as ready for review March 24, 2026 18:53
Bukhtawar and others added 8 commits March 25, 2026 19:09
Signed-off-by: Bukhtawar Khan <bukhtawa@amazon.com>
* Refactor CompositeEngine to use factory

Signed-off-by: Bukhtawar Khan <bukhtawa@amazon.com>

* Introduce SegmentCollector

Signed-off-by: Bukhtawar Khan <bukhtawa@amazon.com>

* Introduce SegmentCollector

Signed-off-by: Bukhtawar Khan <bukhtawa@amazon.com>

* Introduce SegmentCollector

Signed-off-by: Bukhtawar Khan <bukhtawa@amazon.com>

---------

Signed-off-by: Bukhtawar Khan <bukhtawa@amazon.com>
* Refactor CompositeEngine to use factory

Signed-off-by: Bukhtawar Khan <bukhtawa@amazon.com>

* Introduce SegmentCollector

Signed-off-by: Bukhtawar Khan <bukhtawa@amazon.com>

* Introduce SegmentCollector

Signed-off-by: Bukhtawar Khan <bukhtawa@amazon.com>

* Introduce SegmentCollector

Signed-off-by: Bukhtawar Khan <bukhtawa@amazon.com>

* De-couple and simplify index file deleter

Signed-off-by: Bukhtawar Khan <bukhtawa@amazon.com>

* De-couple and simplify index file deleter

Signed-off-by: Bukhtawar Khan <bukhtawa@amazon.com>

* De-couple and simplify index file deleter, handle scorer and weight query lifecycle

Signed-off-by: Bukhtawar Khan <bukhtawa@amazon.com>

* De-couple and simplify index file deleter, handle scorer and weight query lifecycle

Signed-off-by: Bukhtawar Khan <bukhtawa@amazon.com>

* De-couple and simplify index file deleter, handle scorer and weight query lifecycle

Signed-off-by: Bukhtawar Khan <bukhtawa@amazon.com>

---------

Signed-off-by: Bukhtawar Khan <bukhtawa@amazon.com>
Signed-off-by: bharath-techie <bharath78910@gmail.com>
…alytics interfaces

Signed-off-by: bharath-techie <bharath78910@gmail.com>
Signed-off-by: bharath-techie <bharath78910@gmail.com>
Signed-off-by: bharath-techie <bharath78910@gmail.com>
Signed-off-by: bharath-techie <bharath78910@gmail.com>
@github-actions
Contributor

Failed to generate code suggestions for PR

@github-actions
Contributor

Failed to generate code suggestions for PR

@github-actions
Contributor

Failed to generate code suggestions for PR

@github-actions
Contributor

❌ Gradle check result for 223769c: FAILURE

Please examine the workflow log, locate, and copy-paste the failure(s) below, then iterate to green. Is the failure a flaky test unrelated to your change?

@github-actions
Contributor

Failed to generate code suggestions for PR

@github-actions
Contributor

Failed to generate code suggestions for PR

@github-actions
Contributor

❌ Gradle check result for 91c83d5: FAILURE

Please examine the workflow log, locate, and copy-paste the failure(s) below, then iterate to green. Is the failure a flaky test unrelated to your change?

Signed-off-by: bharath-techie <bharath78910@gmail.com>
@github-actions
Contributor

Failed to generate code suggestions for PR

Contributor

@Bukhtawar Bukhtawar left a comment

Thanks Bharath for the change, LGTM.
Let's follow up with extensive test coverage; merging to unblock other PRs.

@github-actions
Contributor

✅ Gradle check result for 17d7546: SUCCESS

@codecov

codecov bot commented Mar 25, 2026

Codecov Report

❌ Patch coverage is 14.75694% with 491 lines in your changes missing coverage. Please review.
✅ Project coverage is 73.22%. Comparing base (85113a4) to head (17d7546).
⚠️ Report is 2 commits behind head on main.

| Files with missing lines | Patch % | Lines |
| --- | --- | --- |
| ...opensearch/index/engine/exec/IndexFileDeleter.java | 6.81% | 40 Missing and 1 partial ⚠️ |
| ...pensearch/be/lucene/LuceneIndexFilterProvider.java | 0.00% | 39 Missing ⚠️ |
| .../exec/DataFormatEngineCatalogSnapshotListener.java | 0.00% | 34 Missing ⚠️ |
| ...rg/opensearch/be/datafusion/DataFusionService.java | 0.00% | 31 Missing ⚠️ |
| ...rg/opensearch/be/datafusion/DatafusionContext.java | 0.00% | 24 Missing ⚠️ |
| ...opensearch/index/engine/DataFormatAwareEngine.java | 51.06% | 21 Missing and 2 partials ⚠️ |
| ...opensearch/analytics/backend/jni/NativeHandle.java | 0.00% | 22 Missing ⚠️ |
| ...org/opensearch/be/datafusion/DataFusionPlugin.java | 0.00% | 22 Missing ⚠️ |
| ...org/opensearch/index/engine/exec/FileMetadata.java | 0.00% | 21 Missing ⚠️ |
| ...g/opensearch/be/datafusion/DatafusionSearcher.java | 0.00% | 20 Missing ⚠️ |
| ... and 26 more | | |
Additional details and impacted files
@@             Coverage Diff              @@
##               main   #20821      +/-   ##
============================================
- Coverage     73.31%   73.22%   -0.09%     
- Complexity    72544    72603      +59     
============================================
  Files          5819     5848      +29     
  Lines        331399   331952     +553     
  Branches      47887    47948      +61     
============================================
+ Hits         242955   243069     +114     
- Misses        68935    69354     +419     
- Partials      19509    19529      +20     

☔ View full report in Codecov by Sentry.

@Bukhtawar Bukhtawar merged commit 2e80911 into opensearch-project:main Mar 25, 2026
37 of 39 checks passed
gagandhakrey pushed a commit to gagandhakrey/OpenSearch that referenced this pull request Apr 1, 2026
* Native search engine abstractions for analytics-backend

Signed-off-by: bharath-techie <bharath78910@gmail.com>
Co-authored-by: Bukhtawar Khan <bukhtawar7152@gmail.com>
Signed-off-by: Gagan Dhakrey <gagandhakrey@Gagans-MacBook-Pro.local>
aparajita31pandey pushed a commit to aparajita31pandey/OpenSearch that referenced this pull request Apr 18, 2026
* Native search engine abstractions for analytics-backend

Signed-off-by: bharath-techie <bharath78910@gmail.com>
Co-authored-by: Bukhtawar Khan <bukhtawar7152@gmail.com>
Signed-off-by: Aparajita Pandey <aparajita31pandey@gmail.com>
Labels

lucene, skip-changelog, skip-diff-analyzer (Maintainer to skip code-diff-analyzer check, after reviewing issues in AI analysis)

Projects

None yet

Development

Successfully merging this pull request may close these issues.

4 participants