feat(llm): redo llm feature and tools #7270
base: main
Conversation
Summary of Changes

Hello @perfectra1n, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request introduces a significant overhaul of the LLM feature set, focusing on simplification, efficiency, and maintainability. It re-architects the core LLM pipeline into a leaner, three-stage process and consolidates numerous specialized tools into a few versatile ones. These changes are supported by a new configuration service, a dynamic tool filtering mechanism, and a pipeline adapter to ensure a smooth transition and improved performance across different LLM providers.

Highlights
Code Review
This pull request introduces a major and well-executed refactoring of the LLM feature, including a new V2 pipeline, consolidated tools, a feature-flagging mechanism, improved configuration management, and structured logging. The changes are well-tested and significantly improve the modularity and maintainability of the codebase. My feedback focuses on improving type safety, ensuring consistency, and enhancing error logging for better debuggability.
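For orientation, here is a minimal, hypothetical sketch of what a leaner three-stage V2 pipeline could look like. Only the names `PipelineV2` and `PipelineV2Input` appear in the PR's test imports below; the stage breakdown, method names, and field names here are assumptions made for illustration, not the PR's actual implementation.

```typescript
// Hypothetical sketch only: a three-stage pipeline shape, not code from this PR.
interface Message { role: 'user' | 'assistant' | 'system'; content: string }
interface PipelineV2Input { messages: Message[]; query: string }
interface PipelineV2Output { text: string; requestId: string; processingTime: number }

class PipelineV2 {
    async execute(input: PipelineV2Input): Promise<PipelineV2Output> {
        const start = Date.now();
        const context = await this.buildContext(input);       // stage 1: gather context
        const completion = await this.callProvider(context);  // stage 2: provider call (+ tools)
        const text = this.formatResponse(completion);         // stage 3: post-process
        return { text, requestId: String(start), processingTime: Date.now() - start };
    }

    private async buildContext(input: PipelineV2Input): Promise<string> {
        return input.messages.map(m => `${m.role}: ${m.content}`).join('\n');
    }

    private async callProvider(context: string): Promise<string> {
        return `stubbed completion for:\n${context}`;         // placeholder for the LLM call
    }

    private formatResponse(completion: string): string {
        return completion.trim();
    }
}
```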
import { describe, it, expect, beforeEach, vi } from 'vitest';
import type { PipelineV2Input } from './pipeline_v2.js';
import type { Message } from '../ai_interface.js';
This test file is missing several type imports, which will cause compilation errors. Specifically, `PipelineV2`, `AIService`, and `ChatResponse` are used without being imported.
Please add the necessary imports to ensure the file is type-safe and compilable.
Suggested change:
- import type { Message } from '../ai_interface.js';
+ import type { Message, AIService, ChatResponse } from '../ai_interface.js';
+ import { PipelineV2 } from './pipeline_v2.js';
 */
private getBoolOption(key: string, defaultValue: boolean): boolean {
    try {
        const value = (options as any).getOptionBool(key);
Casting `options` to `any` bypasses TypeScript's type safety. While this might be necessary if the `options` object is not strongly typed, it's a practice that should be avoided. Consider defining a minimal interface for the parts of `options` you are using, or use a more specific type assertion than `any`, to improve maintainability and prevent potential runtime errors.
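A minimal sketch of that suggestion, assuming the options service exposes `getOption` and `getOptionBool` as shown in the diff; the interface name and the narrowing pattern are illustrative, not existing code in the repository:

```typescript
// Illustrative only: describe just the parts of the options service that this
// configuration code uses, so the casts to `any` can be dropped.
interface OptionsReader {
    getOption(key: string): string | null | undefined;
    getOptionBool(key: string): boolean | null | undefined;
}

// Narrow once at the boundary instead of casting at every call site:
// const typedOptions = options as unknown as OptionsReader;
// const value = typedOptions.getOptionBool(key);
```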
private getStringOption(key: string, defaultValue: string): string {
    try {
        const value = (options as any).getOption(key);
        return value !== null && value !== undefined ? String(value) : defaultValue;
    } catch {
        return defaultValue;
    }
}
...response,
pipelineVersion: 'v2',
requestId: response.requestId,
processingTime: response.processingTime
There's an inconsistency in how `processingTime` is handled between `executeLegacy` and `executeV2`. In `executeLegacy`, the time measured by the adapter is used for both metrics and the returned output. However, in `executeV2`, the adapter-measured time is used for metrics, but the `processingTime` from the V2 pipeline's response is returned in the output.
For consistency, it would be better to use the time measured within the adapter for the output in both methods. This ensures that the reported processing time always reflects the total time taken by the adapter's execution path.
Suggested change:
- processingTime: response.processingTime
+ processingTime: processingTime
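A small sketch of the consistency being asked for, with the adapter measuring the time once and reusing that value for both metrics and output; the surrounding names (`executeV2`, `recordMetrics`, the response shape) are placeholders, not the adapter's real API:

```typescript
// Illustrative sketch: one adapter-measured processingTime, used for both the
// recorded metrics and the returned output.
type V2Response = { text: string; requestId: string; processingTime: number };

async function executeV2(
    run: () => Promise<V2Response>,
    recordMetrics: (m: { pipelineVersion: string; processingTime: number }) => void
) {
    const start = Date.now();
    const response = await run();                  // V2 pipeline call
    const processingTime = Date.now() - start;     // adapter-measured time

    recordMetrics({ pipelineVersion: 'v2', processingTime });

    return {
        ...response,
        pipelineVersion: 'v2',
        requestId: response.requestId,
        processingTime,                            // same value as reported in the metrics
    };
}
```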
}

    return null;
} catch (error: any) {
/**
 * Get or create vector search tool
 */
private async getVectorSearchTool(): Promise<any> {
switch (entry.level) {
    case LogLevel.ERROR:
        if (entry.error) {
            log.error(`${formattedMessage}: ${entry.error.message}`);
When logging an error, only the `message` property is being logged. This can make debugging difficult, as the stack trace is lost. It would be more helpful to log the full error object, or at least its stack trace, to provide more context for debugging.
Suggested change:
- log.error(`${formattedMessage}: ${entry.error.message}`);
+ log.error(`${formattedMessage}: ${entry.error.stack || entry.error.message}`);
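If the same pattern repeats across log levels, a tiny helper keeps the stack-first fallback in one place; the helper name and its use are suggestions, not code from this PR:

```typescript
// Hypothetical helper: prefer the stack (which normally includes the message),
// fall back to the message when no stack is available.
function formatError(error: Error): string {
    return error.stack ?? error.message;
}

// e.g. log.error(`${formattedMessage}: ${formatError(entry.error)}`);
```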
No description provided.