
Feature/remote repo support #7

Merged
kratos06 merged 4 commits into master from feature/remote-repo-support on Apr 25, 2025

Conversation

kratos06 (Owner) commented Apr 25, 2025

Summary by CodeRabbit

  • New Features

    • Added support for reviewing individual commits from both local and remote (GitHub, GitLab) repositories via a new command-line interface.
    • Enhanced model selection flexibility, allowing direct specification of OpenAI model names and customizable defaults via environment variables (a minimal sketch follows this summary).
    • Introduced estimated working hours in code evaluation reports for more detailed review insights.
  • Bug Fixes

    • Improved error handling, logging, and recovery mechanisms throughout the evaluation and reporting process.
  • Documentation

    • Updated and expanded English documentation, including detailed usage guides, API docs, and clearer instructions for model configuration and running the tool.
  • Chores

    • Added new dependencies to support remote repository analysis and API interactions.
    • Removed obsolete scripts and test files to streamline the codebase.
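
The model-selection feature summarized above is driven entirely by environment variables. As a rough illustration only (the variable names CODE_REVIEW_MODEL and DEFAULT_MODEL below are hypothetical, not necessarily the ones defined in .env.sample), selection in the spirit of codedog/utils/langchain_utils.py might look like this:

```python
# Hypothetical sketch of env-driven model selection; the variable names are
# illustrative and not taken from the project's .env.sample.
import os
import re

from langchain_openai import ChatOpenAI


def load_model(purpose: str = "CODE_REVIEW") -> ChatOpenAI:
    """Pick a chat model from environment variables, falling back to a default."""
    name = os.environ.get(f"{purpose}_MODEL") or os.environ.get("DEFAULT_MODEL", "gpt-4o-mini")

    # OpenAI-style names (gpt-4o, gpt-4-turbo, o1-mini, ...) are passed through
    # directly, which is what "directly using OpenAI model names" refers to above.
    if re.match(r"^(gpt-|o\d)", name):
        return ChatOpenAI(model=name, temperature=0)

    raise ValueError(f"Unrecognized model name: {name}")
```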


coderabbitai bot commented Apr 25, 2025

Caution: Review failed. The pull request is closed.

Walkthrough

This update introduces extensive enhancements and refactoring across the codebase. Major changes include a unified command-line interface in run_codedog.py supporting both local and remote (GitHub/GitLab) commit reviews, new modules for remote repository analysis, and improved code evaluation logic with estimated working hours and robust JSON handling. Model selection is now fully configurable via environment variables and documented accordingly. Several test and legacy scripts are removed or replaced by integrated functionality. Documentation and environment sample files are updated to reflect new configuration options and usage patterns. Additional dependencies are added to support remote repository interactions.
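
The run_codedog.py CLI itself is not reproduced in this walkthrough, but the structure it describes (a single entry point with commit-review and evaluation modes for local and remote repositories) could be sketched roughly as follows; every subcommand and flag name here is hypothetical:

```python
# Rough sketch of a unified 'commit'/'eval' CLI of the kind the walkthrough
# describes for run_codedog.py. Subcommand and flag names are assumptions.
import argparse


def build_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser(prog="run_codedog")
    sub = parser.add_subparsers(dest="command", required=True)

    commit = sub.add_parser("commit", help="Review a single commit")
    commit.add_argument("commit_hash", help="SHA of the commit to review")
    commit.add_argument("--repo", default=".",
                        help="Local path or GitHub/GitLab URL of the repository")
    commit.add_argument("--model", help="Override the model chosen via environment variables")

    evaluate = sub.add_parser("eval", help="Evaluate a developer's commits over a timeframe")
    evaluate.add_argument("--author", required=True)
    evaluate.add_argument("--start-date")
    evaluate.add_argument("--end-date")

    return parser


if __name__ == "__main__":
    print(build_parser().parse_args())
```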

Changes

| File(s) | Change Summary |
| --- | --- |
| .env.sample, docs/models.md | Expanded documentation and sample environment variables for model selection, including support for specifying default model versions and directly using OpenAI model names. |
| UPDATES.md | Replaced Chinese content with a comprehensive English version, reorganizing and expanding update notes, instructions, and future plans. |
| requirements.txt | Added dependencies: PyGithub, python-gitlab, aiohttp, python-dateutil. |
| codedog/utils/langchain_utils.py | Model loading functions now use environment variables for model selection, support pattern matching for OpenAI models, and provide expanded docstrings and error handling. |
| codedog/utils/code_evaluator.py | Major enhancements: added estimated working hours to evaluation results, improved JSON extraction and validation, input sanitization, whole-commit evaluation, and English logging. |
| codedog/utils/git_log_analyzer.py | Added get_commit_diff function to retrieve and filter diffs for a specific commit, supporting extension-based filtering (a sketch follows this table). |
| codedog/utils/remote_repository_analyzer.py (new) | Introduced RemoteRepositoryAnalyzer class for analyzing commits and diffs from remote GitHub/GitLab repositories, with structured commit info and file filtering (a sketch follows this table). |
| codedog/analyze_code.py (new) | Added module for analyzing code changes in repositories, filtering by author and timeframe, generating statistics, and saving results to JSON. |
| codedog/chains/pr_summary/translate_pr_summary_chain.py | Changed import of Field to use the standard pydantic package. |
| codedog/analysis_results_20250424_095117.json (new) | Added empty analysis results JSON file as a placeholder. |
| run_codedog.py | Unified and extended CLI: supports reviewing individual commits from local and remote (GitHub/GitLab) repositories, integrates new evaluation and diff logic, and updates argument parsing. |
| run_codedog_commit.py, run_codedog_eval.py, test_auto_review.py, test_gpt4o.py, test_grimoire_deepseek_r1_py.md | Removed legacy scripts and test files, consolidating their functionality into the main CLI and codebase. |
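
The get_commit_diff addition to git_log_analyzer.py retrieves the diff of a single commit and filters it by file extension. A minimal GitPython-based sketch of that idea, with an assumed signature rather than the project's actual one:

```python
# Minimal sketch of extension-filtered diff retrieval for one commit; the
# signature and return shape are assumptions, not git_log_analyzer.py's API.
from git import Repo  # GitPython


def get_commit_diff(repo_path: str, commit_hash: str, include_extensions=None) -> dict:
    repo = Repo(repo_path)
    commit = repo.commit(commit_hash)
    if not commit.parents:
        raise ValueError("Root commits are not handled in this sketch")

    result = {}
    for d in commit.parents[0].diff(commit, create_patch=True):
        path = d.b_path or d.a_path
        if include_extensions and not any(path.endswith(ext) for ext in include_extensions):
            continue  # skip files whose extension was not requested
        result[path] = d.diff.decode("utf-8", errors="replace")
    return result
```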
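
On the remote side, RemoteRepositoryAnalyzer relies on the newly added PyGithub and python-gitlab dependencies. As an assumption-labeled sketch (the function name and return shape below are illustrative, not the class's real interface), fetching one commit's per-file patches from GitHub might look like this:

```python
# Illustrative sketch of fetching one commit's per-file patches from GitHub
# with PyGithub; names are assumptions, not RemoteRepositoryAnalyzer's interface.
import os

from github import Github  # PyGithub


def fetch_commit_patches(repo_full_name: str, sha: str, include_extensions=None) -> dict:
    gh = Github(os.environ["GITHUB_TOKEN"])
    commit = gh.get_repo(repo_full_name).get_commit(sha)

    patches = {}
    for f in commit.files:
        if include_extensions and not any(f.filename.endswith(ext) for ext in include_extensions):
            continue
        patches[f.filename] = f.patch or ""  # binary files expose no patch text
    return patches
```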

Sequence Diagram(s)

sequenceDiagram
    participant User
    participant CLI as CLI (run_codedog.py)
    participant Analyzer as Repo Analyzer (local or remote)
    participant DiffEvaluator
    participant LLM as Model (LLM)
    participant Report as Report/Email

    User->>CLI: Run 'commit' or 'eval' command with options
    CLI->>Analyzer: Fetch commit(s) and diffs (local or remote)
    Analyzer-->>CLI: Return commit data and file diffs
    CLI->>DiffEvaluator: Evaluate commit or file diffs
    DiffEvaluator->>LLM: Send prompt(s) for code evaluation
    LLM-->>DiffEvaluator: Return evaluation results (scores, comments, estimated hours)
    DiffEvaluator-->>CLI: Return evaluation summary and details
    CLI->>Report: Generate markdown report and optionally send email
    Report-->>User: Deliver report and/or email
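
The "Return evaluation results" step above depends on the model answering with well-formed JSON, which is what the improved JSON extraction and validation in code_evaluator.py guards against failing. A minimal sketch of that pattern (the fallback keys are assumptions, not the evaluator's actual schema):

```python
# Minimal sketch of recovering a JSON object from an LLM reply, in the spirit
# of the improved JSON extraction noted for code_evaluator.py. The fallback
# keys (estimated_hours, comments) are assumptions, not the real schema.
import json
import re


def extract_evaluation(raw_reply: str) -> dict:
    """Return the first JSON object found in the reply, or a safe fallback."""
    # Models often wrap their answer in prose or a fenced block, so grab the
    # widest brace-delimited span instead of parsing the reply verbatim.
    match = re.search(r"\{.*\}", raw_reply, re.DOTALL)
    try:
        data = json.loads(match.group(0)) if match else {}
    except json.JSONDecodeError:
        data = {}

    # Defaults keep report generation from crashing on malformed replies.
    data.setdefault("estimated_hours", 0.0)
    data.setdefault("comments", "")
    return data
```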

Possibly related PRs

  • kratos06/codedog#6: Introduced the initial .env.sample file with comprehensive environment variable templates, including model selection; this PR builds directly upon and modifies the same file and configuration.

Poem

Oh, what a hop through the code I see,
New models to summon, both four and three!
Commits from the cloud, or local, no fuss—
Evaluator now measures hours for us.
Docs in English, the samples are clear,
Out with the old, the new tools are here!
🐇✨ CodeDog leaps on, with a carrot-shaped cheer!


📜 Recent review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 9eee5c7 and e1fce3a.

⛔ Files ignored due to path filters (1)
  • poetry.lock is excluded by !**/*.lock
📒 Files selected for processing (17)
  • .env.sample (1 hunks)
  • UPDATES.md (1 hunks)
  • codedog/analysis_results_20250424_095117.json (1 hunks)
  • codedog/analyze_code.py (1 hunks)
  • codedog/chains/pr_summary/translate_pr_summary_chain.py (1 hunks)
  • codedog/utils/code_evaluator.py (38 hunks)
  • codedog/utils/git_log_analyzer.py (2 hunks)
  • codedog/utils/langchain_utils.py (2 hunks)
  • codedog/utils/remote_repository_analyzer.py (1 hunks)
  • docs/models.md (1 hunks)
  • requirements.txt (1 hunks)
  • run_codedog.py (8 hunks)
  • run_codedog_commit.py (0 hunks)
  • run_codedog_eval.py (0 hunks)
  • test_auto_review.py (0 hunks)
  • test_gpt4o.py (0 hunks)
  • test_grimoire_deepseek_r1_py.md (0 hunks)


Chat

There are 3 ways to chat with CodeRabbit:

  • Review comments: Directly reply to a review comment made by CodeRabbit. Example:
    • I pushed a fix in commit <commit_id>, please review it.
    • Generate unit testing code for this file.
    • Open a follow-up GitHub issue for this discussion.
  • Files and specific lines of code (under the "Files changed" tab): Tag @coderabbitai in a new review comment at the desired location with your query. Examples:
    • @coderabbitai generate unit testing code for this file.
    • @coderabbitai modularize this function.
  • PR comments: Tag @coderabbitai in a new PR comment to ask questions about the PR branch. For the best results, please provide a very specific query, as very limited context is provided in this mode. Examples:
    • @coderabbitai gather interesting stats about this repository and render them as a table. Additionally, render a pie chart showing the language distribution in the codebase.
    • @coderabbitai read src/utils.ts and generate unit testing code.
    • @coderabbitai read the files in the src/scheduler package and generate a class diagram using mermaid and a README in the markdown format.
    • @coderabbitai help me debug CodeRabbit configuration file.

Note: Be mindful of the bot's finite context window. It's strongly recommended to break down tasks such as reading entire modules into smaller chunks. For a focused discussion, use review comments to chat about specific files and their changes, instead of using the PR comments.

CodeRabbit Commands (Invoked using PR comments)

  • @coderabbitai pause to pause the reviews on a PR.
  • @coderabbitai resume to resume the paused reviews.
  • @coderabbitai review to trigger an incremental review. This is useful when automatic reviews are disabled for the repository.
  • @coderabbitai full review to do a full review from scratch and review all the files again.
  • @coderabbitai summary to regenerate the summary of the PR.
  • @coderabbitai generate docstrings to generate docstrings for this PR.
  • @coderabbitai generate sequence diagram to generate a sequence diagram of the changes in this PR.
  • @coderabbitai resolve to resolve all the CodeRabbit review comments.
  • @coderabbitai configuration to show the current CodeRabbit configuration for the repository.
  • @coderabbitai help to get help.

Other keywords and placeholders

  • Add @coderabbitai ignore anywhere in the PR description to prevent this PR from being reviewed.
  • Add @coderabbitai summary to generate the high-level summary at a specific location in the PR description.
  • Add @coderabbitai anywhere in the PR title to generate the title automatically.

CodeRabbit Configuration File (.coderabbit.yaml)

  • You can programmatically configure CodeRabbit by adding a .coderabbit.yaml file to the root of your repository.
  • Please see the configuration documentation for more information.
  • If your editor has YAML language server enabled, you can add the path at the top of this file to enable auto-completion and validation: # yaml-language-server: $schema=https://coderabbit.ai/integrations/schema.v2.json

Documentation and Community

  • Visit our Documentation for detailed information on how to use CodeRabbit.
  • Join our Discord Community to get help, request features, and share feedback.
  • Follow us on X/Twitter for updates and announcements.

kratos06 merged commit 95a63eb into master on Apr 25, 2025
1 of 3 checks passed
@github-actions

Coverage

Coverage Report
| File | Stmts | Miss | Cover | Missing |
| --- | --- | --- | --- | --- |
| codedog/analyze_code.py | 19 | 19 | 0% | 6–74 |
| codedog/localization.py | 17 | 2 | 88% | 17, 30 |
| codedog/actors/reporters/code_review.py | 107 | 43 | 60% | 48–92, 96–111, 116, 144, 146, 148, 150, 157 |
| codedog/chains/code_review/base.py | 64 | 31 | 52% | 34, 42, 50, 57–72, 79–94, 100–111, 114–117, 122–125, 135 |
| codedog/chains/code_review/translate_code_review_chain.py | 48 | 48 | 0% | 1–103 |
| codedog/chains/pr_summary/base.py | 82 | 20 | 76% | 54, 62, 70, 96–116, 133–136, 178–180, 194–201 |
| codedog/chains/pr_summary/translate_pr_summary_chain.py | 60 | 35 | 42% | 40–49, 61–69, 77–85, 91–96, 101–117, 120–130, 135–151 |
| codedog/retrievers/github_retriever.py | 100 | 41 | 59% | 79, 87, 94–95, 98–99, 102, 111–114, 130–134, 150–151, 154, 163–167, 172–179, 197, 202, 205–209, 217–222, 225–228, 231, 242, 252 |
| codedog/retrievers/gitlab_retriever.py | 119 | 76 | 36% | 43–56, 62, 66, 70, 74, 78, 81–85, 88–92, 95, 104, 116, 124–141, 146–149, 155–163, 166–176, 179–203, 206–212, 216, 219–226, 234, 237–240, 243, 254–255, 258 |
| codedog/templates/optimized_code_review_prompt.py | 9 | 9 | 0% | 8–259 |
| codedog/utils/code_evaluator.py | 1324 | 1324 | 0% | 1–2989 |
| codedog/utils/email_utils.py | 70 | 59 | 16% | 31–49, 71–113, 134–158 |
| codedog/utils/git_hooks.py | 45 | 45 | 0% | 1–146 |
| codedog/utils/git_log_analyzer.py | 170 | 170 | 0% | 1–448 |
| codedog/utils/langchain_utils.py | 233 | 182 | 22% | 25–41, 64, 68, 78–152, 162–299, 309, 316–337, 344–365, 372–393, 399–410, 416–427, 449–479 |
| codedog/utils/remote_repository_analyzer.py | 105 | 105 | 0% | 1–248 |
| TOTAL | 3010 | 2209 | 27% | |

| Tests | Skipped | Failures | Errors | Time |
| --- | --- | --- | --- | --- |
| 46 | 1 💤 | 1 ❌ | 1 🔥 | 4.034s ⏱️ |
