docs: add benchmark doc #2229

Merged

merged 1 commit into dev/1.3 from doc/benchmark on Jul 17, 2024

Conversation

@MrKou47 (Member) commented Jul 16, 2024

Add benchmark documentation in Chinese

Summary by CodeRabbit

  • Documentation
    • Introduced a benchmark testing document comparing performance of Galacean Engine, Babylon.js, and Three.js.
    • Document includes testing environment details and benchmarking process for rendering performance with glTF models, particle systems, and 2D sprites.

coderabbitai bot commented Jul 16, 2024

Walkthrough

The new benchmark.mdx file adds a benchmarking document that evaluates the performance of Galacean Engine, Babylon.js, and Three.js. It details the testing environment, including hardware specifications, and describes how rendering performance is benchmarked with glTF models, particle systems, and 2D sprites, with the aim of giving the community transparent, comparable performance data.
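
For context, rendering benchmarks like these are typically driven by sampling the frame rate over a fixed window. The sketch below is a hypothetical TypeScript illustration of that idea, not the actual harness from the galacean/benchmark repository:

```ts
// Hypothetical FPS sampler (illustrative only; the real measurement code
// lives in https://github.com/galacean/benchmark).
function sampleAverageFps(durationMs: number): Promise<number> {
  return new Promise((resolve) => {
    let frames = 0;
    let start: number | undefined;
    const tick = (now: number) => {
      if (start === undefined) start = now;
      frames++;
      const elapsed = now - start;
      if (elapsed >= durationMs) {
        resolve((frames * 1000) / elapsed); // frames per second over the window
      } else {
        requestAnimationFrame(tick);
      }
    };
    requestAnimationFrame(tick);
  });
}

// Example: report the average frame rate over a 5-second window.
sampleAverageFps(5000).then((fps) => console.log(`avg ${fps.toFixed(1)} fps`));
```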

Changes

| File | Change Summary |
| --- | --- |
| docs/quick-start/benchmark.mdx | Introduces the benchmark.mdx document outlining benchmark tests for Galacean Engine, Babylon.js, and Three.js, detailing the environment, processes, and metrics. |

Poem

In the realm of engines three,
Benchmarks set us free,
Galacean, Babylon, and Three.js,
Rendering their best, no less.
Transparent as can be,
For the community to see,
Performance tests with glee! 🚀🐇


Thank you for using CodeRabbit. We offer it for free to the OSS community and would appreciate your support in helping us grow. If you find it useful, would you consider giving us a shout-out on your favorite social media?

Tips

Chat

There are 3 ways to chat with CodeRabbit:

  • Review comments: Directly reply to a review comment made by CodeRabbit. Example:
    • I pushed a fix in commit <commit_id>.
    • Generate unit testing code for this file.
    • Open a follow-up GitHub issue for this discussion.
  • Files and specific lines of code (under the "Files changed" tab): Tag @coderabbitai in a new review comment at the desired location with your query. Examples:
    • @coderabbitai generate unit testing code for this file.
    • @coderabbitai modularize this function.
  • PR comments: Tag @coderabbitai in a new PR comment to ask questions about the PR branch. For the best results, please provide a very specific query, as very limited context is provided in this mode. Examples:
    • @coderabbitai generate interesting stats about this repository and render them as a table.
    • @coderabbitai show all the console.log statements in this repository.
    • @coderabbitai read src/utils.ts and generate unit testing code.
    • @coderabbitai read the files in the src/scheduler package and generate a class diagram using mermaid and a README in the markdown format.
    • @coderabbitai help me debug CodeRabbit configuration file.

Note: Be mindful of the bot's finite context window. It's strongly recommended to break down tasks such as reading entire modules into smaller chunks. For a focused discussion, use review comments to chat about specific files and their changes, instead of using the PR comments.

CodeRabbit Commands (invoked as PR comments)

  • @coderabbitai pause to pause the reviews on a PR.
  • @coderabbitai resume to resume the paused reviews.
  • @coderabbitai review to trigger an incremental review. This is useful when automatic reviews are disabled for the repository.
  • @coderabbitai full review to do a full review from scratch and review all the files again.
  • @coderabbitai summary to regenerate the summary of the PR.
  • @coderabbitai resolve to resolve all the CodeRabbit review comments.
  • @coderabbitai configuration to show the current CodeRabbit configuration for the repository.
  • @coderabbitai help to get help.

Additionally, you can add @coderabbitai ignore anywhere in the PR description to prevent this PR from being reviewed.

CodeRabbit Configuration File (.coderabbit.yaml)

  • You can programmatically configure CodeRabbit by adding a .coderabbit.yaml file to the root of your repository.
  • Please see the configuration documentation for more information.
  • If your editor has YAML language server enabled, you can add the path at the top of this file to enable auto-completion and validation: # yaml-language-server: $schema=https://coderabbit.ai/integrations/schema.v2.json
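
For illustration, a minimal .coderabbit.yaml could look like the sketch below. The schema comment is taken from the bullet above; the reviews.profile key is an assumption (suggested by the "Review profile: CHILL" line later in this thread), so check the configuration documentation before relying on it:

```yaml
# yaml-language-server: $schema=https://coderabbit.ai/integrations/schema.v2.json
# Illustrative sketch; verify keys against the configuration documentation.
reviews:
  profile: chill
```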

Documentation and Community

  • Visit our Documentation for detailed information on how to use CodeRabbit.
  • Join our Discord Community to get help, request features, and share feedback.
  • Follow us on X/Twitter for updates and announcements.

@MrKou47 requested a review from GuoLei1990 on July 16, 2024 at 11:39

codecov bot commented Jul 16, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 69.35%. Comparing base (4cc191e) to head (7444f1b).
Report is 4 commits behind head on dev/1.3.

Additional details and impacted files
@@             Coverage Diff             @@
##           dev/1.3    #2229      +/-   ##
===========================================
+ Coverage    68.27%   69.35%   +1.08%     
===========================================
  Files          468      523      +55     
  Lines        24353    27304    +2951     
  Branches      3632     4067     +435     
===========================================
+ Hits         16626    18936    +2310     
- Misses        6400     6891     +491     
- Partials      1327     1477     +150     
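
As a sanity check, the headline percentages follow directly from the Hits/Lines counts in the diff above; a short illustrative TypeScript snippet:

```ts
// Recompute Codecov's headline coverage figures from the table above.
const base = { hits: 16626, lines: 24353 }; // dev/1.3 at 4cc191e
const head = { hits: 18936, lines: 27304 }; // PR head at 7444f1b

const pct = ({ hits, lines }: { hits: number; lines: number }) =>
  Math.round((hits / lines) * 10000) / 100; // percentage, 2 decimal places

console.log(pct(base)); // 68.27
console.log(pct(head)); // 69.35
console.log((pct(head) - pct(base)).toFixed(2)); // "1.08"
```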

☔ View full report in Codecov by Sentry.

coderabbitai bot left a comment

Actionable comments posted: 1

Outside diff range, codebase verification and nitpick comments (1)
docs/quick-start/benchmark.mdx (1)

23-29: Clarify the use of three-nebula for Three.js.

It's mentioned that Three.js does not have a native particle system, and three-nebula is used instead. Clarifying this could help avoid confusion about the capabilities of Three.js.

- > Three.js itself has no Particle System, nor a currently maintained go-to third-party library; similar effects are generally implemented by developers on their own. We therefore use `three-nebula` in this test to stand in for Three.js's particle system
+ > Note: Three.js does not include a Particle System itself. In this benchmark we use `three-nebula`, a third-party library, to emulate a particle system for Three.js.
Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

Commits

Files that changed from the base of the PR, between commits d15dcd3 and 7444f1b.

Files selected for processing (1)
  • docs/quick-start/benchmark.mdx (1 hunks)
Additional comments not posted (3)
docs/quick-start/benchmark.mdx (3)

16-20: Approve the basic rendering test description.

The description of the rendering tests using glTF models is clear and the custom component <BenchmarkDisplay> is used correctly.


31-35: Approve the 2D rendering test description.

The description of the 2D rendering tests is clear and concise, and the custom component <BenchmarkDisplay> is used correctly.


8-14: Verify the benchmark repository link.

The link to the benchmark repository is provided, but it's crucial to ensure that it is accessible and contains the relevant source code.

Verification successful.

The link to the benchmark repository is correct and accessible.

  • The repository title confirms it is the benchmark for Galacean.
Scripts executed

The following scripts were executed for the analysis:

Script:

```bash
#!/bin/bash
# Description: Check if the benchmark repository link is accessible and contains the relevant source code.
curl -s https://github.com/galacean/benchmark | grep -o "<title>.*</title>"
```

Length of output: 143

Comment on lines +37 to +41
## Summary

We chose Babylon.js, Three.js, and PixiJS for these tests because they are among the most popular engines today, and during development they have been the targets we strive to catch up with and surpass. We hope these benchmarks provide the community with transparent and comparable performance information.

If you would like us to include your game engine, or you think these tests could be improved, feel free to open an [issue](https://github.com/galacean/benchmark/issues).

Enhance the invitation for community engagement.

The invitation to open issues for suggestions or to include other engines is good, but it could be more engaging.

- If you would like us to include your game engine, or you think these tests could be improved, feel free to open an [issue](https://github.com/galacean/benchmark/issues).
+ We welcome the community to suggest improvements or to ask for more game engines to be included. If you have any ideas, please share them with us by opening an [issue](https://github.com/galacean/benchmark/issues).
Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
## Summary
We chose Babylon.js, Three.js, and PixiJS for these tests because they are among the most popular engines today, and during development they have been the targets we strive to catch up with and surpass. We hope these benchmarks provide the community with transparent and comparable performance information.
If you would like us to include your game engine, or you think these tests could be improved, feel free to open an [issue](https://github.com/galacean/benchmark/issues)
## Summary
We chose Babylon.js, Three.js, and PixiJS for these tests because they are among the most popular engines today, and during development they have been the targets we strive to catch up with and surpass. We hope these benchmarks provide the community with transparent and comparable performance information.
We welcome the community to suggest improvements or to ask for more game engines to be included. If you have any ideas, please share them with us by opening an [issue](https://github.com/galacean/benchmark/issues)

@GuoLei1990 added the documentation (Improvements or additions to documentation) and ignore for release labels on Jul 17, 2024
@GuoLei1990 merged commit 5415f7f into dev/1.3 on Jul 17, 2024
8 checks passed
@GuoLei1990 deleted the doc/benchmark branch on July 19, 2024 at 05:18
Labels
documentation, ignore for release

2 participants