
feat: add benchmark for graphql-tools-mocking #5

Open

alias-mac wants to merge 6 commits into base: main

Conversation

alias-mac (Contributor) commented on Sep 14, 2022

This will allow us to test the performance of the system with different
schema sizes.

Benchmarks

Benchmarks are attached in this PR for ts-web and v-web across multiple scenarios/environments.

There isn't much difference in boot time with or without the GQL server, so it shouldn't be a deciding factor.

Documentation has been added on how to run these benchmarks.

Linux benchmarks:

| Command | Mean [s] | Min [s] | Max [s] | Relative |
|:---|---:|---:|---:|---:|
| `ts-node -T graphql-tools-mocking/benchmark.ts graphql-tools-mocking.graphql` | 2.735 ± 0.071 | 2.657 | 2.890 | 1.00 |
| `ts-node -T graphql-tools-mocking/benchmark.ts ts-deco-fe.federated.graphql` | 3.785 ± 0.052 | 3.708 | 3.888 | 1.38 ± 0.04 |
| `ts-node -T graphql-tools-mocking/benchmark.ts voyager-api.federated.graphql` | 5.238 ± 0.081 | 5.134 | 5.392 | 1.92 ± 0.06 |

MacBook Pro with M1 Pro CPU:

| Command | Mean [s] | Min [s] | Max [s] | Relative |
|:---|---:|---:|---:|---:|
| `ts-node -T graphql-tools-mocking/benchmark.ts graphql-tools-mocking.graphql` | 1.493 ± 0.068 | 1.443 | 1.672 | 1.00 |
| `ts-node -T graphql-tools-mocking/benchmark.ts ts-deco-fe.federated.graphql` | 2.200 ± 0.082 | 2.127 | 2.356 | 1.47 ± 0.09 |
| `ts-node -T graphql-tools-mocking/benchmark.ts voyager-api.federated.graphql` | 2.813 ± 0.062 | 2.750 | 2.949 | 1.88 ± 0.09 |

Add information about issues with arguments.

This will allow us to test the performance of the system with different
schema sizes. This is for the graphql-tools-mocking setup.

Specs (MacBook Pro):

- MacBook Pro (16-inch, 2021)
- Chip: Apple M1 Pro
- Memory: 32 GB

Specs (Linux, x86_64):

- Architecture: x86_64 (CPU op-mode(s): 32-bit, 64-bit; address sizes: 46 bits physical, 48 bits virtual; byte order: Little Endian)
- CPU(s): 16 (on-line CPU(s) list: 0-15)
- Vendor ID: GenuineIntel
- Model name: Intel(R) Xeon(R) CPU E5-2673 v4 @ 2.30GHz
- Memory: 64 GB
Review comment on a generated benchmark results file (diff excerpt):

@@ -0,0 +1,5 @@
| Command | Mean [s] | Min [s] | Max [s] | Relative |

alias-mac (Contributor, Author) commented:

I know these generated files will cause issues with linting, but I'm not sure whether we want to commit them or not.
On one hand, committing them makes it easier to reference and point to the data; on the other, they are generated and thus don't need to be part of the repo.

Comment on lines +38 to +42
$ hyperfine --warmup 3 -r 10 \
'ts-node -T graphql-tools-mocking/benchmark.ts graphql-tools-mocking.graphql' \
'ts-node -T graphql-tools-mocking/benchmark.ts <api-1>.graphql' \
'ts-node -T graphql-tools-mocking/benchmark.ts <api-2>.graphql' \
--export-markdown benchmark-result-<architecture>-<with-server/without-server>.md
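For reference, in the documented command above: --warmup 3 performs three untimed warm-up runs before measuring, -r 10 repeats each command ten times to compute the mean and spread, and --export-markdown writes the summary as a Markdown table (the same format as the result tables above). The <api-1>, <api-2>, and <architecture> placeholders are filled in per run.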
Another contributor commented:

What exactly are we measuring? Is it time to boot + mock 1 query?

alias-mac (Contributor, Author) replied:

Yes, though in this case it's a warm boot. The point here was not to measure the performance of mocking one query, but to test the performance impact of big schemas and whether, as they grow, they will cause problems for our development productivity.
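The benchmark script itself isn't shown in this excerpt, so the following is only a minimal, hypothetical sketch of a "warm boot + one mocked query" measurement in the spirit described above, assuming @graphql-tools/schema and @graphql-tools/mock. The schema file name comes from the CLI argument, as in the documented hyperfine command; everything else (the query, the error handling) is an illustrative assumption, not the PR's actual benchmark.ts.

```ts
// Hypothetical sketch, not the PR's benchmark.ts: build a mocked schema from an
// SDL file given on the command line and execute a single trivial query, so the
// measured time covers boot + schema build + one mocked resolution.
import { readFileSync } from 'fs';
import { graphql } from 'graphql';
import { makeExecutableSchema } from '@graphql-tools/schema';
import { addMocksToSchema } from '@graphql-tools/mock';

async function main(): Promise<void> {
  // Schema file passed on the command line, e.g. graphql-tools-mocking.graphql.
  const schemaFile = process.argv[2];
  const typeDefs = readFileSync(schemaFile, 'utf8');

  // Build an executable schema from the SDL and attach default mocks to it.
  const schema = addMocksToSchema({
    schema: makeExecutableSchema({ typeDefs }),
  });

  // Execute one trivial query; for large schemas, most of the time is spent
  // in transpilation, SDL parsing, and schema building rather than in this call.
  const result = await graphql({ schema, source: '{ __typename }' });
  if (result.errors?.length) {
    throw result.errors[0];
  }
}

main().catch((error) => {
  console.error(error);
  process.exit(1);
});
```

Run under hyperfine via `ts-node -T`, a script along these lines would mostly measure transpilation, parsing, and schema construction rather than per-query mocking throughput, which matches the stated goal of tracking the impact of schema size on development productivity.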
