How easy is it to externalize the visibility of the benchmarks we already run continuously? #10

Open
bhack opened this issue Jul 2, 2020 · 1 comment

@bhack (Contributor) commented Jul 2, 2020

This is a pointer so we do not lose the benchmarking subthread started at tensorflow/tensorflow#33945 (comment).

Quoting @zongweiz:

We have [TF2 benchmarks](https://github.com/tensorflow/models/tree/master/official/benchmark) for every official model garden model, and they run continuously internally using the PerfZero framework. We should be able to select a set of benchmarks and hook them up with GitHub CI (Jenkins). Added @sganeshb to this thread for information and we will follow up.
Another thought is to start CI tests using a selected set of tf_benchmarks.
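
For context, here is a minimal sketch of the kind of benchmark method such a harness discovers and runs; the class, method, and metric names below are illustrative assumptions, not the actual model garden or PerfZero code:

```python
# Minimal sketch of a discoverable benchmark method. The class name, method
# name, and toy model are illustrative assumptions; the real benchmark
# definitions live under official/benchmark in tensorflow/models.
import time

import tensorflow as tf


class ToyKerasBenchmark(tf.test.Benchmark):
  """Benchmarks are typically selected by dotted path, e.g.
  some_module.ToyKerasBenchmark.benchmark_synthetic_data."""

  def benchmark_synthetic_data(self):
    # Small model trained on synthetic data, so the benchmark is
    # self-contained and cheap enough for a CI job.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu", input_shape=(32,)),
        tf.keras.layers.Dense(10),
    ])
    model.compile(
        optimizer="adam",
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))

    x = tf.random.normal([1024, 32])
    y = tf.random.uniform([1024], maxval=10, dtype=tf.int32)

    start = time.time()
    model.fit(x, y, batch_size=64, epochs=1, verbose=0)
    wall_time = time.time() - start

    # report_benchmark emits the numbers that a benchmarking harness
    # (and any CI dashboard in front of it) can collect and track over time.
    self.report_benchmark(
        wall_time=wall_time,
        extras={"examples_per_second": 1024 / wall_time})
```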

/cc @alextp @naveenjha123 @sganeshb

@bhack (Contributor, Author) commented Jul 15, 2020

Is it ok to track this here?
