Extend tests to other branches #24

Merged · 16 commits · May 15, 2025
5 changes: 2 additions & 3 deletions .github/workflows/benchmark.yml
@@ -4,10 +4,9 @@ concurrency:
   cancel-in-progress: true

 on:
-  push:
-    branches:
-      - main
   pull_request:
+    branches:
+      - 'main'
   workflow_dispatch:

 permissions:
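Read back as plain YAML, the trigger block after this change would be as follows (a reconstruction from the hunk above, since the scrape dropped the diff markers: the push trigger is removed and pull_request is restricted to main):

on:
  pull_request:
    branches:
      - 'main'
  workflow_dispatch: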
130 changes: 130 additions & 0 deletions .github/workflows/micromamba.yml
@@ -0,0 +1,130 @@
name: micromamba
# concurrency:
#   group: ${{ github.head_ref || github.run_id }}
#   cancel-in-progress: true

on:
  pull_request:
  workflow_dispatch:
  schedule:
    # - cron: "*/30 * * * *" # Runs every 30 minutes for testing
    - cron: "30 1 * * *" # at 1.30am
## these permissions are only for deployment to gh pages
# permissions:
#   id-token: write
#   pages: write

jobs:
  run-benchmark-micromamba:
    name: run_clustbench_micromamba
    ## runs-on: ubuntu-latest
    runs-on: self-hosted
    strategy:
      matrix:
        ob_branch: [dev, reduce_install_scope, main]
        micromamba-version: ['2.1.1-0', '2.0.5-0', '1.5.12-0', '1.5.8-0']
      fail-fast: false
    concurrency:
      group: micromamba-${{ matrix.micromamba-version }}-${{ matrix.ob_branch }}
      cancel-in-progress: false # true
    steps:
      - name: Check out repository
        uses: actions/checkout@v4

      - name: Install (with) micromamba
        uses: mamba-org/setup-micromamba@v2
        with:
          cache-environment: false # true
          micromamba-version: ${{ matrix.micromamba-version }}
          download-micromamba: true
          micromamba-binary-path: ${{ runner.temp }}/bin/micromamba-${{ matrix.micromamba-version }}/micromamba
          environment-name: test-env-${{ matrix.ob_branch }}-${{ matrix.micromamba-version }}
          create-args: >-
            python=3.12
            pip
            conda
          post-cleanup: environment # all

      - name: Overwrite omnibenchmark CLI to branch
        shell: bash -l {0}
        run: |
          micromamba --version
          pip install git+https://github.com/omnibenchmark/omnibenchmark.git@${{ matrix.ob_branch }}

      # - name: Enable a benchmarking `out` cache
      #   id: cache-benchmark
      #   uses: actions/cache@v3
      #   with:
      #     path: out/
      #     key: benchmark-${{ runner.os }}-${{ hashFiles('Clustering.yaml') }}

      - name: Run benchmark
        shell: bash -l {0}
        run: |
          env
          output=$( echo "y" | ob run benchmark -b Clustering.yaml --local --cores 10 2>&1 )
          status=$?
          if echo "$output" | grep -i 'Benchmark run has finished successfully'; then
            status=0
          fi
          echo -e $output
          sh -c "exit $status"
        if: matrix.ob_branch == 'dev' || matrix.ob_branch == 'reduce_install_scope'

      - name: Run benchmark
        shell: bash -l {0}
        run: |
          env
          output=$( ob run benchmark -b Clustering.yaml --local --threads 10 2>&1 )
          status=$?
          if echo "$output" | grep -i 'Benchmark run has finished successfully'; then
            status=0
          fi
          echo -e $output
          sh -c "exit $status"
        if: matrix.ob_branch == 'main'
Comment on lines +73 to +84

Contributor:

For now, we would need to make sure to use a benchmark YAML without metric collectors for the main branch, or to comment out the main branch from the matrix. Keeping it as it is will just pollute the failing tests with false positives.

@imallona (Member, Author), May 8, 2025:

I understand, but to me these are true positives: clustbench doesn't work on ob main. I'm looking forward to seeing all tests green once ob main gets updated to be clustbench-compatible. A different perspective, I think.

Contributor:

@imallona right, fair enough. We want to provide the nightly tests for all the expected features.

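As a hedged sketch of the first suggestion in the thread above (the Clustering_no_collectors.yaml filename is hypothetical, not a file in this PR), the main-branch leg of the matrix could point at a collector-free benchmark definition:

      - name: Run benchmark (main, no metric collectors)
        shell: bash -l {0}
        run: |
          # assumes a separate config without metric collectors exists
          ob run benchmark -b Clustering_no_collectors.yaml --local --threads 10
        if: matrix.ob_branch == 'main'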
  # upload-artifact:
  #   name: Benchmark Artifact
  #   runs-on: ubuntu-latest
  #   ## runs-on: self-hosted
  #   needs: run-benchmark
  #   if: always()
  #   steps:
  #     - name: Check out repository
  #       uses: actions/checkout@v4

  #     - name: Load cached output
  #       uses: actions/cache@v3
  #       with:
  #         path: out/
  #         key: benchmark-${{ runner.os }}-${{ hashFiles('Clustering.yaml') }}

  #     - name: Prepare output
  #       run: |
  #         zip -r benchmark_output.zip out/
  #         mkdir -p gh-pages
  #         cp out/plotting/plotting_report.html gh-pages/index.html

  #     - name: Upload zipped output
  #       uses: actions/upload-artifact@v4
  #       with:
  #         name: benchmark-output
  #         path: benchmark_output.zip
  #         retention-days: 7

  #     - name: Upload Pages Artifact
  #       uses: actions/upload-pages-artifact@v3
  #       with:
  #         path: gh-pages

  #     - name: Deploy to GitHub Pages
  #       uses: actions/deploy-pages@v4

  #     - name: Create Job Summary
  #       if: always()
  #       run: |
  #         echo "### Reports" >> $GITHUB_STEP_SUMMARY
  #         echo "- [Plotting Report](https://${{ github.repository_owner }}.github.io/${{ github.event.repository.name }})" >> $GITHUB_STEP_SUMMARY
  #         echo "### All Outputs" >> $GITHUB_STEP_SUMMARY
  #         echo "- [Complete Benchmark Output](https://github.com/${{ github.repository }}/actions/runs/${{ github.run_id }}#artifacts)" >> $GITHUB_STEP_SUMMARY

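Both active Run benchmark steps rely on the same idiom: capture the CLI's combined output, then force a zero exit status whenever the success banner appears, regardless of what ob itself returned. A slightly tightened sketch of that step, assuming the same banner string as the workflow above: it quotes the captured output so newlines survive, uses grep -q to suppress the matched line, and exits directly instead of through sh -c.

      - name: Run benchmark
        shell: bash -l {0}
        run: |
          output=$( echo "y" | ob run benchmark -b Clustering.yaml --local --cores 10 2>&1 )
          status=$?
          # treat the run as successful whenever the banner is present,
          # even if the CLI exited non-zero
          if echo "$output" | grep -qi 'Benchmark run has finished successfully'; then
            status=0
          fi
          printf '%s\n' "$output"
          exit "$status"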
139 changes: 139 additions & 0 deletions .github/workflows/miniconda_miniforge.yml
@@ -0,0 +1,139 @@
name: clustbench_miniforge
# concurrency:
#   group: ${{ github.head_ref || github.run_id }}
#   cancel-in-progress: true

on:
  pull_request:
  workflow_dispatch:
  schedule:
    # - cron: "*/30 * * * *" # Runs every 30 minutes for testing
    - cron: "30 1 * * *" # at 1.30am

## these permissions are only for deployment to gh pages
# permissions:
#   id-token: write
#   pages: write

jobs:
  run-benchmark-miniforge:
    name: run_clustbench_miniforge
    ## runs-on: ubuntu-latest
    runs-on: self-hosted
    strategy:
      matrix:
        ob_branch: [dev, reduce_install_scope, main]
      fail-fast: false
    concurrency:
      group: mambaforge-${{ matrix.ob_branch }}
      cancel-in-progress: false # true
    steps:
      - name: Check out repository
        uses: actions/checkout@v4

      - name: Install Mambaforge
        uses: conda-incubator/setup-miniconda@v3
        with:
          miniforge-variant: Miniforge3
          use-mamba: true
          activate-environment: test-env-${{ matrix.ob_branch }}
          python-version: "3.12"
          auto-update-conda: true
          channels: conda-forge

      - name: Cache environment
        id: cache-env
        uses: actions/cache@v3
        with:
          path: |
            ~/.conda/pkgs
            ~/.conda/envs/omnibenchmark-env
            ~/.cache/pip
          key: ${{ runner.os }}-conda-pip-${{ hashFiles('requirements.txt') }}
          restore-keys: |
            ${{ runner.os }}-conda-pip-

      - name: Install omnibenchmark CLI
        shell: bash -l {0}
        run: |
          mamba install -y pip
          pip install git+https://github.com/omnibenchmark/omnibenchmark.git@${{ matrix.ob_branch }}

      # - name: Enable a benchmarking `out` cache
      #   id: cache-benchmark
      #   uses: actions/cache@v3
      #   with:
      #     path: out/
      #     key: benchmark-${{ runner.os }}-${{ hashFiles('Clustering.yaml') }}

      - name: Run benchmark
        shell: bash -l {0}
        run: |
          env
          output=$( echo "y" | ob run benchmark -b Clustering.yaml --local --cores 10 2>&1 )
          status=$?
          if echo "$output" | grep -i 'Benchmark run has finished successfully'; then
            status=0
          fi
          echo -e $output
          sh -c "exit $status"
        if: matrix.ob_branch == 'dev' || matrix.ob_branch == 'reduce_install_scope'

      - name: Run benchmark
        shell: bash -l {0}
        run: |
          env
          output=$( ob run benchmark -b Clustering.yaml --local --threads 10 2>&1 )
          status=$?
          if echo "$output" | grep -i 'Benchmark run has finished successfully'; then
            status=0
          fi
          echo -e $output
          sh -c "exit $status"
        if: matrix.ob_branch == 'main'
Comment on lines +82 to +93

Contributor:

Same reasoning here.

  # upload-artifact:
  #   name: Benchmark Artifact
  #   runs-on: ubuntu-latest
  #   ## runs-on: self-hosted
  #   needs: run-benchmark
  #   if: always()
  #   steps:
  #     - name: Check out repository
  #       uses: actions/checkout@v4

  #     - name: Load cached output
  #       uses: actions/cache@v3
  #       with:
  #         path: out/
  #         key: benchmark-${{ runner.os }}-${{ hashFiles('Clustering.yaml') }}

  #     - name: Prepare output
  #       run: |
  #         zip -r benchmark_output.zip out/
  #         mkdir -p gh-pages
  #         cp out/plotting/plotting_report.html gh-pages/index.html

  #     - name: Upload zipped output
  #       uses: actions/upload-artifact@v4
  #       with:
  #         name: benchmark-output
  #         path: benchmark_output.zip
  #         retention-days: 7

  #     - name: Upload Pages Artifact
  #       uses: actions/upload-pages-artifact@v3
  #       with:
  #         path: gh-pages

  #     - name: Deploy to GitHub Pages
  #       uses: actions/deploy-pages@v4

  #     - name: Create Job Summary
  #       if: always()
  #       run: |
  #         echo "### Reports" >> $GITHUB_STEP_SUMMARY
  #         echo "- [Plotting Report](https://${{ github.repository_owner }}.github.io/${{ github.event.repository.name }})" >> $GITHUB_STEP_SUMMARY
  #         echo "### All Outputs" >> $GITHUB_STEP_SUMMARY
  #         echo "- [Complete Benchmark Output](https://github.com/${{ github.repository }}/actions/runs/${{ github.run_id }}#artifacts)" >> $GITHUB_STEP_SUMMARY
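One caveat on the miniforge workflow's Cache environment step: the key hashes requirements.txt, yet the CLI is installed from a git URL rather than from a pinned requirements file, so the key may rarely change and stale environments could be restored. A hedged alternative (an assumption, not part of this PR) is to key the cache on the workflow file itself, so edits to the environment setup invalidate it:

      - name: Cache environment
        uses: actions/cache@v3
        with:
          path: |
            ~/.conda/pkgs
            ~/.cache/pip
          # hypothetical key: invalidate whenever this workflow changes
          key: ${{ runner.os }}-conda-pip-${{ hashFiles('.github/workflows/miniconda_miniforge.yml') }}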
