The CodeFlare Operator embeds two controllers. The RayCluster controller creates the resources (Secrets, Ingresses, Routes, Services, ServiceAccounts, and ClusterRoleBindings) needed for the RayClusters it manages to work as expected.
The AppWrapper controller is a flexible and workload-agnostic mechanism that enables Kueue to manage a group of Kubernetes resources as a single logical unit and provides an additional level of automatic fault detection and recovery.
For each controller, there are webhooks in place that can be found here.
CodeFlare Stack Compatibility Matrix
Component | Version |
---|---|
CodeFlare Operator | v1.15.0 |
CodeFlare-SDK | v0.27.0 |
AppWrapper | v1.0.4 |
KubeRay | v1.2.2 |
Kueue | v0.10.1 |
Requirements:
- GNU sed - sed is used in several Makefile commands. The default macOS sed is incompatible, so GNU sed is needed for these commands to execute correctly. When you have a version of GNU sed installed on macOS, you may specify the binary when invoking make:

brew install gnu-sed
make install -e SED=/usr/local/bin/gsed
- Kind - Kind is used by the kind-e2e target in the Makefile. Follow the instructions for setting up Kind here.
The e2e tests can be executed locally by running the following commands:
- Use an existing cluster, or set up a test cluster, e.g.:

# Create a KinD cluster
make kind-e2e
Note
Some e2e tests cover access to services via Ingresses, as end users would do, which requires access to the Ingress controller load balancer by its IP address. For this to work on macOS, docker-mac-net-connect must be installed.
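If needed, docker-mac-net-connect can be installed via its Homebrew tap; this is a sketch, check the project's documentation for the current instructions:

```bash
# Install the tool that routes traffic from the macOS host into Docker Desktop's Linux VM
brew install chipmk/tap/docker-mac-net-connect
# Run it as a background service (requires sudo to create the network interface)
sudo brew services start chipmk/tap/docker-mac-net-connect
```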
- Set up the rest of the CodeFlare stack:
make setup-e2e
Note
Kueue will only activate its Ray integration if KubeRay is installed before Kueue (as done by this make target).
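To confirm that the Ray integration is active, one option is to inspect Kueue's configuration; this is a sketch assuming the default kueue-system namespace and kueue-manager-config ConfigMap names used by a standard Kueue installation:

```bash
# The list of enabled integration frameworks should include ray.io/raycluster
kubectl -n kueue-system get configmap kueue-manager-config -o yaml | grep -A 10 'integrations:'
```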
Note
On OpenShift, the KubeRay operator pod is assigned a random user ID, which is then used to run the Ray cluster. However, the random user assigned by OpenShift doesn't have the rights to store the dataset downloaded as part of test execution, causing the tests to fail. To prevent this failure on OpenShift, the user should enforce user ID 1000 for KubeRay and the Ray cluster by creating this SCC in the KubeRay operator namespace (replace the namespace placeholder):
kind: SecurityContextConstraints
apiVersion: security.openshift.io/v1
metadata:
  name: run-as-ray-user
seLinuxContext:
  type: MustRunAs
runAsUser:
  type: MustRunAs
  uid: 1000
users:
- 'system:serviceaccount:$(namespace):kuberay-operator'
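As a usage sketch, assuming the manifest above is saved as run-as-ray-user.yaml and the KubeRay operator runs in a hypothetical opendatahub namespace, the placeholder can be substituted and the SCC created with:

```bash
# Replace the $(namespace) placeholder with the actual operator namespace and apply the SCC
sed 's/$(namespace)/opendatahub/' run-as-ray-user.yaml | oc apply -f -
```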
- In the /etc/hosts file add the following lines:

127.0.0.1 ray-dashboard-raycluster-test-ns-1.kind
127.0.0.1 ray-dashboard-raycluster-test-ns-2.kind
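If preferred, the same entries can be appended from a shell; this is a sketch, adjust the hostnames if your test namespaces differ:

```bash
# Append the Ray dashboard hostnames used by the e2e tests to /etc/hosts
echo '127.0.0.1 ray-dashboard-raycluster-test-ns-1.kind' | sudo tee -a /etc/hosts
echo '127.0.0.1 ray-dashboard-raycluster-test-ns-2.kind' | sudo tee -a /etc/hosts
```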
- Build, push and deploy the codeflare-operator image:

make image-push IMG=<full-registry>:<tag>
make deploy -e IMG=<full-registry>:<tag> -e ENV="e2e"
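For example, with a hypothetical quay.io/myorg/codeflare-operator:dev image reference:

```bash
# Build and push the operator image, then deploy it configured for the e2e environment
make image-push IMG=quay.io/myorg/codeflare-operator:dev
make deploy -e IMG=quay.io/myorg/codeflare-operator:dev -e ENV="e2e"
```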
- To run the tests, run the command:

make test-e2e

Alternatively, you can run the e2e test(s) from your IDE / debugger.
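As a sketch of invoking the suite directly instead of through the Makefile target, assuming the e2e tests live under test/e2e (the path and timeout may differ):

```bash
# Run the Go e2e test suite with verbose output and a generous timeout
go test -v -timeout 60m ./test/e2e/...
```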
To run the e2e tests on a disconnected cluster, the user has to provide additional environment variables to configure the testing environment:
- CODEFLARE_TEST_PYTORCH_IMAGE - image tag for the image used to run the training job
- CODEFLARE_TEST_RAY_IMAGE - image tag for the Ray cluster image
- MNIST_DATASET_URL - URL where the MNIST dataset is available
- PIP_INDEX_URL - URL where the PyPI server with the needed dependencies is running
- PIP_TRUSTED_HOST - PyPI server hostname
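For illustration, these could be exported before invoking the tests; all values below are hypothetical placeholders:

```bash
# Point the e2e tests at mirrored images and an internal PyPI mirror (example values only)
export CODEFLARE_TEST_PYTORCH_IMAGE=registry.example.com/pytorch/pytorch:latest
export CODEFLARE_TEST_RAY_IMAGE=registry.example.com/rayproject/ray:latest
export MNIST_DATASET_URL=https://mirror.example.com/datasets/mnist/
export PIP_INDEX_URL=https://pypi.example.com/simple
export PIP_TRUSTED_HOST=pypi.example.com
make test-e2e
```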
For ODH tests additional environment variables are needed:
- NOTEBOOK_IMAGE_STREAM_NAME - name of the ODH Notebook ImageStream to be used
- ODH_NAMESPACE - namespace where ODH is installed
- Invoke the project-codeflare-release.yaml GitHub action.
- Once all jobs within the action are completed, verify that the compatibility matrix in the README was properly updated.
- Verify that the opened pull request to the OpenShift community operators repository has the proper content.
- Once the PR is merged, announce the new release in Slack and mailing lists, if any.
- Trigger the auto-merge-sync workflow and verify it ran successfully. This will sync changes to the ODH CodeFlare-Operator repo and the Red Hat CodeFlare Operator repo. Please review the new merge commit and commit history, and verify changes are also in the latest rhoai release branch.
- If the auto-merge fails, conflicts must be resolved and force pushed manually to each downstream repository and release branch.
- In ODH/CFO, verify that the Build and Push action was triggered and ran successfully.
- Make sure that the release automation created a PR updating the CodeFlare SDK version in the ODH Notebooks repository, and make sure the PR gets merged.
- Run the ODH CodeFlare Operator release workflow to produce the ODH CodeFlare Operator release.
- Ensure that the version details in the config/component_metadata.yaml file are updated to reflect the latest upstream CodeFlare Operator release version.
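As an illustrative sketch only (the actual layout of the file in the repository may differ), the entry could look like:

```yaml
# config/component_metadata.yaml - hypothetical example entry
releases:
  - name: CodeFlare Operator
    version: 1.15.0
    repoUrl: https://github.com/project-codeflare/codeflare-operator
```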
There may be instances in which a new CodeFlare stack release requires releases of only a subset of the stack components. Examples could be hotfixes for a specific component. In these instances:
- Build updated components as needed:
  - Build and release CodeFlare-SDK
- Invoke the tag-and-build.yml GitHub action; this action will create a repository tag, then build and push the operator image.
- Check the result of the tag-and-build.yml GitHub action; it should pass.
- Verify that the compatibility matrix in the README was properly updated.
- Follow steps 3-6 from the previous section.