summary: This pattern demonstrates how Red Hat OpenShift AI can be used to build an end-to-end MLOps platform, using a credit card fraud detection use case.
Most components installed as part of this pattern are available in the {rhoai} (RHOAI) console. To open the console, click the {rhoai} link in the application launcher of the OpenShift console.
The pattern installation automatically creates and runs a Kubeflow pipeline to build and train the fraud detection model. To view pipeline details in the RHOAI console, select the *Pipelines* tab.
This tab displays the fraud-detection pipeline deployed as part of this pattern. To view the specific run that trained the initial model, select the *Runs* tab and then select the *job-run* item.
[NOTE]
====
The source code for this pipeline run is available in the pattern repository at link:https://github.com/validatedpatterns/mlops-fraud-detection/blob/main/src/kubeflow-pipelines/small-model/train_upload_model.yaml[src/kubeflow-pipelines/small-model].
====
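
If you want to understand or extend this workflow, the following is a minimal sketch of how a comparable train-and-upload pipeline can be written with the Kubeflow Pipelines (kfp) v2 Python SDK and compiled to YAML. The component names, parameters, and placeholder bodies are assumptions for illustration only and do not reproduce the pattern's actual pipeline source.

[source,python]
----
# Minimal sketch of a two-step train-and-upload pipeline written with the
# Kubeflow Pipelines (kfp) v2 SDK and compiled to YAML. Component names,
# parameters, and bodies are illustrative placeholders, not the pattern's
# actual pipeline source.
from kfp import compiler, dsl


@dsl.component(base_image="python:3.11")
def train_model(epochs: int, model: dsl.Output[dsl.Model]):
    # A real component would load the dataset, train the fraud detection
    # model, and serialize it to model.path.
    with open(model.path, "w") as f:
        f.write(f"placeholder model trained for {epochs} epochs")


@dsl.component(base_image="python:3.11")
def upload_model(model: dsl.Input[dsl.Model], bucket: str):
    # A real component would push the serialized model to S3-compatible
    # storage (Minio in this pattern) so that the model server can load it.
    print(f"would upload {model.path} to bucket {bucket}")


@dsl.pipeline(name="fraud-detection-train-upload")
def train_upload_pipeline(epochs: int = 2, bucket: str = "models"):
    trained = train_model(epochs=epochs)
    upload_model(model=trained.outputs["model"], bucket=bucket)


if __name__ == "__main__":
    # Produces a YAML definition analogous in role to train_upload_model.yaml.
    compiler.Compiler().compile(train_upload_pipeline, "pipeline.yaml")
----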
[id="kserve-model-serving"]
=== Kserve model serving
You can view the model deployment in the *Model Deployments* tab of the RHOAI console.
The pattern installs a simple Gradio front end to communicate with the fraud detection model. To access the application, click the link in the application launcher of the OpenShift console.
You can manually enter transaction details in the form. The application includes two examples: a fraudulent transaction and a non-fraudulent transaction.
[NOTE]
====
Due to the non-deterministic nature of the training process, the model might not always identify these transactions accurately.
====
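
To illustrate how a client such as the Gradio front end can query the served model, the following sketch sends a REST request to a Kserve endpoint using the Open Inference (v2) protocol. The endpoint URL, model name, input tensor name, and feature ordering are assumptions; read the actual values from the deployed inference service.

[source,python]
----
# Hedged sketch: query a Kserve/OpenVINO endpoint with the Open Inference (v2)
# REST protocol. The URL, model name, input tensor name, and feature order are
# illustrative assumptions -- read them from your deployed inference service.
import requests

ENDPOINT = "https://fraud-detection.example.com"  # assumed route to the model server
MODEL_NAME = "fraud-detection"                    # assumed model name


def predict(features: list[float]) -> dict:
    """Send one transaction (a flat feature vector) and return the raw response."""
    payload = {
        "inputs": [
            {
                "name": "dense_input",        # assumed input tensor name
                "shape": [1, len(features)],
                "datatype": "FP32",
                "data": features,
            }
        ]
    }
    resp = requests.post(
        f"{ENDPOINT}/v2/models/{MODEL_NAME}/infer",
        json=payload,
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()


if __name__ == "__main__":
    # Example feature vector only -- the real feature names, scaling, and order
    # come from the training pipeline.
    print(predict([0.3, 1.2, 0.0, 0.0, 1.0]))
----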
[NOTE]
====
The source code for the inferencing application is available in the pattern repository at link:https://github.com/validatedpatterns/mlops-fraud-detection/blob/main/src/inferencing-app/app.py[src/inferencing-app].
====
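
As a rough illustration of how such a front end can be assembled, the following minimal Gradio sketch collects transaction fields and returns a prediction. It is not the pattern's `app.py`; the field names and placeholder scoring logic are assumptions, and a real implementation would forward the features to the Kserve endpoint shown earlier.

[source,python]
----
# Minimal Gradio sketch of an inferencing front end. Field names and the
# scoring logic are placeholders, not the pattern's actual app.py.
import gradio as gr


def classify(distance: float, ratio: float, used_chip: float,
             used_pin: float, online_order: float) -> str:
    # A real implementation would send these features to the Kserve endpoint
    # (see the request sketch earlier) and interpret the returned score.
    score = 0.9 if online_order and not used_pin else 0.1  # placeholder logic
    return "fraudulent" if score > 0.5 else "not fraudulent"


demo = gr.Interface(
    fn=classify,
    inputs=[
        gr.Number(label="Distance from last transaction", value=0),
        gr.Number(label="Ratio to median purchase price", value=0),
        gr.Number(label="Used chip (0 or 1)", value=0),
        gr.Number(label="Used PIN number (0 or 1)", value=0),
        gr.Number(label="Online order (0 or 1)", value=0),
    ],
    outputs=gr.Textbox(label="Prediction"),
    title="Credit card fraud detection (sketch)",
)

if __name__ == "__main__":
    demo.launch(server_name="0.0.0.0", server_port=7860)
----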
== MLOps credit card fraud detection on IBM Fusion
This pattern can be deployed with IBM Fusion. For more details, see the link:https://community.ibm.com/community/user/blogs/saif-adil/2026/01/08/deploying-ai-driven-credit-card-fraud-detection[IBM Community post].
:imagesdir: ../../images
[id="about-mlops-fraud-detection-pattern"]
= About the MLOps Fraud Detection Pattern
NOTE: This pattern has been reworked for a modern RHOAI experience. To see the original pattern, check out the link:https://github.com/validatedpatterns/mlops-fraud-detection/tree/legacy[legacy branch].
MLOps Credit Card Fraud Detection use case::
* Build, train, and serve models in RHOAI to detect credit card fraud
* Use Kubeflow pipelines in RHOAI for declarative model building workflows
* Store models in S3-compatible storage with Minio
* Serve ML models using Kserve on RHOAI
Background::
The model is built on a Credit Card Fraud Detection model, which predicts if a credit card transaction is fraudulent.
== Technology Highlights
* Event-Driven Architecture
* Data Science on Red Hat OpenShift AI
* Declarative MLOps pipeline with Kubeflow
* ML model serving with Kserve
== Solution Discussion
This architecture pattern demonstrates four strengths:
* *Cost Efficiency*: By automating the detection process, AI reduces the need for extensive manual review of transactions, which can be time-consuming and costly.
* *Flexibility and Agility*: A cloud-native architecture that supports the use of microservices, containers, and serverless computing, allowing for more flexible and agile development and deployment of AI models. This means faster iteration and deployment of new fraud detection algorithms.
[NOTE]
====
This pattern is based on the OpenShift AI tutorial for link:https://docs.redhat.com/en/documentation/red_hat_openshift_ai_cloud_service/1/html-single/openshift_ai_tutorial_-_fraud_detection_example/index[fraud detection].
====
== Architecture
Description of each component:
* *Data Set*: The dataset contains the data used for training and evaluating the model built in this pattern. The dataset is sourced from the link:https://github.com/rh-aiservices-bu/fraud-detection/tree/main/data[github.com/rh-aiservices-bu/fraud-detection] repository.
* *Kubeflow Pipeline*: The Kubeflow pipeline builds, trains, and uploads the model. The source for this pipeline is in the pattern repository at link:https://github.com/validatedpatterns/mlops-fraud-detection/blob/main/src/kubeflow-pipelines/small-model/train_upload_model.yaml[src/kubeflow-pipelines/small-model/train_upload_model.yaml]. Upon pattern installation, the system automatically runs this pipeline once to train the initial model.
* *S3 (Minio)*: Minio provides storage for the models and serves as the storage interface for the Kubeflow pipeline. This pattern uses Minio for parity with the source tutorial, but any S3-compatible storage solution can be used instead; a short storage sketch follows this component list.
* *Kserve Model Serving*: The pattern uses the Kserve model serving capabilities in Red Hat OpenShift AI (RHOAI) to serve models with an OpenVINO model server.
* *Application interface*: This interface runs predictions with the model. This pattern includes a visual interface (interactive application) built with Gradio that loads the model from Minio.
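
As a short sketch of how a pipeline step or application can store and retrieve a model artifact in Minio, the following uses `boto3` against an S3-compatible endpoint. The endpoint URL, credentials, bucket, and object key are illustrative assumptions; in the pattern, these values are supplied to the workloads as configuration.

[source,python]
----
# Hedged sketch: upload and download a model artifact against S3-compatible
# storage (Minio here). Endpoint, credentials, bucket, and key are placeholders.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://minio.example.svc:9000",  # assumed Minio service endpoint
    aws_access_key_id="minio",                     # placeholder credentials
    aws_secret_access_key="minio123",
)

BUCKET = "models"
KEY = "fraud/1/model.onnx"  # illustrative key layout for a served model version

# Upload a trained model artifact so the model server can pull it.
s3.upload_file("model.onnx", BUCKET, KEY)

# Download it again, for example from the inferencing application.
s3.download_file(BUCKET, KEY, "model-copy.onnx")
----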
//figure 1 originally
.Overview of the solution reference architecture
https://www.redhat.com/en/technologies/cloud-computing/openshift/try-it[Red Hat OpenShift GitOps]::
41
39
A declarative application continuous delivery tool for Kubernetes based on the ArgoCD project. Application definitions, configurations, and environments are declarative and version controlled in Git. It can automatically push the desired application state into a cluster, quickly find out if the application state is in sync with the desired state, and manage applications in multi-cluster environments.