This repository was archived by the owner on Oct 30, 2025. It is now read-only.

Commit 59195cc

less emphasis on guardrails/observability

1 parent 267030b

File tree

1 file changed: 1 addition, 1 deletion


README.md

Lines changed: 1 addition & 1 deletion
@@ -1,6 +1,6 @@
 # Cleanlab AI Platform demo
 
-This repository contains an example of a simple client application, a standard RAG system (OpenAI + LlamaIndex), that is integrated with the [Cleanlab AI Platform](https://codex.cleanlab.ai/). To improve any AI app, Cleanlab provides [observability/logging](#1-observability-and-logging), [real-time guardrails](#2-real-time-guardrails) (including [hallucination detection](#2a-out-of-the-box-guardrails), [semantic guardrails](#2b-semantic-guardrails), and [deterministic guardrails](#2c-deterministic-guardrails)), [offline evals and root-cause analysis](#3-offline-evaluations), and [expert answers](#4-remediations) (remediating AI responses with a human-in-the-loop workflow). This simple self-contained demo illustrates the platform's capabilities; Cleanlab's technology is also used in other LLM applications like [Agents](https://cleanlab.ai/blog/agent-tlm-hallucination-benchmarking/) and Data Extraction.
+This repository contains an example of a simple client application, a standard RAG system (OpenAI + LlamaIndex), that is integrated with the [Cleanlab AI Platform](https://codex.cleanlab.ai/). Cleanlab is a trust/control layer for *any* AI app, helping you automatically detect/prevent incorrect responses from your AI and remediate them via human-in-the-loop workflows. This self-contained demo illustrates the platform's basic capabilities; Cleanlab's technology is also used in other LLM applications like [Agents](https://github.com/cleanlab/airline-agent) and [Data Extraction](https://help.cleanlab.ai/tlm/use-cases/tlm_data_extraction/).
 
 This demo is inspired by [Cursor's rogue customer support AI](https://arstechnica.com/ai/2025/04/cursor-ai-support-bot-invents-fake-policy-and-triggers-user-uproar/): it's a generative Q&A system built on top of Cursor's docs, made reliable using Cleanlab's technology to detect and remediate bad outputs.
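The README paragraph above describes a detect-and-remediate flow: score each draft AI response, serve it only if it looks trustworthy, and otherwise fall back and escalate to a human expert. A minimal sketch of that control flow is below; the `trustworthiness_score` stub, the `FALLBACK` message, and the threshold value are all hypothetical stand-ins, not Cleanlab's actual API.

```python
# Hypothetical sketch of the guardrail flow described in the README:
# score a draft response, then either serve it or fall back to a safe
# message pending human-expert remediation. The scorer is a toy stub;
# a real deployment would call a model-based hallucination detector.

def trustworthiness_score(question: str, answer: str) -> float:
    """Toy stub: treat hedging language as a sign of low confidence."""
    return 0.3 if "i think" in answer.lower() else 0.9

FALLBACK = "I'm not sure; escalating this question to a human expert."

def guarded_response(question: str, draft_answer: str,
                     threshold: float = 0.7) -> str:
    """Serve the draft only if its score clears the trust threshold."""
    score = trustworthiness_score(question, draft_answer)
    return draft_answer if score >= threshold else FALLBACK
```

The same pattern generalizes: swap the stub for any real-time detector, and route fallbacks into a human-in-the-loop queue so remediated answers can be served on repeat questions.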

0 commit comments
