A JavaScript/TypeScript client for interacting with the Aquiles-RAG service.
Install:

```bash
npm i @aquiles-ai/aquiles-rag-client
```

Features:

- ✅ Full TypeScript support
- ✅ Asynchronous client based on Promises
- ✅ Automatic text chunking
- ✅ Support for custom metadata
- ✅ Reranking functions
- ✅ Vector index management
Usage example:

```typescript
import { AsyncAquilesRAG } from '@aquiles-ai/aquiles-rag-client';
import OpenAI from "openai";

const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});

async function getEmbedding(text: string): Promise<number[]> {
  if (!text) return [];
  const resp = await openai.embeddings.create({
    model: "text-embedding-3-small",
    input: text,
    encoding_format: "float",
  });
  const emb = (resp as any)?.data?.[0]?.embedding;
  if (!Array.isArray(emb)) throw new Error("Invalid embedding");
  return emb.map(Number);
}

async function main() {
  // Initialize the client
  const client = new AsyncAquilesRAG({
    host: 'http://127.0.0.1:5500',
    apiKey: 'dummy-api-key',
    timeout: 30000,
  });

  try {
    // Create an index
    console.log('Creating index...');
    await client.createIndex('my_index', 1536, 'FLOAT32', true);
    console.log('✓ Index created successfully');

    // Send a document to RAG
    console.log('Sending document to RAG...');
    const text = `
      Artificial Intelligence is a field of computer science that focuses on
      creating systems capable of performing tasks that normally require human intelligence.
      RAG (Retrieval Augmented Generation) is a technique that combines information retrieval
      with text generation to produce more accurate and grounded responses.
    `.repeat(5);

    const results = await client.sendRAG(
      getEmbedding,
      'my_index',
      'ai_document',
      text,
      {
        dtype: 'FLOAT32',
      }
    );
    console.log(`✓ Successfully sent ${results.length} chunks`);

    // Show result details
    results.forEach((result, idx) => {
      if (result.error) {
        console.log(`  Chunk ${idx + 1}: ❌ Error - ${result.error}`);
      } else {
        console.log(`  Chunk ${idx + 1}: ✓ ${result.status} - Key: ${result.key}`);
      }
    });

    // Perform a query
    console.log('\nPerforming query...');
    const queryEmbedding = await getEmbedding('RAG (Retrieval Augmented Generation)');
    const queryResults = await client.query('my_index', queryEmbedding, {
      topK: 5,
      cosineDistanceThreshold: 0.5,
    });
    console.log(`✓ Query results: Found ${queryResults.length} results`);

    if (queryResults.length === 0) {
      console.log('  No results found. Try adjusting the cosineDistanceThreshold.');
    } else {
      queryResults.forEach((result, idx) => {
        console.log(`\n  Result ${idx + 1}:`);
        console.log(`    Name: ${result.name_chunk}`);
        console.log(`    Score: ${result.score}`);
        console.log(`    Text: ${result.raw_text.substring(0, 150)}...`);
        if (result.metadata) {
          console.log(`    Metadata:`, JSON.stringify(result.metadata, null, 6));
        }
        if (result.embedding_model) {
          console.log(`    Model: ${result.embedding_model}`);
        }
      });
    }

    // Rerank results (if available)
    // if (queryResults.length > 0) {
    //   console.log('\nReranking results...');
    //   try {
    //     const reranked = await client.reranker('What is RAG?', queryResults);
    //     console.log(`✓ ${reranked.length} results reranked`);
    //     reranked.forEach((result, idx) => {
    //       console.log(`\n  Reranked ${idx + 1}:`);
    //       console.log(`    Score: ${result.score || 'N/A'}`);
    //       console.log(`    Content: ${JSON.stringify(result).substring(0, 100)}...`);
    //     });
    //   } catch (error) {
    //     console.log('  Reranking not available or failed:', (error as Error).message);
    //   }
    // }

    // Drop the index (optional)
    // console.log('\nDropping index...');
    // const dropResult = await client.dropIndex('my_index', true);
    // console.log('✓ Index deleted:', dropResult);
  } catch (error) {
    console.error('❌ Error during execution:', error);
    if (error instanceof Error) {
      console.error('  Message:', error.message);
      console.error('  Stack:', error.stack);
    }
  }
}

main();
```

### `new AsyncAquilesRAG(options?: AquilesRAGOptions)`

Options:
- `host` (string): Base server URL (default: `http://127.0.0.1:5500`)
- `apiKey` (string): API key for authentication (optional)
- `timeout` (number): Timeout in milliseconds (default: `30000`)
### `createIndex(indexName, embeddingsDim?, dtype?, deleteIfExists?)`

Creates a new vector index.

Parameters:

- `indexName` (string): Unique index name
- `embeddingsDim` (number): Embedding dimensionality (default: 768)
- `dtype` (`'FLOAT32' | 'FLOAT64' | 'FLOAT16'`): Data type (default: `'FLOAT32'`)
- `deleteIfExists` (boolean): Delete an existing index with the same name (default: false)
### `query(index, embedding, options?)`

Queries the vector index.

Parameters:

- `index` (string): Index name
- `embedding` (number[]): Query embedding vector
- `options` (object):
  - `dtype`: Data type
  - `topK`: Number of results (default: 5)
  - `cosineDistanceThreshold`: Cosine distance threshold (default: 0.6)
  - `embeddingModel`: Model identifier
  - `metadata`: Metadata filters
### `sendRAG(embeddingFunc, index, nameChunk, rawText, options?)`

Sends a document to RAG by splitting it into chunks.

Parameters:

- `embeddingFunc` (function): Function that generates embeddings
- `index` (string): Index name
- `nameChunk` (string): Base name for the chunks
- `rawText` (string): Full text to process
- `options` (object):
  - `dtype`: Data type
  - `embeddingModel`: Model identifier
  - `metadata`: Document metadata
### `dropIndex(indexName, deleteDocs?)`

Deletes an index.

Parameters:

- `indexName` (string): Index name
- `deleteDocs` (boolean): Also delete the stored documents (default: false)
### `reranker(query, docs)`

Reranks results by relevance.

Parameters:

- `query` (string): Original query
- `docs` (array | object): Results to rerank
### `ChunkMetadata`

```typescript
interface ChunkMetadata {
  author?: string;             // Document author
  language?: string;           // ISO 639-1 code (e.g., "EN", "ES")
  topics?: string[];           // List of topics
  source?: string;             // Content source
  created_at?: string;         // ISO 8601 date
  extra?: Record<string, any>; // Additional metadata
}
```

### `chunkTextByWords`

Splits text into chunks by words.
```typescript
import { chunkTextByWords } from '@aquiles-ai/aquiles-rag-client';

const chunks = chunkTextByWords('Your long text...', 600);
```

### `extractTextFromChunk`

Extracts the text from a chunk, handling its different formats.
```typescript
import { extractTextFromChunk } from '@aquiles-ai/aquiles-rag-client';

const text = extractTextFromChunk(result);
```

### Development

```bash
# Build
npm run build

# Run example
npm test
```

### License

Apache 2.0