
The Complete Agentic Infrastructure

Website · Documentation · Community Hub · Changelog · X · LinkedIn


Build, deploy, and scale your AI agents with ease, with full access to 100+ APIs and tools.

Pica makes it simple to build and manage AI agents with four key products:

  1. OneTool: Connect agents to over 100 APIs and tools with a single SDK.
  2. AuthKit: Securely manage authentication for tool integration.
  3. Agent: Create flexible agents that adapt to your needs (coming soon).
  4. AgentFlow: Enable agents to collaborate and manage tasks automatically (coming soon).

Pica also provides full logging and action traceability, giving developers complete visibility into their agents' decisions and activities. Our tools simplify building and running AI agents so developers can focus on results.

Getting started

npm install @picahq/ai

Setup

  1. Create a new Pica account
  2. Create a Connection via the Dashboard
  3. Create an API key
  4. Set the API key as an environment variable: PICA_SECRET_KEY=<your-api-key>
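Since the SDK reads the key from the environment, it can help to fail fast at startup if the variable is missing. A minimal sketch, assuming a Node environment; `requirePicaKey` is an illustrative helper name, not part of `@picahq/ai`:

```javascript
// Minimal sketch: fail fast when PICA_SECRET_KEY is not set.
// `requirePicaKey` is a hypothetical helper, not part of the Pica SDK.
function requirePicaKey(env = process.env) {
  const key = env.PICA_SECRET_KEY;
  if (!key) {
    throw new Error("PICA_SECRET_KEY is not set; create an API key in the Pica dashboard");
  }
  return key;
}
```

Calling this once before constructing the Pica client turns a confusing downstream auth failure into an immediate, descriptive error.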

Example use cases

Pica provides various SDKs for connecting to different LLMs. Below are samples using the Pica AI SDK, which is designed for the Vercel AI SDK:

Express

  1. Install dependencies
npm install express @ai-sdk/openai ai @picahq/ai dotenv
  2. Create the server
import express from "express";
import { openai } from "@ai-sdk/openai";
import { generateText } from "ai";
import { Pica } from "@picahq/ai";
import * as dotenv from "dotenv";

dotenv.config();

const app = express();
const port = process.env.PORT || 3000;

app.use(express.json());

app.post("/api/ai", async (req, res) => {
  try {
    const { message } = req.body;

    // Initialize Pica
    const pica = new Pica(process.env.PICA_SECRET_KEY);

    // Generate the system prompt
    const systemPrompt = await pica.generateSystemPrompt();

    // Generate the response (non-streaming)
    const { text } = await generateText({
      model: openai("gpt-4o"),
      system: systemPrompt,
      tools: { ...pica.oneTool },
      prompt: message,
      maxSteps: 5,
    });

    res.setHeader("Content-Type", "application/json");

    res.status(200).json({ text });
  } catch (error) {
    console.error("Error processing AI request:", error);

    res.status(500).json({ error: "Internal server error" });
  }
});

app.listen(port, () => {
  console.log(`Server is running on port ${port}`);
});

export default app;
  3. Test the server
curl --location 'http://localhost:3000/api/ai' \
--header 'Content-Type: application/json' \
--data '{
    "message": "What connections do I have access to?"
}'
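The same endpoint can be called from application code instead of curl. A sketch assuming the Express server above is running on port 3000 and Node 18+ (for global fetch); `buildAiRequest` and `askPica` are illustrative helper names, not part of any SDK:

```javascript
// Sketch of a client for the /api/ai endpoint defined above.
// `buildAiRequest` and `askPica` are hypothetical helpers, not part of @picahq/ai.
function buildAiRequest(message) {
  return {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ message }),
  };
}

async function askPica(message, baseUrl = "http://localhost:3000") {
  const res = await fetch(`${baseUrl}/api/ai`, buildAiRequest(message));
  if (!res.ok) throw new Error(`Request failed with status ${res.status}`);
  const { text } = await res.json(); // the server responds with { text }
  return text;
}
```

For example, `await askPica("What connections do I have access to?")` sends the same payload as the curl command above.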

Next.js

⭐️ You can see a full Next.js demo here

For more examples and detailed documentation, check out our SDK documentation.


Running Pica locally

Important

The Pica dashboard is going open source! Stay tuned for the big release 🚀

Prerequisites

  1. Docker (the pica start command runs Docker containers)
  2. Node.js with npm (to install the CLI)

Step 1: Install the Pica CLI

npm install -g @picahq/cli

Step 2: Initialize the Pica CLI

To generate the configuration file, run:

pica init

Step 3: Start the Pica Server

pica start

All the inputs are required. Seeding is optional, but recommended when running the command for the first time.

Example
# To start the docker containers
pica start
Enter the IOS Crypto Secret (32 characters long): xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Do you want to seed? (Y/N) y

The Pica API will be available at http://localhost:3005 🚀

To stop the Docker containers, simply run:

pica stop

License

Pica is released under the GPL-3.0 license.