FrameRecall is an ultra-fast Python toolkit for creating, storing, and retrieving AI memories as QR-code video sequences. The platform provides semantic search across millions of document chunks with sub-second response times.
- Clips as Storage: Archive vast amounts of textual information in a compact .mp4
- Blazing Access: Retrieve relevant insights within milliseconds using meaning-based queries
- Superior Compression: Frame encoding significantly lowers data requirements
- Serverless Design: Operates entirely via standalone files; no backend needed
- Fully Local: Entire system runs independently once memory footage is created
- Tiny Footprint: Core logic spans fewer than 1,000 lines of code
- Resource-Conscious: Optimised to perform well on standard processors
- Self-Contained: Entire intelligence archive stored in one clip
- Remote-Friendly: Media can be delivered directly from online storage
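How does this work in practice? Each text chunk is rendered as a QR code and written out as one video frame; reading a frame back decodes the text. The snippet below is a minimal conceptual sketch of that idea, not FrameRecall's internal code, and it assumes the third-party `qrcode`, `opencv-python`, `numpy`, and `Pillow` packages:

```python
# Conceptual sketch of QR-frame storage (not FrameRecall's actual implementation).
# Assumes: pip install "qrcode[pil]" opencv-python numpy
import cv2
import numpy as np
import qrcode
from PIL import Image

chunks = ["Crucial insight 1", "Crucial insight 2"]

# One QR code per chunk, one frame per QR code
writer = cv2.VideoWriter("sketch.mp4", cv2.VideoWriter_fourcc(*"mp4v"), 30, (512, 512))
for chunk in chunks:
    qr = qrcode.make(chunk).convert("RGB").resize((512, 512), Image.NEAREST)
    writer.write(cv2.cvtColor(np.array(qr), cv2.COLOR_RGB2BGR))
writer.release()

# Reading back: grab the first frame and decode its QR code.
# Note: heavy video compression can degrade small QR modules; a real
# encoder has to tune codec settings and QR density accordingly.
cap = cv2.VideoCapture("sketch.mp4")
ok, frame = cap.read()
text, _, _ = cv2.QRCodeDetector().detectAndDecode(frame)
print(text)  # -> "Crucial insight 1"
```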
pip install framerecall
pip install framerecall PyPDF2
# Create a new project directory
mkdir my-framerecall-project
cd my-framerecall-project
# Create virtual environment
python -m venv venv
# Activate it
# On macOS/Linux:
source venv/bin/activate
# On Windows:
venv\Scripts\activate
# Install framerecall
pip install framerecall
# For PDF support:
pip install PyPDF2
from framerecall import FrameRecallEncoder, FrameRecallChat
# Construct memory sequence using textual inputs
segments = ["Crucial insight 1", "Crucial insight 2", "Contextual knowledge snippet"]
builder = FrameRecallEncoder()
builder.add_chunks(segments)
builder.build_video("archive.mp4", "archive_index.json")
# Interact with stored intelligence
assistant = FrameRecallChat("archive.mp4", "archive_index.json")
assistant.start_session()
output = assistant.chat("What do you know about past events?")
print(output)
from framerecall import FrameRecallEncoder
import os
# Prepare input texts
assembler = FrameRecallEncoder(chunk_size=512, overlap=50)
# Inject content from directory
for filename in os.listdir("documents"):
    with open(f"documents/{filename}", "r") as document:
        assembler.add_text(document.read(), metadata={"source": filename})

# Generate compressed video sequence
assembler.build_video(
    "knowledge_base.mp4",
    "knowledge_index.json",
    fps=30,         # More chunks processed per second
    frame_size=512  # Expanded resolution accommodates extra information
)
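As a rough capacity estimate, if each frame carried exactly one chunk (a simplifying assumption; the real frame-to-chunk mapping may differ), a 10-minute video at fps=30 would hold 30 × 60 × 10 = 18,000 chunks.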
from framerecall import FrameRecallRetriever
# Set up fetcher
fetcher = FrameRecallRetriever("knowledge_base.mp4", "knowledge_index.json")
# Contextual discovery
matches = fetcher.search("machine learning algorithms", top_k=5)
for fragment, relevance in matches:
    print(f"Score: {relevance:.3f} | {fragment[:100]}...")
# Retrieve neighbouring fragments
window = fetcher.get_context("explain neural networks", max_tokens=2000)
print(window)
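The retrieved window can be handed straight to any LLM for grounded answers. Below is a hedged sketch using the OpenAI Python client; the model name and prompt wording are illustrative choices, not part of FrameRecall's API:

```python
# Sketch: feed the retrieved context to an LLM.
# Assumes `pip install openai` and OPENAI_API_KEY set in the environment.
from openai import OpenAI
from framerecall import FrameRecallRetriever

fetcher = FrameRecallRetriever("knowledge_base.mp4", "knowledge_index.json")
question = "explain neural networks"
context = fetcher.get_context(question, max_tokens=2000)

client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[
        {"role": "system", "content": f"Answer using only this context:\n{context}"},
        {"role": "user", "content": question},
    ],
)
print(response.choices[0].message.content)
```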
from framerecall import FrameRecallInteractive
# Open real-time discussion UI
interactive = FrameRecallInteractive("knowledge_base.mp4", "knowledge_index.json")
interactive.run() # Web panel opens at http://localhost:7860
The `examples/file_chat.py` utility enables thorough experimentation with FrameRecall using your own data files:
# Ingest an entire folder of materials
python examples/file_chat.py --input-dir /path/to/documents --provider google
# Load chosen documents
python examples/file_chat.py --files doc1.txt doc2.pdf --provider openai
# Apply H.265 encoding (Docker required)
python examples/file_chat.py --input-dir docs/ --codec h265 --provider google
# Adjust chunking for lengthy inputs
python examples/file_chat.py --files large.pdf --chunk-size 2048 --overlap 32 --provider google
# Resume from previously saved memory
python examples/file_chat.py --load-existing output/my_memory --provider google
# 1. Prepare project directory and virtual environment
mkdir book-chat-demo
cd book-chat-demo
python -m venv venv
source venv/bin/activate # On Windows: venv\Scripts\activate
# 2. Install necessary packages
pip install framerecall PyPDF2
# 3. Build book_chat.py
cat > book_chat.py << 'EOF'
from framerecall import FrameRecallEncoder, chat_with_memory
import os
# Path to your document
book_pdf = "book.pdf" # Replace with your PDF filename
# Encode video from book
encoder = FrameRecallEncoder()
encoder.add_pdf(book_pdf)
encoder.build_video("book_memory.mp4", "book_index.json")
# Initiate interactive Q&A
api_key = os.getenv("OPENAI_API_KEY") # Optional for model output
chat_with_memory("book_memory.mp4", "book_index.json", api_key=api_key)
EOF
# 4. Launch the assistant
export OPENAI_API_KEY="your-api-key" # Optional
python book_chat.py
from sentence_transformers import SentenceTransformer
from framerecall import FrameRecallEncoder

# Load alternative semantic model
custom_model = SentenceTransformer('sentence-transformers/all-mpnet-base-v2')
encoder = FrameRecallEncoder(embedding_model=custom_model)
# Accelerate processing with concurrency
encoder = FrameRecallEncoder(n_workers=8)
encoder.add_chunks_parallel(massive_chunk_list)
ModuleNotFoundError: No module named 'framerecall'
# Confirm the correct Python interpreter is being used
which python # Expected to point to your environment
# If incorrect, reactivate the virtual setup:
source venv/bin/activate # On Windows: venv\Scripts\activate
ImportError: PyPDF2 missing for document parsing
pip install PyPDF2
Missing or Invalid OpenAI Token
# Provide your OpenAI credentials (register at https://platform.openai.com)
export OPENAI_API_KEY="sk-..." # macOS/Linux
# On Windows (Command Prompt):
set OPENAI_API_KEY=sk-...
# On Windows (PowerShell):
$env:OPENAI_API_KEY="sk-..."
Handling Extensive PDFs
# Reduce segment length for better handling
encoder = FrameRecallEncoder()
encoder.add_pdf("large_book.pdf", chunk_size=400, overlap=50)
We're excited to collaborate! Refer to our Contribution Manual for full instructions.
# Execute test suite
pytest tests/
# Execute with coverage reporting
pytest --cov=framerecall tests/
# Apply code styling
black framerecall/
| Capability | FrameRecall | Embedding Stores | Relational Systems |
|---|---|---|---|
| Data Compression | ⭐⭐⭐⭐⭐ | ⭐⭐ | ⭐⭐⭐ |
| Setup Complexity | Minimal | Complex | Moderate |
| Conceptual Matching | ✅ | ✅ | ❌ |
| Disconnected Access | ✅ | ❌ | ✅ |
| Mobility | Standalone File | Hosted | Hosted |
| Throughput Limits | Multi-million | Multi-million | Multi-billion |
| Financial Impact | No Charge | High Fees | Moderate Expense |
- v0.2.0: International text handling
- v0.3.0: On-the-fly memory insertion
- v0.4.0: Parallel video segmentation
- v0.5.0: Visual and auditory embedding
- v1.0.0: Enterprise-grade, stable release
Explore the examples/ folder to discover:
- Transforming Wikipedia datasets into searchable memories
- Developing custom insight archives
- Multilingual capabilities
- Live content updates
- Linking with top-tier LLM platforms
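As a taste of the Wikipedia example, here is a hedged sketch built only on the encoder API shown above; the third-party `wikipedia` package and the chosen article titles are illustrative, and the maintained version lives in examples/:

```python
# Illustrative only: turn a couple of Wikipedia articles into a memory.
# Assumes `pip install wikipedia` (a third-party package, not part of FrameRecall).
import wikipedia
from framerecall import FrameRecallEncoder

encoder = FrameRecallEncoder(chunk_size=512, overlap=50)
for title in ["Machine learning", "Neural network"]:
    page = wikipedia.page(title)
    encoder.add_text(page.content, metadata={"source": page.url})
encoder.build_video("wiki_memory.mp4", "wiki_index.json")
```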
Licensed under the MIT License; see the LICENSE file for specifics.
Ready to redefine how your LLMs recall information? Deploy FrameRecall and start building!