Compact, reproducible experiments at the intersection of AI and audio/music. Code-first, latency-aware, and deployable on desktop or cloud.
- AI × Music — generation, transformation, assistive tooling.
- Real-time Audio DSP — low-latency processing & effects.
- Neural Networks for Audio — classification, tagging, separation.
- LLMs & MCP Tools — agent workflows, tool use, automation.
- Livecoding — performance setups and utilities.
- Music Information Retrieval (MIR) — features, embeddings, search.
- APIs & Deployments — embedded (Raspberry Pi) and cloud (AWS/GCP).
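As a flavor of the low-latency DSP work listed above, here is a minimal sketch of a one-pole low-pass filter, a classic real-time smoothing building block. The function name, cutoff, and sample-rate values are illustrative assumptions, not taken from any project here.

```python
import math

def one_pole_lowpass(samples, cutoff_hz, sample_rate=48_000):
    """Smooth a signal with a first-order IIR low-pass filter.

    Cheap enough to run per-sample in a real-time audio callback.
    """
    # Map the cutoff frequency to a smoothing coefficient in (0, 1).
    alpha = 1.0 - math.exp(-2.0 * math.pi * cutoff_hz / sample_rate)
    out, y = [], 0.0
    for x in samples:
        y += alpha * (x - y)  # y[n] = y[n-1] + alpha * (x[n] - y[n-1])
        out.append(y)
    return out
```

Because the filter keeps only one float of state, it is a natural fit for block-based or embedded (Raspberry Pi) processing.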
 
- Minimal, runnable prototypes (notebooks, scripts, services).
- Reusable modules for DSP/MIR/LLM pipelines.
- Configs for local, embedded, and cloud runs.
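A reusable MIR module in the sense above might be as small as one feature extractor. The sketch below computes the zero-crossing rate, a cheap proxy for noisiness/percussiveness; the function name and framing are illustrative, not taken from any module in these repos.

```python
def zero_crossing_rate(frame):
    """Fraction of adjacent sample pairs whose signs differ.

    A high rate suggests noisy or percussive content; a low rate
    suggests tonal, low-frequency content.
    """
    if len(frame) < 2:
        return 0.0
    crossings = sum(
        1 for a, b in zip(frame, frame[1:]) if (a >= 0) != (b >= 0)
    )
    return crossings / (len(frame) - 1)
```

Features like this feed directly into tagging, search, and embedding pipelines.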
 
- Python ≥ 3.11 and Docker recommended.
- Each project folder has its own `README.md` with setup and run steps.
- Keep environments pinned; reproducibility > “it works on my machine”.
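One way to honor the pinning convention above is a Dockerfile with an exact base-image tag and fully pinned dependencies. The file below is a hedged example only; the package versions and paths are placeholders, not taken from any project here.

```dockerfile
# Pin the base image to an exact tag, never just "latest".
FROM python:3.11-slim

WORKDIR /app

# requirements.txt should pin exact versions, e.g. numpy==1.26.4,
# typically produced with `pip freeze > requirements.txt`.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .
CMD ["python", "main.py"]
```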
 
Audio processing jobs ready to run locally or in the cloud (GPU-enabled) on Google Cloud Platform (GCP) or AWS: https://github.com/docker-audio-tools