Auralyse is an AI-assisted mix and analysis companion for music producers, engineers, and creators.
It listens to your mix, runs detailed audio analysis (loudness, tonal balance, stereo image, dynamics), and turns that into clear, actionable feedback you can use in your DAW. Auralyse is not a “one-click mastering” tool; it is a session-oriented assistant that helps you understand why your mix sounds the way it does and what to try next.
Auralyse is designed to be a thoughtful second opinion on your mix:
- **Deep analysis with simple views**: LUFS, true peak, loudness range, crest factor, tonal balance (low / mid / high), and stereo width (see the sketch after this list).
- **AI-powered feedback**: plain-language summaries and concrete mix suggestions (EQ, compression, tonal and stereo decisions) based on your audio analysis and context.
- **Producer-first workflow**: you keep your ears and taste in charge. Auralyse suggests moves; it does not touch your DAW or claim to “auto-master” your track.
- **Session focus**: each analysis is a session you can revisit, compare, and learn from over time.
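To make those analysis views concrete, here is a minimal sketch of the kind of result shape they could be built on. Every name in it (`AnalysisResult`, `TonalBalance`, and all fields) is illustrative, not Auralyse's actual contract:

```ts
// Hypothetical result shape; names and units are assumptions,
// not Auralyse's real API contract.
interface TonalBalance {
  low: number;  // relative energy in the low band (dB)
  mid: number;  // relative energy in the mid band (dB)
  high: number; // relative energy in the high band (dB)
}

interface AnalysisResult {
  integratedLufs: number;  // EBU R128 integrated loudness (LUFS)
  truePeakDbtp: number;    // true peak (dBTP)
  loudnessRange: number;   // loudness range, LRA (LU)
  crestFactor: number;     // peak-to-RMS ratio (dB)
  tonalBalance: TonalBalance;
  stereoWidth: number;     // 0 = mono, 1 = fully wide
}
```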
Naming and structure may evolve as the project grows, but this is the current layout.
- **`engine`**: LangGraph-powered workflow engine for Auralyse. Orchestrates audio metadata, analysis, and AI feedback into a single `EngineState`.
- **`api`**: API gateway for Auralyse. Handles authentication, sessions, persistence, and coordinates the engine with the audio microservices.
- **`web`**: Next.js + TypeScript frontend for the marketing site and user dashboard (uploads, session history, visual analysis, AI feedback).
- **`audio-metadata-service`**: Express microservice that uses ffprobe to extract file metadata (duration, sample rate, channels, bitrate, format); a sketch of this pattern follows the list.
- **`audio-analysis-service`**: audio analysis microservice (ffmpeg-powered) for loudness, tonal balance bands, dynamics, and stereo width.
- **`audio-feedback-service`**: LLM-backed microservice that converts analysis and user context into human-readable mix feedback and suggestions.
- **`docs`** *(planned)*: central home for product documentation, API references, and technical deep dives into how Auralyse works.
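As a rough sketch of the metadata pattern above: an Express endpoint can shell out to ffprobe and map its JSON output onto the fields the repo description mentions. The route, query parameter, port, and field mapping here are assumptions for illustration, not the service's actual implementation:

```ts
import express from "express";
import { execFile } from "node:child_process";
import { promisify } from "node:util";

const execFileAsync = promisify(execFile);
const app = express();

// Hypothetical endpoint; the real service's routes may differ.
app.get("/metadata", async (req, res) => {
  const file = String(req.query.file ?? "");
  try {
    // ffprobe's JSON output includes format-level fields such as
    // duration, bit_rate, and format_name, plus per-stream details.
    const { stdout } = await execFileAsync("ffprobe", [
      "-v", "quiet",
      "-print_format", "json",
      "-show_format",
      "-show_streams",
      file,
    ]);
    const probe = JSON.parse(stdout);
    const audio = probe.streams.find((s: any) => s.codec_type === "audio");
    res.json({
      durationSec: Number(probe.format.duration),
      sampleRate: Number(audio?.sample_rate),
      channels: audio?.channels,
      bitrate: Number(probe.format.bit_rate),
      format: probe.format.format_name,
    });
  } catch {
    res.status(422).json({ error: "could not probe file" });
  }
});

app.listen(3001);
```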
Auralyse is built as a set of focused services:
- **Frontend (`web`)**
  - Next.js, Tailwind CSS, shadcn/ui
  - Uploads tracks, manages sessions, shows charts and feedback.
- **API Gateway (`api`)**
  - Express, TypeScript
  - Authentication via Cerberus IAM
  - Calls the engine and microservices, stores `EngineState` in a database.
- **Engine (`engine`)**
  - TypeScript package using LangGraph
  - Orchestrates the audio metadata client, the audio analysis client, and the feedback client (LLM).
  - Returns a single `EngineState` for each session (sketched below).
- **Audio services**
  - `audio-metadata-service`: ffprobe-based metadata extraction
  - `audio-analysis-service`: ffmpeg-powered analysis
  - `audio-feedback-service`: OpenAI (or compatible) LLM feedback
The frontend only communicates with the API; the API and engine coordinate the rest of the system behind the scenes.
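To give the engine's flow some shape, here is a minimal LangGraph sketch that runs the three clients as sequential nodes over a shared state. The channel names, node names, and placeholder client calls are assumptions; the real `EngineState` and graph topology may differ:

```ts
import { StateGraph, Annotation, START, END } from "@langchain/langgraph";

// Hypothetical EngineState channels; the real EngineState may differ.
const EngineState = Annotation.Root({
  fileUrl: Annotation<string>(),
  metadata: Annotation<Record<string, unknown> | null>(),
  analysis: Annotation<Record<string, unknown> | null>(),
  feedback: Annotation<string | null>(),
});

// Placeholder nodes standing in for the real service clients.
const fetchMetadata = async (s: typeof EngineState.State) => ({
  metadata: { todo: "call audio-metadata-service with s.fileUrl" },
});
const runAnalysis = async (s: typeof EngineState.State) => ({
  analysis: { todo: "call audio-analysis-service" },
});
const generateFeedback = async (s: typeof EngineState.State) => ({
  feedback: "call audio-feedback-service (LLM) here",
});

// A simple linear graph: metadata -> analysis -> feedback.
const engine = new StateGraph(EngineState)
  .addNode("metadata", fetchMetadata)
  .addNode("analysis", runAnalysis)
  .addNode("feedback", generateFeedback)
  .addEdge(START, "metadata")
  .addEdge("metadata", "analysis")
  .addEdge("analysis", "feedback")
  .addEdge("feedback", END)
  .compile();

// One invocation yields the final EngineState for a session:
// const state = await engine.invoke({ fileUrl: "https://example.com/track.wav" });
```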
- Languages: TypeScript, Node.js
- Frontend: Next.js, Tailwind CSS, shadcn/ui, Recharts/VisX for visualizations
- Backend: Express, LangGraph, Prisma (planned), Cerberus IAM
- Audio: ffmpeg / ffprobe, EBU R128 loudness, banded spectral and stereo analysis (see the loudness sketch after this list)
- AI: LLM-powered feedback via OpenAI-compatible APIs
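On the EBU R128 point above: ffmpeg's `loudnorm` filter can run a measurement-only pass and report integrated loudness, true peak, and loudness range as JSON. A minimal sketch, assuming a Node environment with ffmpeg on the PATH (the function name and parsing are illustrative):

```ts
import { execFile } from "node:child_process";
import { promisify } from "node:util";

const execFileAsync = promisify(execFile);

// Run a measurement-only loudnorm pass; ffmpeg prints a JSON block
// with EBU R128 stats (input_i, input_tp, input_lra, ...) on stderr.
async function measureLoudness(file: string) {
  const { stderr } = await execFileAsync("ffmpeg", [
    "-hide_banner",
    "-i", file,
    "-af", "loudnorm=print_format=json",
    "-f", "null", "-",
  ]);
  const jsonStart = stderr.lastIndexOf("{");
  const stats = JSON.parse(stderr.slice(jsonStart));
  return {
    integratedLufs: Number(stats.input_i),  // integrated loudness (LUFS)
    truePeakDbtp: Number(stats.input_tp),   // true peak (dBTP)
    loudnessRange: Number(stats.input_lra), // loudness range (LU)
  };
}
```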
Auralyse is currently in active development:
- Engine and service contracts are being defined and stabilized.
- Microservices are being wired together through the API gateway.
- The frontend is evolving from scaffolding to a polished, production-ready interface.
Expect breaking changes while the architecture and APIs are still settling.
The current focus is on solidifying the core experience and internal architecture.
Public contribution guidelines and detailed documentation will be added once the APIs and repositories are stable.
If you are interested in Auralyse, you can:
- Watch this organization for new repositories and updates.
- Follow issues and project boards as they become available.
Auralyse is built by people who care about both sound and software. The goal is to make mix analysis and feedback more accessible, transparent, and useful for working producers and engineers.