💖 GitHub Sponsors • 💬 Discord • 📦 NPM • ✨ Repo
- 🎧 Prefer listening to reading? The Reliverse Deep Dive on SoundSDK is live! ▶️ Listen here.
- 🤖 Want to discuss this repo with AI? Reliverse AI will be happy to chat with you! 💬 Talk here.
@reliverse/ssdk is a high-performance, TypeScript-first audio framework designed for modern JavaScript runtimes (Node.js 20+, Bun 1.2+, and all major browsers). Built on the Web Audio API, it provides developer-friendly abstractions for synthesizing, processing, sequencing, and interacting with audio and MIDI.
- 📘 TypeScript-First: Fully written in TypeScript with strict type checking for robust development.
- 🧩 Modular & Tree-Shakable: Import only the features you need (e.g., Synth, Effects, MIDI) thanks to ESM modules and optimized bundling.
- 🌐 Cross-Runtime Compatibility: Works seamlessly in modern browsers, Bun, and Node.js (with optional audio backend shims for Node). SSR-safe by design.
- 📦 High-Level Audio Abstractions: Easy-to-use classes for `Synth`, `PolySynth`, `Sampler`, real-time `Effects` (Reverb, Delay, etc.), and precise `Transport` scheduling.
- ⏱️ Precise Timing & Scheduling: Built-in `Transport` and `Scheduler` for sample-accurate musical event timing, supporting tempo control, time signatures, and quantization.
- 🔌 Extensible Plugin System: Load custom instruments or effects via a plugin architecture, potentially using `AudioWorklet` for high-performance custom DSP.
- 🎹 MIDI I/O: Unified API for MIDI input and output across browser (Web MIDI API) and Node.js (using shims). Includes utilities for parsing/creating MIDI files.
- ⚡ Performance Optimized: Leverages native Web Audio nodes and the audio rendering thread for low-latency, reliable audio processing. Supports `OfflineAudioContext` for non-realtime rendering.
- 📦 Modern Packaging: Published as ESM-only for smaller bundles, better tree-shaking, and to help push the JavaScript ecosystem toward a faster, cleaner mjs-only future.
- Most of the things mentioned in this doc aren't implemented yet; they're part of the vision for `v1.0.0`.
- Got thoughts? Ideas? Send your feedback in Discord or use GitHub Issues.
- Your feedback means the world and helps shape where this project goes next. Thank you!
```bash
# bun / pnpm / yarn / npm
bun add @reliverse/ssdk
```
SoundSDK prioritizes modularity and tree-shakability through its ESM-only packaging (via `relidler`), ensuring minimal bundle sizes. It's designed to be SSR-safe (no browser-only APIs executed at import time) and offers a rich TypeScript developer experience with strict typing and comprehensive JSDoc.

Note for Node.js usage: To enable audio output in Node.js (which lacks a native Web Audio API), you may need to install and configure a shim library like `node-web-audio-api` or `web-audio-engine`. SoundSDK is designed to work with these but does not bundle them by default. See the Environment Handling section. We plan to implement our own Node.js Audio API shim in the future, as a `@reliverse/ssdk-node` plugin.
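For illustration, here is a minimal sketch of wiring such a shim in Node.js. The `node-web-audio-api` package is real, but the global-polyfill approach and the assumption that SoundSDK falls back to `globalThis.AudioContext` are not confirmed by this doc:

```ts
// Sketch only: expose the shim's AudioContext globally before SoundSDK loads.
// Assumption: SoundSDK falls back to globalThis.AudioContext in Node.js.
import { AudioContext } from "node-web-audio-api";

(globalThis as any).AudioContext ??= AudioContext;

const { Synth, Context } = await import("@reliverse/ssdk");
await Context.start();
new Synth().toDestination().triggerAttackRelease("A4", "8n");
```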
Here's a simple example of creating a synthesizer and playing a note:
```ts
import { Synth, Context } from "@reliverse/ssdk";

async function playNote() {
  // Ensure the AudioContext is started (often requires user interaction in browsers)
  await Context.start(); // Context manages the AudioContext lifecycle

  // Create a simple Synth instance
  const synth = new Synth().toDestination(); // Connects the synth to the main output

  // Trigger a note (Middle C) immediately
  console.log("Playing C4...");
  synth.triggerAttackRelease("C4", "8n"); // Play C4 for an eighth-note duration

  // Schedule another note to play 1 second later
  const now = Context.now(); // Get the current AudioContext time
  console.log("Scheduling G4...");
  synth.triggerAttackRelease("G4", "4n", now + 1); // Play G4 for a quarter note, 1 sec from now

  // Note: triggerAttackRelease is a convenience.
  // For sustained notes, use triggerAttack(note, time) and triggerRelease(time).
}

// In a browser, call this after a user interaction, e.g.:
// document.getElementById("playButton").onclick = playNote;
```
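As the closing comment notes, sustained notes use the attack/release pair. A minimal sketch under the same assumed API:

```ts
import { Synth, Context } from "@reliverse/ssdk";

await Context.start();
const synth = new Synth().toDestination();

// Hold a note for 2 seconds with explicit attack/release timing
const t = Context.now();
synth.triggerAttack("C4", t);  // note on at time t
synth.triggerRelease(t + 2);   // note off 2 seconds later
```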
SoundSDK is organized around several key abstractions:
- Context (`AudioContextManager`): Manages the underlying Web Audio `AudioContext`. Ensures it's created lazily (on first use, typically after user interaction), reused (singleton), and handles SSR safety checks. Provides access to the high-resolution audio clock (`Context.now()`).
- Transport: The global timekeeper for musical events. Manages playback state (start/stop/pause), tempo (BPM), time signature, and the master timeline. Allows scheduling events using musical time units (e.g., `"4n"`, `"1m"`).
- Scheduler / Timeline: Works with the `Transport` to schedule events (like note triggers or parameter changes) with sample-accurate precision. Supports quantization and repeating events.
- Instrument: An abstraction for sound generators (e.g., `Synth`, `PolySynth`, `Sampler`). Instruments typically respond to `triggerAttack` and `triggerRelease` methods, often taking a note (e.g., `"C4"`, 60), velocity, and an optional scheduled time.
- Effect: Represents an audio processing unit (e.g., `Delay`, `Reverb`, `Filter`). Effects can be chained together and inserted between Instruments and the destination using `.connect()` methods (see the sketch after this list).
- MIDI I/O (`MIDIInput`, `MIDIOutput`): Provides access to MIDI devices, abstracting differences between the browser Web MIDI API and Node.js MIDI libraries/shims. Allows listening for MIDI messages and controlling instruments.
- Plugin System (`PluginHost`): Infrastructure for registering and managing custom Instruments or Effects created by the community or third parties.
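To make the routing concrete, here is a minimal sketch of chaining an instrument through two effects. `Delay`, `Reverb`, `.connect()`, and `.toDestination()` are all named in this doc, but the default constructors and exact signatures are assumptions:

```ts
import { Synth, Delay, Reverb, Context } from "@reliverse/ssdk";

await Context.start();

// Assumed default constructors; the final Effect options may differ.
const delay = new Delay();
const reverb = new Reverb();

// Route: Synth -> Delay -> Reverb -> speakers
const synth = new Synth();
synth.connect(delay);
delay.connect(reverb);
reverb.toDestination();

synth.triggerAttackRelease("E4", "4n");
```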
SoundSDK is engineered to run consistently across environments:
- Browsers: Utilizes the standard Web Audio API.
- Web Workers: Can potentially run synthesis/processing in workers if needed (though core context management typically stays on the main thread).
- Bun / Node.js: Runs in modern server-side runtimes. For audio output in Node.js, an external shim like `node-web-audio-api` is required. Without a shim, SoundSDK can still function in a "headless" mode for tasks like MIDI parsing or offline rendering to buffers.
- SSR Safe: SoundSDK does not attempt to instantiate `AudioContext` or access browser-only globals upon import. Initialization must be triggered explicitly on the client side (in React 19+ and Next.js 14+, this means a file with the `"use client";` directive), often after user interaction.
```tsx
// Example of SSR-safe initialization in a framework like Next.js
import { useEffect, useState } from "react";

function MyAudioComponent() {
  const [synth, setSynth] = useState<any>(null);

  useEffect(() => {
    let newSynth: any = null; // Local reference so cleanup doesn't read stale state

    // Dynamically import SoundSDK only on the client
    async function loadSoundSDK() {
      const { Synth, Context } = await import("@reliverse/ssdk");
      // Ensure the context is started (requires user gesture)
      await Context.start();
      newSynth = new Synth().toDestination();
      setSynth(newSynth);
    }
    loadSoundSDK();

    // Cleanup on unmount
    return () => {
      newSynth?.dispose(); // Assuming a dispose method exists
    };
  }, []);

  const playNote = () => {
    synth?.triggerAttackRelease("C4", "8n");
  };

  return <button onClick={playNote} disabled={!synth}>Play Note</button>;
}
```
Note:

- If you are using React 19+ or Next.js 14+, you will need the `"use client"` directive in your separate client component file.
- (not verified) This also allows you to import SoundSDK statically, like:

```ts
"use client";
import { Context } from "@reliverse/ssdk";
```
- Native Nodes: Primarily wraps native Web Audio nodes (`OscillatorNode`, `GainNode`, `BiquadFilterNode`, etc.) for maximum performance and reliability.
- Accurate Timing: Leverages the `AudioContext`'s high-resolution clock and scheduling capabilities (`AudioParam` automation, precise event timing via `Transport`) to avoid the timing issues of `setTimeout` or `requestAnimationFrame` for timing-critical audio events.
- AudioWorklet Ready: The plugin architecture is designed to support `AudioWorklet` for running custom, high-performance DSP code directly on the audio rendering thread, avoiding main-thread bottlenecks.
- Offline Rendering: Supports rendering audio faster than realtime to an `AudioBuffer` using `OfflineAudioContext`, ideal for exporting audio files without glitches (see the sketch below).
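SoundSDK's offline-rendering wrapper isn't finalized, so as an illustration here is the underlying browser primitive itself, using only the standard Web Audio API:

```ts
// Standard Web Audio (not SoundSDK's wrapper): render 2 seconds of a 440 Hz
// sine wave at 44.1 kHz, faster than realtime, into an AudioBuffer.
const offline = new OfflineAudioContext(2, 44100 * 2, 44100); // channels, frames, sampleRate

const osc = offline.createOscillator();
osc.frequency.value = 440;
osc.connect(offline.destination);
osc.start(0);
osc.stop(2);

const buffer: AudioBuffer = await offline.startRendering();
console.log(`Rendered ${buffer.duration}s of audio offline`);
```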
SoundSDK allows extending its capabilities through plugins:
- Plugin Types: Supports custom `Instrument` plugins (new synths, samplers) and `Effect` plugins (custom filters, delays, etc.).
- Loading: Plugins can be registered programmatically or potentially loaded dynamically from URLs (using dynamic `import()`).
- Authoring: Plugins are typically delivered as ES modules exporting classes that extend SoundSDK base classes (`Instrument`, `Effect`); see the sketch after this list.
- AudioWorklet Integration: Provides infrastructure to simplify the use of `AudioWorklet` for plugins requiring custom DSP logic running on the audio thread.
- Community: Aims to foster a community where developers can share and use custom sound modules, inspired by systems like Web Audio Modules (WAM) and VSTs.
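As a rough sketch of what authoring might look like: the `Effect` and `PluginHost` names come from this doc, but the constructor contract and the `register()` call are assumptions about an unfinalized API:

```ts
import { Effect, PluginHost } from "@reliverse/ssdk";

// Hypothetical custom effect extending the SoundSDK base class.
export class BitCrusher extends Effect {
  bits: number;

  constructor(bits = 8) {
    super();
    this.bits = bits; // imagined parameter for the DSP stage
  }
}

// Assumed registration API; the final PluginHost surface may differ.
PluginHost.register("bitcrusher", BitCrusher);
```

The next example shows scheduling a looping melody with the `Transport` and a `Sequence`: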
```ts
import { Synth, Transport, Context, Sequence } from "@reliverse/ssdk";

async function playSequence() {
  await Context.start();
  const synth = new Synth().toDestination();

  // Define a sequence of notes (bars:beats:sixteenths)
  const melody = [
    { time: "0:0:0", note: "C4", duration: "8n" }, // Bar 0, Beat 0
    { time: "0:0:2", note: "E4", duration: "8n" }, // Bar 0, Beat 0, Sub 2 (an eighth note later)
    { time: "0:1:0", note: "G4", duration: "4n" }, // Bar 0, Beat 1
    { time: "0:2:0", note: "C4", duration: "4n" }, // Bar 0, Beat 2
  ];

  // Create a Sequence using the synth and melody
  // (the Sequence likely uses Transport.scheduleRepeat internally)
  const sequence = new Sequence((time, noteEvent) => {
    synth.triggerAttackRelease(noteEvent.note, noteEvent.duration, time);
  }, melody).start(0); // Start the sequence when the Transport starts

  // Configure the Transport
  Transport.bpm.value = 120;
  Transport.loop = true;
  Transport.loopStart = "0:0:0";
  Transport.loopEnd = "1:0:0"; // Loop one measure

  // Start the transport
  Transport.start();
  console.log("Transport started. Playing sequence...");

  // To stop later: Transport.stop(); sequence.stop();
}

// Trigger playSequence on user interaction
```
```ts
import { PolySynth, MIDIInput, Context } from "@reliverse/ssdk";

async function setupMidi() {
  await Context.start();
  const synth = new PolySynth().toDestination(); // Use PolySynth for keyboard playing

  try {
    // Get available MIDI inputs
    const inputs = await MIDIInput.getInputs();
    if (inputs.length === 0) {
      console.log("No MIDI input devices found.");
      return;
    }

    // Connect to the first available input device
    const midiInput = inputs[0];
    console.log(`Connected to MIDI device: ${midiInput.name}`);

    // Listen for note on/off messages
    midiInput.on("noteon", (event) => {
      const { note, velocity } = event; // Note object might contain { number, name, octave }
      console.log(`Note On: ${note.name}, Velocity: ${velocity}`);
      synth.triggerAttack(note.name, Context.now(), velocity);
    });

    midiInput.on("noteoff", (event) => {
      const { note } = event;
      console.log(`Note Off: ${note.name}`);
      synth.triggerRelease(note.name, Context.now());
    });
  } catch (err) {
    console.error("Could not access MIDI devices.", err);
  }
}

// Trigger setupMidi on user interaction
```
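The feature list also mentions utilities for parsing and creating MIDI files. Their API is not specified anywhere in this doc, so the following is a purely hypothetical sketch; `parseMidiFile` and every field on its result are invented names for illustration:

```ts
// Purely hypothetical MIDI-file parsing sketch; these names are placeholders.
import { parseMidiFile } from "@reliverse/ssdk"; // hypothetical export
import { readFile } from "node:fs/promises";

const bytes = await readFile("./melody.mid");
const midi = parseMidiFile(bytes); // hypothetical parser

for (const track of midi.tracks) {
  for (const note of track.notes) {
    console.log(note.name, note.time, note.duration, note.velocity);
  }
}
```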
- Modern Tooling: Built for today's ecosystem (TypeScript, ESM, Bun/Node/Browsers).
- Developer Experience: Strong types, clear abstractions, and SSR safety make development easier and more robust than with older libraries or the raw Web Audio API.
- Modularity: Pay only for what you use in terms of bundle size and complexity. Ideal for performance-conscious applications.
- Flexibility: Provides high-level APIs for common tasks but allows access to lower-level features (like raw AudioNodes or AudioParams) when needed.
- Extensibility: The plugin system allows the SDK's capabilities to grow with community contributions.
SoundSDK aims to be the go-to choice for developers building sophisticated audio applications on the web platform, offering a balance of power, performance, and ease of use.
SoundSDK is designed to be import-safe in server-side rendering (SSR) environments.
- No Global Side Effects: Importing modules from SoundSDK does not automatically instantiate `AudioContext` or interact with browser-specific APIs.
- Lazy Initialization: The `AudioContext` is typically created only when needed, usually triggered by an explicit call like `Context.start()` or the first time an audio node tries to connect. This call should only happen on the client side, often initiated by user interaction (due to browser autoplay policies).
- Node.js Audio Shim: For audio output in Node.js, SoundSDK relies on external shims like `node-web-audio-api`. Your Node application must install and potentially initialize this shim for SoundSDK to produce sound. Without a shim, SoundSDK can still perform non-audio tasks (MIDI parsing, scheduling logic) headlessly.
Recommendation for Frameworks (Next.js, etc.): Use dynamic `import()` or framework-specific mechanisms (like Next.js Client Components or `useEffect`) to ensure SoundSDK modules that interact with Web Audio are only loaded and initialized on the client. A framework-agnostic sketch is shown below.
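For non-React code, a plain runtime guard works too. This sketch assumes only the `Context` and `Synth` APIs shown earlier; the button selector is hypothetical:

```ts
// Framework-agnostic client-only guard: skip all audio setup during SSR.
if (typeof window !== "undefined") {
  const playButton = document.querySelector("#play"); // hypothetical button
  playButton?.addEventListener("click", async () => {
    // Import lazily so server bundles never touch Web Audio code paths
    const { Synth, Context } = await import("@reliverse/ssdk");
    await Context.start(); // inside a user gesture, satisfying autoplay policies
    new Synth().toDestination().triggerAttackRelease("C4", "8n");
  });
}
```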
- Implement Web API using Node.js and TypeScript
- Implement Desktop API (and maybe mobile) using Rust
- Expand the library of built-in Instruments and Effects
- Implement official hooks and bindings for React, Vue, and Svelte
- In the `ssdk` implementation, use the `class`/`this` approach as rarely as possible
- Refine the Plugin API and potentially add compatibility layers for standards like Web Audio Modules (WAM)
- Performance optimizations, especially for large numbers of voices or complex routing
- Add more utilities (e.g., advanced audio analysis, more file format support)
- Enhance RESEARCH.md, documentation, and examples
- Integrate SoundSDK into CanvasSDK
- Make DX as beginner-friendly as possible
- 🔗 MFPiano
- 🔗 CanvasSDK
- 🔗 reliverse.org
- 🔗 Bleverse Games
- 🔗 Rengine
- (🔜 Soon) Full API documentation can be found here.
Contributions are welcome! Please read our Contributing Guidelines (🔜 Soon) for details on how to submit pull requests, report issues, and suggest features.
See our Deep Research to learn more about what we plan to implement.
📜 MIT 2025 © blefnk (Nazar Kornienko) & Reliverse