Performance Tips
Introduction
Every web app performs differently depending on how it is built. While standard web performance practices still apply, WAX integrations must also account for the real-time demands of audio processing.
Separate Audio from UI
Audio runs on a strict timing model, so developers should carefully separate audio logic from UI code to prevent interface updates from interrupting audio execution. JavaScript's main thread is vulnerable to stalls and mistimed events.
```javascript
// Instead of this:
setInterval(() => { note.start() }, 125)

// Do this:
const t = audioContext.currentTime + 0.05
note.start(t)
```
General Audio Tips
When using the Web Audio API, audio code should be designed with timing, stability, and efficiency in mind. Small delays, unnecessary object creation, or heavy work on the main thread can quickly lead to glitches, dropouts, or inconsistent playback.
Keep the audio path as simple as possible, and keep UI updates separate from time-critical audio tasks.
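One common way to keep JavaScript's unreliable timers out of the audio path is a lookahead scheduler: a coarse `setTimeout` loop wakes up frequently and schedules upcoming events on the precise audio clock. The sketch below illustrates the pattern; the `clock` object (an `AudioContext` in the browser), the `scheduleNote` callback, and the parameter values are assumptions for illustration, not part of WAX's API.

```javascript
// Minimal lookahead scheduler: a setTimeout loop wakes every
// `interval` ms and schedules any notes that fall within the next
// `lookahead` seconds, using the audio clock for exact timing.
function createScheduler(clock, scheduleNote, tempo = 120) {
  const lookahead = 0.1   // schedule 100 ms ahead on the audio clock
  const interval = 25     // wake the JS timer every 25 ms
  const beat = 60 / tempo // seconds per beat
  let nextNoteTime = clock.currentTime
  let timer = null

  function tick() {
    // Schedule every note that falls inside the lookahead window.
    while (nextNoteTime < clock.currentTime + lookahead) {
      scheduleNote(nextNoteTime)
      nextNoteTime += beat
    }
    timer = setTimeout(tick, interval)
  }

  return {
    start() { nextNoteTime = clock.currentTime; tick() },
    stop() { clearTimeout(timer) },
  }
}
```

Because each note is scheduled with an exact audio-clock timestamp, a late timer wakeup only shrinks the safety margin instead of shifting the notes themselves.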
Parameter Smoothing
Abrupt parameter changes can introduce clicks, pops, or other artifacts in audio output. When adjusting values such as gain, frequency, or filter parameters, it is best to apply smoothing rather than setting values instantly. The Web Audio API provides methods such as setTargetAtTime, linearRampToValueAtTime, and exponentialRampToValueAtTime to transition parameters smoothly. Using these methods helps maintain stable audio playback and prevents unwanted artifacts.
```javascript
// Instead of this (can cause clicks):
gainNode.gain.value = 1

// Do this (smooth transition):
const now = audioContext.currentTime
gainNode.gain.linearRampToValueAtTime(1, now + 0.05)

// Or use exponential smoothing:
gainNode.gain.setTargetAtTime(1, now, 0.01)
```
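It can help to know what curve `setTargetAtTime` actually produces: per the Web Audio API specification, the parameter decays exponentially toward the target and covers about 63% of the remaining distance after each time constant. A small sketch of that formula (the function name here is hypothetical, for illustration only):

```javascript
// Value produced by setTargetAtTime at time t, per the Web Audio spec:
// v(t) = target + (v0 - target) * e^(-(t - t0) / timeConstant)
function targetAtTime(v0, target, t0, timeConstant, t) {
  return target + (v0 - target) * Math.exp(-(t - t0) / timeConstant)
}
```

A practical consequence is that the parameter never mathematically reaches the target; after roughly five time constants it is within about 1%, which is why short time constants (around 0.01 s) feel instantaneous yet still avoid clicks.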
Custom DSP
Developers can write their own audio-processing code instead of relying only on built-in nodes. The older way to do this was ScriptProcessorNode, which handled audio on the main JavaScript thread. ScriptProcessorNode is now deprecated and should be avoided in modern projects. The current approach is to use AudioWorklet, where DSP runs in an AudioWorkletProcessor and is connected to the graph through an AudioWorkletNode.
```javascript
// processor.js — register a processor
class MyProcessor extends AudioWorkletProcessor {
  process(inputs, outputs) {
    const input = inputs[0]
    const output = outputs[0]
    for (let ch = 0; ch < input.length; ch++) {
      output[ch].set(input[ch])
    }
    return true // keep the processor alive
  }
}
registerProcessor("my-processor", MyProcessor)

// main.js — connect it in the audio graph
const ctx = new AudioContext()
await ctx.audioWorklet.addModule("processor.js")
const node = new AudioWorkletNode(ctx, "my-processor")
node.connect(ctx.destination)
```