Frame-level video encoding and decoding. No FFmpeg CLI. No subprocess overhead. Just TypeScript.
This isn't "video processing in JS would be nice to have." These are the scenarios where a WebCodecs architecture delivers genuine engineering value.
Browser preview uses WebCodecs. Server export uses FFmpeg. Two codepaths. Two sets of bugs. Endless parity issues between what users see and what they get.
Recording multi-party calls with custom layouts requires screen capture (lossy) or separate native pipelines. Neither integrates with your Node WebRTC server.
Current pipelines analyze streams separately from recording. Timestamp handoff is lossy. Clips cut at keyframes, not the actual moment.
ffprobe gives you container metadata. It doesn't tell you whether frames are actually decodable, whether segments are frozen, or whether the picture is visually corrupted.
Template-based video with dynamic content means string-templating FFmpeg commands. Conditional logic becomes unreadable filter graphs.
Encode, decode, seek to exact frames. Per-frame encoding parameters. Keyframe insertion where you want it.
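Per-frame keyframe control can be sketched as a small helper. The `gopSize` value and the encoder wiring in the trailing comment are illustrative assumptions, not a fixed API of this package:

```typescript
// Minimal sketch of per-GOP keyframe insertion. gopSize = 30 below is
// an assumed tuning value, not something this package prescribes.
export function keyFrameOptions(frameIndex: number, gopSize: number): { keyFrame: boolean } {
  // Force a keyframe at the start of every group of pictures.
  return { keyFrame: frameIndex % gopSize === 0 };
}

// Hedged usage with a WebCodecs-style encoder:
// for (let i = 0; i < frames.length; i++) {
//   encoder.encode(frames[i], keyFrameOptions(i, 30));
//   frames[i].close(); // release the raw frame immediately
// }
```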
Native integration with Node streams. encodeQueueSize for flow control. Memory-bounded processing.
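One way to use `encodeQueueSize` for backpressure is a simple high-water-mark check. `HIGH_WATER` is an assumed tuning value, and the event-based wait in the comment is a hedged sketch of the standard WebCodecs `dequeue` event:

```typescript
// Sketch of encodeQueueSize-based flow control. HIGH_WATER = 8 is an
// assumption for illustration, not a value the package prescribes.
const HIGH_WATER = 8;

export function shouldPause(encodeQueueSize: number, highWater: number = HIGH_WATER): boolean {
  // Pause feeding frames once the encoder's internal queue is full enough.
  return encodeQueueSize >= highWater;
}

// Hedged usage inside a pull-based pipeline:
// if (shouldPause(encoder.encodeQueueSize)) {
//   await new Promise<void>((resolve) =>
//     encoder.addEventListener('dequeue', () => resolve(), { once: true }));
// }
```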
Access to platform encoders/decoders. Queryable capability support. GPU path when available.
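Capability queries make codec selection explicit. A sketch, with the probe injected so the logic is testable; in practice the probe would be something like `(c) => VideoEncoder.isConfigSupported(c)`, and the codec strings below are illustrative:

```typescript
// Sketch: walk a preference-ordered codec list and return the first one
// the platform reports as supported. Codec strings and dimensions are
// illustrative assumptions.
interface ProbeConfig { codec: string; width: number; height: number; }
type Probe = (config: ProbeConfig) => Promise<{ supported?: boolean }>;

export async function firstSupported(
  codecs: string[],
  probe: Probe,
  width = 1280,
  height = 720,
): Promise<string | null> {
  for (const codec of codecs) {
    const { supported } = await probe({ codec, width, height });
    if (supported) return codec;
  }
  return null; // nothing usable on this platform
}
```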
Same VideoEncoder, VideoDecoder, and VideoFrame interfaces. Code runs unchanged in both environments.
VideoFrame.close() for deterministic cleanup. No GC surprises. Predictable resource management.
Full type definitions. Autocomplete for codec configs. Catch errors at compile time.
Full compatibility with the Bun runtime via N-API. Same package, both runtimes. No separate install.
BT.2020 primaries, PQ (HDR10) and HLG transfer functions. Encode HDR content without color space conversion.
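As a hedged sketch, an HDR10 color description using the `VideoColorSpaceInit` shape from the WebCodecs spec (the exact place this attaches — frame init vs. config — depends on your pipeline):

```typescript
// HDR10 color space description per the WebCodecs VideoColorSpaceInit
// shape: BT.2020 primaries with the PQ transfer function. For HLG
// content, swap transfer to 'hlg'.
const hdr10ColorSpace = {
  primaries: 'bt2020',
  transfer: 'pq',
  matrix: 'bt2020-ncl',
  fullRange: false, // limited range is typical for HDR10 video
} as const;
```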
Decode JPEG, PNG, GIF, WebP, BMP. Extract frames from animated GIFs with timing info.
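Frame timing from an animated image reduces to turning per-frame durations into presentation timestamps. A pure helper sketch; the duration values in the test are made up, and the `ImageDecoder`-style usage in the comment is a hedged illustration:

```typescript
// Sketch: convert per-frame durations (microseconds), as reported for
// each decoded frame of an animated GIF, into cumulative presentation
// timestamps.
export function cumulativeTimestamps(durationsUs: number[]): number[] {
  const timestamps: number[] = [];
  let t = 0;
  for (const d of durationsUs) {
    timestamps.push(t); // this frame is shown starting at t
    t += d;
  }
  return timestamps;
}

// Hedged usage with an ImageDecoder-style API:
// const { image } = await decoder.decode({ frameIndex: i });
// durations.push(image.duration ?? 0);
```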
Frame-level access without leaving JavaScript.