TimeSeries Concepts¶
SGN-TS builds on top of SGN to add time-series handling. This page explains the design decisions behind the library.
Why SGN-TS?¶
SGN's core framework is data-agnostic — a Frame carries any Python object.
SGN-TS constrains this to uniformly sampled time-series data, which enables:
- Precise cross-channel alignment without floating-point errors
- Automatic buffering and overlap management for streaming
- Guaranteed synchronous data flow across all channels
Nothing stops you from building time-series pipelines with plain SGN, but you'd need to solve these problems yourself.
Uniform Sampling¶
All data in SGN-TS is uniformly sampled: each SeriesBuffer has a fixed
sample_rate, and every sample is exactly 1 / sample_rate seconds apart.
This is a fundamental assumption — non-uniform or event-based data should use
EventBuffer and EventFrame instead.
Power-of-2 Sample Rates¶
SGN-TS only allows power-of-2 sample rates (1, 2, 4, 8, ... up to a configurable maximum, defaulting to 16384 Hz). This restriction exists for precision: converting between sample counts and time must be exact.
Consider a rate of 2048 Hz: the sample period is 1/2048 s = 488281.25 nanoseconds. As a floating-point number, this period accumulates rounding errors over millions of samples. By restricting to power-of-2 rates, every rate is an integer divisor of the maximum rate, and all conversions between rates are exact integer operations.
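To make the precision argument concrete, here is a minimal sketch in plain Python (not the library's API; the function name and the standalone MAX_RATE constant are illustrative) of converting a sample count between two power-of-2 rates using only integer arithmetic:

```python
# Sketch: exact sample-count conversion between power-of-2 rates.
# MAX_RATE mirrors the documented default maximum of 16384 Hz.
MAX_RATE = 16384

def samples_between_rates(n_samples: int, from_rate: int, to_rate: int) -> int:
    """Convert a sample count from one power-of-2 rate to another.

    Both rates divide MAX_RATE, so their ratio is an exact integer
    (or its reciprocal) and no floating point is ever involved.
    """
    assert MAX_RATE % from_rate == 0 and MAX_RATE % to_rate == 0
    if to_rate >= from_rate:
        # Upsampling direction: multiply by the exact integer ratio.
        return n_samples * (to_rate // from_rate)
    # Downsampling direction: the count must divide evenly to be exact.
    ratio = from_rate // to_rate
    assert n_samples % ratio == 0, "not representable at the lower rate"
    return n_samples // ratio

# One second at 2048 Hz is exactly 4096 samples at 4096 Hz:
print(samples_between_rates(2048, 2048, 4096))  # 4096
```

With floating-point sample periods this round trip would eventually drift; with integer ratios it is exact for any number of samples.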
The Offset System¶
An offset is a sample count at the maximum sample rate. It serves as a
universal, rounding-error-free time reference. One second of data always spans
MAX_RATE offset units (16384 by default), regardless of the actual sample
rate. Two buffers at different rates that cover the same time span share
identical offset and offset_end values, making cross-channel alignment
trivial.
For a detailed discussion of why offsets are needed and how they solve floating-point precision, nanosecond alignment, and sample-count ambiguity problems, see The Offset System.
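As a quick illustration of that arithmetic, a sketch (plain Python, not library code; samples_to_offset is a hypothetical helper) showing that one second of data maps to the same offset span at any rate:

```python
# Sketch of the offset arithmetic described above.
# One second always spans MAX_RATE offset units, regardless of rate.
MAX_RATE = 16384

def samples_to_offset(n_samples: int, sample_rate: int) -> int:
    # Each sample at `sample_rate` spans MAX_RATE // sample_rate
    # offset units; the division is exact for power-of-2 rates.
    return n_samples * (MAX_RATE // sample_rate)

# One second of data at two different rates covers the same offset span:
print(samples_to_offset(2048, 2048))  # 16384
print(samples_to_offset(4096, 4096))  # 16384
```

This is why two buffers at different rates covering the same time span share identical offset bounds.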
Synchronization and the Continuity Equation¶
In a multi-channel pipeline, all sources must produce data at the same rate in offset space. If one source produces 1 second of data per iteration and another produces 2 seconds, the faster source's buffers will "pile up" waiting for the slower one, eventually exhausting memory.
SGN-TS enforces this implicitly: the TimeSeriesMixin base class aligns
inputs from multiple pads by offset range. Pads that fall behind receive
heartbeat frames (empty frames that advance the offset) until they catch up.
Frame Alignment¶
When a transform or sink has multiple input pads at different sample rates,
TimeSeriesMixin ensures all inputs are aligned to the same offset range
before process() is called. This means:
- A transform receiving 2048 Hz on one pad and 4096 Hz on another will see both inputs covering the same time span
- If one input has a gap, the aligned output reflects that gap
- The align_buffers option in AdapterConfig can further align buffer boundaries to the minimum sample rate across pads
TSFrame vs EventFrame¶
SGN-TS supports two frame types:
- TSFrame — holds a list of contiguous SeriesBuffer objects. All buffers must be contiguous (no gaps in offset ranges). Represents uniformly sampled, continuous data.
- EventFrame — holds a list of EventBuffer objects. Each event has an offset range but no sample rate. Represents discrete events within a time span (triggers, detections, annotations).
Most pipelines use TSFrame. Use EventFrame when your data is naturally
event-based rather than continuously sampled.
The AudioAdapter¶
The Audioadapter is an internal buffer manager that sits between incoming
data and element processing. It solves a fundamental problem in streaming: an
element may need more data than a single buffer provides.
Why It Exists¶
Consider a correlation filter with a 128-sample kernel operating on a stream of 1024-sample buffers. To produce a valid output for the first sample, the filter needs 127 samples of history before it. Without buffering, the first buffer would produce a shorter output or require the source to know about the filter's needs.
The adapter handles this transparently: it accumulates incoming buffers in a deque, and when enough data is available, it slices out the requested region (including overlap) and passes it to the element.
Overlap and Stride¶
The two core parameters are overlap and stride:
         overlap.before    stride    overlap.after
         ◄───────────►   ◄───────►   ◄───────────►
Buffer: [   context    |   output  |    context   ]
- Overlap (before, after) — extra data included before and after the output region. The element receives the full buffer (overlap + stride) but only the stride portion advances the output offset.
- Stride — the number of offset units that advance per iteration. When stride is 0, all available data is processed at once.
On the next iteration, the adapter slides forward by stride offset units:
Iteration 1: [overlap | stride | overlap]
Iteration 2:          [overlap | stride | overlap]
                        ▲ overlaps with previous stride
This sliding window pattern is what enables overlap-save and overlap-add algorithms without the element needing to manage any buffering state.
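The sliding-window pattern can be sketched in a few lines of plain Python (illustrative only; the real adapter also tracks offsets, sample rates, and gaps, and works in offset units rather than raw sample counts):

```python
# Sketch: accumulate incoming chunks in a deque, then emit windows of
# overlap_before + stride + overlap_after samples, advancing by
# `stride` samples each iteration.
from collections import deque

def windows(stream, overlap_before, stride, overlap_after):
    buf = deque()
    need = overlap_before + stride + overlap_after
    for chunk in stream:
        buf.extend(chunk)
        while len(buf) >= need:
            yield list(buf)[:need]       # full window, context included
            for _ in range(stride):      # slide forward by the stride only
                buf.popleft()

# Two 8-sample chunks; 2-sample context on each side, stride of 4:
out = list(windows([range(8), range(8, 16)], 2, 4, 2))
print(out[0])  # [0, 1, 2, 3, 4, 5, 6, 7]
print(out[1])  # [4, 5, 6, 7, 8, 9, 10, 11]
```

Note how the second window starts 4 samples (one stride) after the first, so its leading context overlaps the previous window's output region, exactly as in the diagram above.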
Gap Handling¶
When a gap appears in the adapter's buffer region, two strategies are available:
- skip_gaps=False (default) — gaps pass through. The element receives buffers where some may be gaps and must handle them.
- skip_gaps=True — the entire output is marked as a gap if any gap appears in the overlap+stride region. This is simpler but discards data near gap boundaries.
Offset Alignment¶
The align_to parameter snaps output offsets to regular boundaries. Without
alignment, the output offset depends on when data first arrives. With
align_to=Offset.fromsec(1), output offsets always fall on integer-second
boundaries — useful for sinks that write time-aligned files.
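The snapping arithmetic can be sketched as follows (illustrative, not the library's implementation; align_offset is a hypothetical helper):

```python
# Sketch: snap the first output offset up to the next multiple of
# `align_to` offset units. 16384 offset units = 1 second by default.
MAX_RATE = 16384

def align_offset(first_offset: int, align_to: int) -> int:
    # Ceiling division, so output never starts before data exists.
    return -(-first_offset // align_to) * align_to

# Data first arrives at offset 5000; with 1-second alignment the
# first aligned output starts at offset 16384:
print(align_offset(5000, MAX_RATE))  # 16384
```

Data before the first aligned boundary is dropped, which is the price of guaranteeing that every output starts on a boundary.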
Latency Compensation¶
Filters introduce latency: the output corresponds to an earlier point in time
than the input offset suggests. The offset_shift parameter shifts output
offsets backward to correct this. For example, a filter with 2-sample latency
at 2048 Hz uses offset_shift=-Offset.fromsamples(2, 2048).
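A sketch of that arithmetic (from_samples is a stand-in for Offset.fromsamples, assumed here to convert a sample count at a given rate into offset units; not library code):

```python
# Sketch of the latency-compensation arithmetic above.
MAX_RATE = 16384

def from_samples(n: int, rate: int) -> int:
    # Samples at `rate` expressed in offset units at MAX_RATE.
    return n * (MAX_RATE // rate)

# A 2-sample latency at 2048 Hz is 16 offset units, so outputs are
# shifted back by 16 to land on the time they actually describe:
offset_shift = -from_samples(2, 2048)
print(offset_shift)  # -16
```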
When You Don't Need It¶
Simple pass-through transforms (amplification, thresholding, type conversion)
don't need the adapter at all. If your element processes each buffer
independently without needing context from neighboring buffers, leave
AdapterConfig at its defaults and it stays disabled.
For configuration details, see Audio Adapter.