ICASSP 2026

Subtractive Modulative Network with Learnable Periodic Activations

Tiou Wang*, Zhuoqian Yang, Markus Flierl*, Mathieu Salzmann, Sabine Süsstrunk
*School of Electrical Engineering and Computer Science, KTH Royal Institute of Technology, Sweden
School of Computer and Communication Sciences, EPFL, Switzerland

Abstract

We propose the Subtractive Modulative Network (SMN), a novel, parameter-efficient Implicit Neural Representation (INR) architecture inspired by classical subtractive synthesis. The SMN is designed as a principled signal-processing pipeline, featuring a learnable periodic activation layer (Oscillator) that generates a multi-frequency basis, and a series of modulative mask modules (Filters) that actively generate high-order harmonics. We provide both theoretical analysis and empirical validation for our design. SMN achieves PSNRs above 41 dB on two image datasets, comparing favorably against state-of-the-art methods in both reconstruction accuracy and parameter efficiency. Furthermore, a consistent advantage is observed on the challenging 3D NeRF novel view synthesis task.

Highlights

41.40 / 42.53 dB PSNR on Kodak / DIV2K — state-of-the-art 2D image representation
264K params Most compact model among top performers; 4× fewer FLOPs than WIRE
+0.98 dB Margin over next-best on 3D NeRF novel view synthesis (avg. 8 scenes)

Method

SMN is inspired by classical subtractive synthesis from audio signal processing: a spectrally rich signal is generated first, then progressively shaped. Given a coordinate input, the network processes it through three interpretable stages:

Oscillator Learnable periodic activation layer. Generates a multi-frequency basis by adaptively weighting sinusoidal components across multiple learned frequencies.
Filter (×2) Modulative mask modules with dual pathways: additive modulation combines features, while multiplicative masking sculpts the spectrum and generates high-order harmonics.
Amplifier Self-mask stage using a squaring operation for non-linear amplification, enabling fine detail recovery without additional parameters.
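The Amplifier's parameter-free detail recovery follows directly from the double-angle identity: squaring a sinusoidal feature produces a component at twice its frequency, so finer detail emerges without any additional weights.

```latex
\sin^2(\omega x) \;=\; \tfrac{1}{2}\bigl(1 - \cos(2\omega x)\bigr)
```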

End-to-end architecture of the Subtractive Modulative Network, illustrating the Oscillator, multi-stage Filter (main and masking pathways), and the Amplifier (Self-Mask) stage.
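As a rough illustration only (not the authors' implementation), the three stages can be sketched for a single scalar coordinate. All function names, parameter names, and numeric values below are placeholders standing in for learned weights:

```python
import math

def oscillator(x, freqs, amps):
    """Oscillator (sketch): learnable periodic activation that forms a
    multi-frequency basis as amplitude-weighted sinusoids of the input."""
    return [a * math.sin(w * x) for w, a in zip(freqs, amps)]

def filter_stage(features, mask_weights, add_weights):
    """Filter (sketch): dual pathways -- an additive pathway combines
    features, and a multiplicative mask sculpts the spectrum."""
    additive = [f + b for f, b in zip(features, add_weights)]       # additive modulation
    mask = [math.sin(m * f) for f, m in zip(features, mask_weights)]  # masking pathway
    return [a * s for a, s in zip(additive, mask)]                   # multiplicative masking

def amplifier(features):
    """Amplifier (sketch): self-mask via squaring, which doubles
    frequencies without introducing new parameters."""
    return [f * f for f in features]

# Toy forward pass for one coordinate with placeholder "learned" values:
x = 0.5
feats = oscillator(x, freqs=[1.0, 2.0, 4.0], amps=[1.0, 0.5, 0.25])
feats = filter_stage(feats, mask_weights=[1.0, 1.0, 1.0], add_weights=[0.1, 0.1, 0.1])
out = sum(amplifier(feats))
```

In the actual network each stage would operate on feature vectors with learned weight matrices; this scalar version only conveys the signal flow.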

Results

2D Image Representation

Method        Kodak PSNR   DIV2K PSNR   Parameters   GFLOPs
MLP           28.63 dB     30.21 dB     272,415      —
RINR          32.96 dB     34.03 dB     289,716      —
SIREN         33.65 dB     33.73 dB     272,703      214
Gauss         37.90 dB     38.34 dB     272,703      —
WIRE          40.24 dB     38.90 dB     265,523      835
SMN (Ours)    41.40 dB     42.53 dB     264,216      208

SMN achieves the highest PSNR on both datasets with fewer parameters than all baselines and 4× lower GFLOPs than WIRE.
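For context, PSNR relates logarithmically to mean squared error, so each +1 dB corresponds to roughly a 21% reduction in MSE. A minimal computation of the standard formula (for signals normalized to [0, 1]):

```python
import math

def psnr(mse, max_val=1.0):
    """Peak signal-to-noise ratio in dB: 10 * log10(MAX^2 / MSE)."""
    return 10.0 * math.log10(max_val ** 2 / mse)

# A 41.40 dB PSNR corresponds to a mean squared error of about 7.2e-5:
mse = 10 ** (-41.40 / 10)
```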

3D NeRF Novel View Synthesis (avg. over 8 scenes, 400×400)

Method             Average PSNR   Parameters
PE + WIRE          25.14 dB       283,479
PE + MLP           26.66 dB       290,370
PE + RINR          26.84 dB       314,703
PE + SIREN         29.06 dB       287,749
PE + Gauss         32.00 dB       287,749
PE + SMN (Ours)    32.98 dB       287,749

PE = Positional Encoding. SMN outperforms the next-best method (PE + Gauss) by +0.98 dB on average across all 8 NeRF scenes.
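The positional encoding shared by all rows above is the standard NeRF-style frequency encoding, which maps each input coordinate to sinusoids at geometrically spaced frequencies. A minimal scalar version:

```python
import math

def positional_encoding(p, num_freqs=4):
    """NeRF-style positional encoding of a scalar coordinate:
    gamma(p) = (sin(2^0 pi p), cos(2^0 pi p), ..., sin(2^(L-1) pi p), cos(2^(L-1) pi p)).
    num_freqs (L) is a hyperparameter; 4 here is illustrative."""
    out = []
    for l in range(num_freqs):
        out.append(math.sin(2 ** l * math.pi * p))
        out.append(math.cos(2 ** l * math.pi * p))
    return out
```

Each 3D point would be encoded per coordinate and the results concatenated before being fed to the backbone network.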

Visual Comparison — 2D Image Reconstruction

Panels (left to right): Ground Truth, SIREN, RINR, Gauss, WIRE, SMN (Ours).

Magnified crops from Kodak. SMN preserves fine textures and edge details most faithfully among all methods.

Ablation Study

Modulation Mechanism (Kodak)

SMN-Add (element-wise addition) 40.25 dB
SMN (multiplicative masking) 41.40 dB

Multiplicative masking yields a +1.15 dB gain — confirming the importance of spectral sculpting over simple addition.
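The spectral advantage of multiplicative masking over addition can be seen from the product-to-sum identity: multiplying two sinusoidal features creates new sum- and difference-frequency components, whereas addition only superposes the frequencies already present.

```latex
\sin(\omega_1 x)\,\sin(\omega_2 x) \;=\; \tfrac{1}{2}\Bigl[\cos\bigl((\omega_1-\omega_2)x\bigr) - \cos\bigl((\omega_1+\omega_2)x\bigr)\Bigr]
```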

Learnable Sine Layer Variants (Kodak)

Variant 1 — fixed amplitudes 35.08 dB
Variant 2 — K=1 learnable basis 42.87 dB
Variant 3 — K=2 learnable bases 43.09 dB
Final — K=3 learnable bases 43.68 dB

Each additional learnable basis consistently improves performance; K=3 is selected as the final design.
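A possible reading of the K-basis design (the exact parameterization is not spelled out here, so all names below are assumptions): each pre-activation is passed through K sinusoidal bases, each with its own learnable frequency and amplitude, and the results are summed.

```python
import math

def learnable_sine(x, weights, bases):
    """Learnable sine layer (sketch): pre-activations z_i = w_i * x are each
    mapped through K sinusoidal bases with learned (frequency, amplitude)
    pairs (omega_k, a_k):  y_i = sum_k a_k * sin(omega_k * z_i)."""
    z = [w * x for w in weights]
    return [sum(a * math.sin(om * zi) for om, a in bases) for zi in z]

# K = 3 placeholder (frequency, amplitude) pairs standing in for learned values:
bases = [(1.0, 1.0), (3.0, 0.5), (9.0, 0.25)]
out = learnable_sine(0.3, weights=[1.0, 2.0], bases=bases)
```

With K = 1 and fixed amplitude this reduces to a SIREN-style sine activation; the ablation suggests the extra learnable bases are what close the gap to the final design.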

Citation

@inproceedings{wang2026smn,
  title     = {Subtractive Modulative Network with Learnable Periodic Activations},
  author    = {Wang, Tiou and Yang, Zhuoqian and Flierl, Markus and
               Salzmann, Mathieu and S{\"u}sstrunk, Sabine},
  booktitle = {Proceedings of ICASSP},
  year      = {2026}
}