Trainable Bitwise Soft Quantization for Input Feature Compression

Schrödter, Karsten; Stenkamp, Jan; Herrmann, Nina; Gieseke, Fabian

Research article in edited volume (conference) | Peer reviewed

Abstract

The growing demand for machine learning applications in the context of the Internet of Things calls for new approaches to optimize the use of limited compute and memory resources. Despite significant progress in reducing model sizes and improving efficiency, many applications still require remote servers to provide the necessary resources. Such approaches, however, rely on transmitting data from edge devices to remote servers, which may not always be feasible due to bandwidth, latency, or energy constraints. We propose a task-specific, trainable feature quantization layer that compresses the input features of a neural network, which can significantly reduce the amount of data that needs to be transferred from the device to a remote server. In particular, the layer allows each input feature to be quantized to a user-defined number of bits, enabling simple on-device compression at the time of data collection. The layer approximates step functions with sigmoids, making the quantization thresholds trainable. By concatenating the outputs of multiple sigmoids, a scheme we introduce as bitwise soft quantization, it also yields trainable quantized values when integrated into a neural network. We compare our method to full-precision inference as well as to several quantization baselines. Experiments show that our approach outperforms standard quantization methods while maintaining accuracy levels close to those of full-precision models. In particular, depending on the dataset at hand, compression factors of … to … can be achieved without significant performance loss.
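The core mechanism described in the abstract, approximating a step function by a sum of shifted sigmoids so that the thresholds become differentiable, can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the function name `soft_quantize`, the threshold values, and the temperature `tau` are assumptions chosen for demonstration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def soft_quantize(x, thresholds, tau=0.05):
    # A sum of shifted sigmoids approximates a staircase function:
    # one (smooth) unit step per threshold. Because each sigmoid is
    # differentiable, gradients flow to the thresholds, making them
    # trainable; as tau -> 0 the output approaches hard quantization
    # (the count of thresholds that x exceeds).
    return sum(sigmoid((x - t) / tau) for t in thresholds)

# Hypothetical "learned" thresholds for a 2-bit quantizer (4 levels).
thresholds = np.array([0.25, 0.5, 0.75])
x = np.linspace(0.0, 1.0, 5)          # inputs 0, 0.25, 0.5, 0.75, 1
q = soft_quantize(x, thresholds, tau=0.01)
# With a small tau, q is close to the hard level index,
# except exactly at a threshold, where the sigmoid yields 0.5.
```

With a larger `tau` the staircase is smoother and gradients are better behaved during training; at deployment, the soft quantizer can be replaced by its hard counterpart so that each feature is stored in the user-defined number of bits.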

Publication details

Book title: Proceedings of the Third Conference on Parsimony and Learning (CPAL)
Status: accepted / in press (unpublished)
Year of publication: 2026
Conference: Conference on Parsimony and Learning (CPAL), March 23–26, 2026, Tübingen, Germany
Keywords: Soft Quantization; Trainable Quantization; Input Compression; Tiny Machine Learning; Split Inference

Authors from the University of Münster

Gieseke, Fabian
Chair of Machine Learning and Data Engineering (Prof. Gieseke) (MLDE)
Herrmann, Nina
Chair of Machine Learning and Data Engineering (Prof. Gieseke) (MLDE)
Schrödter, Karsten
Chair of Machine Learning and Data Engineering (Prof. Gieseke) (MLDE)
Stenkamp, Jan
Chair of Machine Learning and Data Engineering (Prof. Gieseke) (MLDE)