

Splitting stump forests: tree ensemble compression for edge devices (extended version)

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Article number: 219
Journal publication date: 31/10/2025
Journal: Machine Learning
Issue number: 10
Volume: 114
Publication status: E-pub ahead of print
Early online date: 27/08/25
Original language: English

Abstract

We introduce Splitting Stump Forests—small ensembles of weak learners extracted from a trained random forest. The high memory consumption of random forests renders them unfit for resource-constrained devices. We show empirically that we can significantly reduce the model size and inference time by selecting nodes that evenly split the arriving training data and applying a linear model on the resulting representation. Our extensive empirical evaluation indicates that Splitting Stump Forests outperform random forests and state-of-the-art compression methods on memory-limited embedded devices.
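The core idea in the abstract can be illustrated with a short sketch. The code below is a hypothetical reconstruction, not the authors' implementation: it trains a random forest, scores every internal node by how evenly its threshold splits the training data, keeps the most balanced splits as stumps, and fits a linear model on the resulting binary representation. The stump budget `k` and the balance score are illustrative assumptions.

```python
# Hypothetical sketch of the splitting-stump idea (not the paper's code):
# train a forest, keep the most evenly splitting nodes as stumps, and fit
# a linear model on the binary features those stumps induce.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
forest = RandomForestClassifier(n_estimators=20, random_state=0).fit(X, y)

# Collect (feature, threshold) pairs from every internal node of every tree
# and score each by split balance: a fraction near 0.5 is an even split.
candidates = []
for tree in forest.estimators_:
    t = tree.tree_
    for node in range(t.node_count):
        f = t.feature[node]
        if f >= 0:  # internal (non-leaf) node
            frac_left = np.mean(X[:, f] <= t.threshold[node])
            balance = abs(frac_left - 0.5)  # smaller = more even
            candidates.append((balance, int(f), float(t.threshold[node])))

# Keep the k most evenly splitting stumps (k is an arbitrary budget here).
k = 32
stumps = sorted(candidates)[:k]

# Binary representation: one indicator feature per selected stump.
Z = np.column_stack([(X[:, f] <= thr).astype(float) for _, f, thr in stumps])
linear = LogisticRegression(max_iter=1000).fit(Z, y)
print(f"train accuracy: {linear.score(Z, y):.3f}")
```

The memory savings come from discarding the full trees: only `k` (feature, threshold) pairs plus the linear model's weights need to be stored on the device, and inference reduces to `k` comparisons followed by a dot product.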