Decision Snippet Features

Research output: Contribution in Book/Report/Proceedings - With ISBN/ISSN · Conference contribution/Paper · peer-review

Published

Standard

Decision Snippet Features. / Welke, Pascal; Alkhoury, Fouad; Bauckhage, Christian et al.
2020 25th International Conference on Pattern Recognition (ICPR). IEEE, 2021. p. 4260-4267.

Harvard

Welke, P, Alkhoury, F, Bauckhage, C & Wrobel, S 2021, Decision Snippet Features. in 2020 25th International Conference on Pattern Recognition (ICPR). IEEE, pp. 4260-4267. https://doi.org/10.1109/ICPR48806.2021.9412025

APA

Welke, P., Alkhoury, F., Bauckhage, C., & Wrobel, S. (2021). Decision Snippet Features. In 2020 25th International Conference on Pattern Recognition (ICPR) (pp. 4260-4267). IEEE. https://doi.org/10.1109/ICPR48806.2021.9412025

Vancouver

Welke P, Alkhoury F, Bauckhage C, Wrobel S. Decision Snippet Features. In 2020 25th International Conference on Pattern Recognition (ICPR). IEEE. 2021. p. 4260-4267. Epub 2021 Jan 10. doi: 10.1109/ICPR48806.2021.9412025

Author

Welke, Pascal ; Alkhoury, Fouad ; Bauckhage, Christian et al. / Decision Snippet Features. 2020 25th International Conference on Pattern Recognition (ICPR). IEEE, 2021. pp. 4260-4267

Bibtex

@inproceedings{290bc66606924009b8308bbf9bff668c,
title = "Decision Snippet Features",
abstract = "Decision trees excel at interpretability of their prediction results. To achieve required prediction accuracies, however, often large ensembles of decision trees - random forests - are considered, reducing interpretability due to large size. Additionally, their size slows down inference on modern hardware and restricts their applicability in low-memory embedded devices. We introduce Decision Snippet Features, which are obtained from small subtrees that appear frequently in trained random forests. We subsequently show that linear models on top of these features achieve comparable and sometimes even better predictive performance than the original random forest, while reducing the model size by up to two orders of magnitude.",
author = "Pascal Welke and Fouad Alkhoury and Christian Bauckhage and Stefan Wrobel",
note = "DBLP's bibliographic metadata records provided through http://dblp.org/search/publ/api are distributed under a Creative Commons CC0 1.0 Universal Public Domain Dedication. Although the bibliographic metadata records are provided consistent with CC0 1.0 Dedication, the content described by the metadata records is not. Content may be subject to copyright, rights of privacy, rights of publicity and other restrictions.",
year = "2021",
month = may,
day = "5",
doi = "10.1109/ICPR48806.2021.9412025",
language = "English",
isbn = "9781728188096",
pages = "4260--4267",
booktitle = "2020 25th International Conference on Pattern Recognition (ICPR)",
publisher = "IEEE",
}

RIS

TY - GEN

T1 - Decision Snippet Features

AU - Welke, Pascal

AU - Alkhoury, Fouad

AU - Bauckhage, Christian

AU - Wrobel, Stefan

N1 - DBLP's bibliographic metadata records provided through http://dblp.org/search/publ/api are distributed under a Creative Commons CC0 1.0 Universal Public Domain Dedication. Although the bibliographic metadata records are provided consistent with CC0 1.0 Dedication, the content described by the metadata records is not. Content may be subject to copyright, rights of privacy, rights of publicity and other restrictions.

PY - 2021/5/5

Y1 - 2021/5/5

N2 - Decision trees excel at interpretability of their prediction results. To achieve required prediction accuracies, however, often large ensembles of decision trees - random forests - are considered, reducing interpretability due to large size. Additionally, their size slows down inference on modern hardware and restricts their applicability in low-memory embedded devices. We introduce Decision Snippet Features, which are obtained from small subtrees that appear frequently in trained random forests. We subsequently show that linear models on top of these features achieve comparable and sometimes even better predictive performance than the original random forest, while reducing the model size by up to two orders of magnitude.

AB - Decision trees excel at interpretability of their prediction results. To achieve required prediction accuracies, however, often large ensembles of decision trees - random forests - are considered, reducing interpretability due to large size. Additionally, their size slows down inference on modern hardware and restricts their applicability in low-memory embedded devices. We introduce Decision Snippet Features, which are obtained from small subtrees that appear frequently in trained random forests. We subsequently show that linear models on top of these features achieve comparable and sometimes even better predictive performance than the original random forest, while reducing the model size by up to two orders of magnitude.

U2 - 10.1109/ICPR48806.2021.9412025

DO - 10.1109/ICPR48806.2021.9412025

M3 - Conference contribution/Paper

SN - 9781728188096

SP - 4260

EP - 4267

BT - 2020 25th International Conference on Pattern Recognition (ICPR)

PB - IEEE

ER -
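The abstract describes the core idea: mine small subtrees ("snippets") that occur frequently across a trained random forest, encode each sample by the outcomes of those snippets, and fit a linear model on that encoding. The following is a minimal sketch of that pipeline, not the authors' implementation: instead of frequent-subtree mining, it approximates snippets by the depth-limited top of each tree, and the helper `snippet_leaf_ids` is a hypothetical name introduced here for illustration.

```python
# Sketch of the Decision Snippet Features idea (an approximation, NOT the
# paper's method): the paper mines frequent small subtrees from a trained
# random forest; here we simply take the depth-limited top of each tree as
# a "snippet", encode every sample by the shallow node it reaches in each
# snippet, one-hot that encoding, and fit logistic regression on top.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import OneHotEncoder


def snippet_leaf_ids(tree, X, max_depth=2):
    """Route samples through only the top `max_depth` levels of a fitted
    sklearn decision tree; return the node id each sample stops at."""
    t = tree.tree_
    ids = np.zeros(len(X), dtype=int)
    for i, x in enumerate(X):
        node, depth = 0, 0
        # children_left[node] == -1 marks a leaf in sklearn's tree arrays.
        while depth < max_depth and t.children_left[node] != -1:
            if x[t.feature[node]] <= t.threshold[node]:
                node = t.children_left[node]
            else:
                node = t.children_right[node]
            depth += 1
        ids[i] = node
    return ids


X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

forest = RandomForestClassifier(n_estimators=25, random_state=0).fit(X_tr, y_tr)

# Encode each sample by its snippet outcome in every tree of the forest.
Z_tr = np.column_stack([snippet_leaf_ids(t, X_tr) for t in forest.estimators_])
Z_te = np.column_stack([snippet_leaf_ids(t, X_te) for t in forest.estimators_])

enc = OneHotEncoder(handle_unknown="ignore").fit(Z_tr)
lin = LogisticRegression(max_iter=1000).fit(enc.transform(Z_tr), y_tr)

acc_forest = forest.score(X_te, y_te)
acc_snip = lin.score(enc.transform(Z_te), y_te)
print(f"forest acc={acc_forest:.3f}  snippets+linear acc={acc_snip:.3f}")
```

The linear model trained on snippet outcomes needs only the shallow tops of the trees plus one weight per one-hot column, which is the mechanism behind the size reduction the abstract reports; the paper's frequent-pattern mining selects which subtrees to keep rather than truncating every tree uniformly as done here.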