
Logical Distillation of Graph Neural Networks.

Research output: Working paper › Preprint

Published

Standard

Logical Distillation of Graph Neural Networks. / Pluska, Alexander; Welke, Pascal; Gärtner, Thomas et al.
arXiv, 2024.

Vancouver

Pluska A, Welke P, Gärtner T, Malhotra S. Logical Distillation of Graph Neural Networks. arXiv. 2024 Aug 21. doi: 10.48550/ARXIV.2406.07126

Author

Pluska, Alexander; Welke, Pascal; Gärtner, Thomas et al. / Logical Distillation of Graph Neural Networks. arXiv, 2024.

Bibtex

@techreport{cb4937368b2f46ba9053fc8016244979,
title = "Logical Distillation of Graph Neural Networks.",
abstract = "We present a logic-based interpretable model for learning on graphs and an algorithm to distill this model from a Graph Neural Network (GNN). Recent results have shown connections between the expressivity of GNNs and the two-variable fragment of first-order logic with counting quantifiers (C2). We introduce a decision-tree-based model which leverages an extension of C2 to distill interpretable logical classifiers from GNNs. We test our approach on multiple GNN architectures. The distilled models are interpretable, succinct, and attain similar accuracy to the underlying GNN. Furthermore, when the ground truth is expressible in C2, our approach outperforms the GNN.",
author = "Alexander Pluska and Pascal Welke and Thomas G{\"a}rtner and Sagar Malhotra",
note = "DBLP's bibliographic metadata records provided through http://dblp.org/search/publ/api are distributed under a Creative Commons CC0 1.0 Universal Public Domain Dedication. Although the bibliographic metadata records are provided consistent with CC0 1.0 Dedication, the content described by the metadata records is not. Content may be subject to copyright, rights of privacy, rights of publicity and other restrictions.",
year = "2024",
month = aug,
day = "21",
doi = "10.48550/ARXIV.2406.07126",
language = "English",
volume = "abs/2406.07126",
publisher = "arXiv",
type = "WorkingPaper",
institution = "arXiv",
}
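The BibTeX record above can be consumed programmatically. As a minimal sketch using only the Python standard library (not a robust BibTeX parser: it only handles flat `key = "value"` fields, not brace-delimited or nested values), the quoted fields of an abridged record can be extracted like this:

```python
import re

# Abridged copy of the record above; the full entry parses the same way
# for its simple quoted fields.
BIBTEX = '''
@techreport{cb4937368b2f46ba9053fc8016244979,
title = "Logical Distillation of Graph Neural Networks",
year = "2024",
doi = "10.48550/ARXIV.2406.07126",
publisher = "arXiv",
}
'''

def bibtex_fields(record: str) -> dict:
    """Map each `key = "value"` field of a BibTeX record to a dict entry."""
    return dict(re.findall(r'(\w+)\s*=\s*"([^"]*)"', record))

fields = bibtex_fields(BIBTEX)
print(fields["doi"])   # 10.48550/ARXIV.2406.07126
```

For anything beyond flat quoted fields (nested braces, `@string` macros, concatenation), a dedicated BibTeX parsing library is the safer choice.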

RIS

TY - UNPB

T1 - Logical Distillation of Graph Neural Networks.

AU - Pluska, Alexander

AU - Welke, Pascal

AU - Gärtner, Thomas

AU - Malhotra, Sagar

N1 - DBLP's bibliographic metadata records provided through http://dblp.org/search/publ/api are distributed under a Creative Commons CC0 1.0 Universal Public Domain Dedication. Although the bibliographic metadata records are provided consistent with CC0 1.0 Dedication, the content described by the metadata records is not. Content may be subject to copyright, rights of privacy, rights of publicity and other restrictions.

PY - 2024/8/21

Y1 - 2024/8/21

N2 - We present a logic-based interpretable model for learning on graphs and an algorithm to distill this model from a Graph Neural Network (GNN). Recent results have shown connections between the expressivity of GNNs and the two-variable fragment of first-order logic with counting quantifiers (C2). We introduce a decision-tree-based model which leverages an extension of C2 to distill interpretable logical classifiers from GNNs. We test our approach on multiple GNN architectures. The distilled models are interpretable, succinct, and attain similar accuracy to the underlying GNN. Furthermore, when the ground truth is expressible in C2, our approach outperforms the GNN.

AB - We present a logic-based interpretable model for learning on graphs and an algorithm to distill this model from a Graph Neural Network (GNN). Recent results have shown connections between the expressivity of GNNs and the two-variable fragment of first-order logic with counting quantifiers (C2). We introduce a decision-tree-based model which leverages an extension of C2 to distill interpretable logical classifiers from GNNs. We test our approach on multiple GNN architectures. The distilled models are interpretable, succinct, and attain similar accuracy to the underlying GNN. Furthermore, when the ground truth is expressible in C2, our approach outperforms the GNN.

U2 - 10.48550/ARXIV.2406.07126

DO - 10.48550/ARXIV.2406.07126

M3 - Preprint

VL - abs/2406.07126

BT - Logical Distillation of Graph Neural Networks.

PB - arXiv

ER -