DNformer: Temporal Link Prediction with Transfer Learning in Dynamic Networks

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Published

Standard

DNformer: Temporal Link Prediction with Transfer Learning in Dynamic Networks. / Jiang, Xin; Yu, Zhengxin; Hai, Chao et al.
In: ACM Transactions on Knowledge Discovery from Data, Vol. 17, No. 3, 43, 30.06.2023, p. 43:1-43:21.

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Harvard

Jiang, X, Yu, Z, Hai, C, Liu, H, Wu, X & Ward, T 2023, 'DNformer: Temporal Link Prediction with Transfer Learning in Dynamic Networks', ACM Transactions on Knowledge Discovery from Data, vol. 17, no. 3, 43, pp. 43:1-43:21. https://doi.org/10.1145/3551892

APA

Jiang, X., Yu, Z., Hai, C., Liu, H., Wu, X., & Ward, T. (2023). DNformer: Temporal Link Prediction with Transfer Learning in Dynamic Networks. ACM Transactions on Knowledge Discovery from Data, 17(3), 43:1-43:21. Article 43. https://doi.org/10.1145/3551892

Vancouver

Jiang X, Yu Z, Hai C, Liu H, Wu X, Ward T. DNformer: Temporal Link Prediction with Transfer Learning in Dynamic Networks. ACM Transactions on Knowledge Discovery from Data. 2023 Jun 30;17(3):43:1-43:21. 43. Epub 2023 Feb 22. doi: 10.1145/3551892

Author

Jiang, Xin ; Yu, Zhengxin ; Hai, Chao et al. / DNformer : Temporal Link Prediction with Transfer Learning in Dynamic Networks. In: ACM Transactions on Knowledge Discovery from Data. 2023 ; Vol. 17, No. 3. pp. 43:1-43:21.

Bibtex

@article{45545429aa39467fa5722c8c5ccf6e93,
title = "DNformer: Temporal Link Prediction with Transfer Learning in Dynamic Networks",
abstract = "Temporal link prediction (TLP) is among the most important graph learning tasks, capable of predicting dynamic, time-varying links within networks. The key problem of TLP is how to explore potential link-evolving tendency from the increasing number of links over time. There exist three major challenges toward solving this problem: temporal nonlinear sparsity, weak serial correlation, and discontinuous structural dynamics. In this article, we propose a novel transfer learning model, called DNformer, to predict temporal link sequence in dynamic networks. The structural dynamic evolution is sequenced into consecutive links one by one over time to inhibit temporal nonlinear sparsity. The self-attention of the model is used to capture the serial correlation between the input and output link sequences. Moreover, our structural encoding is designed to obtain changing structures from the consecutive links and to learn the mapping between link sequences. This structural encoding consists of two parts: the node clustering encoding of each link and the link similarity encoding between links. These encodings enable the model to perceive the importance and correlation of links. Furthermore, we introduce a measurement of structural similarity in the loss function for the structural differences of link sequences. The experimental results demonstrate that our model outperforms other state-of-the-art TLP methods such as Transformer, TGAT, and EvolveGCN. It achieves the three highest AUC and four highest precision scores in five different representative dynamic networks problems.",
keywords = "General Computer Science",
author = "Xin Jiang and Zhengxin Yu and Chao Hai and Hongbo Liu and Xindong Wu and Tomas Ward",
year = "2023",
month = jun,
day = "30",
doi = "10.1145/3551892",
language = "English",
volume = "17",
pages = "43:1--43:21",
journal = "ACM Transactions on Knowledge Discovery from Data",
issn = "1556-4681",
publisher = "Association for Computing Machinery (ACM)",
number = "3",

}

RIS

TY - JOUR

T1 - DNformer

T2 - Temporal Link Prediction with Transfer Learning in Dynamic Networks

AU - Jiang, Xin

AU - Yu, Zhengxin

AU - Hai, Chao

AU - Liu, Hongbo

AU - Wu, Xindong

AU - Ward, Tomas

PY - 2023/6/30

Y1 - 2023/6/30

N2 - Temporal link prediction (TLP) is among the most important graph learning tasks, capable of predicting dynamic, time-varying links within networks. The key problem of TLP is how to uncover the evolving tendency of links from their growing number over time. Three major challenges stand in the way: temporal nonlinear sparsity, weak serial correlation, and discontinuous structural dynamics. In this article, we propose a novel transfer learning model, called DNformer, to predict temporal link sequences in dynamic networks. The structural dynamic evolution is sequenced into consecutive links, one by one over time, to mitigate temporal nonlinear sparsity. The model's self-attention captures the serial correlation between the input and output link sequences. Moreover, our structural encoding is designed to extract changing structures from the consecutive links and to learn the mapping between link sequences. This encoding consists of two parts: a node clustering encoding for each link and a link similarity encoding between links. Together, these encodings enable the model to perceive the importance and correlation of links. Furthermore, we introduce a structural similarity measure into the loss function to account for the structural differences between link sequences. The experimental results demonstrate that our model outperforms other state-of-the-art TLP methods such as Transformer, TGAT, and EvolveGCN, achieving the three highest AUC scores and the four highest precision scores on five representative dynamic network problems.

AB - Temporal link prediction (TLP) is among the most important graph learning tasks, capable of predicting dynamic, time-varying links within networks. The key problem of TLP is how to uncover the evolving tendency of links from their growing number over time. Three major challenges stand in the way: temporal nonlinear sparsity, weak serial correlation, and discontinuous structural dynamics. In this article, we propose a novel transfer learning model, called DNformer, to predict temporal link sequences in dynamic networks. The structural dynamic evolution is sequenced into consecutive links, one by one over time, to mitigate temporal nonlinear sparsity. The model's self-attention captures the serial correlation between the input and output link sequences. Moreover, our structural encoding is designed to extract changing structures from the consecutive links and to learn the mapping between link sequences. This encoding consists of two parts: a node clustering encoding for each link and a link similarity encoding between links. Together, these encodings enable the model to perceive the importance and correlation of links. Furthermore, we introduce a structural similarity measure into the loss function to account for the structural differences between link sequences. The experimental results demonstrate that our model outperforms other state-of-the-art TLP methods such as Transformer, TGAT, and EvolveGCN, achieving the three highest AUC scores and the four highest precision scores on five representative dynamic network problems.

KW - General Computer Science

U2 - 10.1145/3551892

DO - 10.1145/3551892

M3 - Journal article

VL - 17

SP - 43:1-43:21

JO - ACM Transactions on Knowledge Discovery from Data

JF - ACM Transactions on Knowledge Discovery from Data

SN - 1556-4681

IS - 3

M1 - 43

ER -
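
Illustrative sketch

The abstract describes DNformer at a high level: consecutive links are ordered into a sequence over time, self-attention captures the serial correlation between input and output link sequences, structural encodings (a node clustering encoding per link plus a link similarity encoding between links) enrich the link representations, and the loss adds a structural-similarity term. Since only the abstract is available here, the following Python sketch is purely illustrative and is not the authors' implementation: the class TemporalLinkTransformer, the cluster-embedding stand-in for the node clustering encoding, the cosine-based structural_loss, and all layer sizes and weights are assumptions.

# A minimal, illustrative sketch (not the paper's code) of the ideas in the abstract:
# links as a temporal sequence, a Transformer encoder with self-attention over that
# sequence, a learned structural encoding added to each link, and a loss that mixes
# reconstruction error with a structural-similarity penalty.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TemporalLinkTransformer(nn.Module):
    def __init__(self, num_nodes: int, d_model: int = 64, num_clusters: int = 8):
        super().__init__()
        # Each link (u, v) is embedded from its endpoint nodes.
        self.node_emb = nn.Embedding(num_nodes, d_model)
        # Assumed stand-in for the paper's "node clustering encoding": an embedding
        # of a precomputed cluster id per node.
        self.cluster_emb = nn.Embedding(num_clusters, d_model)
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=4, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=2)
        self.out = nn.Linear(d_model, d_model)

    def forward(self, links: torch.Tensor, clusters: torch.Tensor) -> torch.Tensor:
        # links:    (batch, seq_len, 2) node ids of consecutive links over time
        # clusters: (batch, seq_len, 2) cluster ids of the same nodes
        link_vec = self.node_emb(links).sum(dim=2)         # (batch, seq_len, d_model)
        struct_vec = self.cluster_emb(clusters).sum(dim=2) # structural encoding
        h = self.encoder(link_vec + struct_vec)            # self-attention over the sequence
        return self.out(h)                                 # predicted next-link representations


def structural_loss(pred: torch.Tensor, target: torch.Tensor, alpha: float = 0.5):
    # Reconstruction error plus a hypothetical structural-similarity penalty:
    # 1 - cosine similarity between predicted and target link representations.
    mse = F.mse_loss(pred, target)
    sim = F.cosine_similarity(pred, target, dim=-1).mean()
    return mse + alpha * (1.0 - sim)


# Toy usage: a batch of two 5-link sequences over a 100-node network.
model = TemporalLinkTransformer(num_nodes=100)
links = torch.randint(0, 100, (2, 5, 2))
clusters = torch.randint(0, 8, (2, 5, 2))
pred = model(links, clusters)
loss = structural_loss(pred, model.node_emb(links).sum(dim=2))

The cluster-id embedding and the cosine term are deliberately simple placeholders; the paper's actual node clustering encoding, link similarity encoding, and structural-similarity measure are defined in the article itself (DOI 10.1145/3551892).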