
Hidden Schema Networks

Research output: Contribution in Book/Report/Proceedings - With ISBN/ISSN › Conference contribution/Paper › peer-review

Published

Standard

Hidden Schema Networks. / Sánchez, Ramsés J.; Conrads, Lukas; Welke, Pascal et al.
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Stroudsburg, Pa.: Association for Computational Linguistics, 2023. p. 4764-4798.

Harvard

Sánchez, RJ, Conrads, L, Welke, P, Cvejoski, K & Marin, CO 2023, Hidden Schema Networks. in Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, Stroudsburg, Pa., pp. 4764-4798. https://doi.org/10.18653/V1/2023.ACL-LONG.263

APA

Sánchez, R. J., Conrads, L., Welke, P., Cvejoski, K., & Marin, C. O. (2023). Hidden Schema Networks. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) (pp. 4764-4798). Association for Computational Linguistics. https://doi.org/10.18653/V1/2023.ACL-LONG.263

Vancouver

Sánchez RJ, Conrads L, Welke P, Cvejoski K, Marin CO. Hidden Schema Networks. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Stroudsburg, Pa.: Association for Computational Linguistics. 2023. p. 4764-4798 doi: 10.18653/V1/2023.ACL-LONG.263

Author

Sánchez, Ramsés J. ; Conrads, Lukas ; Welke, Pascal et al. / Hidden Schema Networks. Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Stroudsburg, Pa. : Association for Computational Linguistics, 2023. pp. 4764-4798

Bibtex

@inproceedings{15571b171c4a46d9bc00b8d18d650dd9,
title = "Hidden Schema Networks",
abstract = "Large, pretrained language models infer powerful representations that encode rich semantic and syntactic content, albeit implicitly. In this work we introduce a novel neural language model that enforces, via inductive biases, explicit relational structures which allow for compositionality onto the output representations of pretrained language models. Specifically, the model encodes sentences into sequences of symbols (composed representations), which correspond to the nodes visited by biased random walkers on a global latent graph, and infers the posterior distribution of the latter. We first demonstrate that the model is able to uncover ground-truth graphs from artificially generated datasets of random token sequences. Next, we leverage pretrained BERT and GPT-2 language models as encoder and decoder, respectively, to infer networks of symbols (schemata) from natural language datasets. Our experiments show that (i) the inferred symbols can be interpreted as encoding different aspects of language, as e.g. topics or sentiments, and that (ii) GPT-2-like models can effectively be conditioned on symbolic representations. Finally, we explore training autoregressive, random walk “reasoning” models on schema networks inferred from commonsense knowledge databases, and using the sampled paths to enhance the performance of pretrained language models on commonsense If-Then reasoning tasks.",
author = "S{\'a}nchez, {Rams{\'e}s J.} and Lukas Conrads and Pascal Welke and Kostadin Cvejoski and Marin, {C{\'e}sar Ojeda}",
year = "2023",
month = jul,
day = "9",
doi = "10.18653/V1/2023.ACL-LONG.263",
language = "English",
pages = "4764--4798",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
publisher = "Association for Computational Linguistics",

}

RIS

TY  - GEN
T1  - Hidden Schema Networks
AU  - Sánchez, Ramsés J.
AU  - Conrads, Lukas
AU  - Welke, Pascal
AU  - Cvejoski, Kostadin
AU  - Marin, César Ojeda
PY  - 2023/7/9
Y1  - 2023/7/9
N2  - Large, pretrained language models infer powerful representations that encode rich semantic and syntactic content, albeit implicitly. In this work we introduce a novel neural language model that enforces, via inductive biases, explicit relational structures which allow for compositionality onto the output representations of pretrained language models. Specifically, the model encodes sentences into sequences of symbols (composed representations), which correspond to the nodes visited by biased random walkers on a global latent graph, and infers the posterior distribution of the latter. We first demonstrate that the model is able to uncover ground-truth graphs from artificially generated datasets of random token sequences. Next, we leverage pretrained BERT and GPT-2 language models as encoder and decoder, respectively, to infer networks of symbols (schemata) from natural language datasets. Our experiments show that (i) the inferred symbols can be interpreted as encoding different aspects of language, as e.g. topics or sentiments, and that (ii) GPT-2-like models can effectively be conditioned on symbolic representations. Finally, we explore training autoregressive, random walk “reasoning” models on schema networks inferred from commonsense knowledge databases, and using the sampled paths to enhance the performance of pretrained language models on commonsense If-Then reasoning tasks.
AB  - Large, pretrained language models infer powerful representations that encode rich semantic and syntactic content, albeit implicitly. In this work we introduce a novel neural language model that enforces, via inductive biases, explicit relational structures which allow for compositionality onto the output representations of pretrained language models. Specifically, the model encodes sentences into sequences of symbols (composed representations), which correspond to the nodes visited by biased random walkers on a global latent graph, and infers the posterior distribution of the latter. We first demonstrate that the model is able to uncover ground-truth graphs from artificially generated datasets of random token sequences. Next, we leverage pretrained BERT and GPT-2 language models as encoder and decoder, respectively, to infer networks of symbols (schemata) from natural language datasets. Our experiments show that (i) the inferred symbols can be interpreted as encoding different aspects of language, as e.g. topics or sentiments, and that (ii) GPT-2-like models can effectively be conditioned on symbolic representations. Finally, we explore training autoregressive, random walk “reasoning” models on schema networks inferred from commonsense knowledge databases, and using the sampled paths to enhance the performance of pretrained language models on commonsense If-Then reasoning tasks.
U2  - 10.18653/V1/2023.ACL-LONG.263
DO  - 10.18653/V1/2023.ACL-LONG.263
M3  - Conference contribution/Paper
SP  - 4764
EP  - 4798
BT  - Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
PB  - Association for Computational Linguistics
CY  - Stroudsburg, Pa.
ER  -
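
Note

The abstract describes sentences being encoded as sequences of discrete symbols, namely the nodes visited by biased random walkers on a global latent graph. The snippet below is a minimal, illustrative sketch of that sampling step only, under assumed names and shapes; it is not the authors' implementation, and every identifier in it (sample_schema_walk, adjacency, bias_logits) is a hypothetical placeholder.

# Illustrative sketch only: samples a biased random walk over a latent
# "schema" graph, yielding a sequence of discrete symbols per sentence,
# as described in the abstract. Names and shapes are assumptions, not
# taken from the paper's code.
import numpy as np

def sample_schema_walk(adjacency, bias_logits, length, rng=None):
    """Sample one biased random walk of `length` nodes.

    adjacency:   (K, K) binary matrix of the latent graph.
    bias_logits: (K,) per-node logits (e.g. produced by a sentence
                 encoder) that bias the walker toward sentence-relevant
                 symbols.
    """
    rng = np.random.default_rng() if rng is None else rng
    K = adjacency.shape[0]
    probs = np.exp(bias_logits - bias_logits.max())
    node = rng.choice(K, p=probs / probs.sum())        # biased start node
    walk = [node]
    for _ in range(length - 1):
        neighbors = np.flatnonzero(adjacency[node])
        if neighbors.size == 0:                        # dead end: restart anywhere
            neighbors = np.arange(K)
        w = np.exp(bias_logits[neighbors] - bias_logits[neighbors].max())
        node = rng.choice(neighbors, p=w / w.sum())    # biased step to a neighbor
        walk.append(node)
    return walk

# Toy usage: a random 5-node graph and random per-node biases.
rng = np.random.default_rng(0)
A = (rng.random((5, 5)) < 0.5).astype(int)
np.fill_diagonal(A, 0)
print(sample_schema_walk(A, rng.normal(size=5), length=4, rng=rng))

In the paper itself the graph and the biases are latent quantities whose posterior is inferred (with BERT as encoder and GPT-2 as decoder); the sketch only shows how a symbol sequence would be drawn once such a graph and biases are given.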