
Experts Collaboration Learning for Continual Multi-Modal Reasoning

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Published

Standard

Experts Collaboration Learning for Continual Multi-Modal Reasoning. / Xu, Li; Liu, Jun.
In: IEEE Transactions on Image Processing, Vol. 32, 31.12.2023, p. 5087-5098.


Harvard

Xu, L & Liu, J 2023, 'Experts Collaboration Learning for Continual Multi-Modal Reasoning', IEEE Transactions on Image Processing, vol. 32, pp. 5087-5098. https://doi.org/10.1109/TIP.2023.3310336

APA

Xu, L., & Liu, J. (2023). Experts Collaboration Learning for Continual Multi-Modal Reasoning. IEEE Transactions on Image Processing, 32, 5087-5098. https://doi.org/10.1109/TIP.2023.3310336

Vancouver

Xu L, Liu J. Experts Collaboration Learning for Continual Multi-Modal Reasoning. IEEE Transactions on Image Processing. 2023 Dec 31;32:5087-5098. Epub 2023 Sep 5. doi: 10.1109/TIP.2023.3310336

Author

Xu, Li ; Liu, Jun. / Experts Collaboration Learning for Continual Multi-Modal Reasoning. In: IEEE Transactions on Image Processing. 2023 ; Vol. 32. pp. 5087-5098.

Bibtex

@article{94cbd32bc47749b0a73e2e833caf2679,
title = "Experts Collaboration Learning for Continual Multi-Modal Reasoning",
abstract = "Multi-modal reasoning, which aims to capture logical and causal structures in visual content and associate them with cues from other modality inputs (e.g., texts) to perform various types of reasoning, is an important research topic in artificial intelligence (AI). Existing works for multi-modal reasoning mainly exploit offline learning, where the training samples of all types of reasoning tasks are assumed to be available at once. Here we focus on continual learning for multi-modal reasoning (i.e., continual multi-modal reasoning), where the model is required to continuously learn to solve novel types of multi-modal reasoning tasks in a lifelong fashion. Continual multi-modal reasoning is challenging since the model needs to be able to effectively learn various types of new reasoning tasks, meanwhile avoiding forgetting. Here we propose a novel brain-inspired exp erts co llaboration network (Expo), which incorporates multiple learning blocks (experts). When encountering a new task, our network dynamically assembles and updates a set of task-specific experts that are most relevant to learning the current task, by either utilizing learned experts or exploring new experts. This thus enables effective learning of new tasks, and meanwhile consolidates previously learned reasoning skills. Moreover, to automatically find optimal task-specific experts, an effective experts selection strategy is designed. Extensive experiments demonstrate the efficacy of our model for continual multi-modal reasoning.",
author = "Li Xu and Jun Liu",
year = "2023",
month = dec,
day = "31",
doi = "10.1109/TIP.2023.3310336",
language = "English",
volume = "32",
pages = "5087--5098",
journal = "IEEE Transactions on Image Processing",
issn = "1057-7149",
publisher = "Institute of Electrical and Electronics Engineers Inc.",

}

RIS

TY - JOUR

T1 - Experts Collaboration Learning for Continual Multi-Modal Reasoning

AU - Xu, Li

AU - Liu, Jun

PY - 2023/12/31

Y1 - 2023/12/31

N2 - Multi-modal reasoning, which aims to capture logical and causal structures in visual content and associate them with cues from other modality inputs (e.g., texts) to perform various types of reasoning, is an important research topic in artificial intelligence (AI). Existing works for multi-modal reasoning mainly exploit offline learning, where the training samples of all types of reasoning tasks are assumed to be available at once. Here we focus on continual learning for multi-modal reasoning (i.e., continual multi-modal reasoning), where the model is required to continuously learn to solve novel types of multi-modal reasoning tasks in a lifelong fashion. Continual multi-modal reasoning is challenging since the model needs to be able to effectively learn various types of new reasoning tasks, meanwhile avoiding forgetting. Here we propose a novel brain-inspired experts collaboration network (Expo), which incorporates multiple learning blocks (experts). When encountering a new task, our network dynamically assembles and updates a set of task-specific experts that are most relevant to learning the current task, by either utilizing learned experts or exploring new experts. This thus enables effective learning of new tasks, and meanwhile consolidates previously learned reasoning skills. Moreover, to automatically find optimal task-specific experts, an effective experts selection strategy is designed. Extensive experiments demonstrate the efficacy of our model for continual multi-modal reasoning.

AB - Multi-modal reasoning, which aims to capture logical and causal structures in visual content and associate them with cues from other modality inputs (e.g., texts) to perform various types of reasoning, is an important research topic in artificial intelligence (AI). Existing works for multi-modal reasoning mainly exploit offline learning, where the training samples of all types of reasoning tasks are assumed to be available at once. Here we focus on continual learning for multi-modal reasoning (i.e., continual multi-modal reasoning), where the model is required to continuously learn to solve novel types of multi-modal reasoning tasks in a lifelong fashion. Continual multi-modal reasoning is challenging since the model needs to be able to effectively learn various types of new reasoning tasks, meanwhile avoiding forgetting. Here we propose a novel brain-inspired experts collaboration network (Expo), which incorporates multiple learning blocks (experts). When encountering a new task, our network dynamically assembles and updates a set of task-specific experts that are most relevant to learning the current task, by either utilizing learned experts or exploring new experts. This thus enables effective learning of new tasks, and meanwhile consolidates previously learned reasoning skills. Moreover, to automatically find optimal task-specific experts, an effective experts selection strategy is designed. Extensive experiments demonstrate the efficacy of our model for continual multi-modal reasoning.

U2 - 10.1109/TIP.2023.3310336

DO - 10.1109/TIP.2023.3310336

M3 - Journal article

VL - 32

SP - 5087

EP - 5098

JO - IEEE Transactions on Image Processing

JF - IEEE Transactions on Image Processing

SN - 1057-7149

ER -
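
Illustrative sketch

The abstract above describes Expo as a pool of expert blocks that are dynamically assembled per task, either reusing relevant learned experts or exploring new ones. The Python/PyTorch sketch below is a minimal, hypothetical illustration of such a dynamic expert-assembly mechanism; the gating rule, relevance threshold, class names, and all hyperparameters are assumptions made for illustration only and do not reproduce the authors' implementation.

# Minimal, hypothetical sketch of dynamic expert assembly for continual
# multi-modal reasoning, loosely following the idea in the abstract above.
# NOT the authors' implementation: the gating rule, the relevance threshold,
# and every hyperparameter here are assumptions made purely for illustration.
import torch
import torch.nn as nn


class Expert(nn.Module):
    """One learning block ('expert'): a small MLP over fused features."""

    def __init__(self, dim: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


class ExpertPool(nn.Module):
    """Growing pool of experts with per-task assembly.

    For a new task, experts whose (assumed) relevance to a task embedding
    exceeds a threshold are reused; otherwise a fresh expert is added, so the
    pool can both exploit learned skills and explore new ones.
    """

    def __init__(self, dim: int, k: int = 2, threshold: float = 0.5):
        super().__init__()
        self.dim, self.k, self.threshold = dim, k, threshold
        self.experts = nn.ModuleList()
        self.keys = nn.ParameterList()  # one learned key per expert

    def add_expert(self) -> int:
        self.experts.append(Expert(self.dim))
        self.keys.append(nn.Parameter(torch.randn(self.dim)))
        return len(self.experts) - 1

    def select_for_task(self, task_emb: torch.Tensor) -> list:
        """Assemble a task-specific set of expert indices."""
        if len(self.experts) == 0:
            return [self.add_expert()]
        # Relevance = cosine similarity between expert key and task embedding
        # (a simplified stand-in for whatever selection strategy is used).
        sims = torch.stack(
            [torch.cosine_similarity(task_emb, key, dim=0) for key in self.keys]
        )
        top = sims.topk(min(self.k, len(self.experts))).indices.tolist()
        chosen = [i for i in top if sims[i] > self.threshold]
        if len(chosen) < self.k:  # explore: not enough relevant learned experts
            chosen.append(self.add_expert())
        return chosen

    def forward(self, x: torch.Tensor, chosen: list) -> torch.Tensor:
        # Average the assembled experts' outputs (a placeholder collaboration rule).
        return torch.stack([self.experts[i](x) for i in chosen]).mean(dim=0)


# Usage: assemble experts for a task, then route fused visual/text features.
pool = ExpertPool(dim=256)
task_embedding = torch.randn(256)
active = pool.select_for_task(task_embedding)
features = torch.randn(8, 256)   # batch of fused multi-modal features
output = pool(features, active)  # shape (8, 256)

In a design along these lines, reused experts would typically be frozen or regularized when training on a new task so that previously learned reasoning skills are consolidated rather than overwritten; that step is omitted here for brevity.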