Publication Type

Conference Proceeding Article

Version

publishedVersion

Publication Date

7-2008

Abstract

This paper presents a self-organizing network model for the fusion of multimedia information. By synchronizing the encoding of information across multiple media channels, the neural model known as fusion Adaptive Resonance Theory (fusion ART) generates clusters that encode the associative mappings across multimedia information in a real-time and continuous manner. In addition, by incorporating a semantic category channel, fusion ART further enables multimedia information to be fused into predefined themes or semantic categories. We illustrate fusion ART's functionalities through experiments on two multimedia data sets in the terrorist domain and show the viability of the proposed approach.
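The multi-channel clustering described above can be sketched in miniature. The following is a minimal, illustrative implementation assuming standard fuzzy ART operations (complement coding, a fuzzy-AND choice function, per-channel vigilance, and template learning) applied across multiple input channels; the class name, parameters, and rule details are assumptions for illustration, not the authors' exact formulation.

```python
import numpy as np

class FusionART:
    """Minimal sketch of a multi-channel fusion ART network.

    Each input pattern is a list of vectors, one per media channel
    (e.g., text features, image features, a semantic category vector).
    """

    def __init__(self, channel_dims, alpha=0.01, beta=1.0, rho=0.7, gamma=None):
        self.dims = channel_dims                  # input size per channel
        self.alpha, self.beta, self.rho = alpha, beta, rho
        # channel contribution weights (assumed uniform by default)
        self.gamma = gamma or [1.0 / len(channel_dims)] * len(channel_dims)
        self.weights = []                          # per-category channel templates

    @staticmethod
    def _cc(x):
        x = np.asarray(x, dtype=float)
        return np.concatenate([x, 1.0 - x])        # complement coding

    def learn(self, inputs):
        xs = [self._cc(x) for x in inputs]
        # choice function: weighted sum of per-channel fuzzy-AND activations
        scores = [sum(g * np.minimum(x, wk).sum() / (self.alpha + wk.sum())
                      for g, x, wk in zip(self.gamma, xs, w))
                  for w in self.weights]
        for j in (np.argsort(scores)[::-1] if scores else []):
            w = self.weights[j]
            # vigilance test: every channel must match its template well enough
            if all(np.minimum(x, wk).sum() / x.sum() >= self.rho
                   for x, wk in zip(xs, w)):
                # resonance: move each channel template toward the input
                self.weights[j] = [(1 - self.beta) * wk
                                   + self.beta * np.minimum(x, wk)
                                   for x, wk in zip(xs, w)]
                return j
        # no existing category matched: commit a new one for this pattern
        self.weights.append(xs)
        return len(self.weights) - 1
```

Presenting the same pattern twice resonates with the same category, while a dissimilar pattern commits a new one, which is the real-time, continuous clustering behavior the abstract refers to.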

Discipline

Databases and Information Systems

Research Areas

Data Science and Engineering

Publication

Proceedings of the 11th International Conference on Information Fusion, Cologne, Germany, June 30 - July 3, 2008

First Page

1738

Last Page

1744

Identifier

10.1109/ICIF.2008.4632421

Publisher

IEEE

City or Country

New York
