Publication Type

Conference Proceeding Article

Version

publishedVersion

Publication Date

9-2007

Abstract

This paper explores a variety of visual and audio analysis techniques for selecting the most representative video clips for rushes summarization at TRECVID 2007. These techniques include object detection, camera motion estimation, keypoint matching and tracking, audio classification, and speech recognition. Our system consists of two major steps. First, based on video structuring, we filter undesirable shots and minimize inter-shot redundancy through repetitive-shot detection. Second, a representability measure is proposed to model the presence of objects and of four audio-visual events in a video clip: motion activity of objects, camera motion, scene changes, and speech content. The video clips with the highest representability scores are selected for summarization. The TRECVID evaluation shows that our results are highly encouraging: we rank first in EA (easy to understand), second in RE (little redundancy), and third in IN (inclusion of objects and events).
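The abstract describes ranking clips by a representability score that combines object presence with four audio-visual events. The sketch below is a minimal illustration, assuming a weighted linear combination of normalized per-clip scores; the feature names, weights, and the select_clips helper are illustrative assumptions, not the paper's actual formulation.

from dataclasses import dataclass

@dataclass
class ClipFeatures:
    # Hypothetical normalized per-clip scores in [0, 1]; the paper's exact features differ.
    object_presence: float   # confidence that salient objects appear in the clip
    object_motion: float     # motion activity of detected objects
    camera_motion: float     # estimated camera motion (e.g., pan/tilt/zoom)
    scene_change: float      # scene-change evidence within the clip
    speech_content: float    # amount of recognized, informative speech

def representability(f: ClipFeatures,
                     weights=(0.3, 0.2, 0.15, 0.15, 0.2)) -> float:
    """Assumed weighted combination of object presence and the four audio-visual events."""
    w_obj, w_mot, w_cam, w_scene, w_speech = weights
    return (w_obj * f.object_presence
            + w_mot * f.object_motion
            + w_cam * f.camera_motion
            + w_scene * f.scene_change
            + w_speech * f.speech_content)

def select_clips(clips: dict[str, ClipFeatures], k: int) -> list[str]:
    """Return the ids of the k clips with the highest representability scores."""
    ranked = sorted(clips, key=lambda cid: representability(clips[cid]), reverse=True)
    return ranked[:k]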

Keywords

event understanding, object detection, rushes video summarization

Discipline

Artificial Intelligence and Robotics | Graphics and Human Computer Interfaces

Research Areas

Intelligent Systems and Optimization

Publication

Proceedings of the International Workshop on TRECVID Video Summarization (TVS '07), Augsburg, Bavaria, Germany, September 28, 2007

First Page

25

Last Page

29

ISBN

978-1-59593-780-3

Identifier

10.1145/1290031.1290035

Publisher

ACM

City or Country

Augsburg, Germany
