Publication Type

Conference Proceeding Article

Version

publishedVersion

Publication Date

1-2017

Abstract

Our multimedia event detection system performed strongly at TREC-VID 2015, showing particular strength in handling complex concepts in a query. The system was built on a large number of pre-trained concept detectors that bridge the textual-to-visual relation. In this paper, we enhance the system by bringing a human into the loop. To help a user quickly satisfy an information need, we incorporate concept screening, video reranking by highlighted concepts, relevance feedback, and color sketch to refine a coarse retrieval result. The aim is to arrive at a system suitable for both Ad-hoc Video Search and Known-Item Search. In addition, recognizing the difficulty of distinguishing shots of very similar scenes, we also explore automatic story annotation along the timeline of a video, so that a user can quickly grasp the story surrounding a target shot and reject shots with an incorrect context. With the story annotation, a user can also refine the search result simply by adding a few keywords to a special “context field” of the query.
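The abstract mentions reranking by user-highlighted concepts together with relevance feedback. As a rough illustration only, the sketch below shows one plausible way such a refinement loop could be wired up over precomputed concept-detector scores; the function names, Rocchio-style weight update, and data layout are assumptions for illustration, not the system described in the paper.

```python
# Hypothetical sketch: concept-based reranking with user-highlighted concepts
# and simple relevance feedback. All names and weights are illustrative
# assumptions, not the authors' actual implementation.

import numpy as np


def rerank(video_scores, query_weights, highlighted=None, boost=2.0,
           positives=None, negatives=None, alpha=1.0, beta=0.75, gamma=0.25):
    """Rerank videos by the dot product of per-video concept scores and
    query concept weights, with optional highlighting and feedback.

    video_scores : dict mapping video id -> np.ndarray of concept scores
    query_weights: np.ndarray of per-concept weights from the text query
    highlighted  : indices of concepts the user marked as important
    positives    : video ids the user judged relevant
    negatives    : video ids the user judged irrelevant
    """
    w = query_weights.astype(float).copy()

    # Boost the concepts the user highlighted during concept screening.
    if highlighted:
        w[list(highlighted)] *= boost

    # Rocchio-style update: pull the weight vector toward videos judged
    # relevant and push it away from those judged irrelevant.
    if positives:
        w = alpha * w + beta * np.mean([video_scores[v] for v in positives], axis=0)
    if negatives:
        w = w - gamma * np.mean([video_scores[v] for v in negatives], axis=0)

    # Score every candidate and sort in descending order of similarity.
    return sorted(video_scores, key=lambda v: float(video_scores[v] @ w), reverse=True)


if __name__ == "__main__":
    # Toy example with 4 concepts and 3 candidate videos.
    scores = {
        "vid_a": np.array([0.9, 0.1, 0.0, 0.3]),
        "vid_b": np.array([0.2, 0.8, 0.4, 0.1]),
        "vid_c": np.array([0.5, 0.5, 0.9, 0.0]),
    }
    query = np.array([1.0, 0.5, 0.0, 0.2])
    print(rerank(scores, query, highlighted=[2], positives=["vid_c"]))
```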

Keywords

Concept bank, Known-item search, Semantic query, Story annotation, Video reranking, Video search

Discipline

Databases and Information Systems | Numerical Analysis and Scientific Computing

Research Areas

Intelligent Systems and Optimization

Publication

Multimedia Modeling: 23rd International Conference, MMM 2017, Reykjavik, Iceland, January 4-6, 2017: Proceedings

Volume

10133

First Page

463

Last Page

468

ISBN

9783319518138

Identifier

10.1007/978-3-319-51814-5_42

Publisher

Springer

City or Country

Cham

Additional URL

https://doi.org/10.1007/978-3-319-51814-5_42
