Publication Type
Journal Article
Version
publishedVersion
Publication Date
2-2007
Abstract
This paper describes a novel framework for automatic lecture video editing by gesture, posture, and video text recognition. In content analysis, the trajectory of hand movement is tracked and intentional gestures are automatically extracted for recognition. In addition, head pose is estimated while overcoming the difficulties posed by the complex lighting conditions in classrooms. The aim of recognition is to characterize the flow of lecturing as a series of regional focuses indicated by human postures and gestures. The regions of interest (ROIs) in videos are semantically structured with text recognition and the aid of external documents. To trace the flow of lecturing, a finite state machine (FSM) that incorporates the gestures, postures, ROIs, and general editing rules and constraints is proposed to edit videos with novel views. The FSM is designed to generate appropriate simulated camera motion and cutting effects that suit the pace of a presenter's gestures and postures. To remedy the undesirable visual effects caused by poor lighting conditions, we also propose approaches to automatically enhance the visibility and readability of slides and whiteboard images in the edited videos.
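To illustrate the kind of state-machine editing the abstract describes, the sketch below shows a minimal FSM that maps recognized presenter events (gesture, posture, and the current ROI) to simulated camera actions such as cuts and close-ups. The state names, event labels, and transition rules here are illustrative assumptions for clarity only, not the paper's actual FSM, which also incorporates general editing rules and timing constraints.

```python
# Hypothetical sketch (not the authors' implementation): a finite state
# machine that maps recognized presenter events to virtual camera actions.

from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional, Tuple


class State(Enum):
    WIDE_SHOT = auto()       # full view of lecturer and board/slides
    ROI_CLOSEUP = auto()     # simulated close-up on the region being discussed
    LECTURER_SHOT = auto()   # medium shot framing the lecturer


@dataclass
class Event:
    gesture: str             # e.g. "point", "circle", "none" (assumed label set)
    posture: str             # e.g. "facing_board", "facing_audience"
    roi: Optional[str]       # identifier of the region of interest, if any


def transition(state: State, event: Event) -> Tuple[State, str]:
    """Return the next state and an editing action for one recognized event.

    The rules below are simplified editing heuristics chosen for illustration.
    """
    if event.gesture in ("point", "circle") and event.roi is not None:
        # A deictic gesture toward a slide/whiteboard region:
        # cut (or pan, if already close) to that ROI.
        action = "cut" if state != State.ROI_CLOSEUP else "pan"
        return State.ROI_CLOSEUP, f"{action} to close-up of ROI {event.roi}"

    if event.posture == "facing_audience":
        # Lecturer addresses the class: frame the lecturer.
        return State.LECTURER_SHOT, "cut to medium shot of lecturer"

    # Default: hold or return to a wide shot covering lecturer and board.
    return State.WIDE_SHOT, "hold or return to wide shot"


if __name__ == "__main__":
    state = State.WIDE_SHOT
    events = [
        Event("none", "facing_board", None),
        Event("point", "facing_board", "slide_region_2"),
        Event("none", "facing_audience", None),
    ]
    for ev in events:
        state, action = transition(state, ev)
        print(f"{ev.gesture}/{ev.posture} -> {state.name}: {action}")
```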
Keywords
gesture, lecture video editing, posture and video text recognition
Discipline
Computer Sciences | Graphics and Human Computer Interfaces
Research Areas
Intelligent Systems and Optimization
Publication
IEEE Transactions on Multimedia
Volume
9
Issue
2
First Page
397
Last Page
409
ISSN
1520-9210
Identifier
10.1109/TMM.2006.886292
Publisher
Institute of Electrical and Electronics Engineers
Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-No Derivative Works 4.0 International License.