School of Information Technology and Electrical Engineering

Understanding events in video surveillance via summarization and description

Speaker: 
Patrick (Teng Zhang)
Date: 
Tue, 19/11/2013 - 14:00
Venue: 
78-224
Host: 
Prof Brian Lovell and Dr Arnold Wiliem
Abstract: 

Video surveillance plays an increasingly important role in our society, but its effectiveness is limited by human factors such as fatigue, time efficiency and the cost of human resources. Several works have therefore been proposed to automate video surveillance. These include automatic anomaly detection, event detection, video summarization, and application-specific techniques such as illegal U-turn detection and crowd counting. Among these, applying video summarization to surveillance video helps address the aforementioned issues by keeping only the important information, such as events and anomalies, in the summary. While several papers have explored this direction, some shortcomings still need to be addressed:

1. Most methods fail to exploit the connections between subshots (or key frames): the resulting video summary is merely a series of subshots without any semantic connection between them.

2. Most existing methods do not consider the user’s intention, so they cannot answer questions such as “What happened in that specific region during a specific time window?” or “When did that green car disappear?”. Users currently still have to go through the entire summary, when all they need is answers to their specific queries.

3. Existing methods have not explored video description techniques for surveillance video. Formalizing video semantics would help users obtain useful, refined information relevant to their demands and requirements.

This research aims to address these three problems by exploring the gap between video summarization and video description for surveillance videos.

In this research, we will adapt and extend concepts from video summarization to video surveillance. We will also explore description for surveillance video via a user query parser and semantic labeling, using semantic attribute features as the bridge between the user’s keywords and low-level features; a minimal sketch of this idea follows below. In addition, we will validate our method on both human and animal surveillance video, focusing in particular on Australian wildlife surveillance videos thanks to a new dataset from the Queensland Parks and Wildlife Service.
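The abstract leaves the query parser and the attribute bridge at the conceptual level, so the following minimal Python sketch is only an illustration of the idea, not the project’s actual system: the attribute vocabulary and the functions `parse_query` and `rank_segments` are hypothetical, and the per-segment scores stand in for the output of trained attribute classifiers.

```python
import numpy as np

# Hypothetical attribute vocabulary: in the proposed pipeline, trained
# attribute classifiers would produce scores for these from low-level
# features; the names here are illustrative only.
ATTRIBUTES = ["person", "car", "green", "running", "entering", "leaving"]

def parse_query(query: str) -> list:
    """Naive keyword parser: keep query words that match a known
    attribute and return their indices in the vocabulary."""
    words = query.lower().strip("?.! ").split()
    return [ATTRIBUTES.index(w) for w in words if w in ATTRIBUTES]

def rank_segments(query: str, segment_scores: dict) -> list:
    """Rank video segments by the mean confidence of the queried
    attributes, so the user's keywords and the low-level features
    meet in a shared attribute space."""
    idx = parse_query(query)
    if not idx:  # no recognised attribute: leave the order unchanged
        return list(segment_scores.items())
    return sorted(segment_scores.items(),
                  key=lambda kv: kv[1][idx].mean(),
                  reverse=True)

# Toy usage: per-segment attribute scores that a classifier bank would
# normally compute from appearance and motion features.
scores = {
    "seg_001": np.array([0.1, 0.9, 0.8, 0.0, 0.1, 0.7]),  # green car leaving
    "seg_002": np.array([0.9, 0.1, 0.0, 0.8, 0.6, 0.1]),  # person running in
}
print(rank_segments("when did the green car leave", scores))
```

In a real system the scores would come from classifiers applied to appearance and motion features, and the parser would also have to handle the temporal and spatial constraints in queries (“during a specific time window”, “in that region”) rather than bare keywords.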

To date, the output of this project is a method that filters video to detect abnormal events. Experimental results on abnormal event localization show a notable improvement over state-of-the-art methods on the UCSD surveillance dataset.
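The abstract does not describe the detection method itself, so the following is only a generic illustration of abnormal event localization on a fixed camera, in the spirit of experiments on datasets like UCSD: fit per-cell statistics of a simple motion cue on anomaly-free training frames, then flag test cells whose motion deviates sharply. All names (`cell_features`, `fit_normal_model`, `localize_anomalies`) and the z-score rule are assumptions for the sketch, not the project’s approach.

```python
import numpy as np

def cell_features(frames, grid=(8, 8)):
    """Mean absolute temporal gradient per spatial cell: a crude
    stand-in for the motion features real methods extract
    (e.g. optical flow)."""
    diffs = np.abs(np.diff(frames.astype(np.float32), axis=0))  # (T-1, H, W)
    h, w = diffs.shape[1] // grid[0], diffs.shape[2] // grid[1]
    cells = diffs[:, :grid[0]*h, :grid[1]*w].reshape(-1, grid[0], h, grid[1], w)
    return cells.mean(axis=(2, 4))  # (T-1, grid_rows, grid_cols)

def fit_normal_model(train_frames):
    """Per-cell mean/std of the motion cue on anomaly-free video."""
    f = cell_features(train_frames)
    return f.mean(axis=0), f.std(axis=0) + 1e-6

def localize_anomalies(test_frames, mu, sigma, z_thresh=3.0):
    """Flag cells whose motion z-score exceeds the threshold, giving a
    coarse spatio-temporal localization of abnormal events."""
    z = (cell_features(test_frames) - mu) / sigma
    return z > z_thresh  # boolean mask: (time, grid_rows, grid_cols)

# Toy usage with synthetic frames (200 training, 50 test, 64x64 pixels).
rng = np.random.default_rng(0)
train = rng.normal(size=(200, 64, 64))
test = rng.normal(size=(50, 64, 64))
# Inject an unusually "fast" region into frames 20-29 of the test clip.
test[20:30, 0:16, 0:16] += rng.normal(scale=8.0, size=(10, 16, 16))
mu, sigma = fit_normal_model(train)
mask = localize_anomalies(test, mu, sigma)
print("anomalous (t, row, col) cells:", np.argwhere(mask)[:5])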

Seminar Type: 
PhD Confirmation Seminar
