陳 凡 CHEN FAN

Projects


Project Management System


KAKEN CRABIS (Content Repurposing for Adaptive Browsing in Intelligent Surveillance) Project [2014-2017]

(MEXT Grant-in-Aid for Young Scientists (B) No.26730086)

The CRABIS project aims to provide a solution for the automatic generation of informative and personalized field video reports for future intelligent multiview environments, e.g. surveillance. It is supported by the MEXT Grant-in-Aid for Young Scientists (B) No. 26730086 from 2014 to 2017.

This project was launched as an extension of the Kaken PRIME project. In particular, we will focus further on queryable summarization techniques for multiview camera inputs, targeting more practical and intelligent video surveillance systems.

These topics include improved video summarization, multi-camera multi-object tracking, and scene understanding, especially in crowded scenarios.

More about CRABIS project


Prime (Personalized Video Reporting System for Intelligent Multiview Environment) Project [2011-2014]

(MEXT Grant-in-Aid for Young Scientists (B) No.23700110)

The PRIME project aims to provide a solution for the automatic generation of informative and personalized field video reports for future intelligent multiview environments, e.g. surveillance. It is supported by the MEXT Grant-in-Aid for Young Scientists (B) No. 23700110 from 2011 to 2014.


This project was launched as an extension of the research supported by the start-up grant. In particular, we will not only continue our work on intelligent surveillance systems, but also explore further topics in building a comfortable and flexible intelligent environment for delivering personalized video content.


These topics include improved virtual camera network planning, virtual-reality-based representation, and other fundamental techniques, such as multi-camera multi-object tracking.

More about Prime project


Autonomous Generation of Field Reports for Multi-view Surveillance System (2010 Research Grant for Start-up Support) [2010-2012]

This research targets the efficient management and retrieval of media data in future large-scale surveillance systems. In particular, we aim to develop a system that automatically generates online/offline field reports satisfying various user preferences, based on contextual information.

Since the grant was approved around October 2010, this project is still at an early stage. By the end of 2010, we had installed a network of eight cameras covering a single room.

We are planning to capture several common activities involving multiple persons, simulating scenarios such as bag theft, bag snatching, and a person collapsing from a heart attack.

We have obtained approval from the JAIST life-science committee to capture human activities. All experiments on human activities will thus start in April 2011. The latest results will be posted on this page.

Note that this project shares some important components with the multivision robot project, i.e., detecting activities and understanding events. Thanks to this similarity, we can build a common base for both application scenarios. In particular, now that we have a camera network, we will first use it to establish a reliable way of detecting human activities, and then apply this method to the human-robot cohabitation environment of the multivision robot project.


Autonomous Production of Images Based on Distributed and Intelligent Sensing (APIDIS) Project [2008-2010]

The APIDIS project is an FP7 European project coordinated by Professor De Vleeschouwer at the Université catholique de Louvain (UCL), which develops cost-effective solutions for the autonomous and/or personalized production of video summaries for controlled scenarios (sports events or surveillance). I am glad that I had the chance to work as a post-doctoral researcher at UCL on this project from 2008 to 2010.

Democratic and personalized production of multimedia content is one of the most exciting challenges that content providers will face in the near future. APIDIS plans to address this challenge by proposing a framework to automate the collection and distribution of digital content. As a federating objective, APIDIS targets cost-effective autonomous production, so as to make the creation of audiovisual reports profitable even for small- or medium-size audiences.

More about Apidis project

My related works to Apidis

Dynamic Scene Understanding by Multi Robots with Human-Robot Interaction

In April 2010, a new project on multiple-robot vision was launched to study the interactive environment between robots and humans. The lab has acquired ten mobile robots, "PaPeRo"s produced by NEC.

Furthermore, we have equipped the experimental environment with a sensor floor system and two ceiling-mounted surveillance cameras.

As a first trial, we studied recognizing human activity through sensor fusion of the fixed cameras and the sensor floor. Compared to mobile cameras, these sensors are easier to deal with, and we expect them to provide useful context information that eases the processing of the mobile robot system in the initial stage of our project.

More specifically, we intend to recognize human activity from the volumetric structure of the human body shape; a paper on reconstructing this volumetric information from the multi-sensor system has been submitted to PCS 2010.
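As a rough illustration of the general idea (not the specific method in the PCS 2010 submission), volumetric body shape can be recovered by voxel carving: a voxel is kept only if it projects into the foreground silhouette of every camera. The grid size, orthographic projections, and toy scene below are all hypothetical.

```python
import numpy as np

def carve_visual_hull(grid, silhouettes, projections):
    """Voxel carving: a voxel survives only if it projects inside
    the foreground silhouette of every camera."""
    occupied = np.ones(len(grid), dtype=bool)
    for sil, project in zip(silhouettes, projections):
        uv = project(grid)                       # (N, 2) integer pixel coords
        h, w = sil.shape
        inside = (uv[:, 0] >= 0) & (uv[:, 0] < h) & (uv[:, 1] >= 0) & (uv[:, 1] < w)
        hit = np.zeros(len(grid), dtype=bool)
        hit[inside] = sil[uv[inside, 0], uv[inside, 1]]
        occupied &= hit
    return occupied

# Toy scene: a sphere in an 8x8x8 voxel grid, observed by two orthographic
# cameras looking along the z and y axes (a hypothetical setup).
n = 8
xs, ys, zs = np.meshgrid(np.arange(n), np.arange(n), np.arange(n), indexing="ij")
grid = np.stack([xs, ys, zs], axis=-1).reshape(-1, 3)
truth = ((grid - 3.5) ** 2).sum(axis=1) <= 2.5 ** 2   # ground-truth occupancy

vol = truth.reshape(n, n, n)
sil_z = vol.any(axis=2)          # camera along z sees the (x, y) silhouette
sil_y = vol.any(axis=1)          # camera along y sees the (x, z) silhouette

hull = carve_visual_hull(
    grid,
    [sil_z, sil_y],
    [lambda g: g[:, [0, 1]], lambda g: g[:, [0, 2]]],
)
# The carved hull is always a superset of the true shape.
```

With only two views the hull overestimates the body; adding cameras tightens it, which is one reason multi-camera setups help here.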

Results for PCS 2010

 


QImageViewer for Managing and Testing Recognition Algorithms

QImageViewer is a platform developed for easily managing image databases and testing pattern recognition methods. The project includes functionality covering the whole scientific research workflow, ranging from matrix-based image processing, image list management, fiducial point annotation, and recognition to batched data plotting.

Briefly, QImageViewer already includes the following functionality:
A. Data Acquisition:
QImageViewer supports data acquisition from web cameras, TWAIN devices, screen capture, and various image/video files. It also supports organizing images into image lists and performing batched processing on those lists. This is especially useful when organizing data lists for training and testing a recognition algorithm.

B. Image Viewer
1. Fully functional image viewer with thumbnail support, which also offers a matrix view of the image;
2. Management of image databases;

C. Data Operation
1. Matrix-based image processing. QImageViewer presents an image as both a picture and a matrix, which provides extra flexibility for scientific research.
2. Image list generation and batched processing.
3. Extensibility through a plug-in system.
4. Matrix functionality, such as decompositions (SVD, QR, eigendecomposition, and LU), matrix inversion, solving linear equations, linear regression, and MSE estimation.
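QImageViewer's own implementation is not shown here, but the listed matrix operations correspond to standard linear-algebra routines. For reference, a NumPy sketch of the equivalent calls (random data, purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))

# Decompositions: SVD, QR, eigendecomposition.
U, s, Vt = np.linalg.svd(A)
Q, R = np.linalg.qr(A)
w, V = np.linalg.eig(A)

# Matrix inversion and linear-equation solving.
Ainv = np.linalg.inv(A)
b = rng.standard_normal(4)
x = np.linalg.solve(A, b)

# Least-squares linear regression and its MSE.
X = rng.standard_normal((50, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.01 * rng.standard_normal(50)
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
mse = np.mean((X @ coef - y) ** 2)

assert np.allclose(U * s @ Vt, A)      # SVD reconstructs A
assert np.allclose(Q @ R, A)           # QR reconstructs A
assert np.allclose(A @ x, b)           # solve() satisfies the system
```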

D. Image Processing Functionalities, such as feature extraction (corner, edge, region, SIFT, Gabor wavelets) and image correction (histogram, brightness, position, size, etc.)
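To illustrate one of the listed features, a minimal Harris-style corner response can be computed directly from image gradients. This NumPy sketch is a stand-in for the idea, not QImageViewer's actual implementation:

```python
import numpy as np

def harris_response(img, k=0.05):
    """Harris corner response from image gradients (a minimal sketch)."""
    Iy, Ix = np.gradient(img.astype(float))
    # Structure-tensor entries, smoothed with a simple 3x3 box filter.
    def box(a):
        p = np.pad(a, 1, mode="edge")
        return sum(p[i:i + a.shape[0], j:j + a.shape[1]]
                   for i in range(3) for j in range(3)) / 9.0
    Sxx, Syy, Sxy = box(Ix * Ix), box(Iy * Iy), box(Ix * Iy)
    det = Sxx * Syy - Sxy ** 2
    trace = Sxx + Syy
    return det - k * trace ** 2

# Toy image: a bright square on a dark background. The response peaks
# near the square's corners and is negative along its straight edges.
img = np.zeros((20, 20))
img[5:15, 5:15] = 1.0
R = harris_response(img)
```

The structure tensor distinguishes corners (gradient energy in two directions, large determinant) from edges (energy in one direction, determinant near zero), which is the core of the corner features listed above.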

E. Pattern Recognition Functionalities, including PCA, LDA, EMC, ICA, SICA, BP neural networks, SVM, etc.
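For reference, PCA (the first method in the list) reduces to an SVD of the mean-centred data matrix. A minimal NumPy sketch on synthetic data, independent of QImageViewer's own code:

```python
import numpy as np

def pca_fit(X, k):
    """PCA via SVD of the mean-centred data matrix (rows = samples)."""
    mean = X.mean(axis=0)
    U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:k]                  # top-k principal axes

def pca_transform(X, mean, components):
    return (X - mean) @ components.T

rng = np.random.default_rng(1)
# Toy 2-D data strongly stretched along the first axis.
X = rng.standard_normal((200, 2)) @ np.array([[3.0, 0.0], [0.0, 0.3]])
mean, comps = pca_fit(X, 1)
Z = pca_transform(X, mean, comps)
# The first principal axis should be close to (+/-1, 0),
# the stretched direction of the data.
```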

F. Manual/Automatic Annotation for Facial Recognition, and Data-List Organization.

G. Batched Data Plotting, which is especially useful when you want to generate many graphs at the same time.

More about QImageViewer project