陳 凡 CHEN FAN

Pic 1

Profile

     工学博士, 助教

     Ph.D., Assistant Professor


像情報処理分野

情報科学研究科

北陸先端科学技術大学院大学

〒923-1211石川県能美市旭台1-1


Lab of Image Processing

School of Information Science

Japan Advanced Institute of Science and Technology

Asahidai 1-1, Nomi, Ishikawa, 923-1211, Japan


Tel: 0761-51-1232

Fax: 0761-51-1149

email: chen-fan AT jaist.ac.jp

          chen-fan AT ieee.org

Profile HP: JP/EN

Social Net: LinkedIn / Facebook


My Tools


     Apidis Annotation Toolset


A lightweight subset of the full APIDIS toolset. This distribution provides tools for managing and annotating multi-view videos (up to 8 cameras).

Manual & Download


     QImageviewer


A tool developed for batch processing of still images. This distribution also supports image correction, statistical learning tools, and matrix-based signal processing (see the sketch below for the kind of batch workflow it automates).

Online Manual & Download
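
To give a feel for the kind of batch workflow this tool automates, here is a minimal Python sketch. It is not QImageviewer's own code: the Pillow library and the folder names are assumptions for illustration, and the correction applied is just a simple contrast stretch.

    # Minimal sketch (not QImageviewer itself): batch contrast correction of a
    # folder of images, using Pillow. Paths are placeholders.
    from pathlib import Path
    from PIL import Image, ImageOps

    def batch_correct(src_dir: str, dst_dir: str) -> None:
        """Apply an automatic contrast correction to every JPEG in src_dir."""
        out = Path(dst_dir)
        out.mkdir(parents=True, exist_ok=True)
        for path in sorted(Path(src_dir).glob("*.jpg")):
            img = Image.open(path).convert("RGB")
            corrected = ImageOps.autocontrast(img)   # stretch the intensity range
            corrected.save(out / path.name)

    if __name__ == "__main__":
        batch_correct("input_images", "corrected_images")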



略歴 Short Bio.

Dr. Fan Chen has been an assistant professor at the Japan Advanced Institute of Science and Technology (JAIST), Japan's first national postgraduate institute (founded in 1990), since 2010. He received his B.S. from Nanjing University (China) in 2001, his M.S. from Tohoku University (Japan) in 2005, and his Ph.D. from JAIST (Japan) in 2008. He was supported by the Japanese Government (MEXT) Scholarship for foreign students, and twice received awards for outstanding graduating students, from Nanjing University (2001) and from JAIST (2008). He was a software engineer at Rise Corporation working on medical imaging applications (Japan, 2001-2003), a COE researcher on face recognition (JAIST, 2005-2007), and a post-doctoral researcher at TELE, UCL, where he worked on the FP7 APIDIS European project (2008-2010). He was an academic visitor at QMUL, UK, in 2012 (supported by the JSPS Institutional Program for Young Researcher Overseas Visits), and a visiting scholar at the University of Washington in 2014 (supported by a long-term overseas research grant from the Telecommunications Advancement Foundation). His research interests focus on statistical inference and optimization techniques related to computer vision, pattern recognition, and multimedia analysis. He has published one book chapter and over 45 reviewed journal and international conference papers, and holds two patents. He is an editorial board member of the American Journal of Signal Processing and of Advances in Robotics & Automation, and a reviewer for many journals and conferences, including TMM, TCSVT, and Signal Processing: Image Communication.

最新イベント Recent Activities        

  

  Jun.01, 2014~           New Project Launched@Kaken CRABIS    more
  Feb.01, 2014~           Visiting Scholar@Univ. of Washington, EE    more

  Oct.24, 2013            Paper Accepted@SITIS'13
  Oct.09, 2013            Telecommunications Advancement Foundation Grant
                          for Long-term Overseas Research
  Sep.23, 2013            Paper Accepted@IEEE Trans. Multimedia [PDF]
  Sep.11, 2013            Paper Accepted@Fundamenta Informaticae
  Aug.29~Aug.30, 2013     Exhibition@Innovation Japan 2013    more
                          Check our promotion video below!
  Demo video

  Aug.26~Aug.28, 2013     Session Co-chair & Presentation@RO-MAN 2013
  Aug.01~Aug.22, 2013     Papers Accepted!@VCIP, RIVF, KICSS
  Jun.05~Jun.07, 2013     Presentation@Malaysia-Japan Workshop
  May.08, 2013            Paper Accepted!@INTETAIN 2013
  May.03, 2013            Paper Accepted!@RO-MAN 2013
  Apr.22, 2013            Telecommunications Advancement Foundation Grant
                          for Overseas Travel
  Mar.28, 2013            Patent Application Submitted!
  Feb.28, 2013            Paper Accepted!@ICASSP 2013
  Feb.06, 2013            Spring Festival Reception for Academic People
                          @Chinese Embassy in Tokyo

See All

最新論文 Recent Publications        

F. Chen, Z. Liu and M.T. Sun, "Anomaly Detection by Using Random Projection Forest", ICIP 2015, Paper ID 271, (Accepted) Sep. 2015.
F. Chen, C. De Vleeschouwer, and A.Cavallaro, "Resource Allocation for Personalized Video Summarization," IEEE Transactions on Multimedia, 16(2), pp.455-469, 2014. [PDF]
F. Chen, "Hot-spot Detection by Group Interaction Extraction from Trajectories," The 22nd IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN) 2013, Special Session on Perception for Multiple Persons using Multiple Sensors, Paper 66. 2013.
F. Chen, "映像監視システム、映像処理サーバ及び監視映像の閲覧・配信方法 (A Video Surveillance System and A Method for Browsing and Broadcasting Video Content)," 特開2014-192776, Mar.28, 2013.[PDF]
F. Chen and A. Cavallaro, "Detecting Group Interactions by Online Association of Trajectory Data," ICASSP 2013, Paper 2093, 2013.
F. Chen and C. De Vleeschouwer, "Partial Motion Trajectory Grouping Through Rooted Arborescence," ICIP 2012, Paper 2511, 2012. [PDF]
F. Chen, D. Delannay and C. De Vleeschouwer, "An autonomous framework to produce and distribute personalized team-sport video summaries: a basketball case study," IEEE TMM, 13(6), pp.1381-1394, 2011. [PDF]
F. Chen and C. De Vleeschouwer, "Formulating Team-Sport Video Summarization as a Resource Allocation Problem," IEEE TCSVT, 21(2), pp.193-205, 2011. [PDF]

See All

研究プロジェクト Research Projects        


最新プロジェクト Project Highlights


Pic 2

PRIME (Personalized Video Reporting System for Intelligent Multiview Environment) Project [2011-2014] (MEXT Grant-in-Aid for Young Scientists (B) No.23700110)

Automatic Generation of Personalized Field Reports in Intelligent Surveillance Systems

The PRIME project, supported by the MEXT Grant-in-Aid for Young Scientists (B) No.23700110 from 2011 to 2014, aims to provide a solution for automatically generating informative, personalized field video reports in future intelligent multiview environments such as surveillance. As a future extension of intelligent surveillance systems, it exploits contextual information about objects and events to generate scene-explanation videos, online or offline, in response to a user's request. Based on the information obtained from distributed cameras, a chain of analyses is performed, including background subtraction, object extraction, and event recognition; the resulting metadata are then used to compose the events and persons of interest into a single video. In particular, we study automatic camera selection, automatic virtual zooming for detailed views, automatic content composition, and automatic allocation of rendering time.
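
As one concrete piece of the pipeline described above, the minimal sketch below illustrates the virtual-zoom step: given the bounding boxes of the objects of interest in one camera view, it computes a crop window of fixed aspect ratio around them. The box format, margin, and aspect ratio are assumptions made for illustration, not the project's actual implementation.

    # Minimal sketch of automatic virtual zoom: crop a 16:9 window that frames
    # the detected objects of interest. Boxes are assumed to be (x, y, w, h) in pixels.
    def virtual_zoom(boxes, frame_w, frame_h, margin=40, aspect=16 / 9):
        xs = [x for x, y, w, h in boxes] + [x + w for x, y, w, h in boxes]
        ys = [y for x, y, w, h in boxes] + [y + h for x, y, w, h in boxes]
        # Tight box around all objects, padded by a margin and clipped to the frame.
        left, right = max(0, min(xs) - margin), min(frame_w, max(xs) + margin)
        top, bottom = max(0, min(ys) - margin), min(frame_h, max(ys) + margin)
        w, h = right - left, bottom - top
        # Grow the shorter side so the crop matches the target aspect ratio.
        if w / h < aspect:
            w = min(frame_w, h * aspect)
        else:
            h = min(frame_h, w / aspect)
        cx, cy = (left + right) / 2, (top + bottom) / 2
        # Re-center the window and clamp it inside the frame.
        left = min(max(0, cx - w / 2), frame_w - w)
        top = min(max(0, cy - h / 2), frame_h - h)
        return int(left), int(top), int(w), int(h)

    # Example: two tracked players in a 1920x1080 view.
    print(virtual_zoom([(600, 400, 80, 180), (900, 420, 80, 170)], 1920, 1080))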

Latest Demo 最新デモ2012.10.09

Demo video

more


Pic 2

Dynamic Scene Understanding and Relocation of Multiple Mobile Robots with Human-Robot Interaction

人間とロボットの相互作用における複数ロボットの動的再配置及びシーンの自動理解

In April 2010, a new project on multi-robot vision was launched to study interactive environments shared by robots and humans; within it, this research addresses the dynamic relocation of multiple mobile robots and automatic scene understanding in human-robot interaction. The Kotani Laboratory has acquired ten mobile robots ("PaPeRo" robots produced by NEC), and the experimental environment is also equipped with a sensor floor system and two ceiling-mounted surveillance cameras.

専門分野 Research Interests

Pic 6

My research interests focus on pattern recognition, computer vision, and multimedia analysis. In particular, I have worked on topics such as face and facial expression recognition, video analysis, and personalized video production.


*カメラワークの自動計画問題

Automatic Planning of Camerawork for Video Production and Intelligent Surveillance

We proposed a criterion and a selection algorithm for evaluating and planning camerawork, built around the three key factors we identified: completeness, fineness, and occlusion.
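
A minimal sketch of how such a criterion can drive camera selection is given below. The linear weighting and the example scores are illustrative assumptions, not the published formulation.

    # Rank candidate cameras by a weighted combination of the three factors
    # named above. The linear form and the weights are illustrative assumptions.
    def camera_score(completeness, fineness, occlusion,
                     w_complete=0.5, w_fine=0.3, w_occl=0.2):
        """All inputs are assumed normalized to [0, 1]; more occlusion is worse."""
        return w_complete * completeness + w_fine * fineness - w_occl * occlusion

    def select_camera(candidates):
        """candidates: {camera_id: (completeness, fineness, occlusion)}."""
        return max(candidates, key=lambda cam: camera_score(*candidates[cam]))

    views = {1: (0.9, 0.4, 0.1), 2: (0.7, 0.9, 0.2), 3: (0.95, 0.6, 0.6)}
    print(select_camera(views))   # -> 2 in this toy example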

more


*リソース割り当て問題として扱う概要の自動生成

Automatic Summarization by a Resource Allocation Framework

This method provides highly personalized summaries with well-organized storytelling by casting summarization as a resource allocation problem.
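
The toy sketch below conveys the resource-allocation view: choose story segments that maximize the total benefit (user interest) under a duration budget. The plain 0/1 knapsack and all the durations and benefit values are illustrative assumptions; the formulation in the papers is considerably richer.

    # Toy sketch of summarization as resource allocation: pick segments that
    # maximize total benefit under a duration budget (a 0/1 knapsack).
    def summarize(segments, budget):
        """segments: list of (name, duration_sec, benefit); budget in seconds."""
        best = {0: (0.0, [])}                 # used duration -> (benefit, chosen names)
        for name, dur, ben in segments:
            for used, (value, chosen) in sorted(best.items(), reverse=True):
                t = used + dur
                if t <= budget and value + ben > best.get(t, (-1.0,))[0]:
                    best[t] = (value + ben, chosen + [name])
        return max(best.values())             # the highest-benefit summary

    clips = [("goal", 20, 9.0), ("foul", 10, 3.5), ("timeout", 15, 1.0),
             ("dunk", 12, 7.0), ("interview", 30, 4.0)]
    print(summarize(clips, budget=45))        # -> (19.5, ['goal', 'foul', 'dunk'])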

Sum. by Resource Allocation

Try an online system!

more

拡張:早送り可能な概要生成手法

Extension: Summarization with adaptive fast-forwarding

more


*教師ありICAによる表情認識

Facial Expression Recognition by Supervised ICA

This method resolves the permutation ambiguity in ICA, further improving the discriminative power of unsupervised ICA for facial expression recognition.
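
For context, the sketch below shows where an ICA feature-extraction stage sits in an expression classifier. It uses plain unsupervised FastICA from scikit-learn on synthetic data as a stand-in; the supervised ICA proposed in this work, which resolves the permutation ambiguity of the extracted bases, is not available in that library.

    # Baseline sketch only: unsupervised FastICA features + a linear classifier,
    # on synthetic stand-ins for face images. The supervised ICA of this work
    # would replace the FastICA step and fix the ordering of the bases.
    import numpy as np
    from sklearn.decomposition import FastICA
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n_samples, n_pixels, n_classes = 300, 32 * 32, 6   # toy stand-in for face data
    X = rng.normal(size=(n_samples, n_pixels))         # "images", one per row
    y = rng.integers(0, n_classes, size=n_samples)     # expression labels (random here)

    ica = FastICA(n_components=40, random_state=0)
    S = ica.fit_transform(X)                           # ICA coefficients per image

    clf = LogisticRegression(max_iter=1000).fit(S[:200], y[:200])
    print("toy accuracy:", clf.score(S[200:], y[200:]))  # near chance on random labels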

sICA vs. ICA in Extracted Bases

more


*画像分割や三次元復元におけるMRFモデル

MRF Model in Image Segmentation and 3D Shape Reconstruction

We have applied MRF models to both image segmentation and 3D shape reconstruction.
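
As a minimal illustration of the modeling idea (not of our actual methods), the sketch below defines a pairwise MRF energy with a Potts smoothness term for binary segmentation of a toy image and reduces it with a few ICM sweeps; the weights and the toy data are assumptions.

    # Pairwise MRF for binary segmentation: per-pixel data costs plus a Potts
    # term over 4-neighbours, minimized approximately by ICM sweeps.
    import numpy as np

    def potts_energy(labels, data_cost, lam):
        smooth = (np.sum(labels[1:, :] != labels[:-1, :])
                  + np.sum(labels[:, 1:] != labels[:, :-1]))
        rows = np.arange(labels.shape[0])[:, None]
        cols = np.arange(labels.shape[1])
        return data_cost[rows, cols, labels].sum() + lam * smooth

    def icm(data_cost, lam, sweeps=5):
        labels = data_cost.argmin(axis=2)              # start from the data term alone
        H, W, _ = data_cost.shape
        for _ in range(sweeps):
            for i in range(H):
                for j in range(W):
                    nb = [(i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)]
                    def local(k):                      # energy of label k at pixel (i, j)
                        disagree = sum(1 for a, b in nb
                                       if 0 <= a < H and 0 <= b < W and labels[a, b] != k)
                        return data_cost[i, j, k] + lam * disagree
                    labels[i, j] = min((0, 1), key=local)
        return labels

    # Toy example: noisy 8x8 image with a bright square; label 1 should cover it.
    rng = np.random.default_rng(1)
    img = rng.normal(0.2, 0.15, size=(8, 8)); img[2:6, 2:6] += 0.6
    cost = np.stack([(img - 0.2) ** 2, (img - 0.8) ** 2], axis=2)  # costs of labels 0, 1
    seg = icm(cost, lam=0.05)
    print(seg); print("energy:", potts_energy(seg, cost, lam=0.05))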

MRF Model and Its Application

more


デモエリア Demo Area

Pic 7

This area collects the latest demo videos and experimental results of our research, together with related resources such as the tools and libraries we have developed.


*映像内容の自動生成 (APIDIS)

Autonomous Video Production (APIDIS)

Demo video

In the demo above, we show the output of our personalized video production system. From a basketball game captured by multiple cameras, the system automatically organizes a short summary, selecting the optimal temporal segments and viewpoints (camera selection and camerawork planning) and cropping the views to deliver the most appropriate content for consumers with different device capabilities and preferences.


Results for TMM [Accepted]

デモビデオへ View More Results

Download Demo-Program


*映像内容の自動要約 (APIDIS)

Autonomous Video Summarization (APIDIS)

Demo Summarization

In the demo above, we show our personalized video summarization system, which aims at the efficient reuse of existing edited broadcast footage. The system builds a short summary by automatically selecting the optimal temporal segments of the broadcast content according to each consumer's device capabilities and preferences.

Results for ICME2011 [Accepted]

Results for TCSVT [Accepted]

Download Demo-Program


*顔表情認識 (QImageViewer)

Facial Expression Recognition (QImageViewer)

Demo video

In the demo above, we show the output of our facial expression recognition system, which includes face detection, facial fiducial point localization, and facial expression recognition. Since facial expressions are an essential part of daily communication, we intend to introduce them into human-robot interaction systems to build a more comfortable environment for human-robot cohabitation.


デモビデオへ View More Results