Haoran XIE, Ph.D.
- 2020.05 Six papers accepted at NICOGRAPH International and Augmented Humans (AHs) 2020.
- 2020.04 Great thanks to JSPS KAKENHI for Young Scientists Grant.
- 2020.02 Great thanks to HAYAO NAKAYAMA Foundation for grant research A.
- 2020.01 We have 13 papers accepted at INTERACTION 2020, CGVI 177, HCI 187, AHs 2020 Demo, etc.
- 2019.12 We presented two papers at HCG Symposium 2019 and received the Student Best Presentation Award.
- 2019.07 Our work was selected for the CAVW cover page and nominated for the Best Paper Award at CASA 2019.
- 2019.06 Great thanks to HAYAO NAKAYAMA Foundation for international exchanges grant.
- 2019.05 Our paper Sketch2VF is online in Journal of CAVW (Special issue of CASA2019).
- 2019.04 Great thanks to FUNAI Foundation, received FUNAI Research Award for Young Scientists.
- 2019.03 Our work has been reported by The Daily Industrial News, Tokyo TV, NICONICO News, etc.
- 2019.03 Our work has been presented in ITE-ME2019, INTERACTION2019 and Augmented Human 2019.
H. Xie has been an Assistant Professor at the Japan Advanced Institute of Science and Technology (JAIST) since April 2018. Before joining JAIST, he was a project Assistant Professor and postdoctoral researcher in the User Interface Research Group, working with Prof. Takeo Igarashi at the University of Tokyo from April 2015. He received his Ph.D. and M.S. in Computer Graphics from JAIST (2015) under the supervision of Prof. Kazunori Miyata, and his B.S. in Applied Mathematics from Anhui University (2006). He was a visiting student at the University of Sydney (2011), the University of California, Davis (2012), and Kent State University (2013-2014). His awards include the FUNAI Research Award for Young Scientists (2019), a Research Fellowship of the Japan Society for the Promotion of Science (2014-2016), a Best Paper Nominee award (CASA 2019), the Best Paper Award (NICOGRAPH International 2013), and an IPSJ SIG Recommended Ph.D. Thesis (2015).
He has received research funds from JSPS, JAIST Research Grants, and the NAKAYAMA, FUNAI, OGASAWARA, EPSON, and TATEISHI Foundations.
- xDesign: Human-in-the-Loop User Interfaces for computational design, deep learning, simulation, etc.
- xSpace: Human-Centric User Interfaces for spatial computation with VR, AR, Spatial AR, etc.
- xHuman: Human-Augmented User Interfaces for improving human intellectual and physical abilities
For more details, please refer to the Publications page.
xClothes: Shape-Changing Clothing with Retractable Structures
JAIST Human-Augmentation Group, Haoran Xie and Takuma Torii
HCG Symposium 2019. INTERACTION 2020. Augmented Humans (AHs) 2020.
EgoSpace: Augmenting Egocentric Space by Wearable Projector
JAIST Human-Augmentation Group, Takuma Torii and Haoran Xie
HCG Symposium 2019. INTERACTION 2020. Augmented Humans (AHs) 2020.
(Student Best Presentation Award)
In this work, we propose a novel wearable device that extends the user's egocentric space over a wide range. To achieve this goal, the proposed device provides bidirectional projection using a head-mounted wearable projector and two dihedral mirrors. The included angle of the mirrors was set to reflect the projected image both in front of and behind the user. A prototype system was developed to explore possible applications of the proposed device in different scenarios, such as riding a bike and map navigation.
Sketch2Domino: Interactive Chain Reaction Design and Guidance
JAIST Projection-Mapping Group, Haoran Xie and Kazunori Miyata
INTERACTION 2020. NICOGRAPH International 2020.
GhostCube: Learning Rubik's Cube through User Operation History
JAIST Projection-Mapping Group, Haoran Xie and Kazunori Miyata
INTERACTION 2020. NICOGRAPH International 2020.
CalliShadow: Interactive User Guidance for Calligraphic Practice
Zhizhou He, Haoran Xie and Kazunori Miyata
INTERACTION 2020. NICOGRAPH International 2020.
Sketch2VF: Sketch-Based Flow Design with Conditional Generative Adversarial Network
Zhongyuan Hu, Haoran Xie, Tsukasa Fukusato, Takahiro Sato and Takeo Igarashi
Computer Animation and Virtual Worlds (Special issue of CASA 2019).
(Journal Cover Page) (Best Paper Nominee Award) (Top Downloaded Paper)
We present an interactive user interface to support sketch-based fluid design with a perceptual understanding of human sketches. In particular, the proposed system generates a 2D fluid animation from hand-drawn sketches. The proposed system utilizes a conditional generative adversarial network model to generate stationary velocity fields from a sketch input. The network model is trained with hand-drawn strokes and corresponding 2D velocity fields. On the basis of the generated velocity field, the system calculates the fluid dynamics using a semi-Lagrangian method. We ran a user study of the proposed system and confirmed that the proposed interface is effective for 2D fluid design and that the system achieves good results based on user input.
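The simulation step described above (semi-Lagrangian advection over the generated stationary velocity field) can be sketched as follows. The grid layout and function names are illustrative assumptions, not the paper's implementation; the velocity field here is just a plain 2D array standing in for the network's output.

```python
def bilerp(grid, x, y):
    """Bilinearly interpolate a 2D scalar grid at (x, y), clamped to bounds."""
    nx, ny = len(grid), len(grid[0])
    x = min(max(x, 0.0), nx - 1.001)
    y = min(max(y, 0.0), ny - 1.001)
    i, j = int(x), int(y)
    fx, fy = x - i, y - j
    return ((1 - fx) * (1 - fy) * grid[i][j] + fx * (1 - fy) * grid[i + 1][j]
            + (1 - fx) * fy * grid[i][j + 1] + fx * fy * grid[i + 1][j + 1])

def advect(density, u, v, dt):
    """One semi-Lagrangian step: trace each cell backward along the
    stationary velocity field (u, v) and sample the previous density
    there. Unconditionally stable regardless of dt."""
    nx, ny = len(density), len(density[0])
    out = [[0.0] * ny for _ in range(nx)]
    for i in range(nx):
        for j in range(ny):
            # backtrace the cell centre through the velocity field
            x = i - dt * u[i][j]
            y = j - dt * v[i][j]
            out[i][j] = bilerp(density, x, y)
    return out
```

With a uniform rightward field, a density blob simply shifts one cell per unit time, which is an easy sanity check of the backtrace direction.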
Visual Feedback for Core Training with 3D Human Shape and Pose
Haoran Xie, Atsushi Watatani and Kazunori Miyata
NICOGRAPH International 2019. CVIM2019.
We propose a visual feedback system for core training using a monocular camera image. To support the user in maintaining correct postures that match target poses, we adopt 3D human shape estimation for both the target image and the input camera video. Because it is expensive to capture human pose using depth cameras or multiple cameras with conventional approaches, we employ the skinned multi-person linear (SMPL) model of human shape to recover the 3D human pose from 2D images using pose estimation and human mesh recovery methods. We propose a user interface that provides visual guidance based on the estimated target and current human shapes. To clarify the differences between the target and current postures of the 3D models, we adopt markers with color changes at ten body parts for visualization. User studies confirmed that the proposed visual feedback system is effective and convenient for core training.
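The marker visualization described above can be illustrated with a minimal sketch: given target and current 3D positions for the ten tracked body parts, color each marker by its positional error. The `marker_colors` helper and its distance thresholds are hypothetical illustrations, not taken from the paper.

```python
import math

def marker_colors(target_joints, current_joints, near=0.05, far=0.15):
    """Colour one marker per tracked body part by its Euclidean distance
    to the target pose: green when aligned, yellow in between, red when
    far off. Positions are (x, y, z) tuples; thresholds are in metres."""
    colors = []
    for t, c in zip(target_joints, current_joints):
        d = math.dist(t, c)  # straight-line distance between the joints
        if d < near:
            colors.append("green")
        elif d < far:
            colors.append("yellow")
        else:
            colors.append("red")
    return colors
```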
Perceptual Font Manifold from Generative Model
Yuki Fujita, Haoran Xie and Kazunori Miyata
NICOGRAPH International 2019. CGVI2019.
We propose a font manifold interface to visualize perceptual adjustment in the latent space of a generative model of fonts. In this paper, we adopt a variational autoencoder network for font generation. We then conducted a perceptual study on the fonts generated from the multi-dimensional latent space of the generative model. After obtaining the distribution data of specific preferences, we utilized a manifold learning approach to visualize the font distribution. As a case study of the proposed method, we developed a user interface for exploring the generated fonts according to a designated user preference, using a heat map representation.
BalloonFAB: Digital Fabrication of Large-Scale Balloon Art
JAIST×NCTU Collaboration, and Chia-Ming Chang
ACM CHI2019, LBW. Visual Computing 2019.
We propose an interactive system that allows common users to build large-scale balloon art based on a spatial augmented reality solution. The proposed system provides fabrication guidance that illustrates the differences between the depth maps of the target three-dimensional shape and the current work in progress. Instead of using color gradients for the depth difference, we adopt a high-contrast black-and-white projection of numbers in consideration of the balloon texture. To increase user immersion, we propose a shaking animation for each projected number. Using the proposed system, the unskilled users in our case study were able to build large-scale balloon art.
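The depth-difference guidance can be sketched as follows, assuming both depth maps are given as 2D grids in metres. The `guidance_numbers` helper, its unit step, and the "layers still to add" interpretation are hypothetical illustrations of the idea, not the system's actual pipeline.

```python
def guidance_numbers(target_depth, current_depth, step=0.1):
    """Per-cell fabrication guidance: how many units of `step` metres of
    balloon material still need to be added at each grid cell, derived
    from the target-minus-current depth difference (clamped at zero)."""
    guide = []
    for trow, crow in zip(target_depth, current_depth):
        guide.append([max(0, round((t - c) / step)) for t, c in zip(trow, crow)])
    return guide
```

These integers are what would be projected onto the work in progress instead of a color gradient.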
Augmenting Human With a Tail
JAIST Human-Augmentation Group, and Takuma Torii
Augmented Human International Conference (AH2019), INTERACTION 2019.
Inspired by animal tails, this study aims to propose a wearable and functional tail device that combines physical and emotional-augmentation modes. In the physical-augmentation mode, the proposed device can be transformed into a consolidated state to support a user's weight, similar to a kangaroo's tail. In the emotional-augmentation mode, the proposed device can help users express their emotions, which are realized by different tail-motion patterns. For our initial prototype, we developed technical features that can support the weight of an adult, and we performed a perceptional investigation of the relations between the tail movements and the corresponding perceptual impressions. Using the animal-tail analog, the proposed device may be able to help the human user in both physical and emotional ways.
Precomputed Panel Solver for Aerodynamics Simulation
Haoran Xie, Takeo Igarashi, and Kazunori Miyata
ACM Transactions on Graphics (TOG) 2018. SIGGRAPH 2018.
In this article, we introduce an efficient and versatile numerical aerodynamics model for general three-dimensional geometry shapes in potential flow. The proposed model has low computational cost and achieves an accuracy of moderate fidelity for the aerodynamic loads for a given glider shape. In the geometry preprocessing steps of our model, lifting-wing surfaces are recognized, and wake panels are generated automatically along the trailing edges. The proposed aerodynamics model improves the potential theory-based panel method. Furthermore, a new quadratic expression for aerodynamic forces and moments is proposed. It consists of geometry-dependent aerodynamic coefficient matrices and has a continuous representation for the drag/lift-force coefficients. Our model enables natural and real-time aerodynamics simulations combined with general rigid-body simulators for interactive animation. We also present a design system for original gliders. It uses an assembly-based modeling interface and achieves interactive feedback by leveraging the partwise precomputation enabled by our method. We illustrate that one can easily design various flyable gliders using our system.
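The quadratic expression can be illustrated with a sketch: each aerodynamic force or moment component is a quadratic form in the body-frame velocity, with a precomputed, geometry-dependent coefficient matrix per component. The toy matrices below are illustrative assumptions; the real ones come out of the panel-method precomputation.

```python
def aero_force(C, v):
    """Quadratic aerodynamic load: component i is the quadratic form
    v^T C_i v in the body-frame velocity v, where each C_i is a
    geometry-dependent coefficient matrix precomputed offline."""
    n = len(v)
    force = []
    for Ci in C:
        f = 0.0
        for j in range(n):
            for k in range(n):
                f += Ci[j][k] * v[j] * v[k]
        force.append(f)
    return force
```

Because the loads are polynomial in velocity, a rigid-body integrator can evaluate them every frame at negligible cost, which is what makes the interactive glider design loop feasible.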
Selfie Guidance System in Good Head Postures
Naihui Fang, Haoran Xie, and Takeo Igarashi
ACM Conference on Intelligent User Interfaces (IUI) workshop, 2018.
Taking selfies has become a popular and pervasive activity on smart mobile devices. However, taking a good selfie is still a time-consuming and tedious task for the average user, especially for those who are not good at selfies. To reduce this difficulty, this work proposes an interactive selfie application with multiple user interfaces to improve user satisfaction when taking selfies. Our proposed system helps average users take selfies by providing visual and voice guidance on the proper head postures for a good selfie. Preprocessing through crowdsourcing-based learning is utilized to evaluate the score space of possible head postures from hundreds of virtual selfies. For the interactive application, we adopt a geometric approach to estimate the user's current head posture. Our user studies show that the proposed selfie user interface can help common users take good selfies and improves user satisfaction.
Human Augmented Intelligent Robotic System
Haoran Xie, Alric Lee, and Hirokazu Tei
China Innovation & Entrepreneurship International Competition, 2017.
(IT 2nd Prize) (Final 3rd Prize) (competition rate = 0.5%)
A human-augmented robotic system will facilitate an intelligent manufacturing process with state-of-the-art information technologies. This project proposes a wearable human-augmented robotic system that can adapt to its user and provide guidance and assistance for spatial perception, work guidance, and accurate operation. The machine is designed to understand its surrounding environment using deep learning and data-driven techniques, and to predict human intentions using inertial sensors. Common users can observe and interact with the current working state via a mobile interface to designate tasks in the process.
Data-driven Modeling and Animation of Outdoor Trees through Interactive Approach
Shaojun Hu, Zhiyi Zhang, Haoran Xie, and Takeo Igarashi
The Visual Computer 2017. CGI2017.
Computer animation of trees has widespread applications in the fields of film production, video games and virtual reality. Physics-based methods are feasible solutions to achieve good approximations of tree movements. However, realistically animating a specific tree in the real world remains a challenge since physics-based methods rely on dynamic properties that are difficult to measure. In this paper, we present a low-cost interactive approach to model and animate outdoor trees from photographs and videos, which can be captured using a smartphone or handheld camera. An interactive editing approach is proposed to reconstruct detailed branches from photographs by considering an epipolar constraint. To track the motions of branches and leaves, a semi-automatic tracking method is presented to allow the user to interactively correct mis-tracked features. Then, the physical parameters of branches and leaves are estimated using a fast Fourier transform, and these properties are applied to a simplified physics-based model to generate animations of trees with various external forces. We compare the animation results with reference videos on several examples and demonstrate that our approach can achieve realistic tree animation.
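The Fourier-based parameter estimation step can be sketched as follows: extract the dominant oscillation frequency from a tracked branch-tip displacement signal. A direct DFT is used here for clarity instead of an FFT, and the signal shape and helper name are assumptions, not the paper's code.

```python
import cmath
import math

def dominant_frequency(signal, fps):
    """Dominant oscillation frequency (Hz) of a tracked displacement
    signal, found as the peak-magnitude bin of a direct DFT after
    removing the mean (DC) component."""
    n = len(signal)
    mean = sum(signal) / n
    best_k, best_mag = 1, 0.0
    for k in range(1, n // 2 + 1):
        s = sum((signal[t] - mean) * cmath.exp(-2j * math.pi * k * t / n)
                for t in range(n))
        if abs(s) > best_mag:
            best_mag, best_k = abs(s), k
    return best_k * fps / n  # convert bin index to Hz
```

The recovered frequency is what a simplified physics model would map to a branch's effective stiffness/mass ratio.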
Vision-based Robot Drawing
Haoran Xie
DFL Technical workshop 2016.
In this talk, a simple application for controlling a Mitsubishi robot arm based on human movement is introduced. How a robot can draw or perform calligraphy as a human does is an interesting and challenging task. This work proposes a framework for this problem, with a guidance and control system that accomplishes intelligent human-machine interaction using vision techniques.
Pattern-guided Simulations of Immersed Rigid Bodies
Haoran Xie and Kazunori Miyata
ACM SIGGRAPH Motion in Games (MIG), 2015
This paper proposes a pattern-guided framework for immersed rigid body simulations involving the unsteady dynamics of a fully immersed or submerged rigid body in a still flow. Instead of the heavy computation of fluid-body coupling simulations, a novel framework considering the different effects of the surrounding flow is constructed by parameter estimation of force coefficients. We distinguish the inertial, viscous, and turbulent effects of the flow on the rigid body. Because it is difficult to clarify the force coefficients of the viscous effect in real flow, we define control parameters for the viscous forces in a rigid body simulator and propose an energy optimization strategy for determining the time series of these control parameters. This strategy is built upon a motion graph of motion patterns and the turbulent kinetic energy. The proposed approach achieves efficient and realistic immersed rigid body simulation results, and these results are relevant to real-time animations of body-vorticity coupling.
A Prior Reduced Model of Dynamical Systems
Haoran Xie, Zhiqiang Wang, Ye Zhao, and Kazunori Miyata
Mathematics-for-Industry 2015. MEIS2014.
(Best poster award)
A reduced model technique for simulating dynamical systems in computer graphics is proposed. Most procedural models of physics-based simulations consist of control parameters in a high-dimensional domain in which the real-time controllability of simulations is an ongoing issue. Therefore, we adopt a separated representation of the model solutions that can be preprocessed offline without relying on the knowledge of the complete solutions. To achieve the functional products in this representation, we utilize an iterative method involving enrichment and projection steps in a tensor formulation. The proposed approaches are successfully applied to different parametric and coupled models.
Langevin Rigid: Animating Immersed Rigid Bodies in Real-time
Haoran Xie and Kazunori Miyata
Journal of the Society for Art and Science 2014. NICOGRAPH International 2013.
(Best paper award)
We present the Langevin rigid approach, a technique for animating the dynamics of immersed rigid bodies in viscous incompressible fluid in real-time. We use generalized Kirchhoff equations to capture the forces and torques from the surrounding fluid that create realistic motion of immersed rigid bodies. We call our method the Langevin rigid approach because the generalized Langevin equations are applied to represent the effects of turbulent flow generated at the body surface. The Langevin rigid approach precomputes the added-mass effects and the vortical loads from the turbulence model, and executes the rigid body solver at runtime, so the method is straightforward and efficient for interactive simulations. Many types of rigid bodies with lightweight mass (e.g., a leaf or paper) can be simulated realistically in high-Reynolds-number flows.
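The stochastic ingredient can be sketched as a single Euler-Maruyama update of a scalar Langevin equation: deterministic drag relaxation plus a Gaussian turbulent kick. The actual method operates on the generalized Kirchhoff state, so this scalar form and its parameters are simplifying assumptions for illustration.

```python
import random

def langevin_step(v, gamma, sigma, dt, rng=random.Random(0)):
    """One Euler-Maruyama step of dv = -gamma*v*dt + sigma*dW:
    the velocity relaxes under drag while receiving a random kick
    whose magnitude scales with sqrt(dt), modelling turbulence."""
    return v - gamma * v * dt + sigma * (dt ** 0.5) * rng.gauss(0.0, 1.0)
```

With the noise amplitude set to zero the update reduces to plain exponential drag decay, which gives a deterministic sanity check.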
Real-time Simulation of Lightweight Rigid Bodies
Haoran Xie and Kazunori Miyata
The Visual Computer (TVC) 2014.
Unlike common rigid bodies, lightweight rigid bodies have special and spectacular motions that are known as free fall, such as fluttering (oscillation from side to side) and tumbling (rotation and sideways drifting). However, computer graphics applications cannot simulate the dynamics of lightweight rigid bodies in various environments realistically and efficiently. In this study, we first analyze the physical characteristics of free-fall motions in quiescent flow and propose a new procedural motion-synthesis method for modeling free-fall motions in interactive environments. Six primitive motions of lightweight rigid bodies are defined in a phase diagram and analyzed separately using a trajectory-search tree and precomputed trajectory database. The global paths of free-fall motions are synthesized on the basis of these primitive motions by using a free-fall motion graph whose edges are connected in the Markov-chain model. Then, our approach integrates external forces (e.g., a wind field) by using an improved noise-based algorithm under different force magnitudes and object release heights. This approach exhibits not only realistic simulation results in various environments but also fast computation to meet real-time requirements.
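The motion-graph synthesis described above can be sketched as a Markov-chain walk over primitive motions. The state names and transition table below are illustrative placeholders, not the paper's measured phase-diagram probabilities.

```python
import random

def synthesize_path(transitions, start, steps, rng=random.Random(7)):
    """Synthesize a free-fall motion sequence by walking a motion graph:
    `transitions` maps each primitive motion to {next_motion: probability},
    i.e. the Markov-chain edge weights connecting the graph."""
    state, path = start, [start]
    for _ in range(steps):
        nexts, probs = zip(*transitions[state].items())
        state = rng.choices(nexts, weights=probs)[0]
        path.append(state)
    return path
```

Each visited state would then be expanded into a concrete trajectory segment from the precomputed trajectory database.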
Stochastic Modeling of Immersed Rigid-body Dynamics
Haoran Xie and Kazunori Miyata
ACM SIGGRAPH Asia 2013, Technical briefs.
The simulation of immersed rigid-body dynamics involves the coupling between objects and turbulent flows, which is a complicated task in computer animation. In this paper, we propose a stochastic model of the dynamics of rigid bodies immersed in viscous flows to solve this problem. We first modulate the dynamic equations of rigid bodies using generalized Kirchhoff equations (GKE). Then, a stochastic differential equation called the Langevin equation is proposed to represent the velocity increments due to turbulence. After the precomputation of the Kirchhoff tensor and the kinetic energy of a synthetic turbulence induced by the moving object, we utilize a fractional-step method to solve the GKE with vortical loads of drag and lift dynamics at runtime. The resulting animations include both inertial and viscous effects from the surrounding flows for arbitrary geometric objects. Our model is coherent and effective for simulating immersed rigid-body dynamics in real-time.
Realistic Motion Simulations of Objects in Free Fall
Haoran Xie and Kazunori Miyata
Computer Graphics International (CGI), 2012.
The free fall motion of a lightweight object is a familiar and spectacular phenomenon in which the object can flutter (oscillate from side to side) and tumble (rotate and drift sideways). However, in computer graphics, we lack the ability to simulate free fall motion in a still fluid. In this paper, we consider all the physical characteristics of free fall in a still fluid, and propose a new procedural motion synthesis method for modeling free fall motion in interactive environments. Six primitive motions are defined in a phase diagram and analyzed separately using a trajectory search tree and a precomputed trajectory database. The global paths of free fall motion are synthesized on the basis of these primitive motions, using a free fall motion graph whose edges are connected using the Markov chain model. In addition, our approach integrates with wind field methods by using an improved noise-based algorithm under different wind speeds and object release heights. This approach provides not only realistic results in both a still fluid and a wind field but also rapid computation for real-time applications.
Free Fall Motion Synthesis
Haoran Xie and Kazunori Miyata
ACM SIGGRAPH Asia 2011, Sketches.
We present in this paper a framework that generates free fall motions for an object within a still fluid. We introduce a new motion synthesis approach in which six characteristic motion prototypes of free fall are defined and synthesized, and the motion trajectory is then specified from a free fall motion graph. We automatically create motion sequences using a trajectory search tree and a precomputed trajectory database. The proposed approach can produce realistic and controllable free fall motion applicable to many different applications, including virtual reality, games, and other entertainment productions.
Precomputed Data-Driven Free Fall Animation
Haoran Xie and Kazunori Miyata
NICOGRAPH International 2011.
Free fall motions of lightweight objects, or of objects in strongly resistive fluids at high Reynolds numbers, such as fluttering (oscillating from side to side) and tumbling (rotating and drifting sideways), are spectacular and familiar, but computer animation and related fields lack predictable and realistic simulations of these phenomena. We propose a new data-driven approach for procedural motion synthesis using free fall motion graphs in interactive environments. Six motion prototypes are defined in a phase diagram and synthesized separately using a trajectory search tree and a precomputed trajectory database. In the motion graph, we determine the motion types of an object under designated initial conditions, where each motion type is composed of motion prototypes. To obtain more natural and pleasing motion paths, we combine numerical simulation with experimental results on this topic from physics. Based on data from thousands of experiments, our approach successfully simulates otherwise unresolved physical behavior, including chaotic motions and motions in three-dimensional environments. Other physical characteristics of free fall, such as rotation, are also handled well. We further propose optimizations based on practical experiments to adjust the synthesized results and achieve a reasonable visual quality.
Immersed Rigid Body Dynamics in Computer Graphics
Ph.D. Thesis, School of Knowledge Science, Japan Advanced Institute of Science and Technology, 2015.
(IPSJ SIG Recommended PhD Thesis)
The real world is complex and spectacular: a leaf falls from a tree swaying side to side, a coin moves underwater rocking left and right, and falling snowflakes dance up and down even in still flow environments. Unfortunately, virtual worlds built with conventional animation techniques use idealized models that ignore the detailed effects of the flow environment, namely the inertial, viscous, and turbulent features of high-Reynolds-number flow. Although physical simulations have achieved dramatic success in 3D films, games and virtual reality applications in recent decades, simulations of unsteady and turbulent dynamics still frustrate researchers in terms of both computation cost and simulation fidelity. To address these issues, this dissertation introduces a new topic, immersed rigid body dynamics, to the real-time computer graphics community. In clear contrast to other traditional topics in computer graphics, the research aim of immersed rigid body dynamics is to simulate the motion of a rigid body fully immersed or submerged in a real flow and strongly coupled with the surrounding flow. This dissertation presents a family of algorithms for real-time simulation of immersed rigid body dynamics in computer animation. These algorithms are built on data-driven simulation methods that reproduce rigid body dynamics with flow effects in virtual environments, making it feasible to achieve realistic simulation results at low computation cost. In addition, a promising prior reduced model of dynamical systems is introduced for parameter identification in computer animation.
- Assistant Professor@JAIST, 2018.04 - current
- Project Assistant Professor@University of Tokyo, 2017.04 - 2018.03
- Postdoc Researcher@University of Tokyo, 2016.04 - 2017.03
- JSPS Research Fellow (PD)@University of Tokyo, 2015.04 - 2016.03
- Ph.D. (Computer Graphics)@JAIST, 2012.04 - 2015.03
- JSPS Research Fellow (DC2)@JAIST, 2014.04 - 2015.03
- Visiting Scholar@Kent State University, (2013)
- M.S. (Computer Graphics)@JAIST, 2010.04 - 2012.03
- Visiting Scholar@The University of Sydney, Australia, (2011)
- Research Student(Computer Graphics)@JAIST, 2009.10 - 2010.03
- System Engineer, Project Leader@USTC EBT, 2006.07 - 2009.09
- B.S. (Applied Mathematics)@Anhui University, 2002.09 - 2006.06
Address: 1-1 Asahidai, Nomi, Ishikawa 923-1211, Japan