My publications:

W. Chen, H. Wang, Y. Yuan, T. Shao, and K. Zhou, Dynamic Future Net: Diversified Human Motion Generation, MM '20: Proceedings of the 28th ACM International Conference on Multimedia, 2020.

Human motion modelling is crucial in many areas such as computer graphics, vision and virtual reality. Acquiring high-quality skeletal motions is difficult due to the need for specialized equipment and laborious manual post-processing, which necessitates maximizing the use of existing data to synthesize new data. However, it is a challenge due to the intrinsic motion stochasticity of human motion dynamics, manifested in the short and long terms. In the short term, there is strong randomness within a couple of frames, e.g. one frame followed by multiple possible frames leading to different motion styles; while in the long term, there are non-deterministic action transitions. In this paper, we present Dynamic Future Net, a new deep learning model which explicitly focuses on the aforementioned motion stochasticity by constructing a generative model with non-trivial modelling capacity in temporal stochasticity. Given limited amounts of data, our model can generate a large number of high-quality motions with arbitrary duration, and visually-convincing variations in both space and time. We evaluate our model on a wide range of motions and compare it with the state-of-the-art methods. Both qualitative and quantitative results show the superiority of our method, for its robustness, versatility and high quality.

@inproceedings{wrro163776,
month = {October},
author = {W Chen and H Wang and Y Yuan and T Shao and K Zhou},
note = {{\copyright} 2020 ACM. This is an author produced version of a conference paper published in MM '20: Proceedings of the 28th ACM International Conference on Multimedia. Uploaded in accordance with the publisher's self-archiving policy.},
booktitle = {ACM Multimedia 2020},
title = {Dynamic Future Net: Diversified Human Motion Generation},
publisher = {Association for Computing Machinery},
year = {2020},
journal = {MM '20: Proceedings of the 28th ACM International Conference on Multimedia},
pages = {2131--2139},
url = {http://eprints.whiterose.ac.uk/163776/},
abstract = {Human motion modelling is crucial in many areas such as computer graphics, vision and virtual reality. Acquiring high-quality skeletal motions is difficult due to the need for specialized equipment and laborious manual post-processing, which necessitates maximizing the use of existing data to synthesize new data. However, it is a challenge due to the intrinsic motion stochasticity of human motion dynamics, manifested in the short and long terms. In the short term, there is strong randomness within a couple of frames, e.g. one frame followed by multiple possible frames leading to different motion styles; while in the long term, there are non-deterministic action transitions. In this paper, we present Dynamic Future Net, a new deep learning model which explicitly focuses on the aforementioned motion stochasticity by constructing a generative model with non-trivial modelling capacity in temporal stochasticity. Given limited amounts of data, our model can generate a large number of high-quality motions with arbitrary duration, and visually-convincing variations in both space and time. We evaluate our model on a wide range of motions and compare it with the state-of-the-art methods. Both qualitative and quantitative results show the superiority of our method, for its robustness, versatility and high quality.}
}

W. Xiang, X. Yao, H. Wang, and X. Jin, FASTSWARM: A Data-driven FrAmework for Real-time Flying InSecT SWARM Simulation, Computer Animation and Virtual Worlds, 2020.

Insect swarms are common phenomena in nature and therefore have been actively pursued in computer animation. Realistic insect swarm simulation is difficult due to two challenges: high-fidelity behaviors and large scales, which make the simulation practice subject to laborious manual work and excessive trial-and-error processes. To address both challenges, we present a novel data-driven framework, FASTSWARM, to model complex behaviors of flying insects based on real-world data and simulate plausible animations of flying insect swarms. FASTSWARM has a linear time complexity and achieves real-time performance for large swarms. The high-fidelity behavior model of FASTSWARM explicitly takes into consideration the most common behaviors of flying insects, including the interactions among insects such as repulsion and attraction, self-propelled behaviors such as target following and obstacle avoidance, and other characteristics such as random movements. To achieve scalability, an energy minimization problem is formed with different behaviors modeled as energy terms, where the minimizer is the desired behavior. The minimizer is computed from the real-world data, which ensures the plausibility of the simulation results. Extensive simulation results and evaluations show that FASTSWARM is versatile in simulating various swarm behaviors, of high fidelity measured by various metrics, easily controllable in inducing user controls and highly scalable.

@article{wrro163467,
month = {September},
title = {FASTSWARM: A Data-driven FrAmework for Real-time Flying InSecT SWARM Simulation},
author = {W Xiang and X Yao and H Wang and X Jin},
publisher = {Wiley},
year = {2020},
note = {{\copyright} 2020 John Wiley \& Sons, Ltd. This is the peer reviewed version of the following article: Xiang, W, Yao, X, Wang, H et al. (1 more author) (2020) FASTSWARM: A Data-driven FrAmework for Real-time Flying InSecT SWARM Simulation. Computer Animation and Virtual Worlds. e1957. ISSN 1546-4261, which has been published in final form at http://doi.org/10.1002/cav.1957. This article may be used for non-commercial purposes in accordance with Wiley Terms and Conditions for Use of Self-Archived Versions.},
journal = {Computer Animation and Virtual Worlds},
keywords = {collective behavior; data-driven; insect swarm simulation; optimization; real time},
url = {http://eprints.whiterose.ac.uk/163467/},
abstract = {Insect swarms are common phenomena in nature and therefore have been actively pursued in computer animation. Realistic insect swarm simulation is difficult due to two challenges: high-fidelity behaviors and large scales, which make the simulation practice subject to laborious manual work and excessive trial-and-error processes. To address both challenges, we present a novel data-driven framework, FASTSWARM, to model complex behaviors of flying insects based on real-world data and simulate plausible animations of flying insect swarms. FASTSWARM has a linear time complexity and achieves real-time performance for large swarms. The high-fidelity behavior model of FASTSWARM explicitly takes into consideration the most common behaviors of flying insects, including the interactions among insects such as repulsion and attraction, self-propelled behaviors such as target following and obstacle avoidance, and other characteristics such as random movements. To achieve scalability, an energy minimization problem is formed with different behaviors modeled as energy terms, where the minimizer is the desired behavior. The minimizer is computed from the real-world data, which ensures the plausibility of the simulation results. Extensive simulation results and evaluations show that FASTSWARM is versatile in simulating various swarm behaviors, of high fidelity measured by various metrics, easily controllable in inducing user controls and highly scalable.}
}

F. He, Y. Xiang, X. Zhao, and H. Wang, Informative scene decomposition for crowd analysis, comparison and simulation guidance, ACM Transactions on Graphics, vol. 39, iss. 4, 2020.

Crowd simulation is a central topic in several fields including graphics. To achieve high-fidelity simulations, data has been increasingly relied upon for analysis and simulation guidance. However, the information in real-world data is often noisy, mixed and unstructured, making effective analysis difficult; it has therefore not been fully utilized. With the fast-growing volume of crowd data, such a bottleneck needs to be addressed. In this paper, we propose a new framework which comprehensively tackles this problem. It centers on an unsupervised method for analysis. The method takes as input raw and noisy data with highly mixed multi-dimensional (space, time and dynamics) information, and automatically structures it by learning the correlations among these dimensions. The dimensions together with their correlations fully describe the scene semantics, which consist of recurring activity patterns in a scene, manifested as space flows with temporal and dynamics profiles. The effectiveness and robustness of the analysis have been tested on datasets with great variations in volume, duration, environment and crowd dynamics. Based on the analysis, new methods for data visualization, simulation evaluation and simulation guidance are also proposed. Together, our framework establishes a highly automated pipeline from raw data to crowd analysis, comparison and simulation guidance. Extensive experiments and evaluations have been conducted to show the flexibility, versatility and intuitiveness of our framework.

@article{wrro160067,
volume = {39},
number = {4},
month = {July},
author = {F He and Y Xiang and X Zhao and H Wang},
note = {Accepted in SIGGRAPH 2020. {\copyright} 2020 ACM. This is the author's version of the work. It is posted here for your personal use. Not for redistribution. The definitive Version of Record was published in ACM Transactions on Graphics, https://doi.org/10.1145/3386569.3392407.},
title = {Informative scene decomposition for crowd analysis, comparison and simulation guidance},
publisher = {Association for Computing Machinery (ACM)},
year = {2020},
journal = {ACM Transactions on Graphics},
url = {http://eprints.whiterose.ac.uk/160067/},
abstract = {Crowd simulation is a central topic in several fields including graphics. To achieve high-fidelity simulations, data has been increasingly relied upon for analysis and simulation guidance. However, the information in real-world data is often noisy, mixed and unstructured, making effective analysis difficult; it has therefore not been fully utilized. With the fast-growing volume of crowd data, such a bottleneck needs to be addressed. In this paper, we propose a new framework which comprehensively tackles this problem. It centers on an unsupervised method for analysis. The method takes as input raw and noisy data with highly mixed multi-dimensional (space, time and dynamics) information, and automatically structures it by learning the correlations among these dimensions. The dimensions together with their correlations fully describe the scene semantics, which consist of recurring activity patterns in a scene, manifested as space flows with temporal and dynamics profiles. The effectiveness and robustness of the analysis have been tested on datasets with great variations in volume, duration, environment and crowd dynamics. Based on the analysis, new methods for data visualization, simulation evaluation and simulation guidance are also proposed. Together, our framework establishes a highly automated pipeline from raw data to crowd analysis, comparison and simulation guidance. Extensive experiments and evaluations have been conducted to show the flexibility, versatility and intuitiveness of our framework.}
}

Z. Zhang, Y. Wang, P. Jimack, and H. Wang, MeshingNet: A New Mesh Generation Method based on Deep Learning, ICCS 2020: International Conference on Computational Science, 2020.

We introduce a novel approach to automatic unstructured mesh generation using machine learning to predict an optimal finite element mesh for a previously unseen problem. The framework that we have developed is based around training an artificial neural network (ANN) to guide standard mesh generation software, based upon a prediction of the required local mesh density throughout the domain. We describe the training regime that is proposed, based upon the use of a posteriori error estimation, and discuss the topologies of the ANNs that we have considered. We then illustrate performance using two standard test problems, a single elliptic partial differential equation (PDE) and a system of PDEs associated with linear elasticity. We demonstrate the effective generation of high quality meshes for arbitrary polygonal geometries and a range of material parameters, using a variety of user-selected error norms.

@inproceedings{wrro159526,
volume = {12139},
month = {June},
author = {Z Zhang and Y Wang and PK Jimack and H Wang},
note = {{\copyright} Springer Nature Switzerland AG 2020. This is an author produced version of a conference paper published in Lecture Notes in Computer Science. Uploaded in accordance with the publisher's self-archiving policy.},
booktitle = {ICCS 2020: International Conference on Computational Science},
title = {MeshingNet: A New Mesh Generation Method based on Deep Learning},
publisher = {Springer Verlag},
year = {2020},
journal = {Lecture Notes in Computer Science},
pages = {186--198},
keywords = {Mesh generation; Error equidistribution; Machine learning; Artificial neural networks},
url = {http://eprints.whiterose.ac.uk/159526/},
abstract = {We introduce a novel approach to automatic unstructured mesh generation using machine learning to predict an optimal finite element mesh for a previously unseen problem. The framework that we have developed is based around training an artificial neural network (ANN) to guide standard mesh generation software, based upon a prediction of the required local mesh density throughout the domain. We describe the training regime that is proposed, based upon the use of a posteriori error estimation, and discuss the topologies of the ANNs that we have considered. We then illustrate performance using two standard test problems, a single elliptic partial differential equation (PDE) and a system of PDEs associated with linear elasticity. We demonstrate the effective generation of high quality meshes for arbitrary polygonal geometries and a range of material parameters, using a variety of user-selected error norms.}
}

Y. Ji, G. Jiang, M. Tang, N. Mao, and H. Wang, Three-dimensional simulation of warp knitted structures based on geometric unit cell of loop yarns, Textile Research Journal, 2020.

Warp knitted fabrics are typically three-dimensional (3D) structures, and their design is strongly dependent on the structural simulation. Most existing simulation methods are only capable of two-dimensional (2D) modeling, which lacks perceptual realism and cannot show design defects, making it hard for manufacturers to produce the required fabrics. The few existing methods capable of 3D structural simulation are computationally demanding and therefore can only run on powerful computers, which makes it hard to utilize online platforms (e.g. clouds, mobile devices, etc.) for simulation and design communication. To fill the gap, a novel, lightweight and agile geometric representation of warp knitting loops is proposed to establish a new framework of 3D simulation of complex warp knitted structures. Further, the new representation has great simplicity, flexibility and versatility and is used to build high-level models in representing the 3D structures of warp knitted fabrics with complex topologies. Simulations of a variety of warp knitted fabrics are presented to demonstrate the capacity and generalizability of this newly proposed methodology. It has also been used in virtual design of warp knitted fabrics in wireless mobile devices for digital manufacture and provides a functional reference model based on this simplified unit cell of warp knitted loops to simulate more realistic 3D warp knitted fabrics.

@article{wrro159605,
month = {May},
title = {Three-dimensional simulation of warp knitted structures based on geometric unit cell of loop yarns},
author = {Y Ji and G Jiang and M Tang and N Mao and H Wang},
publisher = {SAGE Publications},
year = {2020},
note = {{\copyright} The Author(s) 2020. This is an author produced version of an article published in Textile Research Journal. Uploaded in accordance with the publisher's self-archiving policy.},
journal = {Textile Research Journal},
keywords = {Warp knitted fabric, 3D simulation, geometric modeling, 3D loop model},
url = {http://eprints.whiterose.ac.uk/159605/},
abstract = {Warp knitted fabrics are typically three-dimensional (3D) structures, and their design is strongly dependent on the structural simulation. Most existing simulation methods are only capable of two-dimensional (2D) modeling, which lacks perceptual realism and cannot show design defects, making it hard for manufacturers to produce the required fabrics. The few existing methods capable of 3D structural simulation are computationally demanding and therefore can only run on powerful computers, which makes it hard to utilize online platforms (e.g. clouds, mobile devices, etc.) for simulation and design communication. To fill the gap, a novel, lightweight and agile geometric representation of warp knitting loops is proposed to establish a new framework of 3D simulation of complex warp knitted structures. Further, the new representation has great simplicity, flexibility and versatility and is used to build high-level models in representing the 3D structures of warp knitted fabrics with complex topologies. Simulations of a variety of warp knitted fabrics are presented to demonstrate the capacity and generalizability of this newly proposed methodology. It has also been used in virtual design of warp knitted fabrics in wireless mobile devices for digital manufacture and provides a functional reference model based on this simplified unit cell of warp knitted loops to simulate more realistic 3D warp knitted fabrics.}
}

M. Hasan, M. Warburton, W. Agboh, M. Dogar, M. Leonetti, H. Wang, F. Mushtaq, M. Mon-Williams, and A. Cohn, Human-like Planning for Reaching in Cluttered Environments, 2020 International Conference on Robotics and Automation (ICRA), 2020.

@inproceedings{wrro158051,
booktitle = {ICRA 2020},
month = {January},
title = {Human-like Planning for Reaching in Cluttered Environments},
author = {M Hasan and M Warburton and WC Agboh and MR Dogar and M Leonetti and H Wang and F Mushtaq and M Mon-Williams and AG Cohn},
year = {2020},
note = {This paper is protected by copyright. This is an author produced version of a paper accepted for publication in 2020 International Conference on Robotics and Automation (ICRA). Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. Uploaded in accordance with the publisher's self-archiving policy.},
journal = {2020 International Conference on Robotics and Automation (ICRA)},
url = {http://eprints.whiterose.ac.uk/158051/}
}

J. Chan, H. Shum, H. Wang, L. Yi, W. Wei, and E. Ho, A generic framework for editing and synthesizing multimodal data with relative emotion strength, Computer Animation and Virtual Worlds, vol. 30, iss. 6, 2019.

Emotion is considered to be a core element in performances. In computer animation, body motions and facial expressions are two popular mediums for a character to express emotion. However, there has been limited research into how to effectively synthesize these two types of character movements using different levels of emotion strength with intuitive control, which is difficult to model effectively. In this work, we explore a common model that can be used to represent emotion for the applications of body motion and facial expression synthesis. Unlike previous work that encodes emotions into discrete motion style descriptors, we propose a continuous control indicator called emotion strength, by controlling which a data-driven approach is presented to synthesize motions with fine control over emotions. Rather than interpolating motion features to synthesize new motion as in existing work, our method explicitly learns a model mapping low-level motion features to the emotion strength. Because the motion synthesis model is learned in the training stage, the computation time required for synthesizing motions at run time is very low. We further demonstrate the generality of our proposed framework by editing 2D face images using relative emotion strength. As a result, our method can be applied to interactive applications such as computer games, image editing tools, and virtual reality applications, as well as offline applications such as animation and movie production.

@article{wrro144010,
volume = {30},
number = {6},
month = {November},
author = {JCP Chan and HPH Shum and H Wang and L Yi and W Wei and ESL Ho},
note = {{\copyright} 2019 John Wiley \& Sons, Ltd. This is the peer reviewed version of the following article: Chan, JCP, Shum, HPH, Wang, H et al. (3 more authors) (2019) A generic framework for editing and synthesizing multimodal data with relative emotion strength. Computer Animation and Virtual Worlds. e1871. ISSN 1546-4261, which has been published in final form at https://doi.org/10.1002/cav.1871. This article may be used for non-commercial purposes in accordance with Wiley Terms and Conditions for Use of Self-Archived Versions.},
title = {A generic framework for editing and synthesizing multimodal data with relative emotion strength},
publisher = {Wiley},
year = {2019},
journal = {Computer Animation and Virtual Worlds},
keywords = {data-driven; emotion motion; facial expression; image editing; motion capture; motion synthesis; relative attribute},
url = {http://eprints.whiterose.ac.uk/144010/},
abstract = {Emotion is considered to be a core element in performances. In computer animation, body motions and facial expressions are two popular mediums for a character to express emotion. However, there has been limited research into how to effectively synthesize these two types of character movements using different levels of emotion strength with intuitive control, which is difficult to model effectively. In this work, we explore a common model that can be used to represent emotion for the applications of body motion and facial expression synthesis. Unlike previous work that encodes emotions into discrete motion style descriptors, we propose a continuous control indicator called emotion strength, by controlling which a data-driven approach is presented to synthesize motions with fine control over emotions. Rather than interpolating motion features to synthesize new motion as in existing work, our method explicitly learns a model mapping low-level motion features to the emotion strength. Because the motion synthesis model is learned in the training stage, the computation time required for synthesizing motions at run time is very low. We further demonstrate the generality of our proposed framework by editing 2D face images using relative emotion strength. As a result, our method can be applied to interactive applications such as computer games, image editing tools, and virtual reality applications, as well as offline applications such as animation and movie production.}
}

H. Wang, E. Ho, H. Shum, and Z. Zhu, Spatio-temporal Manifold Learning for Human Motions via Long-horizon Modeling, IEEE Transactions on Visualization and Computer Graphics, 2019.

Data-driven modeling of human motions is ubiquitous in computer graphics and vision applications. Such problems can be approached by deep learning on a large amount of data. However, existing methods can be sub-optimal for two reasons. First, skeletal information has not been fully utilized. Unlike images, it is difficult to define spatial proximity in skeletal motions in a way that allows deep networks to be applied for feature extraction. Second, motion is time-series data with strong multi-modal temporal correlations between frames. A frame could lead to different motions; on the other hand, long-range dependencies exist where a number of frames in the beginning correlate to a number of frames later. Ineffective temporal modeling would under-estimate the multi-modality and variance. We propose a new deep network to tackle these challenges by creating a natural motion manifold that is versatile for many applications. The network has a new spatial component and is equipped with a new batch prediction model that predicts a large number of frames at once, such that long-term temporally-based objective functions can be employed to correctly learn the motion multi-modality and variances. We demonstrate that our system can create superior results compared to existing work in multiple applications.

@article{wrro149862,
month = {August},
title = {Spatio-temporal Manifold Learning for Human Motions via Long-horizon Modeling},
author = {H Wang and ESL Ho and HPH Shum and Z Zhu},
publisher = {Institute of Electrical and Electronics Engineers},
year = {2019},
note = {This article is protected by copyright. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.},
journal = {IEEE Transactions on Visualization and Computer Graphics},
keywords = {Computer Graphics, Computer Animation, Character Animation, Deep Learning},
url = {http://eprints.whiterose.ac.uk/149862/},
abstract = {Data-driven modeling of human motions is ubiquitous in computer graphics and vision applications. Such problems can be approached by deep learning on a large amount of data. However, existing methods can be sub-optimal for two reasons. First, skeletal information has not been fully utilized. Unlike images, it is difficult to define spatial proximity in skeletal motions in a way that allows deep networks to be applied for feature extraction. Second, motion is time-series data with strong multi-modal temporal correlations between frames. A frame could lead to different motions; on the other hand, long-range dependencies exist where a number of frames in the beginning correlate to a number of frames later. Ineffective temporal modeling would under-estimate the multi-modality and variance. We propose a new deep network to tackle these challenges by creating a natural motion manifold that is versatile for many applications. The network has a new spatial component and is equipped with a new batch prediction model that predicts a large number of frames at once, such that long-term temporally-based objective functions can be employed to correctly learn the motion multi-modality and variances. We demonstrate that our system can create superior results compared to existing work in multiple applications.}
}

F. Pan, P. He, F. Chen, J. Zhang, H. Wang, and D. Zheng, A novel deep learning based automatic auscultatory method to measure blood pressure, International Journal of Medical Informatics, vol. 128, pp. 71–78, 2019.

Background: It is clinically important to develop innovative techniques that can accurately measure blood pressure (BP) automatically. Objectives: This study aimed to present and evaluate a novel automatic BP measurement method based on deep learning, and to confirm the effects of the position and contact pressure of the stethoscope on measured BPs. Methods: Thirty healthy subjects were recruited. Nine BP measurements (from three different stethoscope contact pressures and three repeats) were performed on each subject. A convolutional neural network (CNN) was designed and trained to identify the Korotkoff sounds at a beat-by-beat level. Next, a mapping algorithm was developed to relate the identified Korotkoff beats to the corresponding cuff pressures for systolic and diastolic BP (SBP and DBP) determination. Its performance was evaluated by investigating the effects of the position and contact pressure of the stethoscope on measured BPs in comparison with the reference manual auscultatory method. Results: The overall measurement errors of the proposed method were 1.4 ± 2.4 mmHg for SBP and 3.3 ± 2.9 mmHg for DBP across all measurements. In addition, the method demonstrated that there were small SBP differences between the two stethoscope positions at the three stethoscope contact pressures, and that DBP from the stethoscope under the cuff was significantly lower than that from outside the cuff by 2.0 mmHg (P < 0.01). Conclusion: Our findings suggest that the deep learning based method is an effective technique to measure BP, and could be developed further to replace current oscillometric-based automatic blood pressure measurement methods.

@article{wrro146865,
volume = {128},
month = {August},
author = {F Pan and P He and F Chen and J Zhang and H Wang and D Zheng},
note = {{\copyright} 2019 Elsevier B.V. All rights reserved. This is an author produced version of a paper published in the International Journal of Medical Informatics . Uploaded in accordance with the publisher's self-archiving policy.},
title = {A novel deep learning based automatic auscultatory method to measure blood pressure},
publisher = {Elsevier},
year = {2019},
journal = {International Journal of Medical Informatics},
pages = {71--78},
keywords = {Blood pressure measurement; Convolutional neural network; Manual auscultatory method; Stethoscope position; Stethoscope contact pressure},
url = {http://eprints.whiterose.ac.uk/146865/},
abstract = {Background: It is clinically important to develop innovative techniques that can accurately measure blood pressure (BP) automatically. Objectives: This study aimed to present and evaluate a novel automatic BP measurement method based on deep learning, and to confirm the effects of the position and contact pressure of the stethoscope on measured BPs. Methods: Thirty healthy subjects were recruited. Nine BP measurements (from three different stethoscope contact pressures and three repeats) were performed on each subject. A convolutional neural network (CNN) was designed and trained to identify the Korotkoff sounds at a beat-by-beat level. Next, a mapping algorithm was developed to relate the identified Korotkoff beats to the corresponding cuff pressures for systolic and diastolic BP (SBP and DBP) determination. Its performance was evaluated by investigating the effects of the position and contact pressure of the stethoscope on measured BPs in comparison with the reference manual auscultatory method. Results: The overall measurement errors of the proposed method were 1.4 {$\pm$} 2.4 mmHg for SBP and 3.3 {$\pm$} 2.9 mmHg for DBP across all measurements. In addition, the method demonstrated that there were small SBP differences between the two stethoscope positions at the three stethoscope contact pressures, and that DBP from the stethoscope under the cuff was significantly lower than that from outside the cuff by 2.0 mmHg (P {\ensuremath{<}} 0.01). Conclusion: Our findings suggest that the deep learning based method is an effective technique to measure BP, and could be developed further to replace current oscillometric-based automatic blood pressure measurement methods.}
}

Y. Shen, J. Henry, H. Wang, E. Ho, T. Komura, and H. Shum, Data Driven Crowd Motion Control with Multi-touch Gestures, Computer Graphics Forum, vol. 37, iss. 6, pp. 382–394, 2018.

Controlling a crowd using multi-touch devices appeals to the computer games and animation industries, as such devices provide a high-dimensional control signal that can effectively define the crowd formation and movement. However, existing works relying on pre-defined control schemes require the users to learn a scheme that may not be intuitive. We propose a data-driven gesture-based crowd control system, in which the control scheme is learned from example gestures provided by different users. In particular, we build a database with pairwise samples of gestures and crowd motions. To effectively generalize the gesture style of different users, such as the use of different numbers of fingers, we propose a set of gesture features for representing a set of hand gesture trajectories. Similarly, to represent crowd motion trajectories of different numbers of characters over time, we propose a set of crowd motion features that are extracted from a Gaussian mixture model. Given a run-time gesture, our system extracts the K nearest gestures from the database and interpolates the corresponding crowd motions in order to generate the run-time control. Our system is accurate and efficient, making it suitable for real-time applications such as real-time strategy games and interactive animation controls.

@article{wrro128152,
volume = {37},
number = {6},
month = {July},
author = {Y Shen and J Henry and H Wang and ESL Ho and T Komura and HPH Shum},
note = {{\copyright} 2018 The Authors Computer Graphics Forum published by John Wiley \& Sons Ltd. This is an open access article under the terms of the Creative Commons Attribution License, https://creativecommons.org/licenses/by/4.0/ which permits use, distribution and reproduction in any medium,
provided the original work is properly cited.},
title = {Data Driven Crowd Motion Control with Multi-touch Gestures},
publisher = {Wiley},
year = {2018},
journal = {Computer Graphics Forum},
pages = {382--394},
keywords = {Animation; I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism-Animation},
url = {http://eprints.whiterose.ac.uk/128152/},
abstract = {Controlling a crowd using multi-touch devices appeals to the computer games and animation industries, as such devices provide a high-dimensional control signal that can effectively define the crowd formation and movement. However, existing works relying on pre-defined control schemes require the users to learn a scheme that may not be intuitive. We propose a data-driven gesture-based crowd control system, in which the control scheme is learned from example gestures provided by different users. In particular, we build a database with pairwise samples of gestures and crowd motions. To effectively generalize the gesture style of different users, such as the use of different numbers of fingers, we propose a set of gesture features for representing a set of hand gesture trajectories. Similarly, to represent crowd motion trajectories of different numbers of characters over time, we propose a set of crowd motion features that are extracted from a Gaussian mixture model. Given a run-time gesture, our system extracts the K nearest gestures from the database and interpolates the corresponding crowd motions in order to generate the run-time control. Our system is accurate and efficient, making it suitable for real-time applications such as real-time strategy games and interactive animation controls.}
}

Y. Shen, H. Wang, E. Ho, L. Yang, and H. Shum, Posture-based and Action-based Graphs for Boxing Skill Visualization, Computers and Graphics, vol. 69, p. 104–115, 2017.

Automatic evaluation of sports skills has been an active research area. However, most of the existing research focuses on low-level features such as movement speed and strength. In this work, we propose a framework for automatic motion analysis and visualization, which allows us to evaluate high-level skills such as the richness of actions, the flexibility of transitions and the unpredictability of action patterns. The core of our framework is the construction and visualization of the posture-based graph that focuses on the standard postures for launching and ending actions, as well as the action-based graph that focuses on the preference of actions and their transition probability. We further propose two numerical indices, the Connectivity Index and the Action Strategy Index, to assess skill level according to the graph. We demonstrate our framework with motions captured from different boxers. Experimental results demonstrate that our system can effectively visualize the strengths and weaknesses of the boxers.
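The action-based graph described above records the preference of actions and their transition probabilities. A minimal sketch of that part is below: count transitions in a labelled action sequence and normalise each row. The actual graph construction and the two skill indices in the paper involve more than this; the action labels here are invented examples.

```python
from collections import defaultdict

def transition_probs(actions):
    """Row-normalised action transition probabilities from a label sequence."""
    counts = defaultdict(lambda: defaultdict(int))
    for a, b in zip(actions, actions[1:]):
        counts[a][b] += 1
    probs = {}
    for a, row in counts.items():
        total = sum(row.values())
        probs[a] = {b: c / total for b, c in row.items()}
    return probs

seq = ["jab", "cross", "jab", "jab", "cross", "jab", "hook"]
print(transition_probs(seq)["jab"])  # {'cross': 0.5, 'jab': 0.25, 'hook': 0.25}
```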

@article{wrro122401,
volume = {69},
month = {December},
author = {Y Shen and H Wang and ESL Ho and L Yang and HPH Shum},
title = {Posture-based and Action-based Graphs for Boxing Skill Visualization},
publisher = {Elsevier},
year = {2017},
journal = {Computers and Graphics},
pages = {104--115},
keywords = {Motion Graph; Hidden Markov Model; Information Visualization; Dimensionality Reduction; Human Motion Analysis; Boxing},
url = {http://eprints.whiterose.ac.uk/122401/},
abstract = {Automatic evaluation of sports skills has been an active research area. However, most of the existing research focuses on low-level features such as movement speed and strength. In this work, we propose a framework for automatic motion analysis and visualization, which allows us to evaluate high-level skills such as the richness of actions, the flexibility of transitions and the unpredictability of action patterns. The core of our framework is the construction and visualization of the posture-based graph that focuses on the standard postures for launching and ending actions, as well as the action-based graph that focuses on the preference of actions and their transition probability. We further propose two numerical indices, the Connectivity Index and the Action Strategy Index, to assess skill level according to the graph. We demonstrate our framework with motions captured from different boxers. Experimental results demonstrate that our system can effectively visualize the strengths and weaknesses of the boxers.}
}

E. Ho, H. Shum, H. Wang, and L. Yi, Synthesizing Motion with Relative Emotion Strength, in ACM SIGGRAPH ASIA Workshop: Data-Driven Animation Techniques (D2AT), 2017.

@inproceedings{wrro121250,
booktitle = {ACM SIGGRAPH ASIA Workshop: Data-Driven Animation Techniques (D2AT)},
month = {November},
title = {Synthesizing Motion with Relative Emotion Strength},
author = {ESL Ho and HPH Shum and H Wang and L Yi},
year = {2017},
note = {{\copyright} 2017 Copyright held by the owner/author(s). This is the author's version of the work. It is posted here by permission of ACM for your personal use. Not for redistribution. The definitive version will be published in D2AT proceedings. Uploaded in accordance with the publisher's self-archiving policy. },
url = {http://eprints.whiterose.ac.uk/121250/}
}

Y. Shi, J. Ondrej, H. Wang, and C. O'Sullivan, Shape up! Perception based body shape variation for data-driven crowds, IEEE, 2017.

Representative distribution of body shapes is needed when simulating crowds in real-world situations, e.g., for city or event planning. Visual realism and plausibility are often also required for visualization purposes, while these are the top criteria for crowds in entertainment applications such as games and movie production. Therefore, achieving representative and visually plausible body-shape variation while optimizing available resources is an important goal. We present a data-driven approach to generating and selecting models with varied body shapes, based on body measurement and demographic data from the CAESAR anthropometric database. We conducted an online perceptual study to explore the relationship between body shape, distinctiveness and attractiveness for bodies close to the median height and girth. We found that the most salient body differences are in size and upper-lower body ratios, in particular with respect to shoulders, waist and hips. Based on these results, we propose strategies for body shape selection and distribution that we have validated with a lab-based perceptual study. Finally, we demonstrate our results in a data-driven crowd system with perceptually plausible and varied body shape distribution.

@misc{wrro113877,
month = {June},
author = {Y Shi and J Ondrej and H Wang and C O'Sullivan},
note = {{\copyright} 2017, IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.},
booktitle = {VHCIE workshop, IEEE Virtual Reality 2017},
title = {Shape up! Perception based body shape variation for data-driven crowds},
publisher = {IEEE},
journal = {2017 IEEE Virtual Humans and Crowds for Immersive Environments (VHCIE)},
year = {2017},
url = {http://eprints.whiterose.ac.uk/113877/},
abstract = {Representative distribution of body shapes is needed when simulating crowds in real-world situations, e.g., for city or event planning. Visual realism and plausibility are often also required for visualization purposes, while these are the top criteria for crowds in entertainment applications such as games and movie production. Therefore, achieving representative and visually plausible body-shape variation while optimizing available resources is an important goal. We present a data-driven approach to generating and selecting models with varied body shapes, based on body measurement and demographic data from the CAESAR anthropometric database. We conducted an online perceptual study to explore the relationship between body shape, distinctiveness and attractiveness for bodies close to the median height and girth. We found that the most salient body differences are in size and upper-lower body ratios, in particular with respect to shoulders, waist and hips. Based on these results, we propose strategies for body shape selection and distribution that we have validated with a lab-based perceptual study. Finally, we demonstrate our results in a data-driven crowd system with perceptually plausible and varied body shape distribution.}
}

H. Wang, J. Ondrej, and C. O'Sullivan, Trending Paths: A New Semantic-level Metric for Comparing Simulated and Real Crowd Data, IEEE Transactions on Visualization and Computer Graphics, vol. 23, iss. 5, p. 1454–1464, 2017.

We propose a new semantic-level crowd evaluation metric in this paper. Crowd simulation has been an active and important area for several decades. However, only recently has there been an increased focus on evaluating the fidelity of the results with respect to real-world situations. The focus to date has been on analyzing the properties of low-level features such as pedestrian trajectories, or global features such as crowd densities. We propose the first approach based on finding semantic information represented by latent Path Patterns in both real and simulated data in order to analyze and compare them. Unsupervised clustering by non-parametric Bayesian inference is used to learn the patterns, which themselves provide a rich visualization of the crowd behavior. To this end, we present a new Stochastic Variational Dual Hierarchical Dirichlet Process (SV-DHDP) model. The fidelity of the patterns is computed with respect to a reference, thus allowing the outputs of different algorithms to be compared with each other and/or with real data accordingly. Detailed evaluations and comparisons with existing metrics show that our method is a good alternative for comparing crowd data at a different level and also works with more types of data, holds fewer assumptions and is more robust to noise.
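The paper computes the fidelity of the learned path patterns with respect to a reference. As a simple stand-in (not the paper's SV-DHDP-based measure), the sketch below compares two "path pattern" usage histograms over a discretised grid with the Jensen-Shannon divergence: 0 for identical distributions, larger for more dissimilar ones. The histograms are invented data.

```python
import math

def js_divergence(p, q):
    """Jensen-Shannon divergence (base 2) between two discrete distributions."""
    def kl(a, b):
        return sum(x * math.log2(x / y) for x, y in zip(a, b) if x > 0)
    m = [(x + y) / 2 for x, y in zip(p, q)]
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

real      = [0.4, 0.3, 0.2, 0.1]    # reference pattern usage
simulated = [0.35, 0.3, 0.25, 0.1]  # a simulator close to the reference
uniform   = [0.25, 0.25, 0.25, 0.25]
# the closer simulator scores a smaller divergence from the reference
print(js_divergence(real, simulated) < js_divergence(real, uniform))  # True
```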

@article{wrro109726,
volume = {23},
number = {5},
month = {May},
author = {H Wang and J Ondrej and C O'Sullivan},
note = {(c) 2016 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works for resale or redistribution to servers or lists, or reuse of any copyrighted components of this work in other works.},
title = {Trending Paths: A New Semantic-level Metric for Comparing Simulated and Real Crowd Data},
publisher = {Institute of Electrical and Electronics Engineers},
year = {2017},
journal = {IEEE Transactions on Visualization and Computer Graphics},
pages = {1454--1464},
keywords = {Crowd Simulation, Crowd Comparison, Data-Driven, Clustering, Hierarchical Dirichlet Process, Stochastic Optimization},
url = {http://eprints.whiterose.ac.uk/109726/},
abstract = {We propose a new semantic-level crowd evaluation metric in this paper. Crowd simulation has been an active and important area for several decades. However, only recently has there been an increased focus on evaluating the fidelity of the results with respect to real-world situations. The focus to date has been on analyzing the properties of low-level features such as pedestrian trajectories, or global features such as crowd densities. We propose the first approach based on finding semantic information represented by latent Path Patterns in both real and simulated data in order to analyze and compare them. Unsupervised clustering by non-parametric Bayesian inference is used to learn the patterns, which themselves provide a rich visualization of the crowd behavior. To this end, we present a new Stochastic Variational Dual Hierarchical Dirichlet Process (SV-DHDP) model. The fidelity of the patterns is computed with respect to a reference, thus allowing the outputs of different algorithms to be compared with each other and/or with real data accordingly. Detailed evaluations and comparisons with existing metrics show that our method is a good alternative for comparing crowd data at a different level and also works with more types of data, holds fewer assumptions and is more robust to noise.}
}

H. Shum, H. Wang, E. Ho, and T. Komura, SkillVis: A Visualization Tool for Boxing Skill Assessment, New York, USA: ACM, 2016.

Motion analysis and visualization are crucial in sports science for sports training and performance evaluation. While primitive computational methods have been proposed for simple analysis such as postures and movements, few can evaluate the high-level quality of sports players such as their skill levels and strategies. We propose a visualization tool to help visualize boxers' motions and assess their skill levels. Our system automatically builds a graph-based representation from motion capture data and reduces the dimension of the graph onto a 3D space so that it can be easily visualized and understood. In particular, our system allows easy understanding of the boxer's boxing behaviours, preferred actions, potential strengths and weaknesses. We demonstrate the effectiveness of our system on different boxers' motions. Our system not only serves as a tool for visualization, but also provides intuitive motion analysis that can be further used beyond sports science.

@misc{wrro106266,
month = {October},
author = {HPH Shum and H Wang and ESL Ho and T Komura},
booktitle = {The 9th International Conference on Motion in Games (MIG '16)},
title = {SkillVis: A Visualization Tool for Boxing Skill Assessment},
publisher = {ACM},
year = {2016},
journal = {MIG '16 Proceedings of the 9th International Conference on Motion in Games},
pages = {145--153},
keywords = {Motion Graph, Information Visualization, Dimensionality Reduction},
url = {http://eprints.whiterose.ac.uk/106266/},
abstract = {Motion analysis and visualization are crucial in sports science for sports training and performance evaluation. While primitive computational methods have been proposed for simple analysis such as postures and movements, few can evaluate the high-level quality of sports players such as their skill levels and strategies. We propose a visualization tool to help visualize boxers' motions and assess their skill levels. Our system automatically builds a graph-based representation from motion capture data and reduces the dimension of the graph onto a 3D space so that it can be easily visualized and understood. In particular, our system allows easy understanding of the boxer's boxing behaviours, preferred actions, potential strengths and weaknesses. We demonstrate the effectiveness of our system on different boxers' motions. Our system not only serves as a tool for visualization, but also provides intuitive motion analysis that can be further used beyond sports science.}
}

H. Wang and C. O'Sullivan, Globally Continuous and Non-Markovian Crowd Activity Analysis from Videos, Springer, 2016.

Automatically recognizing activities in video is a classic problem in vision and helps to understand behaviors, describe scenes and detect anomalies. We propose an unsupervised method for such purposes. Given video data, we discover recurring activity patterns that appear, peak, wane and disappear over time. By using non-parametric Bayesian methods, we learn coupled spatial and temporal patterns with minimum prior knowledge. To model the temporal changes of patterns, previous works compute Markovian progressions or locally continuous motifs whereas we model time in a globally continuous and non-Markovian way. Visually, the patterns depict flows of major activities. Temporally, each pattern has its own unique appearance-disappearance cycles. To compute compact pattern representations, we also propose a hybrid sampling method. By combining these patterns with detailed environment information, we interpret the semantics of activities and report anomalies. Also, our method fits data better and detects anomalies that were difficult to detect previously.

@misc{wrro106097,
volume = {9909},
month = {September},
author = {H Wang and C O'Sullivan},
note = {(c) 2016, Springer International Publishing. This is an author produced version of a paper published in Lecture Notes in Computer Science. Uploaded in accordance with the publisher's self-archiving policy.},
booktitle = {European Conference on Computer Vision (ECCV) 2016},
title = {Globally Continuous and Non-Markovian Crowd Activity Analysis from Videos},
publisher = {Springer},
year = {2016},
journal = {Computer Vision - ECCV 2016: Lecture Notes in Computer Science},
pages = {527--544},
url = {http://eprints.whiterose.ac.uk/106097/},
abstract = {Automatically recognizing activities in video is a classic problem in vision and helps to understand behaviors, describe scenes and detect anomalies. We propose an unsupervised method for such purposes. Given video data, we discover recurring activity patterns that appear, peak, wane and disappear over time. By using non-parametric Bayesian methods, we learn coupled spatial and temporal patterns with minimum prior knowledge. To model the temporal changes of patterns, previous works compute Markovian progressions or locally continuous motifs whereas we model time in a globally continuous and non-Markovian way. Visually, the patterns depict flows of major activities. Temporally, each pattern has its own unique appearance-disappearance cycles. To compute compact pattern representations, we also propose a hybrid sampling method. By combining these patterns with detailed environment information, we interpret the semantics of activities and report anomalies. Also, our method fits data better and detects anomalies that were difficult to detect previously.}
}

H. Wang, J. Ondřej, and C. O'Sullivan, Path Patterns: Analyzing and Comparing Real and Simulated Crowds, ACM, 2016.

Crowd simulation has been an active and important area of research in the field of interactive 3D graphics for several decades. However, only recently has there been an increased focus on evaluating the fidelity of the results with respect to real-world situations. The focus to date has been on analyzing the properties of low-level features such as pedestrian trajectories, or global features such as crowd densities. We propose a new approach based on finding latent Path Patterns in both real and simulated data in order to analyze and compare them. Unsupervised clustering by non-parametric Bayesian inference is used to learn the patterns, which themselves provide a rich visualization of the crowd's behaviour. To this end, we present a new Stochastic Variational Dual Hierarchical Dirichlet Process (SV-DHDP) model. The fidelity of the patterns is then computed with respect to a reference, thus allowing the outputs of different algorithms to be compared with each other and/or with real data accordingly.

@misc{wrro106101,
month = {February},
author = {H Wang and J Ond{\v r}ej and C O'Sullivan},
note = {{\copyright} 2016, The Authors. Publication rights licensed to ACM. This is the author's version of the work. It is posted here for your personal use. Not for redistribution. The definitive version was published in Proceedings of the 20th ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games, https://doi.org/10.1145/2856400.2856410.},
booktitle = {I3D '16: 20th ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games},
editor = {C Wyman and C Yuksel and SN Spencer},
title = {Path Patterns: Analyzing and Comparing Real and Simulated Crowds},
publisher = {ACM},
year = {2016},
journal = {Proceedings},
pages = {49--57},
keywords = {Crowd Simulation, Crowd Comparison, Data-Driven, Clustering, Hierarchical Dirichlet Process, Stochastic Optimization},
url = {http://eprints.whiterose.ac.uk/106101/},
abstract = {Crowd simulation has been an active and important area of research in the field of interactive 3D graphics for several decades. However, only recently has there been an increased focus on evaluating the fidelity of the results with respect to real-world situations. The focus to date has been on analyzing the properties of low-level features such as pedestrian trajectories, or global features such as crowd densities. We propose a new approach based on finding latent Path Patterns in both real and simulated data in order to analyze and compare them. Unsupervised clustering by non-parametric Bayesian inference is used to learn the patterns, which themselves provide a rich visualization of the crowd's behaviour. To this end, we present a new Stochastic Variational Dual Hierarchical Dirichlet Process (SV-DHDP) model. The fidelity of the patterns is then computed with respect to a reference, thus allowing the outputs of different algorithms to be compared with each other and/or with real data accordingly.}
}

H. Wang, E. Ho, and T. Komura, An energy-driven motion planning method for two distant postures, IEEE Transactions on Visualization and Computer Graphics, vol. 21, iss. 1, p. 18–30, 2015.

In this paper, we present a local motion planning algorithm for character animation. We focus on motion planning between two distant postures where linear interpolation leads to penetrations. Our framework has two stages. The motion planning problem is first solved as a Boundary Value Problem (BVP) on an energy graph which encodes penetrations, motion smoothness and user control. Having established a mapping from the configuration space to the energy graph, a fast and robust local motion planning algorithm is introduced to solve the BVP to generate motions that could only previously be computed by global planning methods. In the second stage, a projection of the solution motion onto a constraint manifold is proposed for more user control. Our method can be integrated into current keyframing techniques. It also has potential applications in motion planning problems in robotics.
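A toy 2-D illustration of the first stage described above: instead of linearly interpolating between two distant configurations (which here drives the path through an obstacle disc, standing in for penetration), intermediate waypoints descend the gradient of an energy combining a smoothness term with a repulsion term. The function name, the obstacle, and the energy weights are illustrative only, not the paper's BVP formulation.

```python
import math

def plan_path(start, end, n=8, iters=400, lr=0.05):
    # waypoints initialised on the straight line between the two configurations
    pts = [[start[0] + (end[0] - start[0]) * i / (n + 1),
            start[1] + (end[1] - start[1]) * i / (n + 1)] for i in range(1, n + 1)]
    obstacle, radius = (0.5, 0.05), 0.3   # the straight line penetrates this disc
    for _ in range(iters):
        full = [list(start)] + pts + [list(end)]
        for i, p in enumerate(pts, start=1):
            # smoothness term: pull each waypoint toward its neighbours' midpoint
            gx = p[0] - 0.5 * (full[i - 1][0] + full[i + 1][0])
            gy = p[1] - 0.5 * (full[i - 1][1] + full[i + 1][1])
            dx, dy = p[0] - obstacle[0], p[1] - obstacle[1]
            d = math.hypot(dx, dy)
            if d < radius:                # penetration term: push out of the disc
                gx -= (radius - d) * dx / (d + 1e-9)
                gy -= (radius - d) * dy / (d + 1e-9)
            p[0] -= lr * gx
            p[1] -= lr * gy
    return pts

path = plan_path((0.0, 0.0), (1.0, 0.0))
clearance = min(math.hypot(x - 0.5, y - 0.05) for x, y in path)
# the straight line came within 0.075 of the obstacle centre; the refined
# waypoints end up well outside it
print(clearance > 0.1)
```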

@article{wrro106108,
volume = {21},
number = {1},
month = {January},
author = {H Wang and ESL Ho and T Komura},
note = {{\copyright} 2014 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works for resale or redistribution to servers or lists, or reuse of any copyrighted components of this work in other works.},
title = {An energy-driven motion planning method for two distant postures},
publisher = {IEEE},
doi = {10.1109/TVCG.2014.2327976},
year = {2015},
journal = {IEEE Transactions on Visualization and Computer Graphics},
pages = {18--30},
keywords = {Planning; Interpolation; Equations; Couplings; Animation; Manifolds; Joints},
url = {http://eprints.whiterose.ac.uk/106108/},
abstract = {In this paper, we present a local motion planning algorithm for character animation. We focus on motion planning between two distant postures where linear interpolation leads to penetrations. Our framework has two stages. The motion planning problem is first solved as a Boundary Value Problem (BVP) on an energy graph which encodes penetrations, motion smoothness and user control. Having established a mapping from the configuration space to the energy graph, a fast and robust local motion planning algorithm is introduced to solve the BVP to generate motions that could only previously be computed by global planning methods. In the second stage, a projection of the solution motion onto a constraint manifold is proposed for more user control. Our method can be integrated into current keyframing techniques. It also has potential applications in motion planning problems in robotics.}
}

E. Ho, H. Wang, and T. Komura, A multi-resolution approach for adapting close character interaction, ACM, 2014.

Synthesizing close interactions such as dancing and fighting between characters is a challenging problem in computer animation. While encouraging results are presented in [Ho et al. 2010], the high computation cost makes the method unsuitable for interactive motion editing and synthesis. In this paper, we propose an efficient multiresolution approach in the temporal domain for editing and adapting close character interactions based on the Interaction Mesh framework. In particular, we divide the original large spacetime optimization problem into multiple smaller problems such that the user can observe the adapted motion while playing-back the movements during run-time. Our approach is highly parallelizable, and achieves high performance by making use of multi-core architectures. The method can be applied to a wide range of applications including motion editing systems for animators and motion retargeting systems for humanoid robots.

@misc{wrro106110,
month = {November},
author = {ESL Ho and H Wang and T Komura},
note = {{\copyright} 2014 ACM. This is the author's version of the work. It is posted here by permission of ACM for your personal use. Not for redistribution. The definitive version was published in VRST '14 Proceedings of the 20th ACM Symposium on Virtual Reality Software and Technology, http://doi.acm.org/10.1145/2671015.2671020. Uploaded in accordance with the publisher's self-archiving policy.},
booktitle = {20th ACM Symposium on Virtual Reality Software and Technology (VRST 14)},
title = {A multi-resolution approach for adapting close character interaction},
publisher = {ACM},
year = {2014},
journal = {Proceedings},
pages = {97--106},
keywords = {Character animation, close interaction, spacetime constraints},
url = {http://eprints.whiterose.ac.uk/106110/},
abstract = {Synthesizing close interactions such as dancing and fighting between characters is a challenging problem in computer animation. While encouraging results are presented in [Ho et al. 2010], the high computation cost makes the method unsuitable for interactive motion editing and synthesis. In this paper, we propose an efficient multiresolution approach in the temporal domain for editing and adapting close character interactions based on the Interaction Mesh framework. In particular, we divide the original large spacetime optimization problem into multiple smaller problems such that the user can observe the adapted motion while playing-back the movements during run-time. Our approach is highly parallelizable, and achieves high performance by making use of multi-core architectures. The method can be applied to a wide range of applications including motion editing systems for animators and motion retargeting systems for humanoid robots.}
}

X. Zhao, H. Wang, and T. Komura, Indexing 3d scenes using the interaction bisector surface, ACM Transactions on Graphics (TOG), vol. 33, iss. 3, 2014.

The spatial relationship between different objects plays an important role in defining the context of scenes. Most previous 3D classification and retrieval methods take into account either the individual geometry of the objects or simple relationships between them such as the contacts or adjacencies. In this article we propose a new method for the classification and retrieval of 3D objects based on the Interaction Bisector Surface (IBS), a subset of the Voronoi diagram defined between objects. The IBS is a sophisticated representation that describes topological relationships such as whether an object is wrapped in, linked to, or tangled with others, as well as geometric relationships such as the distance between objects. We propose a hierarchical framework to index scenes by examining both the topological structure and the geometric attributes of the IBS. The topology-based indexing can compare spatial relations without being severely affected by local geometric details of the object. Geometric attributes can also be applied in comparing the precise way in which the objects are interacting with one another. Experimental results show that our method is effective at relationship classification and content-based relationship retrieval.
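The core object in the paper, the Interaction Bisector Surface, is a subset of the Voronoi diagram between objects, i.e. points equidistant from both. The brute-force 2-D sketch below samples a grid and keeps samples whose nearest distances to two point sets nearly match; the real method works on 3-D meshes and goes on to extract topological and geometric features from the result. All names and data are illustrative.

```python
import math

def nearest_dist(p, pts):
    return min(math.dist(p, q) for q in pts)

def approx_bisector(obj_a, obj_b, lo=-1.0, hi=1.0, res=40, tol=0.05):
    """Grid samples roughly equidistant from the two point sets."""
    step = (hi - lo) / res
    samples = []
    for i in range(res + 1):
        for j in range(res + 1):
            p = (lo + i * step, lo + j * step)
            if abs(nearest_dist(p, obj_a) - nearest_dist(p, obj_b)) < tol:
                samples.append(p)
    return samples

# two point "objects", mirror images across the y-axis
a = [(-0.5, y / 10) for y in range(-5, 6)]
b = [(0.5, y / 10) for y in range(-5, 6)]
bisector = approx_bisector(a, b)
# for mirror-image objects the bisector hugs the x = 0 line
print(all(abs(x) < 0.1 for x, _ in bisector))  # True
```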

@article{wrro106156,
volume = {33},
number = {3},
month = {May},
author = {X Zhao and H Wang and T Komura},
note = {{\copyright} ACM, 2014. This is the author's version of the work. It is posted here by permission of ACM for your personal use. Not for redistribution. The definitive version was published in ACM Transactions on Graphics (TOG) , 33 (3), May 2014, http://doi.acm.org/10.1145/2574860. Uploaded in accordance with the publisher's self-archiving policy.},
title = {Indexing 3d scenes using the interaction bisector surface},
publisher = {ACM},
year = {2014},
journal = {ACM Transactions on Graphics (TOG)},
keywords = {Algorithms, Design, Experimentation, Theory; Spatial relationships, classification, context-based retrieval},
url = {http://eprints.whiterose.ac.uk/106156/},
abstract = {The spatial relationship between different objects plays an important role in defining the context of scenes. Most previous 3D classification and retrieval methods take into account either the individual geometry of the objects or simple relationships between them such as the contacts or adjacencies. In this article we propose a new method for the classification and retrieval of 3D objects based on the Interaction Bisector Surface (IBS), a subset of the Voronoi diagram defined between objects. The IBS is a sophisticated representation that describes topological relationships such as whether an object is wrapped in, linked to, or tangled with others, as well as geometric relationships such as the distance between objects. We propose a hierarchical framework to index scenes by examining both the topological structure and the geometric attributes of the IBS. The topology-based indexing can compare spatial relations without being severely affected by local geometric details of the object. Geometric attributes can also be applied in comparing the precise way in which the objects are interacting with one another. Experimental results show that our method is effective at relationship classification and content-based relationship retrieval.}
}

H. Wang and T. Komura, Energy-Based Pose Unfolding and Interpolation for 3D Articulated Characters, Springer Verlag, 2011.

In this paper, we show results of controlling a 3D articulated human body model by using a repulsive energy function. The idea is based on the energy-based unfolding and interpolation, which are guaranteed to produce intersection-free movements for closed 2D linkages. Here, we apply those approaches for articulated characters in 3D space. We present the results of two experiments. In the initial experiment, starting from a posture that the body limbs are tangled with each other, the body is controlled to unfold tangles and straighten the limbs by moving the body in the gradient direction of an energy function based on the distance between two arbitrary linkages. In the second experiment, two different postures of limbs being tangled are interpolated by guiding the body using the energy function. We show that intersection free movements can be synthesized even when starting from complex postures that the limbs are intertwined with each other. At the end of the paper, we discuss about the limitations of the method and future possibilities of this approach.
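The unfolding idea described above can be sketched in miniature: a chain of joints carries a repulsive energy summed over non-adjacent pairs (large when limbs are close), and the pose moves down the numerical gradient of that energy so the chain spreads out. The paper operates on an articulated 3-D body with bone-length constraints; this free-point 2-D version, with invented names and data, only illustrates the energy descent.

```python
import math

def energy(joints):
    """Repulsive energy over non-adjacent joint pairs: large when the
    chain is cramped, small when it is spread out."""
    e = 0.0
    for i in range(len(joints)):
        for j in range(i + 2, len(joints)):   # skip adjacent (bone) pairs
            e += 1.0 / (math.dist(joints[i], joints[j]) ** 2 + 1e-9)
    return e

def unfold_step(joints, lr=1e-4, h=1e-5):
    """One descent step along the central-difference gradient of the energy."""
    out = []
    for i, (x, y) in enumerate(joints):
        gx = (energy(joints[:i] + [(x + h, y)] + joints[i + 1:]) -
              energy(joints[:i] + [(x - h, y)] + joints[i + 1:])) / (2 * h)
        gy = (energy(joints[:i] + [(x, y + h)] + joints[i + 1:]) -
              energy(joints[:i] + [(x, y - h)] + joints[i + 1:])) / (2 * h)
        out.append((x - lr * gx, y - lr * gy))
    return out

cramped = [(0.0, 0.0), (0.3, 0.1), (0.1, 0.2), (0.35, 0.3)]
pose = cramped
for _ in range(200):
    pose = unfold_step(pose)
print(energy(pose) < energy(cramped))  # True: the chain has spread out
```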

@misc{wrro105172,
volume = {7060},
month = {November},
author = {H Wang and T Komura},
booktitle = {4th International Workshop on Motion in Games (MIG 2011)},
editor = {JM Allbeck and P Faloutsos},
title = {Energy-Based Pose Unfolding and Interpolation for 3D Articulated Characters},
publisher = {Springer Verlag},
year = {2011},
journal = {Lecture Notes in Computer Science},
pages = {110--119},
keywords = {character animation; motion planning; pose interpolation},
url = {http://eprints.whiterose.ac.uk/105172/},
abstract = {In this paper, we show results of controlling a 3D articulated human body model by using a repulsive energy function. The idea is based on the energy-based unfolding and interpolation, which are guaranteed to produce intersection-free movements for closed 2D linkages. Here, we apply those approaches for articulated characters in 3D space. We present the results of two experiments. In the initial experiment, starting from a posture that the body limbs are tangled with each other, the body is controlled to unfold tangles and straighten the limbs by moving the body in the gradient direction of an energy function based on the distance between two arbitrary linkages. In the second experiment, two different postures of limbs being tangled are interpolated by guiding the body using the energy function. We show that intersection free movements can be synthesized even when starting from complex postures that the limbs are intertwined with each other. At the end of the paper, we discuss about the limitations of the method and future possibilities of this approach.}
}