CVM 2016 Programme 

Day One Wednesday, 6 April 2016    
08:50 – 09:00 Welcome and Opening Remarks
09:00 – 09:45 Invited talk 1
4D Vision for Video-Realistic Interactive Animation
Adrian Hilton (University of Surrey, UK)
   
Session 1 Modeling from Images (Session Chair: Yu-Kun Lai)
09:45 – 10:10 Zhe Zhu, Ralph Martin, Robert Pepperell, Alistair Burleigh and Shimin Hu.
3D Modeling and Motion Parallax for Improved Videoconferencing
10:10 – 10:35 Yuliang Rong, Tianjia Shao, Youyi Zheng, Yin Yang and Kun Zhou.
An Interactive Approach for Functional Prototype Recovery from a Single RGBD Image
10:35 – 11:00 Jing Wu, Paul Rosin, Xianfang Sun and Ralph Martin.
Improving Shape from Shading with Interactive Tabu Search
   
11:00 – 11:20 Coffee Break
   
Session 2 Shape Structure (Session Chair: Ruofeng Tong)
11:20 – 11:45 Qiang Fu, Xiaowu Chen, Xiaoyu Su and Hongbo Fu.
Natural Lines Inspired 3D Shapes Re-Design
11:45 – 12:10 Zishun Liu, Juyong Zhang and Ligang Liu.
Upright Orientation of 3D Shapes with Convolutional Networks
12:10 – 12:35 Haiyong Jiang, Dong-Ming Yan, Weiming Dong, Liangliang Nan and Xiaopeng Zhang.
Symmetrization of Facade Layouts
   
12:35 – 14:00 Lunch
   
Session 3 Geometry Processing (Session Chair: Caiming Zhang)
14:00 – 14:25 Minfeng Xu and Changhe Tu.
Building Binary Orientation Octree for an Arbitrary Scattered Point Set
14:25 – 14:50 Wenpeng Xu, Wei Li and Ligang Liu.
Skeleton-Sectional Structural Analysis for 3D Printing
14:50 – 15:15 Bo Wu, Kai Xu, Yueshan Xiong and Hui Huang.
Skeleton-guided 3D Shape Distance Field Metamorphosis
15:15 – 15:40 Daniel Beale, Peter Hall, Yongliang Yang, Darren Cosker and Neill Campbell.
Fitting Quadrics with a Bayesian Prior
   
15:40 – 16:00 Coffee Break
   
16:00 – 17:00 Posters
David Pickup, Xianfang Sun, Paul Rosin and Ralph Martin.
Skeleton-Based Canonical Forms for Non-Rigid 3D Shape Retrieval
Hongwei Lin.
A computational model of topological and geometric recovery for visual curve completion
Xiaole Zhao, Yadong Wu, Jinsha Tian and Hongying Zhang.
Single Image Super-Resolution via Blind Blurring Estimation and Anchored Space Mapping
Xiao Dong Tang, Ji Xiang Guo, Peng Li and Jian Cheng Lv.
A surgical simulation system for predicting facial soft tissue deformation
Zhimin Zhou, Xu Zhao and Lei Wang.
Parallelized Deformable Part Models with Effective Hypothesis Pruning
Hao Zhu and Qing Wang.
Accurate Disparity Estimation in Light Field using Ground Control Points
Cuixia Ma, Yang Guo and Hongan Wang.
VideoMap: An Interactive and Scalable Visualization for Exploring Video Content
   
17:30 – 18:30 Drink Reception with Snacks
   
Day Two Thursday, 7 April 2016
09:00 – 09:45 Invited talk 2 
Computational Imaging and Display – Hardware-Software Co-design for Imaging Devices
Wolfgang Heidrich (KAUST)
   
Session 4 Stereoscopic Video (Session Chair: Yong-Liang Yang)
09:45 – 10:10 Miao Wang, Xi-Jin Zhang, Jun-Bang Liang, Song-Hai Zhang and Ralph Martin.
Comfort-driven Disparity Adjustment for Stereoscopic Video
10:10 – 10:35 David Roberts and Ioannis Ivrissimtzis.
Quality Measures of Reconstruction Filters for Stereoscopic Volume Rendering
Chunping Zhang, Zhe Ji and Qing Wang.
Decoding and Calibration method on Focused Plenoptic Camera
   
11:00 – 11:20 Coffee Break
   
Session 5 Recognition (Session Chair: Peter Hall)
11:20 – 11:45 Yun-Hao Yuan, Yun Li, Hong-Wei Ge, Xiao-Bo Shen, Guoqing Zhang and Xiaojun Wu.
Learning Multi-kernel Multi-view Canonical Correlations for Image Recognition
11:45 – 12:10 Ashwan Abdulmunem, Yukun Lai and Xianfang Sun.
Saliency Guided Local and Global Descriptors for Effective Action Recognition
12:10 – 12:35 Xi-Jin Zhang, Yi-Fan Lu and Song-Hai Zhang.
Multi-task Learning For Food Identification and Analysis with Deep Convolutional Neural Network
   
12:35 – 14:00 Lunch
   
Session 6 Image Processing (Session Chair: Wolfgang Heidrich)
14:00 – 14:25 Xiang Chen, Weiwei Xu, Sai-Kit Yeung and Kun Zhou.
View-aware Image Object Compositing and Synthesis from Multiple Sources
14:25 – 14:50 Liqiong Wu, Caiming Zhang and Yepeng Liu.
High-resolution Image Based on Directional Fusion of Gradient
14:50 – 15:15 Wenqian Deng, Xuemei Li and Caiming Zhang.
A Modified Fuzzy C-means Algorithm for Brain MR Image Segmentation and Bias Field Correction
   
15:15 – 15:35 Coffee Break
   
Session 7 Scene Analysis (Session Chair: Changhe Tu)
15:35 – 16:00 Shi-Sheng Huang, Hongbo Fu and Shi-Min Hu.
Structure Guided Interior Scene Synthesis via Graph Matching
16:00 – 16:25 Wei Qi, Ming-Ming Cheng, Ali Borji, Huchuan Lu and Lian-Fa Bai.
SaliencyRank: Two-stage Manifold Ranking for Salient Object Detection
16:25 – 16:45 Zhendong Wang, Tongtong Wang, Min Tang and Ruofeng Tong.
Efficient and Robust Strain Limiting and Treatment of Simultaneous Collisions with Semidefinite Programming
   
18:00 – 20:00 Banquet
   
Day Three Friday, 8 April 2016
09:00 – 09:45 Invited talk 3 
Digital Avatars for All: Interactive Face and Hairs
Kun Zhou (Zhejiang University, China)
   
Session 8 Alignment and Calibration (Session Chair: Ralph Martin)
09:45 – 10:10 Tong Lin, Yao Liu, Bo Wang, Liwei Wang and Hongbin Zha.
Local Orthogonality Preserving Alignment for Nonlinear Dimensionality Reduction
10:10 – 10:35 Ke-Li Cheng, Xuan Ju, Ruo-Feng Tong, Min Tang, Jian Chang and Jian-Jun Zhang.
A Linear Approach for Depth and Colour Camera Calibration Using Hybrid Parameters
   
10:35 – 10:55 Coffee Break
   
Session 9 Miscellaneous (Session Chair: Shi-Min Hu)
10:55 – 11:20 Craig Henderson and Ebroul Izquierdo.
Rethinking Random Hough Forests for video database indexing and pattern search
Shaoping Lu, Guillaume Dauphin, Gauthier Lafruit and Adrian Munteanu.
Color retargeting: Interactive time-varying color image composition from time-lapse sequences
11:45 – 12:10 Daniel Kauker, Martin Falk, Guido Reina, Anders Ynnerman and Thomas Ertl.
VoxLink – Combining Sparse Volumetric Data and Geometry for Efficient Rendering
   
12:10 – 12:25 Closing Session
   
12:25 – 13:30 Lunch

Keynote Speakers

Prof. Adrian Hilton

Title:

4D Vision for Video-Realistic Interactive Animation

Abstract:

Over the past decade, advances in computer vision have enabled the 3D reconstruction of dynamic scenes from multiple view video. This has allowed video-based free-viewpoint rendering with the photo-realism of video whilst allowing interactive viewpoint control. This technology, initially pioneered for highly controlled indoor scenes, has been extended to free-viewpoint rendering of large-scale outdoor scenes such as sports for TV production. However, free-viewpoint video content is limited to the replay of the captured performance. This talk will present results of recent research in 4D vision for actor performance capture which allows both video-realistic free-viewpoint rendering and interactive control of movement. Recent research has introduced methods for spatio-temporal alignment and parametric representation of dynamic shape and appearance from captured performance to allow interactive control whilst maintaining the realism of the captured video. This opens up the potential for reuse of 4D performance capture to create video-realistic characters for immersive entertainment. This talk will review recent advances and identify future research challenges for 4D vision in entertainment and human motion analysis.

Speaker’s Biography:

Adrian Hilton, BSc (Hons), DPhil, CEng, is Professor of Computer Vision and Graphics and Director of the Centre for Vision, Speech and Signal Processing at the University of Surrey, UK. He leads research investigating the use of computer vision for applications in entertainment content production, visual interaction and clinical analysis.

His interest is in robust computer vision to model and understand real-world scenes. His work in bridging the gap between real and computer-generated imagery combines the fields of computer vision, graphics and animation to investigate new methods for the reconstruction, modelling and understanding of the real world from images and video. Applications include sports analysis (soccer, rugby and athletics), 3D TV and film production, visual effects, character animation for games, digital doubles for film, and facial animation for visual communication.

Contributions include technologies for the first hand-held 3D scanner, the modelling of people from images, and 3D video for games, broadcast and film production. Current research is focused on video-based measurement in sports, multiple camera systems in film and TV production, and 3D video for highly realistic animation of people and faces. Research is conducted in collaboration with UK companies and international institutions in the creative industries.

Adrian is currently the Principal Investigator of the multi-million-pound EPSRC Programme Grant S3A: ‘Future Spatial Audio for Immersive Listener Experience at Home’ (2013–2018), and he also leads several EU and UK industry projects. He currently holds a five-year Royal Society Wolfson Research Merit Award (2013–2018).


Prof. Wolfgang Heidrich

Title:

Computational Imaging and Display – Hardware-Software Co-design for Imaging Devices

Abstract:

Computational Imaging aims to develop new cameras and imaging modalities that optically encode information about the real world in such a way that it can be captured by image sensors. The resulting images represent detailed information such as scene geometry, motion of solids and liquids, multi-spectral information, or high contrast (high dynamic range), which can then be computationally decoded using inverse methods, machine learning, and numerical optimization. Computational Displays use a similar approach, but in reverse. Here, the goal is to computationally encode a target image that is then optically decoded by the display hardware for presentation to a human observer. Computational displays are capable of generating glasses-free 3D displays, high dynamic range imagery, or images and videos with spatial and/or temporal super-resolution. In this talk, I will give an overview of recent advances and current challenges in this rapidly expanding research area.

Speaker’s Biography:

Wolfgang Heidrich is a Professor of Computer Science and the Director of the Visual Computing Center at King Abdullah University of Science and Technology. He is also a Professor (on leave) at the University of British Columbia. Dr. Heidrich received his PhD in Computer Science from the University of Erlangen in 1999, and then worked as a Research Associate in the Computer Graphics Group of the Max-Planck-Institute for Computer Science in Saarbrücken, Germany, before joining UBC. Dr. Heidrich’s research interests lie at the intersection of computer graphics, computer vision, imaging, and optics. In particular, his more recent work is on computational photography and displays, High Dynamic Range imaging and display, as well as image-based modeling, measurement and rendering, and geometry acquisition. His work on High Dynamic Range Displays served as the basis for the technology behind Brightside Technologies, which was acquired by Dolby in 2007. Dr. Heidrich has served on numerous program committees for top-tier conferences such as Siggraph, Siggraph Asia, Eurographics and EGSR, and in 2016 he is chairing the papers program for both Siggraph Asia and the International Conference on Computational Photography (ICCP). Dr. Heidrich is the recipient of a 2014 Humboldt Research Award.


Prof. Kun Zhou

Title:

Digital Avatars for All: Interactive Face and Hairs

Abstract:

Although realistic face/hair modeling and animation technologies have been widely employed in computer-generated movies, it remains challenging to deploy them in consumer-level applications such as computer games, social networks and other interactive applications. The main difficulties come from the requirement for special equipment, sensitivity to everyday environments, laborious manual work and high computational costs. In this talk, I will introduce our recent work on realistic face/hair modeling and animation, targeted at interactive applications and ordinary users. In particular, I will describe fully automatic approaches to real-time facial tracking and animation with a single web camera, methods for modeling hair from images, and real-time algorithms for realistic hair simulation.

Speaker’s Biography:

Kun Zhou is a Cheung Kong Professor and the Director of the State Key Lab of CAD&CG at Zhejiang University. Earlier, he was a Lead Researcher of the Internet Graphics Group at Microsoft Research Asia. He received his BS and PhD degrees from Zhejiang University in 1997 and 2002, respectively. His research interests include geometry processing, photorealistic rendering, computer animation and GPU parallel computing. He is currently on the editorial boards of ACM Transactions on Graphics and The Visual Computer, and serves on the editorial advisory board of IEEE Spectrum. He is a Fellow of IEEE.