CVM 2019 Programme
Conference venue: CB 2.6, The Chancellor’s Building, University of Bath
Download PDF conference programme
Day One | Wednesday, 24 April 2019
08:45 – 09:00 | Welcome and Opening Remarks
09:00 – 09:45 | Invited talk 1: "CreativeAI: Data-driven Editable 3D Content Creation", Niloy J. Mitra (University College London, UK)
Session 1 | Rendering & Inverse Rendering (Session Chair: David Mould)
09:45 – 10:10 | Yulin Liang, Beibei Wang, Lu Wang, Nicolas Holzschuch. Fast Computation of Single Scattering in Participating Media with Refractive Boundaries using Frequency Analysis
10:10 – 10:35 | Xin Yang, Wenbo Hu, Dawei Wang, Lijing Zhao, Baocai Yin, Qiang Zhang, Xiaopeng Wei, Hongbo Fu. DEMC: A Deep Dual-Encoder Network for Denoising Monte Carlo Rendering
10:35 – 11:00 | Jiri Filip, Radomir Vavra. Image-based Appearance Acquisition of Effect Coatings
11:00 – 11:20 | Coffee Break
Session 2 | Packing and Mosaic (Session Chair: Yukun Lai)
11:20 – 11:45 | Cong Feng, Minglun Gong, Oliver Deussen, Hui Huang. Treemapping via Balanced Partitioning
11:45 – 12:10 | Lars Doyle, Forest Anderson, Ehren Choy, David Mould. Automated Pebble Mosaic Stylization of Images
12:10 – 12:35 | Pengfei Xu, Jianqiang Ding, Hao Zhang, Hui Huang. Discernible Image Mosaic with Edge-Aware Adaptive Tiles
12:35 – 14:00 | Lunch
Session 3 | Geometry Processing (Session Chair: Ralph Martin)
14:00 – 14:25 | Wensong Wang, Zhen Fang, Shiqing Xin, Ying He, Yuanfeng Zhou, Shuangmin Chen. Tracing High-quality Isolines for Discrete Geodesic Distance Fields
14:25 – 14:50 | Jun-Xiong Cai, Tai-Jiang Mu, Yu-Kun Lai, Shi-Min Hu. Deep Point-based Scene Labeling with Depth Mapping and Geometric Patch Feature Encoding
14:50 – 15:15 | Haiyue Fang, Xiaogang Wang, Bin Zhou, Zheyuan Cai, Yahao Shi, Xun Sun, Shilin Wu. Learning Semantic Abstraction of Shape via 3D Region of Interest
15:15 – 15:40 | Yujie Yuan, Yu-Kun Lai, Tong Wu, Shihong Xia, Lin Gao. Data-Driven Weight Optimization for Real-Time Mesh Deformation
15:40 – 16:00 | Coffee Break
16:00 – 17:00 | Poster Session
Qing Ran, Yongliang Yang, Jieqing Feng. High-precision Human Body Acquisition via Multi-view Binocular Stereopsis
Xiao-Chang Liu, Shao-Ping Lu, Jie Wang, Ming-Ming Cheng. DeepBrush: CNN-based Structured Style Transfer on 2D/3D Surfaces
Biyao Shao, Feng Xu, Hao Zhao, Chenggang Yan. 3D Room Layout Estimation from a Single RGB Image
Shan Luo, Jieqing Feng. Symmetry-aware Kinematic Skeleton Extraction of a 3D Human Body Model
Lili Wang, Xinglun Liang, Jianjun Chen. Generating Light Sources for 3D Building with a Single Night Photo
Da Chen, Luca Benedetti, Dmitry Kit, Wenbin Li, Peter Hall. NPD: The Natural Phenomena Dataset
Yuxi Jin, Ping Li, Wenxiao Wang, Bin Sheng, Enhua Wu. Cross-Scale Based 3D Effects Pencil Drawing Image Generation
Xiangyang Su, Linjing Lai, Zhongchang Sun, Lei Zhang, Hua Huang. Infrared and Visible Image Fusion Guided by Hierarchical Saliency Structure
Paul Maximilian Bittner, Jan-Philipp Tauscher, Steve Grogorick, Marcus Magnor. Evaluation of Optimised Centres of Rotation Skinning
Xiaohan Liu, Baorong Yang, Junfeng Yao, Lei Lan, Liling Zheng. Sketch-based 3D Shape Retrieval via Convolutional Neural Networks Using an Angle Matrix Feature as Shape Descriptor
Congyue Deng, Jiahui Huang, Yongliang Yang, Shi-Min Hu. Interactive Modeling of Lofting Shapes from a Single Image
Zhifeng Xie, Shuhan Zhang, Jiaping Wu. CNN-based Detection, Recognition, and Regularization for Cigarette Code
19:00 – 21:00 | Drink Reception with Snacks at The Roman Baths
Day Two | Thursday, 25 April 2019
09:00 – 09:45 | Invited talk 2: "Expressive Modelling of Animated Virtual Worlds", Marie-Paule Cani (Ecole Polytechnique, France)
Session 4 | Character Animation and Poses (Session Chair: Lin Gao)
09:45 – 10:10 | Yilong Liu, Chengwei Zheng, Feng Xu, Xin Tong, Baining Guo. Data-Driven 3D Neck Modeling and Animation
10:10 – 10:35 | Shaojun Bian, Zhigang Deng, Ehtzaz Chaudhry, Lihua You, Xiaosong Yang, Lei Guo, Hassan Ugail, Xiaogang Jin, Zhidong Xiao, Jian Jun Zhang. Efficient and Realistic Character Animation through Analytical Physics-based Skin Deformation
10:35 – 11:00 | Shuai Li, Zheng Fang, Wenfeng Song, Aimin Hao, Hong Qin. Bidirectional Optimization Coupled Lightweight Networks for Efficient and Robust Multi-Person Pose Estimation
11:00 – 11:20 | Coffee Break
Session 5 | Segmentation & Reconstruction (Session Chair: Lei Zhang)
11:20 – 11:45 | Jiahui Huang, Jun Gao, Vignesh Ganapathi-Subramanian, Hao Su, Yin Liu, Chengcheng Tang, Leonidas Guibas. DeepPrimitive: Image Decomposition by Layered Primitive Detection
11:45 – 12:10 | Salma Alqazzaz, Xianfang Sun, Xin Yang, Len Nokes. Automated Brain Tumor Segmentation on Multi-modal MR Image Using SegNet
12:10 – 12:35 | Bo Ren, Jiacheng Wu, Yalei Lyu, Ming-Ming Cheng, Shaoping Lu. Geometry-aware ICP for Scene Reconstruction from RGB-D Camera
12:35 – 14:00 | Lunch
Session 6 | Recognition (Session Chair: Feng Xu)
14:00 – 14:25 | Liang Han, Pin Tao, Ralph Martin. Livestock Detection in Aerial Images using a Fully Convolutional Network
14:25 – 14:50 | Tailing Yuan, Zhe Zhu, Kun Xu, Chengjun Li, Taijiang Mu, Shi-Min Hu. A Large Chinese Text Dataset in the Wild
14:50 – 15:15 | Min Liu, Yifei Shi, Lintao Zheng, Kai Xu, Hui Huang, Dinesh Manocha. Recurrent 3D Attentional Networks for End-to-End Active Object Recognition
15:15 – 15:40 | Yizhi Song, Ruochen Fan, Sharon Huang, Zhe Zhu, Ruofeng Tong. A Three-Stage Real-time Detector for Traffic Signs in Large Panoramas
15:40 – 16:00 | Coffee Break
Session 7 | Image Synthesis (Session Chair: Christian Richardt)
16:00 – 16:25 | Shu-Yang Zhang, Run-Ze Liang, Miao Wang. ShadowGAN: Shadow Synthesis for Virtual Objects with Conditional Adversarial Networks
16:25 – 16:50 | Xiaochuan Wang, Xiaohui Liang, Bailin Yang, Frederick W.B. Li. No-reference Synthesized Image Quality Assessment with Convolutional Neural Network and Local Image Saliency
16:50 – 17:15 | Na Ding, Yepeng Liu, Linwei Fan, Caiming Zhang. Single Image Super-Resolution via Dynamic Lightweight Database with Local-Feature Based Interpolation
19:00 – 21:00 | Banquet at Bath Function Rooms
Day Three | Friday, 26 April 2019
09:00 – 09:45 | Invited talk 3: "A New Chapter of Mobile Photography", Jue Wang (Megvii/Face++ Research, USA)
Session 8 | 3D Printing (Session Chair: Tianjia Shao)
09:45 – 10:10 | Yisong Gao, Lifang Wu, Dongming Yan, Liangliang Nan. Near Support-free Multi-directional 3D Printing via Global-optimal Decomposition
10:10 – 10:35 | Minghai Chen, Fan Xu, Lin Lu. Fabricable Patterns Collage along a Given Boundary
10:35 – 10:55 | Coffee Break
Session 9 | Image Processing (Session Chair: Miao Wang)
10:55 – 11:20 | Gerben Jan Hettinga, Rowan van Beckhoven, Jiri Kosinka. Noisy Gradient Meshes: Augmenting Gradient Meshes with Procedural Noise
11:20 – 11:45 | Mengke Yuan, Longquan Dai, Dongming Yan, Li-Qiang Zhang, Jun Xiao, Xiaopeng Zhang. Fast and Error-Bounded Space-Variant Bilateral Filtering
11:45 – 12:10 | Dewang Li, Linjing Lai, Hua Huang. Defocus Hyperspectral Image Deblurring with Adaptive Reference Image and Scale Map
12:10 – 12:25 | Closing Session
12:25 – 13:30 | Lunch
Keynote Speakers
Title:
CreativeAI: Data-driven Editable 3D Content Creation
Abstract:
A long-standing goal of Computer Graphics is to create high-quality editable geometric content for a variety of applications including games, movies, product design, and engineering simulation. Decades of research have focused on developing tools to simplify such creative workflows. However, the process continues to rely heavily on highly skilled experts creating customized content with extensive manual effort. This is tedious and expensive. Moreover, few options allow reusing data across multiple content creation scenarios, even for very closely related tasks. Advances in machine learning open up new avenues to fundamentally change content creation workflows. In this talk, I will present the latest results in this area and discuss what future content creation workflows are likely to look like. The talk will feature our latest methods in the context of pattern, geometry, and texture creation, and discuss open challenges in this area. More at http://geometry.cs.ucl.ac.uk/publications.php
Speaker’s Biography:
Niloy J. Mitra leads the Smart Geometry Processing group in the Department of Computer Science at University College London. He received his PhD from Stanford University under the guidance of Leonidas Guibas. His research interests include shape analysis, CreativeAI, and computational design and fabrication. Niloy received the ACM SIGGRAPH Significant New Researcher Award in 2013 and the BCS Roger Needham Award in 2015. His work has twice been featured as a research highlight in the Communications of the ACM, received the Best Paper Award at the ACM Symposium on Geometry Processing 2014, and received an Honourable Mention at Eurographics 2014. Besides research, Niloy is an active DIYer and loves reading, bouldering, and cooking.
Title:
Expressive Modelling of Animated Virtual Worlds
Abstract:
While the use of digital models and simulation has already spread to many disciplines, recent advances in Computer Graphics open the way to much lighter ways of creating 3D content. In this talk, I will show how the expressive modelling paradigm – namely, the combination of graphical models embedding knowledge with gestural interfaces such as sketching, sculpting or copy-pasting – can be extended to full, animated 3D worlds. We will discuss how prior knowledge can be enhanced by learning from examples – the latter possibly created on the fly by the user during a modelling session – and how methods designed for static shapes can be extended to animation. As the examples show, this methodology enables seamless creation of, and interaction with, the virtual worlds the user has in mind, opening new horizons to engineers and scientists as well as to the general public.
Speaker’s Biography:
Marie-Paule Cani is Professor of Computer Science at Ecole Polytechnique, which she joined in 2017 after a 24-year career at Grenoble-INP and Inria. Her research interests cover both shape modelling and computer animation. She has contributed a number of high-level models for shapes and motion, such as implicit surfaces, multi-resolution physically-based animation, and hybrid representations for real-time natural scenes. Following a long-standing interest in virtual sculpture, she has been searching for more expressive ways to create 3D content, such as combining sketch-based interfaces with procedural models or with a combination of knowledge and learning. She received the Eurographics Outstanding Technical Contributions Award in 2011 and a silver medal from CNRS in 2012, was elected to Academia Europaea in 2013, and held an ERC Advanced Grant from 2012 to 2017. She served as Technical Papers Chair of SIGGRAPH 2017 and was President of the Eurographics Association in 2017 and 2018.
Title:
A New Chapter of Mobile Photography
Abstract:
The computational power of mobile devices, especially their neural network processing power, will advance significantly in the near future. Combined with better camera sensors and new camera module designs, this provides an exciting opportunity to revamp traditional mobile imaging technologies and bring new and exciting photography experiences to virtually everyone around the world. It is a brand new chapter in the history of photography, and there has never been a better time to work in this area. In this talk, I will introduce a series of new mobile photography technologies and applications that we are developing in-house, many of which will soon be released to tens of millions of end users worldwide. I will also shed some light on where the industry is heading.
Speaker’s Biography:
Jue Wang is the managing director of Megvii Research USA, leading a research team focused on transforming the mobile user experience with AI. He received his BE and MS from Tsinghua University in Beijing, and his PhD from the University of Washington in Seattle. He was a Principal Scientist at Adobe Research before joining Megvii in 2017. He has published more than 100 research articles in top-tier academic journals and conferences, and holds more than 60 international patents. Dr. Wang also has a strong record of transferring advanced technologies into consumer products. His early work on image matting was incorporated into a commercial product called Silhouette, which won a 2019 Oscar technical award. At Adobe he developed a dozen release-defining new features for Photoshop and After Effects. At Megvii, he and his team have developed new photography techniques that have shipped on tens of millions of mobile phones worldwide.