Workshop Program

2018 Computational Visual Media Conference

The Fourth Workshop on Smart Robotics

Program (18 April, 2018)

TIME SPEECHES & TALKS
09:00 – 09:50 Keynote Speech (Dinesh Manocha)
09:50 – 10:10 TEA BREAK
10:10 – 11:00 Keynote Speech (Ruigang Yang)
11:00 – 12:20
(20 minutes/each)
Paper Session 1

Paper 1.1: An Empirical Comparison of Different Supports in Sequential Robotic Manipulation (Chao Cao, Weiwei Wan, Jia Pan, Kensuke Harada)

Paper 1.2: A Novel Iris Center Localization Method based on Geometric Relationship of Eyeball Model (Jianing Lin, Jialin Yu, Xiaolong Zhou, Zhanpeng Shao, Shengyong Chen)

Paper 1.3: Real-time 3D Reconstruction via A Low-cost Photometric Stereo System (Zhao Song, Zhan Song)

Paper 1.4: A Novel Electrostatic Gripper for Fabric Handling (Bin Sun, Xinyu Zhang)
12:20 – 14:00 LUNCH
14:00 – 14:50 Keynote Speech (Hesheng Wang)
14:50 – 15:10 TEA BREAK
15:10 – 16:00 Keynote Speech (Rynson Lau)
16:00 – 17:20
(20 minutes/each)
Paper Session 2

Paper 2.1: Energy-Efficient Coverage Path Planning for Freeform Surfaces (Chenming Wu, Chengkai Dai, Yong-Jin Liu, Xianfeng David Gu, Charlie C. L. Wang)

Paper 2.2: Adaptive Visual Servoing for an Underwater Soft Robot with a Calibration-free Camera (Fan Xu, Hesheng Wang)

Paper 2.3: Efficient Inter-Process Communication Framework for Robotics Middleware (Yu-Ping Wang, Wen-De Tan, Shi-Min Hu)

Paper 2.4: Understanding Reinforcement Learning on Self-Driving (Yurong You, Zhaozhe Song, Chen Wang, Cewu Lu)
18:00 – 20:00 DINNER

 


Keynote Speaker I:

Dinesh Manocha

University of North Carolina at Chapel Hill
University of Maryland at College Park
http://gamma.cs.unc.edu/AutonoVi
Date & Time: 9:00-9:50

Talk Title: Simulation and Navigation for Autonomous Vehicles in Dense Scenarios

Talk Abstract:

In this talk, we give an overview of our recent work on simulation and navigation technologies for autonomous vehicles. Recently, there has been considerable interest in developing simulators and virtual worlds for self-driving vehicles. We present AutonoVi-Sim, a novel high-fidelity simulation platform for testing autonomous driving algorithms. AutonoVi-Sim is a collection of high-level extensible modules that allows for the rapid development and testing of vehicle configurations and facilitates construction of complex road networks. AutonoVi-Sim supports multiple vehicles with unique steering or acceleration limits, taking into account vehicle dynamics constraints. In addition, AutonoVi-Sim supports navigation for non-vehicle traffic participants such as cyclists and pedestrians. AutonoVi-Sim also facilitates data analysis, allowing for capturing video from the vehicle’s perspective, exporting sensor data such as relative positions of other traffic participants, camera data for a specific sensor, and detection and classification results. We highlight its performance in traffic and driving scenarios. We also describe new algorithms for planning and navigation in dense road conditions. Our approach takes into account the shapes and dynamics of road entities such as cars, pedestrians, bicycles, and trucks, and uses them to design local navigation methods. We also infer driver behaviors from vehicle trajectories and use them to design safe navigation strategies.

Short Bio:

Dinesh Manocha is currently the Phi Delta Theta/Mason Distinguished Professor of Computer Science at the University of North Carolina at Chapel Hill. In Fall 2018, he will join the University of Maryland at College Park. He received his Ph.D. in Computer Science from the University of California at Berkeley in 1992. He has published more than 480 papers, and some of the software systems related to collision detection, GPU-based algorithms, and geometric computing developed by his group have been downloaded by more than 200,000 users and are widely used in industry. Along with his students, Manocha has received 16 best paper awards at leading conferences. He has supervised 33 Ph.D. dissertations and is a fellow of ACM, AAAS, AAAI, and IEEE. He received the Distinguished Alumni Award from the Indian Institute of Technology, Delhi.

 

Keynote Speaker II:

Ruigang Yang

Robotics and Autonomous Driving Lab @ Baidu Beijing
University of Kentucky
http://vis.uky.edu/~ryang/
Date & Time: 10:10-11:00

Talk Title: Open-Source Autonomous Driving

Talk Abstract:

Autonomous cars are arguably the most anticipated smart robots of the near future. In this talk I will present Baidu’s open-source effort in autonomous driving, the Apollo project. I will talk about the architecture and capabilities provided by Apollo. Then I will introduce ApolloScape, a set of open-access tools and datasets for autonomous driving research. I will conclude with a number of open questions in the quest for safe and robust autonomous driving technology.

Short Bio:

Ruigang Yang is currently Chief Scientist for 3D Vision at Baidu Research, where he leads the Robotics and Autonomous Driving Lab (RAL). Before joining Baidu, he was a full professor of Computer Science at the University of Kentucky. He obtained his PhD degree from the University of North Carolina at Chapel Hill and his MS degree from Columbia University. His research interests span computer graphics and computer vision, in particular 3D reconstruction and 3D data analysis. He has published over 100 papers, which, according to Google Scholar, have received close to 10,000 citations with an h-index of 48 (as of 2017). He has received a number of awards, including the US NSF CAREER award in 2004 and the Dean’s Research Award in 2013. He is currently an associate editor of IEEE TPAMI and a senior member of IEEE.

 

Keynote Speaker III:

Hesheng Wang

Shanghai Jiao Tong University
http://robotics.sjtu.edu.cn/index.php?r=profile/view&id=10672
Date & Time: 14:00-14:50

Talk Title: Visual Servoing of Robots in Unstructured Environments

Talk Abstract:

Visual servoing is an important technique that uses visual information for the feedback control of robots. To implement a visual servo controller, an important step is to calibrate the intrinsic and extrinsic parameters of the camera. It is well known that camera calibration is costly and tedious, and the calibration accuracy of these parameters significantly affects the control errors. It is therefore desirable to use uncalibrated visual signals directly in controller design. By directly incorporating visual feedback in the dynamic control loop, it is possible to enhance system stability and control performance. Dynamic visual servoing designs the joint inputs of robot manipulators directly from visual feedback, taking the nonlinear dynamics of the robot manipulator into account. In this talk, various visual servoing approaches that work in uncalibrated environments will be presented. These methods have also been implemented on many robot systems, such as manipulators, mobile robots, soft robots, and quadrotors.

Short Bio:

Hesheng Wang received the B.Eng. degree in Electrical Engineering from the Harbin Institute of Technology, Harbin, China, in 2002, and the M.Phil. and Ph.D. degrees in Automation & Computer-Aided Engineering from the Chinese University of Hong Kong, Hong Kong, in 2004 and 2007, respectively. From 2007 to 2009, he was a Postdoctoral Fellow and Research Assistant in the Department of Mechanical and Automation Engineering, The Chinese University of Hong Kong. He joined Shanghai Jiao Tong University as an Associate Professor in 2009. Currently, he is a Professor in the Department of Automation, Shanghai Jiao Tong University, China. He is also the director of the medical robot joint research center of the Chinese University of Hong Kong and Shanghai Jiao Tong University. He has worked as a visiting professor at the University of Zurich, Switzerland. His research interests include visual servoing, service robots, robot control, and computer vision.
Prof. Wang has published more than 100 papers in refereed professional journals and international conference proceedings. He is an associate editor of Robotics and Biomimetics, Assembly Automation, the International Journal of Humanoid Robotics, and IEEE Transactions on Robotics. He has been a guest editor of Mathematical Problems in Engineering, the Journal of Applied Mathematics, and the International Journal of Advanced Robotic Systems. He served as an associate editor on the Conference Editorial Board of the IEEE Robotics and Automation Society from 2011 to 2015. He was the program chair of the 2014 IEEE International Conference on Robotics and Biomimetics, the general chair of the 2016 IEEE International Conference on Real-time Computing and Robotics, and is the program chair of the 2019 IEEE/ASME International Conference on Advanced Intelligent Mechatronics. He holds 20 national patents. He was a recipient of the Shanghai Rising Star Award in 2014 and the National Science Fund for Outstanding Young Scholars in 2017. He is a Senior Member of IEEE.

 

Keynote Speaker IV:

Rynson Lau

City University of Hong Kong
http://www.cs.cityu.edu.hk/~rynson/
Date & Time: 15:10-16:00

Talk Title: Saliency Analysis and Graphics Design

Talk Abstract:

We have been applying machine/deep learning techniques to solve two related problems: saliency detection and graphics design. In this talk, I would like to present our recent projects on these two problems, and discuss how saliency analysis could benefit graphics design.

Short Bio:

Rynson Lau received his Ph.D. degree from University of Cambridge. He was on the faculty of Durham University and Hong Kong Polytechnic University. He is now with City University of Hong Kong.
Rynson serves on the Editorial Boards of Computer Graphics Forum and Computer Animation and Virtual Worlds. He has served as Guest Editor of a number of journal special issues, including ACM Trans. on Internet Technology, IEEE Multimedia, IEEE Trans. on Multimedia, IEEE Trans. on Visualization and Computer Graphics, and IEEE Computer Graphics and Applications. In addition, he has also served on the committees of a number of conferences, including as Program Co-chair of ACM VRST 2004, ACM MTDL 2009, and IEEE U-Media 2010, and as Conference Co-chair of CASA 2005, ACM VRST 2005, ICWL 2007, ACM MDI 2009, ACM VRST 2010, and ACM VRST 2014. His research interests include computer graphics and computer vision.