Plenary Talk I
Date : December 7 (Monday), 2015
Graduate School of Information Science and Technology
University of Tokyo, Japan
Superhuman Sports is a form of “Human-Computer Integration” that aims to overcome the somatic and spatial limitations of humanity by merging technology with the body. In Japan, official host of the 2020 Olympics and Paralympics, we hope to create a future of sports where everyone, strong or weak, young or old, abled or disabled, can play and enjoy playing without being disadvantaged. For all of us to compete as equal athletes in the arena of superhuman sports, augmented-human technology can empower us to overcome the biological barriers of individuals and of our species. Our goal for Superhuman Sports is to push human performance to new peaks. This talk will present our vision and discuss the capabilities of Mixed and Augmented Reality for Superhuman Sports, together with concrete steps in our approach such as Telexistence, Optical Camouflage, the Stop-Motion Goggle, and Galvanic Vestibular Stimulation.
Plenary Talk II
Date : December 7 (Monday), 2015
Biorobotics Laboratory, Department of Mechanical and Aerospace Engineering
Seoul National University, Seoul, Korea
Soft robotics is an emerging field of research that uses soft or compliant materials and elements to overcome the limitations of traditional robotics. Traditionally, robots have been used in industrial environments with few unknown parameters. As more and more robots are used to interact with environments that are uncertain and subject to change, a technology that can easily adapt to a changing environment is needed. Soft robotics addresses this issue by using soft and compliant elements in an intelligent way. Bio-inspired robotics is the field of robotics that leads the use of this technology: nature offers many examples of high performance achieved with a soft, intelligent design. In this talk, I will give an overview of various soft bio-inspired robotic technologies and some of the robots being developed at the SNU Biorobotics Laboratory. A water-strider-inspired robotic insect, for example, can jump on water as high as it can on the ground by intelligently controlling the force profile it applies to the water surface. An inchworm-proleg-inspired gripper achieves adaptive gripping by exploiting the buckling effect. Such examples show that high adaptability to the environment can be achieved with a simple, physically intelligent design, and these soft bio-inspired robotic technologies will enable robots to perform effectively in rough, unstructured environments.
Plenary Talk III
Date : December 8 (Tuesday), 2015
Science and Engineering Faculty,
Electrical Engineering, Computer Science,
Robotics and Autonomous Systems
Queensland University of Technology, Brisbane, Australia
The brain circuitry involved in encoding space in rodents has been extensively studied over the past forty years, with an ever-increasing body of knowledge about the components and wiring involved in navigation tasks. The learning and recall of spatial features is known to take place in and around the hippocampus of the rodent, where there is clear evidence of cells that encode the rodent’s position and heading. RatSLAM is a primarily vision-based robotic navigation system based on current models of the rodent hippocampus, which has achieved several significant outcomes in vision-based Simultaneous Localization And Mapping (SLAM), including mapping of an entire suburb using only a low-cost webcam, and continuous navigation over a period of two weeks in a delivery robot experiment. This research led to the development of the SeqSLAM system, which in recent experiments has demonstrated that impressive feats of vision-based navigation can be achieved at any time of day or night, in any weather, and in any season, using visual images as small as 2 pixels in size. In our current research we are investigating the problem of place recognition and visual navigation from two angles. The first is a neuroscience-inspired perspective, modelling the multi-scale neuronal map of space found in the mammalian brain and the variably tolerant and selective visual recognition process in the primate and human brain. The second is an algorithmic perspective, utilizing state-of-the-art deep learning techniques. I will discuss the insights from this research, as well as current and future areas of study, with the aim of stimulating discussion.
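The core idea behind sequence-based place recognition can be sketched compactly. The following is a minimal illustration, not the published SeqSLAM implementation: the descriptors, single-velocity trajectory search, and sum-of-absolute-differences scoring are simplifying assumptions of this sketch.

```python
import numpy as np

def sequence_match(query_descs, db_descs, seq_len=5):
    """Sequence-based matching sketch: rather than matching single
    frames, score short straight trajectories through the
    frame-difference matrix and return the best-supported one."""
    # Difference matrix: D[i, j] = distance between query frame i and
    # database frame j (the real system uses tiny, patch-normalized
    # images; plain descriptor vectors are assumed here).
    D = np.abs(query_descs[:, None, :] - db_descs[None, :, :]).sum(axis=2)
    n_q, n_db = D.shape
    best_score, best_j = np.inf, -1
    # Slide a straight trajectory of length seq_len over the database
    # (velocity fixed to 1 here; SeqSLAM searches a range of velocities).
    for j in range(n_db - seq_len + 1):
        score = sum(D[i, j + i] for i in range(min(seq_len, n_q)))
        if score < best_score:
            best_score, best_j = score, j
    return best_j, best_score
```

Scoring whole sequences rather than single frames is what gives this family of methods its robustness: an individual frame may match many places under changed lighting, but a coherent run of frames rarely does.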
Plenary Talk IV
Date : December 9 (Wednesday), 2015
Graduate School of Information Sciences
Visual servoing is a technique for controlling robots using real-time feedback from camera images. It has a long history, starting in the 1980s. While the use of vision for autonomous robots is a familiar story, visual servoing remains little used in industrial applications. Computer vision research, by contrast, has progressed very rapidly by exploiting artificial intelligence, machine learning, and 3D sensors. These techniques are not really new, but the quantitative change in available computational resources has produced a qualitative change in data-processing techniques, like the Cambrian Explosion. What is the difference between visual servoing and computer vision? Industrial robots are expected to handle many kinds of targets, and the time required for changeover should be as small as possible. However, current industrial robots are designed to repeat the same motions without external sensor feedback, robot hands are designed to handle a single part, and the processes upstream and downstream of the manipulator are specially designed for a single assembly product. Even more unfortunate is the small number of system integrators (SIers) who can program robot motion. In this talk I will show several examples of industrial use of visual servoing, which has the potential to change robotic assembly lines.
Plenary Talk V
Date : December 9 (Wednesday), 2015
Senior Principal Scientist
ABB Corporate Research Center
Industrial robots were invented decades ago and have gradually expanded their applications ever since. Beyond traditional applications that rely on the basic features of a programmable machine, such as painting, spot welding, and material handling, industrial robots have evolved intelligent features and grown into advanced applications in both the automotive and general industries. Robot force control and vision integration are among the major intelligent features of industrial robotics. This presentation overviews the evolution of industrial robotics, introduces its intelligent components, and describes its intelligent functions and applications from the point of view of corporate R&D and manufacturing-automation implementation. The application examples include, but are not limited to, force-controlled assembly and machining, robotic vision for random object picking, meat processing, small-part assembly, and additive manufacturing (3D printing). Trends in future industrial robotics will also be predicted and discussed.
Keynote Session (Dec. 8th)
Keypoint Matching with Consensus Constraint
Professor Hong Zhang
Department of Computing Science, University of Alberta, Edmonton, Alberta, Canada
In this talk I will describe a simple yet effective outlier pruning method for keypoint matching that is able to perform well under significant illumination changes, for applications including visual loop closure detection in robotics. We contend and verify experimentally that a major difficulty in matching keypoints when illumination varies significantly between two images is the low inlier ratio among the putative matches. The low inlier ratio in turn causes failure in the subsequent RANSAC algorithm since the correct camera motion has as much support as many of the incorrect ones. By assuming a weak perspective camera model and planar camera motion, we derive a simple constraint on correctly matched keypoints in terms of the flow vectors between the two images. We then use this constraint to prune the putative matches to boost the inlier ratio significantly thereby giving the subsequent RANSAC algorithm a chance to succeed. We validate our proposed method on multiple datasets, to show convincingly that it can deal with illumination change effectively in many computer vision and robotics applications where our assumptions hold true, with a superior performance to state-of-the-art keypoint matching algorithms.
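The exact flow-vector constraint follows from the weak-perspective, planar-motion derivation in the talk. As a simplified stand-in (a sketch under my own assumptions, not the speaker's derivation), the sketch below shows how a consensus on flow direction alone can already prune putative matches and raise the inlier ratio handed to RANSAC:

```python
import numpy as np

def prune_by_flow_consensus(pts1, pts2, angle_tol_deg=20.0):
    """Illustrative consensus pruning: keep putative matches whose
    flow-vector direction agrees with the dominant direction across
    all matches, discarding likely outliers before RANSAC."""
    flows = pts2 - pts1                        # one flow vector per match
    angles = np.arctan2(flows[:, 1], flows[:, 0])
    # Vote for the dominant flow direction with a coarse histogram.
    hist, edges = np.histogram(angles, bins=36, range=(-np.pi, np.pi))
    k = hist.argmax()
    dominant = 0.5 * (edges[k] + edges[k + 1])
    # Angular difference to the consensus, wrapped into (-pi, pi].
    diff = np.abs(np.angle(np.exp(1j * (angles - dominant))))
    return diff < np.deg2rad(angle_tol_deg)   # boolean keep-mask
```

With the low-inlier regimes described above, a cheap pre-filter like this can be the difference between RANSAC converging on the true motion and it being swamped by equally supported wrong hypotheses.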
A new MATLAB-Toolbox for the Design of Mechanisms and Robots
Prof. Tim C. Lueth
Technical University of Munich
Recent research has shown the potential of patient-individual surgical robot systems. These robots are disposable medical devices and can be printed using SLS additive manufacturing. An automatic design process for such a robot from patient-specific data, such as a CT image stack, requires a closed design tool chain from patient-data processing to robot design, and MATLAB could serve as such a tool. In this talk, a new MATLAB toolbox is presented for the parametric design, analysis, and manipulation of solid geometries, which can be exported in STL format for 3D printing. The library and its application to creating robot geometries are presented. The possible combination with simMechanics for multi-body simulation is not part of the presentation. The library is available from TU München for non-commercial activities.
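To make the export step concrete, here is a minimal sketch, written in Python rather than the MATLAB toolbox itself; the function name and the parametric tetrahedron are illustrative assumptions, not part of the toolbox. It shows the essence of turning a parametric solid (a triangle mesh) into an ASCII STL file for 3D printing:

```python
import numpy as np

def write_ascii_stl(filename, vertices, faces, name="solid_part"):
    """Write a triangle mesh as ASCII STL: one facet per triangle,
    with a unit normal computed from the triangle's edge vectors."""
    with open(filename, "w") as f:
        f.write(f"solid {name}\n")
        for tri in faces:
            a, b, c = (vertices[i] for i in tri)
            n = np.cross(b - a, c - a)          # facet normal
            norm = np.linalg.norm(n)
            n = n / norm if norm > 0 else n
            f.write(f"  facet normal {n[0]:.6e} {n[1]:.6e} {n[2]:.6e}\n")
            f.write("    outer loop\n")
            for v in (a, b, c):
                f.write(f"      vertex {v[0]:.6e} {v[1]:.6e} {v[2]:.6e}\n")
            f.write("    endloop\n  endfacet\n")
        f.write(f"endsolid {name}\n")

# Parametric example: a tetrahedron whose size is a design parameter,
# standing in for a patient-specific geometry.
s = 10.0
verts = np.array([[0, 0, 0], [s, 0, 0], [0, s, 0], [0, 0, s]], dtype=float)
tris = [(0, 2, 1), (0, 1, 3), (0, 3, 2), (1, 2, 3)]
write_ascii_stl("tetra.stl", verts, tris)
```

In a closed tool chain of the kind described above, the size parameter would instead be derived from the patient data, and the resulting STL handed directly to the SLS printer's slicer.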
Temporal Heterogeneity and the Value of Slowness in Robotic Systems
Prof. Ronald Arkin
Georgia Institute of Technology
Robot teaming is a well-studied area, but little research to date has been conducted on the fundamental benefits of heterogeneous teams and virtually none on temporal heterogeneity, where timescales of the various platforms are radically different. This paper explores this aspect of robot ecosystems consisting of fast and slow robots (SlowBots) working together, including the bio-inspiration for such systems.
Intelligent Human Support System for Dual-Arm Construction Machinery
Prof. Shigeki Sugano
Construction is complicated work, and human operators of construction machines are required to master highly expert skills. Recently, construction machines have also been introduced into rescue and recovery tasks, sorted dismantling, and forestry applications. Construction-machine technology is making rapid progress, and a prototype dual-arm construction machine was developed recently. It has two manipulators on a crawler-type mobile base, each with a grasping mechanism as an end-effector, and is in effect a high-power master-slave manipulator. However, an operator needs sophisticated operational skills to control more than twelve joints and a mobile mechanism cooperatively. In this presentation, I introduce a new control method for dual-arm construction machinery based on a framework of real-time identification of the task phase and the time-series attentional condition, and show an evaluation experiment applying the control method to an object-removal task in demolition work.