Plenary Talks
December 11th 9:00-10:00, Thai Chitlada 1, 2/F
Yunhui Liu
Professor, The Chinese University of Hong Kong, China
Towards Vision-Driven Robots
Although robots are widely used across many sectors, their performance and intelligence still fall far short of human expectations. One major reason is that robots are not skillful at coordinating the visual information collected by vision with their actions. Efficient and effective coordination of eyes with arms/hands in grasping and manipulation, and with legs or wheels in walking or moving, is crucial for robots to act reliably, robustly, and efficiently in natural environments. Vision-driven robots, i.e. robots that are driven by visual information or feedback, will be the key paradigm for bringing robots to real-world applications. This talk presents the technical challenges in vision-driven robots and demonstrates our latest results on 3D visual sensing and perception, vision-driven robot grasping and manipulation, etc. Applications of vision-driven robotics technologies in manufacturing, logistics, and healthcare will be introduced as well.
December 12th 9:00-10:00, Thai Chitlada 1, 2/F
Yasuhisa Hirata
Professor, Tohoku University, Japan
Envisioning a Future Society with AI-Enabled Robots
This talk introduces our Moonshot project, part of Japan’s National Research and Development (R&D) program. The Moonshot program supports high-risk, high-impact R&D aimed at achieving ambitious goals and addressing challenges such as the super-aging population. The objective of our project is to develop adaptable AI-enabled robots that can be deployed in various settings. Currently, we are working on a range of assistive robots called the Robotic Nimbus, which can alter their shape and form based on the user’s condition, environment, and the task at hand. These robots are designed to provide appropriate assistance, particularly for the elderly and disabled, empowering them to act independently.
December 13th 9:00-10:00, Thai Chitlada 1, 2/F
Jianwei Zhang
Professor, University of Hamburg, Germany
The Convergence of Embodied AI and Modular Control Towards Generalist Robots
Traditional modular control approaches in robotics primarily rely on manual programming and analytical models with hand-crafted rules for action planning and execution. While functional for specific tasks, these methods limit the dexterity and adaptability of robots in complex, open-ended environments. The emergence of embodied AI marks a rapid advancement in developing general-purpose robotic manipulation. Large multimodal models (LMMs) facilitate action planning by combining bottom-up skills, enabling robots to generate versatile and effective task sequences. In this talk, I will introduce foundational concepts inspired by cognitive systems that allow robots to better comprehend multimodal scenarios by integrating knowledge and learning. Next, I will explore how LMM learning techniques can be integrated into intelligent robotic systems. Finally, I will outline the key modules required to elevate a robot’s intelligence and adaptability, and show how a hybrid architecture provides a balanced approach, avoiding the challenges of purely end-to-end training while enhancing physical interpretability. In parallel, I will showcase robotic platforms demonstrating capabilities in dexterous manipulation and robust dynamic locomotion, emphasizing their potential for general human-service applications.
Keynote Talks
December 11th 13:00-13:30, Thai Chitlada 1, 2/F
Koichi Hashimoto
Professor, Tohoku University, Japan
3D Point Cloud-Based Visual Servo
Visual servo refers to the control of a robot using visual feedback, typically from cameras or depth sensors. The goal is to minimize the error between the current and desired states, where “state” can refer to the position, orientation, or configuration of an object or the robot itself. In position-based visual servo (PBVS), the robot’s end-effector is guided by estimating the target’s 3D position and orientation (pose). In image-based visual servo (IBVS), the control is done directly in the image plane without explicitly computing the pose. A 3D point cloud is a collection of data points representing a 3D surface or object. It is often captured using sensors such as depth cameras, LiDAR, or stereo vision systems. In visual servo, this data is used to estimate the geometry and pose of objects in the robot’s workspace (3DBVS). The advantage of using point clouds is that they provide rich spatial information about the environment, allowing for more accurate tracking and manipulation in 3D space than 2D image data. This naturally suggests PBVS for robot control. However, several open issues remain in estimating the target position and orientation. The talk will introduce recent approaches to visual servo based on 3D point cloud sensors.
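The PBVS idea described above can be sketched in a few lines: once the target pose has been estimated (for example, from point cloud registration), a proportional control law drives the pose error to zero. This is a minimal illustrative sketch, not the speaker’s implementation; the function name and gain value are assumptions for the example.

```python
import numpy as np

def pbvs_velocity(t_err, R_err, lam=0.5):
    """Classic proportional PBVS law: drive pose error toward zero.

    t_err: 3-vector, translation error (current minus desired), meters
    R_err: 3x3 rotation matrix, orientation error
    lam:   proportional gain (illustrative value)
    Returns a 6-vector twist [vx, vy, vz, wx, wy, wz].
    """
    # Axis-angle (log map) of the rotation error matrix
    angle = np.arccos(np.clip((np.trace(R_err) - 1.0) / 2.0, -1.0, 1.0))
    if np.isclose(angle, 0.0):
        w = np.zeros(3)  # no orientation error
    else:
        axis = np.array([R_err[2, 1] - R_err[1, 2],
                         R_err[0, 2] - R_err[2, 0],
                         R_err[1, 0] - R_err[0, 1]]) / (2.0 * np.sin(angle))
        w = angle * axis
    # Exponential decay of both translation and rotation error
    return -lam * np.concatenate([t_err, w])

# Example: 10 cm translation error along x, no rotation error
twist = pbvs_velocity(np.array([0.1, 0.0, 0.0]), np.eye(3))
```

The commanded twist points opposite the error, so iterating this law at each sensor frame converges the end-effector toward the estimated target pose; the quality of convergence depends directly on the pose-estimation accuracy discussed above.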
December 11th 13:30-14:00, Thai Chitlada 1, 2/F
Weiwei Wan
Professor, Osaka University, Japan
AI-Driven Robot Manipulators for Laboratory Automation
Biological and chemical laboratories demand significant human labor and flexibility to respond to dynamic conditions such as biological growth or chemical reactions. Such requirements present challenges for traditional automation. In this talk, I will introduce two research projects from our lab focused on automating laboratory tasks through AI-driven robotic manipulators. These systems leverage AI for recognition, reinforcement learning to generate task sequences, and motion planning to autonomously create flexible task or action sequences. The flexibility enables the developed robotic manipulator systems to adapt to the unique demands of biological and chemical experiments. The developed systems have been deployed to real experimental scenarios, contributing to discoveries of new mechanisms and facilitating experimental processes that require high adaptability.
December 12th 13:00-13:30, Thai Chitlada 1, 2/F
Hao Liu
Professor, Shenyang Institute of Automation, Chinese Academy of Sciences, China
Augmented Sensing and Autonomous Control of Flexible Surgical Robots
Surgery performed through natural body cavities such as the digestive tract and blood vessels is less invasive and is an important direction in the development of modern medicine. Flexible surgical robots offer excellent environmental adaptability and dexterous manipulation capability, and are an important enabling technology for operations within human body lumens. They have been widely studied around the world and have achieved preliminary clinical application. However, restricted by their small size and the soft-tissue cavity environment, the sensing ability of flexible surgical robots remains rather weak, and their manipulation performance in complex cavity environments and surgical tasks is still very limited. The talk will review the current research status of flexible surgical robots and share the speaker’s long-term research progress on sensing and intelligent control methods for flexible surgical robots.
December 12th 13:30-14:00, Thai Chitlada 1, 2/F
Xinyu Liu
Professor, University of Toronto, Canada
Robotic Manipulation of Small Model Organisms
Robotic manipulation has become an enabling technology for experimental studies of living biological samples such as cells, tissues, and organisms. In this talk, I will introduce our recent research on developing robotic devices and systems for performing a variety of manipulation tasks on small model organisms including Drosophila larva and C. elegans. We have designed novel microfluidic devices for controlling the position and orientation of swimming/crawling organisms, developed new computer vision algorithms and learning-based models for characterizing the morphological and molecular features of these organisms, and invented automated robotic systems for applying multimodal stimulations on and injecting genetic materials into the organism bodies. These innovative robotic tools have enabled new studies on neuroscience, development, and genetics of Drosophila and C. elegans. I will present our results on both technology development and biological application, and will also briefly discuss the future directions in this area.
December 13th 13:00-13:30, Thai Chitlada 1, 2/F
Antoine Ferreira
Professor, INSA Centre Val de Loire, France
AI-powered Navigation of Magnetic Microrobots for Targeted Drug Delivery
Microscale robots open up promising perspectives for many medical applications such as drug delivery. Fully automatic real-time detection and tracking of microrobots using medical imagers is being investigated for future clinical translation. Ultrasound imaging has been employed to monitor single agents and collective swarms of microrobots in vitro and ex vivo under controlled experimental conditions. However, low contrast and spatial resolution still limit the effective use of this method in medical microrobotic scenarios, owing to uncertainty in the position of the microrobots. The positioning error arises from the inaccuracy of the US-based visual feedback provided by the detection and tracking algorithms. Deep learning networks are a promising solution for detecting and tracking microrobots in real time in noisy clinical imagers. In this presentation, the navigation performance of endovascular magnetic microrobots with different geometries, materials, and sizes is investigated in clinical settings using state-of-the-art deep learning detection and tracking research.
December 13th 13:30-14:00, Thai Chitlada 1, 2/F
Jackrit Suthakorn
Professor, Mahidol University, Thailand
Transforming Patient Care through Medical Robotics Research in Surgical, Rehabilitation, and Hospital Service Robotics
Medical robotics research is transforming patient care by advancing surgical precision, rehabilitation, and hospital services. This keynote presentation explores advances in surgical robotics that enable minimally invasive procedures with high accuracy, improving patient outcomes and reducing recovery time. Rehabilitation robotics offers tailored support, enabling patient mobility and independence, while hospital service robots optimize logistics, alleviating healthcare staff workload. By integrating artificial intelligence, sensor technology, and biomechanical design, medical robotics not only enhances safety and efficiency but also advances a patient-centered approach. These advancements promise to reshape healthcare, providing accessible, effective solutions across different clinical environments. At the forefront of this advancement is the Center for Biomedical and Robotics Technology (BART LAB), a leading research center dedicated to developing cutting-edge robotic technologies tailored to address real-world clinical challenges. Collaborating closely with hospitals and medical professionals, BART LAB ensures that its research aligns with the practical demands of healthcare providers. By prioritizing patient safety and efficacy from the outset, BART LAB adheres to ISO standards and regulatory guidelines, building trust within the medical community. Clinical trials rigorously test and validate these technologies, bridging the gap between lab research and practical patient care. This patient-centered approach drives significant advancements in medical robotics, enhancing precision, accessibility, and effectiveness across healthcare applications. Robotics is transforming surgery, rehabilitation, and hospital services, improving patient outcomes and operational efficiency as research and collaboration propel the development of adaptive, intelligent systems.