Control of complex, compliant, multi-degree-of-freedom (DOF) sensorimotor systems such as humanoid robots, prosthetics, exoskeletons and autonomous vehicles has been pushing the limits of traditional planning and control methods. The next generation of robots will work much more closely with humans and other robots, and will interact significantly with the environment around them. As a result, the key paradigm is shifting from isolated decision-making systems to shared control, with significant autonomy devolved to the robot platform and end users in the loop making only high-level decisions.
This course will introduce technologies ranging from robust multi-modal sensing, shared representations and compliant actuation to scalable machine learning techniques for real-time learning and adaptation that enable us to reap the benefits of increased autonomy while still feeling securely in control. This also raises a fundamental question: now that robots are ready to share control, what trade-off between autonomy and control are we comfortable with? Domains where this debate, and hence this course, is relevant include self-driving cars, mining, shared manufacturing, exoskeletons for rehabilitation, active prosthetics, large-scale scheduling (e.g. transport) systems, and oil and gas exploration, to name a few. Specifically, the course will introduce state-of-the-art machine learning approaches to these challenges and will take students through various aspects of motor planning, control, estimation, prediction and learning, with an emphasis on the computational perspective.
We will learn about statistical machine learning tools and methodologies particularly geared towards problems of real-time, online learning for robot and prosthetic control. Issues and possible approaches for learning in high dimensions, planning under uncertainty and redundancy, sensorimotor transformations, and stochastic optimal control will be discussed. This will be put in context through exposure to topics in human motor control, experimental paradigms, and the use of computational methods in understanding biological sensorimotor mechanisms and motor control.
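To give a flavour of what "real-time, online learning" means in this setting, the sketch below (an illustrative example, not course material) uses recursive least squares to update a linear sensorimotor map one sample at a time, as one might when adapting a controller to streaming sensor data; the class name, dimensions and forgetting factor are assumptions chosen for illustration.

```python
import numpy as np

class OnlineLinearModel:
    """Recursive least squares (RLS) estimate of a linear map y = W x,
    updated incrementally from a stream of (x, y) samples."""

    def __init__(self, n_inputs, n_outputs, forgetting=0.99):
        self.W = np.zeros((n_outputs, n_inputs))  # current estimate of the map
        self.P = np.eye(n_inputs) * 1e3           # inverse input covariance
        self.lam = forgetting                     # forgetting factor < 1 tracks drift

    def update(self, x, y):
        # Gain vector for this sample
        Px = self.P @ x
        k = Px / (self.lam + x @ Px)
        # Correct the prediction error, then update the covariance
        err = y - self.W @ x
        self.W += np.outer(err, k)
        self.P = (self.P - np.outer(k, Px)) / self.lam
        return err

# Usage: learn a 2-output map from 3 noisy input channels, sample by sample
rng = np.random.default_rng(0)
true_W = rng.standard_normal((2, 3))
model = OnlineLinearModel(3, 2)
for _ in range(500):
    x = rng.standard_normal(3)
    y = true_W @ x + 0.01 * rng.standard_normal(2)
    model.update(x, y)
```

The forgetting factor is the key design choice: values below one let the estimate track a slowly drifting mapping, which matters for prosthetic control where EMG signals change with fatigue and electrode shift.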
We will use live demonstrations of one of the most advanced fully articulated upper-limb prosthetics, the iLIMB hand developed by Touch Bionics, as well as a 19-DOF mini-humanoid robot, giving hands-on experience of the concept of shared autonomy with EMG-based inputs and force-sensing-based 'autonomous' control through hardware transported to the lecture site. We will also organize a live link via Skype from the University of Edinburgh to demonstrate the real-time operation of the $2.5M NASA Valkyrie humanoid robot.