
Oxford Robotics Institute | Projects - Human-Machine Collaboration (HMC), Year 2

This update summarises the academic outputs of the research undertaken as part of the Human-Machine Collaboration (HMC) programme, supported by the AWS gift. The projects led by the Oxford Robotics Institute (ORI) are outlined below, including the SustainTech-3 and HighTech-1 testbed projects. For more information on the overall programme, see our Human-Machine Collaboration page or the MPLS Human-Machine Collaboration programme site.


Project 1: Long-Term Autonomy for Service Robots


GOALS group

We focused on the problems that occur when a robot's model of the world is not specified precisely enough to plan future actions exactly. Instead, the robot must plan to be robust against possible future outcomes. We first approached this problem using uncertain MDPs before exploring Bayes-adaptive MDPs. We also explored how to trade off the risk of a mission plan against the value a robot may expect from it. We are currently exploring how to apply these algorithms to the service robot task of UV-C disinfection in everyday workplaces, and how to use an autonomous underwater vehicle (AUV) to return data from networks of environmental sensors.
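
To give a flavour of robust planning over uncertain MDPs, the sketch below performs value iteration with interval-valued transition probabilities, backing up against the worst-case distribution in each interval. It is a minimal illustration of the general idea, not the group's published algorithm; all names and array shapes are our own.

    import numpy as np

    def worst_case_expectation(values, lo, hi):
        # Adversarial backup: start from the lower bounds, then push the remaining
        # probability mass onto the lowest-valued successors first.
        p, rem = lo.copy(), 1.0 - lo.sum()
        for i in sorted(range(len(values)), key=lambda i: values[i]):
            extra = min(hi[i] - lo[i], rem)
            p[i] += extra
            rem -= extra
        return p @ values

    def robust_value_iteration(R, P_lo, P_hi, gamma=0.95, tol=1e-6):
        # R: (S, A) rewards; P_lo, P_hi: (S, A, S) interval bounds on transitions.
        S, A = R.shape
        V = np.zeros(S)
        while True:
            Q = np.array([[R[s, a] + gamma * worst_case_expectation(V, P_lo[s, a], P_hi[s, a])
                           for a in range(A)] for s in range(S)])
            V_new = Q.max(axis=1)
            if np.abs(V_new - V).max() < tol:
                return V_new, Q.argmax(axis=1)  # robust values and a robust policy
            V = V_new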


A2I group

A2I is pursuing an agenda of acquiring actionable, object-centric world models for use in prediction, skill learning, planning and control. Challenges in this domain include the rapid acquisition of such models (e.g. achieving high-quality scene inference in the form of object segmentation) as well as deploying these models in real-world contexts. As part of this project we have significantly advanced the state of the art in object-centric world modelling on two counts. Our work on GENESIS-V2, published at NeurIPS 2021, provides higher-quality segmentations and constitutes a significant push towards deploying these models in real-world settings. APEX, published at IROS 2021, extended this work to scene inference from video, particularly for complex robotics tasks where the agent itself often forms part of the view.


DRS group

Using the Toyota Human Support Robot (HSR), and targeting general everyday tasks in dynamic environments shared with humans, we have developed a novel scene reconstruction approach that runs online and onboard the HSR. We have demonstrated how we can leverage this approach in our hybrid mapping and receding-horizon motion planning framework. Furthermore, we developed a state-of-the-art approach to gaze control for a mobile robot that needs to perceive how its environment is changing as it navigates through it. We showed how our approach consistently finds collision-free trajectories while better exploring and updating the robot's map of the environment. This is an important skill for robots that share their space with humans and need to complete daily tasks in domestic environments.
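
As a rough illustration of the kind of trade-off a gaze controller makes, the sketch below greedily picks the head direction expected to reveal the most unknown map cells while penalising large head motions. It is only a caricature of the problem, not the published approach; unknown_cells_in_view is a hypothetical helper.

    import math

    def select_gaze_direction(candidate_yaws, unknown_cells_in_view, current_yaw, motion_weight=0.5):
        # unknown_cells_in_view(yaw) -> expected count of unobserved map cells (hypothetical helper)
        best_yaw, best_score = None, -math.inf
        for yaw in candidate_yaws:
            gain = unknown_cells_in_view(yaw)
            # shortest angular distance from the current head direction
            cost = abs(math.atan2(math.sin(yaw - current_yaw), math.cos(yaw - current_yaw)))
            score = gain - motion_weight * cost  # trade map coverage against head motion
            if score > best_score:
                best_yaw, best_score = yaw, score
        return best_yaw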


In addition, we have been working on an approach for generating fast dynamic motions for quadruped robots. We have developed a method with a novel formulation of the nonlinear optimisation problem that describes body and leg motions using a centroidal dynamics model. This is particularly beneficial for highly dynamic motions that either need to build up or conserve the momentum of the system, such as running and jumping. We tested our approach on a small quadruped robot, the Unitree A1, both in simulated experiments and on the real system.
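
The centroidal dynamics model mentioned above reduces the robot to its centre of mass (CoM) and total momentum, driven by gravity and the contact forces at the feet. A minimal statement of those dynamics (the variable names are ours, and this is independent of the group's solver):

    import numpy as np

    GRAVITY = np.array([0.0, 0.0, -9.81])

    def centroidal_dynamics(com, lin_momentum, mass, contact_points, contact_forces):
        # Time derivatives of the centroidal state for a legged robot in contact.
        com_dot = lin_momentum / mass                       # CoM velocity
        lin_dot = mass * GRAVITY + sum(contact_forces)      # Newton: total force on the CoM
        ang_dot = sum(np.cross(p - com, f)                  # Euler: moments of the forces about the CoM
                      for p, f in zip(contact_points, contact_forces))
        return com_dot, lin_dot, ang_dot

A trajectory optimiser imposes these dynamics as constraints between knot points while optimising contact forces and foot placements, which is what makes momentum build-up for running and jumping tractable.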


Project 2: Large-Scale Mixed-Initiative Autonomy for Logistics

GOALS group

We are approaching Project 2 along two lines. The first has been developing computational models that allow us to simulate the operation of large-scale human-robot teams in logistics-style applications, using a novel method based on Markov automata. We have since extended this work to explore methods that provide guarantees on the response time of a robot team in a mixed-initiative logistics setting. In parallel, we developed new auction methods for sharing resources in multi-agent teams. We started with the problem of multi-agent path planning in non-cooperative settings and are currently studying the allocation of unit resources under probabilistic constraints.
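
For concreteness, a sequential single-item auction, one of the simplest mechanisms in this family, can be sketched as follows. This is a generic illustration rather than the mechanism developed in the project; the bid function (a robot's cost estimate for a task given its current assignment) is a hypothetical callback.

    def sequential_auction(tasks, robots, bid):
        # bid(robot, task, current_assignment) -> estimated cost; lower bids win.
        assigned = {r: [] for r in robots}
        remaining = list(tasks)
        while remaining:
            # Every robot bids on every remaining task; allocate the single cheapest pair.
            winner, best_task, best_bid = None, None, float("inf")
            for r in robots:
                for t in remaining:
                    b = bid(r, t, assigned[r])
                    if b < best_bid:
                        winner, best_task, best_bid = r, t, b
            assigned[winner].append(best_task)
            remaining.remove(best_task)
        return assigned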


One of our major uses of AWS has been running robot simulations. The work above on multi-robot congestion and logistics made significant use of AWS to run a large number of multi-robot logistics simulations under a range of conditions. We have been sharing much of this work with a team from Accenture Labs in San Francisco.

A2I group

Mixed-initiative logistics operations require agents to adapt both their interpretation and their actions to variations in task demonstrations. When executing goal-conditioned tasks, this may require matching objects of the same class across particular instantiations. Focusing on object rearrangement tasks in particular, and in collaboration with DRS, we tackled this challenge using a large-scale vision and semantic model, leading to a novel object-matching capability. The resulting paper was published at ICRA 2022.
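
The sketch below illustrates the basic matching step under the assumption that each detected object has been encoded into a feature vector by a large pretrained vision model: objects are paired across two scenes by maximising cosine similarity under a one-to-one assignment. The model choice and array shapes are our assumptions, not details of the ICRA 2022 paper.

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def match_objects(features_a, features_b):
        # features_a: (N, D), features_b: (M, D) embeddings of object crops.
        a = features_a / np.linalg.norm(features_a, axis=1, keepdims=True)
        b = features_b / np.linalg.norm(features_b, axis=1, keepdims=True)
        cost = -(a @ b.T)                          # negative cosine similarity
        rows, cols = linear_sum_assignment(cost)   # optimal one-to-one assignment
        return list(zip(rows.tolist(), cols.tolist()))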


Project 3: Human-Robot Shared Autonomy

GOALS group

We have focused on developing planning models that represent the capabilities of a human and their effects on a robot's task. We have developed a new model that plans with respect to stochastic human behaviour represented as a Markov chain, and that can be used to share actions between humans and robots in a variety of settings. To support this work we have also been developing experimental platforms in which humans and AI systems can interact on a shared task. We have one platform based on Angry Birds and a second that uses the Gazebo robot simulator to give a human operator a mobile robot control task. We are currently awaiting ethics approval to start gathering data with both platforms.
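
One simple way to realise planning against a stochastic human model is value iteration over the product of the robot's MDP and the human Markov chain. The sketch below shows that construction; the array shapes, the synchronous turn order, and all names are illustrative assumptions rather than the published model.

    import numpy as np

    def plan_with_human_chain(R, T_robot, T_human, gamma=0.95, iters=200):
        # R: (Sr, Sh, A) reward over joint states; T_robot: (Sr, A, Sr);
        # T_human: (Sh, Sh) Markov chain over human states.
        Sr, Sh, A = R.shape
        V = np.zeros((Sr, Sh))
        for _ in range(iters):
            EV = V @ T_human.T   # expected value after the human's stochastic transition
            # Q[sr, sh, a] = R + gamma * sum_sr' T_robot[sr, a, sr'] * EV[sr', sh]
            Q = R + gamma * np.einsum("rap,ph->rha", T_robot, EV)
            V = Q.max(axis=2)
        return Q.argmax(axis=2)  # joint policy indexed by (robot state, human state)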


A2I group

Building on already established knowledge to learn new skills is pivotal in human-robot shared autonomy tasks, where rapid adaptation is required. In this context, we have investigated the role of architectural inductive biases in speeding up both the learning and the transfer of skills. This work is currently under review at ICML. A second line of work in this context, in collaboration with DRS, assesses the use of uncertainty during task execution to trigger suitable recovery actions. We have augmented a standard visuomotor control algorithm with such an introspective capability and found that it substantially increases task success rates. This work was published at ICRA 2021.
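
A sketch of the introspection idea: run the visuomotor policy several times with dropout left on, and hand over to a recovery behaviour when the spread of the sampled actions is large. The threshold, sample count, and recovery hook are illustrative choices, not those of the ICRA 2021 paper.

    import torch

    def act_with_introspection(policy, observation, recover, threshold=0.1, samples=20):
        policy.train()  # keep dropout active at inference time (MC dropout)
        with torch.no_grad():
            actions = torch.stack([policy(observation) for _ in range(samples)])
        if actions.std(dim=0).mean().item() > threshold:
            return recover()            # uncertain: engage a recovery action instead
        return actions.mean(dim=0)      # confident: execute the averaged action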


SustainTech-3 Testbed

The GOALS group and ORI engineering team also contributed significantly to the SustainTech-3 testbed: AI & Robotics for Biodiversity - Monitoring and Predicting Biodiversity Resilience through AI & Robotics. We have provided planning and robotics expertise in developing the autonomous inspection robot for Wytham Woods, and in developing the IoT devices that will support it. The mobile robot is based on a Clearpath Husky running visual teach and repeat (from the MRG group). The IoT devices are a custom solar-powered design with environmental sensors that feed an AWS IoT pipeline.
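
For illustration, a solar-powered node of this kind typically publishes readings to AWS IoT Core over MQTT using per-device X.509 certificates. A minimal sketch follows; the endpoint, topic, certificate paths, and read_temperature helper are placeholders, not the project's real configuration.

    import json, ssl, time
    import paho.mqtt.client as mqtt

    ENDPOINT = "example-ats.iot.eu-west-2.amazonaws.com"   # placeholder AWS IoT endpoint
    TOPIC = "wytham/sensors/node-01"                       # placeholder topic

    client = mqtt.Client()
    client.tls_set(ca_certs="AmazonRootCA1.pem", certfile="device.pem.crt",
                   keyfile="private.pem.key", tls_version=ssl.PROTOCOL_TLSv1_2)
    client.connect(ENDPOINT, 8883)
    client.loop_start()

    while True:
        reading = {"ts": time.time(), "temperature_c": read_temperature()}  # hypothetical sensor read
        client.publish(TOPIC, json.dumps(reading), qos=1)
        time.sleep(60)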


HighTech-1 Testbed

Team ORIon is the University of Oxford's autonomous robotics competition team, led from the Oxford Robotics Institute. ORIon is led by ORI DPhil students (currently Charlie Street and Ricardo Cannizaro), supported by ORI academic staff (Lars Kunze, Ioannis, and Nick Hawes), postdocs, admin staff and engineers. The team competes in international, high-profile competitions focussing on autonomous robots for domestic human support, such as RoboCup@Home and the World Robot Summit's Service Robotics category. The team works with a Toyota Human Support Robot (HSR), a platform designed by Toyota specifically for human-machine collaboration, on application tasks specially designed to support humans in domestic settings, such as unpacking shopping, clearing a room, or greeting visitors.


Team ORIon are due to compete at RoboCup in Bangkok this year. We look forward to cheering them on.