How RobotCar works

A robot car on the road

Our approach

We use the mathematics of probability and estimation to allow the computers in our robots to interpret data from sensors such as cameras, radars and lasers, as well as from aerial photos and on-the-fly internet queries. We use machine learning techniques to build and calibrate mathematical models which explain the robot’s view of the world in terms of prior experience (training), prior knowledge (aerial images, road plans and semantics) and automatically generated web queries. We want to produce technology which allows robots to know, at all times, precisely where they are and what is around them.
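
To make the probabilistic estimation idea concrete, here is a minimal sketch of a one-dimensional histogram Bayes filter, one of the simplest members of this family. Everything in it (the grid size, the motion kernel, the Gaussian-shaped sensor likelihood) is an illustrative assumption rather than part of the RobotCar system: the belief over position is first blurred by a motion model and then reweighted by how well each position explains the latest sensor data.

```python
import numpy as np

# Minimal sketch of a 1D histogram (Bayes) filter for localisation along a route.
# All names and numbers are illustrative, not taken from the RobotCar system.

def predict(belief, motion_kernel):
    """Blur the belief to account for motion uncertainty (convolution)."""
    return np.convolve(belief, motion_kernel, mode="same")

def update(belief, likelihood):
    """Weight the belief by the sensor likelihood and renormalise (Bayes' rule)."""
    posterior = belief * likelihood
    return posterior / posterior.sum()

n_cells = 100                              # discretised positions along the route
belief = np.full(n_cells, 1.0 / n_cells)   # start with a uniform prior

motion_kernel = np.array([0.1, 0.8, 0.1])  # "moved about one cell, give or take"

# A sensor (e.g. a laser scan matched against a prior map) suggests we are
# probably near cell 42; model that as a Gaussian-shaped likelihood.
cells = np.arange(n_cells)
likelihood = np.exp(-0.5 * ((cells - 42) / 3.0) ** 2)

belief = predict(belief, motion_kernel)
belief = update(belief, likelihood)
print("Most likely position:", belief.argmax())
```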

 

Infrastructure-free navigation

Already, robots carry goods around factories and manage our ports, but these are constrained, controlled and highly managed workspaces. Here, the navigation task is made simple by installing reflective beacons or guide wires. Our goal is to extend the reach of robot navigation to truly vast scales without the need for such expensive, awkward and inconvenient modification of the environment. It is about enabling machines to operate for, with and beside us in the multitude of spaces in which we live and work.

 

Why not use GPS?

Even when GPS is available, it does not offer the accuracy required for robots to make decisions about how and when to move safely. Even if it did, it would say nothing about what is around the robot, and that has a massive impact on autonomous decision-making.

 

Why cars?

Perhaps the ultimate application is in civilian transport systems. We are not condemned to a future of congestion and accidents. We will eventually have cars that can drive themselves, interacting safely with other road users and using roads efficiently, thus freeing up our precious time. But to do this the machines need life-long, infrastructure-free navigation, and that is the focus of this work.

 

Learning to drive

Although the car itself only moves in 2D, it senses in 3D. It can only offer the driver autonomy if the 3D impression it forms as it moves matches the one stored in its memory. So before the car can operate, it must learn what its environment looks like. As an example, below is a video of the car learning what Woodstock town centre looks like.
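
As a rough illustration of that match-against-memory test, the sketch below scores a live point cloud against a stored one by nearest-neighbour distance and only allows autonomy when the agreement is good enough. The threshold, the scoring rule and the toy data are all assumptions made for illustration; they are not the project's actual pipeline.

```python
import numpy as np
from scipy.spatial import cKDTree

# Sketch: score how well a live 3D scan matches a stored "experience".
# The threshold and the decision rule are illustrative assumptions.

def match_score(stored_points, live_points):
    """Mean distance from each live point to its nearest stored point (metres)."""
    tree = cKDTree(stored_points)
    distances, _ = tree.query(live_points)
    return distances.mean()

def autonomy_allowed(stored_points, live_points, threshold=0.2):
    """Offer autonomy only if the live view agrees with the memory."""
    return match_score(stored_points, live_points) < threshold

# Toy data: a stored map and a slightly noisy live scan of the same scene.
rng = np.random.default_rng(0)
stored = rng.uniform(0, 10, size=(500, 3))
live = stored[:200] + rng.normal(scale=0.05, size=(200, 3))

print(autonomy_allowed(stored, live))   # True for this toy example
```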


Lasers

Tucked under the front and rear bumpers of the vehicle are two scanning lasers. These lasers allow us to sense the 3D structure of the car’s environment – from this we can figure out the car’s location and orientation on the road.
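
One simple way to turn a laser scan into a position and orientation estimate is to search over candidate poses, transform the scan by each, and keep the pose whose points land closest to a prior map. The brute-force search below is a sketch of that general idea only; the search ranges, step sizes and cost function are assumptions, and a real system would use a far more efficient optimiser.

```python
import numpy as np
from scipy.spatial import cKDTree

# Sketch: brute-force 2D scan matching against a prior point map.
# Search ranges and step sizes are illustrative assumptions.

def transform(scan_xy, x, y, theta):
    """Apply a 2D rigid transform (rotation theta, translation x, y) to a scan."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    return scan_xy @ R.T + np.array([x, y])

def best_pose(map_xy, scan_xy):
    """Return the candidate (x, y, theta) whose transformed scan best fits the map."""
    tree = cKDTree(map_xy)
    candidates = [(x, y, t)
                  for x in np.linspace(-1.0, 1.0, 11)
                  for y in np.linspace(-1.0, 1.0, 11)
                  for t in np.linspace(-0.2, 0.2, 11)]

    def cost(pose):
        distances, _ = tree.query(transform(scan_xy, *pose))
        return distances.mean()

    return min(candidates, key=cost)
```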

 


Computer Vision

We can use discrete stereo cameras to figure out the trajectory of the vehicle relative to routes it has been driven on before. This movie shows the vehicle interpreting live images in the context of its memory of our test site at Begbroke Science Park. We can also use these cameras to detect the presence of obstacles – although vision does not work well at night, so we also use the lasers.
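
A toy version of the relative-motion part of this, using standard OpenCV calls, is sketched below: match features between consecutive images and recover the relative rotation and translation of the camera. The intrinsics matrix K is a placeholder, and because this sketch is monocular it recovers translation only up to an unknown scale; in practice it is the second camera of the stereo pair that fixes that scale.

```python
import cv2
import numpy as np

# Toy visual-odometry step: relative camera motion between two greyscale frames.
# K (camera intrinsics) is an assumed placeholder; a stereo pair would fix the
# unknown scale of t, which this monocular sketch cannot recover.

K = np.array([[700.0,   0.0, 640.0],
              [  0.0, 700.0, 360.0],
              [  0.0,   0.0,   1.0]])

def relative_motion(img_prev, img_curr):
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(img_prev, None)
    kp2, des2 = orb.detectAndCompute(img_curr, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)

    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R, t   # rotation and (unit-scale) translation of the camera
```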

 


Perception and environment understanding

Knowing what is where is pivotal for safe and robust operation. Here the car has knowledge of anything you as a driver might find useful – and more. The vehicle’s situational awareness is made up of static and dynamic environment features.


Static World

Static information consists of semantic information such as the location and type of road markings and traffic signs, traffic lights, lane information, where kerbs are, and so on. This kind of information rarely changes, so a fairly accurate model can be built before the vehicle actually goes out, and it will last. Of course, you don’t want to blindly believe such a prior map for all time – after all, things do change, during roadworks for example – but knowing where you can expect to find certain things in the world is already incredibly helpful. The prior semantic map is therefore updated over time with information the vehicle actually gathers out there in the real world.
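
As a sketch of what such a prior semantic map might look like as a data structure, including the idea of folding in what the vehicle later observes, consider the following. All class and field names are illustrative assumptions, not the project's actual representation.

```python
from dataclasses import dataclass, field
from datetime import datetime

# Illustrative structure for elements of a static semantic map.

@dataclass
class MapElement:
    kind: str                 # e.g. "traffic_light", "kerb", "road_marking"
    position: tuple           # (easting, northing) in metres
    attributes: dict = field(default_factory=dict)
    last_confirmed: datetime = field(default_factory=datetime.now)

class SemanticMap:
    def __init__(self):
        self.elements = []

    def add(self, element: MapElement):
        self.elements.append(element)

    def update_from_observation(self, index: int, attributes: dict):
        """Fold in what the vehicle actually saw, and record when."""
        element = self.elements[index]
        element.attributes.update(attributes)
        element.last_confirmed = datetime.now()

# Usage: a give-way marking later observed to have been repainted as a stop line.
m = SemanticMap()
m.add(MapElement("road_marking", (1200.5, 340.2), {"type": "give_way"}))
m.update_from_observation(0, {"type": "stop_line"})
```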




Dynamic World

Dynamic information relates to potential obstacles, whether moving or stationary: cars, bicycles, pedestrians and so on. Knowing where they are – and where they are likely to be in the near future – with respect to the planned vehicle trajectory is crucial for safe operation as well as for appropriate trajectory planning. Dynamic obstacles are detected and tracked using an off-the-shelf laser scanner. The system scans an 85-degree field of view ahead of the car 13 times a second to detect obstacles up to 50 metres ahead.
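
A generic way to track an obstacle and predict where it is likely to be next is a constant-velocity Kalman filter, sketched below. Only the 13 Hz scan rate comes from the text above; the noise values, the single-obstacle setup and everything else are illustrative assumptions rather than the system's actual tracker.

```python
import numpy as np

# Constant-velocity Kalman filter for one tracked obstacle (illustrative only).
# State: [x, y, vx, vy]; measurements: laser-derived (x, y) positions at 13 Hz.

dt = 1.0 / 13.0                       # scan period from the text
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], float)  # state transition (constant velocity)
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], float)   # we only measure position
Q = np.eye(4) * 0.05                  # process noise (assumed)
R = np.eye(2) * 0.1                   # measurement noise (assumed)

x = np.zeros(4)                       # initial state
P = np.eye(4)                         # initial covariance

def step(x, P, z):
    """One predict/update cycle given a new (x, y) detection z."""
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P

x, P = step(x, P, np.array([12.0, 1.5]))   # obstacle seen 12 m ahead, 1.5 m left
predicted_next = F @ x                      # where it is likely to be next scan
```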