Scientists at Nanyang Technological University, Singapore (NTU Singapore) have developed a chip that allows new radar cameras to be made a hundred times smaller than current ones.
With this NTU technology, radar cameras that usually weigh between 50 kg and 200 kg and are commonly used in large satellites can be shrunk to palm size.
Despite being small, they produce images of the same high quality as conventional radar cameras, if not better. They are also 20 times cheaper to produce and consume at least 75 per cent less power.
Developed over the past three years at NTU, the promising technology has already secured S$2.5 million in research funding from Singapore government agencies.
The radar chip has attracted the attention of several multinational corporations, and is now being researched for use in Unmanned Aerial Vehicles (UAVs) and satellite applications.
Assistant Professor Zheng Yuanjin from NTU’s School of Electrical and Electronic Engineering, who led the research, said that the size and effectiveness of the chip will open up new applications that were not previously possible.
“We have significantly shrunk the conventional radar camera into a system that is extremely compact and affordable, yet provides better accuracy. This will enable high resolution imaging radar technology to be used in objects and applications never before possible, like small drones, driverless cars and small satellite systems,” said Asst Prof Zheng.
Advantages over current technology
Current radar camera systems are usually between half a metre and two metres in length and weigh up to 200 kg. They cost more than US$1 million on the market and can draw over 1,000 watts of power, consuming as much electricity in an hour as a household air-conditioning unit.
Known as Synthetic Aperture Radar (SAR), these large radar cameras are typically carried on large satellites and aircraft to produce detailed images of the Earth’s surface. Objects longer than a metre, such as cars and boats, can easily be seen by a radar camera mounted on an aircraft flying at a height of 11 kilometres.
Unlike optical cameras, which cannot work well at night or in cloudy conditions because of insufficient light, a radar camera uses microwaves (X-band or Ku-band) for its imaging, so it can operate in all weather conditions and can even penetrate foliage.
These detailed images from radar cameras can be used for environmental monitoring of disasters such as forest fires, volcanic eruptions and earthquakes, as well as for monitoring cities for traffic congestion and urban density.
But the huge size, prohibitive cost and high energy consumption of such systems rule them out for smaller unmanned aerial vehicles and autonomous vehicles. In comparison, NTU’s new radar chip (2 mm x 3 mm), when packaged into a module, measures only 3 cm x 4 cm x 5 cm and weighs less than 100 grams.
Production costs can go as low as US$10,000 per unit, while power consumption ranges from 1 to 200 watts depending on the application, comparable to a power-efficient LED TV or a ceiling fan.
It can also capture objects as small as half a metre, twice the resolution of the conventional radar cameras used on large aircraft or satellites.
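As a rough check, the quoted power and weight figures can be compared directly. The short Python snippet below does that arithmetic using only the numbers given above; picking the upper ends of the quoted ranges to compare is an assumption made for illustration.

```python
# Rough sanity check of the savings quoted above, using only figures
# given in the article (upper bounds of quoted ranges, not measurements).

conventional_power_w = 1_000   # "over 1,000 watts" for a conventional SAR system
chip_power_w_max     = 200     # top of the module's 1-200 watt operating range

conventional_mass_kg = 200     # heaviest conventional system quoted (50-200 kg)
chip_module_mass_kg  = 0.1     # "less than 100 grams"

power_saving = 1 - chip_power_w_max / conventional_power_w
print(f"Power saving at the module's highest setting: {power_saving:.0%}")
print(f"Weight ratio: {conventional_mass_kg / chip_module_mass_kg:.0f}x lighter")
```

Even at the top of its power range, the module draws 80 per cent less than a 1,000-watt conventional system, consistent with the "at least 75 per cent less power" figure above.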
Potential applications of the new radar chip
Asst Prof Zheng said that when mounted on UAVs, the chip can take high-quality images on demand to monitor traffic conditions or even coastlines for trespassers.
“Driverless cars will also be able to better scan the environment around them to avoid collisions and navigate more accurately in all weather conditions compared to current laser and optical technologies,” he added.
“Finally, with the space industry moving towards small satellite systems, such as the six satellites launched by NTU, smaller satellites can now also have the same advanced imaging capabilities previously seen only in the large satellites.”
Large satellites can weigh up to 1,000 kg, but microsatellites weigh only 100 to 200 kg.
Two technologies that use deep learning to help machines see and recognise their location and surroundings could be used in the development of driverless cars and autonomous robotics, and they can run on a regular camera or smartphone.
Two newly-developed systems for driverless cars can identify a user’s location and orientation in places where GPS does not function, and identify the various components of a road scene in real time on a regular camera or smartphone, performing the same job as sensors costing tens of thousands of pounds.
The separate but complementary systems have been designed by researchers from the University of Cambridge and demonstrations are freely available online. Although the systems cannot currently control a driverless car, the ability to make a machine ‘see’ and accurately identify where it is and what it’s looking at is a vital part of developing autonomous vehicles and robotics.
The first system, called SegNet, can take an image of a street scene it hasn’t seen before and classify it, sorting objects into 12 different categories – such as roads, street signs, pedestrians, buildings and cyclists – in real time. It can deal with light, shadow and night-time environments, and currently labels more than 90% of pixels correctly. Previous systems using expensive laser- or radar-based sensors have not been able to reach this level of accuracy while operating in real time.
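SegNet itself is a deep encoder-decoder convolutional network. As an illustration of what "classifying every pixel into one of 12 categories" means in code, the sketch below builds a toy per-pixel classifier in PyTorch; the tiny network, the exact class list and the image size are illustrative assumptions, not the authors' model.

```python
# Minimal sketch of per-pixel ("semantic segmentation") classification in PyTorch.
# This is NOT the authors' SegNet code; the tiny network, the 12-class list and
# the input size are illustrative assumptions.
import torch
import torch.nn as nn

CLASSES = ["road", "pavement", "building", "wall", "fence", "pole",
           "street sign", "vegetation", "terrain", "sky", "pedestrian", "cyclist"]
# (hypothetical 12-class list; the article names only a few of the categories)

class TinySegNet(nn.Module):
    def __init__(self, num_classes=len(CLASSES)):
        super().__init__()
        # Encoder: downsample the image while extracting features.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                       # 1/2 resolution
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                       # 1/4 resolution
        )
        # Decoder: upsample back so every input pixel gets a class score.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(16, num_classes, 2, stride=2),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))       # (N, num_classes, H, W)

model = TinySegNet().eval()
image = torch.rand(1, 3, 360, 480)                 # one RGB street image (H=360, W=480)
with torch.no_grad():
    scores = model(image)                          # per-pixel class scores
labels = scores.argmax(dim=1)                      # (1, 360, 480) map of class indices
print(labels.shape, CLASSES[labels[0, 180, 240]])  # class predicted at the centre pixel
```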
Users can visit the SegNet website and upload an image or search for any city or town in the world, and the system will label all the components of the road scene. The system has been successfully tested on both city roads and motorways.
For the driverless cars currently in development, radar- and laser-based sensors are expensive – in fact, they often cost more than the car itself. In contrast with such sensors, which recognise objects through a mixture of radar and LIDAR (a remote sensing technology), SegNet learns by example – it was ‘trained’ by an industrious group of Cambridge undergraduate students, who manually labelled every pixel in each of 5,000 images, with each image taking about 30 minutes to complete. Once the labelling was finished, the researchers took two days to ‘train’ the system before it was put into action.
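That "learning by example" step is ordinary supervised training: the network’s per-pixel predictions are compared with the hand-assigned pixel labels, and the weights are nudged to reduce the disagreement. Below is a minimal PyTorch sketch of that loop, with random tensors standing in for the 5,000 labelled images; it illustrates the general technique, not the authors' actual pipeline.

```python
# Minimal supervised training loop for per-pixel classification.
# Random tensors stand in for the 5,000 hand-labelled street images;
# the stand-in model and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

NUM_CLASSES = 12
model = nn.Conv2d(3, NUM_CLASSES, kernel_size=1)          # trivial stand-in for the real network
optimiser = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()                           # per-pixel prediction vs. hand label

for step in range(100):                                   # the real training ran for about two days
    images = torch.rand(4, 3, 90, 120)                    # a mini-batch of street images
    labels = torch.randint(0, NUM_CLASSES, (4, 90, 120))  # one hand-assigned class per pixel
    scores = model(images)                                # (4, 12, 90, 120) per-pixel class scores
    loss = loss_fn(scores, labels)                        # how far predictions are from the labels
    optimiser.zero_grad()
    loss.backward()                                       # work out how to adjust each weight
    optimiser.step()                                      # adjust the weights
```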
“It’s remarkably good at recognising things in an image, because it’s had so much practice,” said Alex Kendall, a PhD student in the Department of Engineering. “However, there are a million knobs that we can turn to fine-tune the system so that it keeps getting better.”
SegNet was primarily trained in highway and urban environments, so it still has some learning to do for rural, snowy or desert environments – although it has performed well in initial tests for these environments.
The system is not yet at the point where it can be used to control a car or truck, but it could be used as a warning system, similar to the anti-collision technologies currently available on some passenger cars.
“Vision is our most powerful sense and driverless cars will also need to see,” said Professor Roberto Cipolla, who led the research. “But teaching a machine to see is far more difficult than it sounds.”
As children, we learn to recognise objects through example – if we’re shown a toy car several times, we learn to recognise both that specific car and other similar cars as the same type of object. But with a machine, it’s not as simple as showing it a single car and then having it be able to recognise all different types of cars. Machines today learn under supervision: sometimes through thousands of labelled examples.
There are three key technological questions that must be answered to design autonomous vehicles: where am I, what’s around me and what do I do next. SegNet addresses the second question, while a separate but complementary system answers the first by using images to determine both precise location and orientation.
The localisation system designed by Kendall and Cipolla runs on a similar architecture to SegNet, and is able to localise a user and determine their orientation from a single colour image in a busy urban scene. The system is far more accurate than GPS and works in places where GPS does not, such as indoors, in tunnels, or in cities where a reliable GPS signal is not available.
It has been tested along a kilometre-long stretch of King’s Parade in central Cambridge, and it is able to determine both location and orientation within a few metres and a few degrees, which is far more accurate than GPS – a vital consideration for driverless cars.
The localisation system uses the geometry of a scene to learn its precise location, and is able to determine, for example, whether it is looking at the east or west side of a building, even if the two sides appear identical.
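In code, such a localisation system can be pictured as a network that takes a single colour image and regresses a position and an orientation directly. The sketch below is a hypothetical minimal version in PyTorch, with the position as x, y, z coordinates and the orientation as a unit quaternion; the architecture and output parameterisation are illustrative assumptions, not the authors' model.

```python
# Minimal sketch of regressing camera position and orientation from one image.
# Hypothetical toy network; not the authors' localisation system.
import torch
import torch.nn as nn

class TinyPoseNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(                      # shrink the image into a feature vector
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.position = nn.Linear(32, 3)                    # x, y, z in metres
        self.orientation = nn.Linear(32, 4)                 # quaternion for the viewing direction

    def forward(self, x):
        f = self.features(x)
        q = self.orientation(f)
        q = q / q.norm(dim=1, keepdim=True)                 # normalise to a valid rotation
        return self.position(f), q

model = TinyPoseNet().eval()
image = torch.rand(1, 3, 224, 224)                          # one colour image of a street scene
with torch.no_grad():
    position, orientation = model(image)
print(position.shape, orientation.shape)                    # (1, 3) and (1, 4)
```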
“Work in the field of artificial intelligence and robotics has really taken off in the past few years,” said Kendall. “But what’s cool about our group is that we’ve developed technology that uses deep learning to determine where you are and what’s around you – this is the first time this has been done using deep learning.”