Robots are all around us; they are built in various shapes and sizes, and with varying levels of autonomy.

For example, robots have been very successful at performing accurate, repetitive, pre-defined movements and are commonplace in factories. They can also be found in the home, in appliances such as washing machines and dishwashers.

These kinds of stationary robots are limited to a specific task and cannot move about. Robots that are mobile and operated remotely by means of a wireless communication link are gaining popularity.

These are known as tele-operated mobile robots, which can be manoeuvred in areas that are dangerous for humans to explore, such as the site of a damaged nuclear reactor or the surface of Mars.

However, mobile robots that guide themselves without a human operator still present one of the biggest challenges in robotics.

Robots are increasingly needed to autonomously explore dangerous or unknown environments, assist in housework and office chores as well as to interact with us.

Robots are expected to become more important in work, education and entertainment; however, in order for them to move themselves about safely and avoid obstacles, they need some understanding of their surroundings.

In research conducted as part of an M.Sc. programme at the University, visual perception was investigated as a means of enabling a mobile robot to recognise traversable surfaces in its environment in a real-time, reactive manner.

The mobile robot was equipped with a single low-quality webcam, and software was developed to allow the robot to make its own motion decisions by interpreting the images it acquires.

In contrast with commonly used range sensors, such as sonar or laser range finders, vision sensors allow a visual interpretation of the environment. They have the potential to provide not just the distance to nearby objects, but also information about the objects around the robot that is useful for long-range navigation.

Images can also be used as a means to communicate between humans and robots, where an image of an interesting location to visit may be sent over the wireless link to the robot.

An artificial vision system receives a stream of numbers from the camera, and it is up to the vision engineer to create intelligent computer algorithms to process these images and extract useful information for the robot.
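To make this concrete: a single webcam frame arrives as nothing more than a grid of numbers. The short Python sketch below, using the open-source OpenCV library, shows what the computer actually 'sees'; the webcam device index 0 is an assumption and may differ on other machines.

```python
import cv2  # OpenCV, a widely used open-source computer vision library

# Open the default webcam (device index 0 is an assumption; it may differ).
camera = cv2.VideoCapture(0)
ok, frame = camera.read()
camera.release()

if ok:
    # To the computer, the image is just an array of numbers:
    # rows x columns x 3 colour channels, each value between 0 and 255.
    print(frame.shape, frame.dtype)   # e.g. (480, 640, 3) uint8
    print(frame[0, 0])                # blue, green and red values of one pixel
```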

Giving autonomous robots the ability to explore and navigate through their environment using cameras has become a major area in computer vision and robotics research.

This remains open research, challenged by the fundamental problem in computer vision of automatic scene interpretation.

The aim of this research was to develop a computer vision algorithm designed to analyse images in real-time and extract traversability information that a mobile robot would need to guide itself through an indoor or outdoor environment.

The autonomous vision algorithm was tested in practice on a mobile robot codenamed ‘VISAR01’, which stands for Visual Intelligence Systems for Autonomous Robotics.

This mobile robotic platform is compact and flexible; its on-board computer gives it the capability to operate autonomously, without the need for any wired or wireless connection.

Since this work focused on providing a low-cost solution for autonomous robotics, a cheap webcam was used instead of a high-quality, industrial-grade camera.

Furthermore, the in-built sonar and infrared sensors were disabled since the objective of the study was to investigate robot guidance using only the single camera.

Our approach used image cues such as colour and texture patterns to distinguish traversable ground from obstacle areas.
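The article does not spell out the algorithm, but a common way to realise this idea is to sample a reference patch of ground just in front of the robot and label pixels with similar colour statistics as traversable. The Python/OpenCV sketch below is a minimal illustration along those lines, not the actual VISAR01 implementation; the reference-patch location and the threshold value are assumptions, and only colour is used here (texture cues would be handled analogously).

```python
import cv2
import numpy as np

def traversable_mask(frame):
    """Label pixels as ground-like by comparing their colour to a
    reference patch just in front of the robot. A generic sketch,
    not the actual VISAR01 algorithm."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    h, w = hsv.shape[:2]

    # Assumption: the strip at the very bottom of the image, directly
    # in front of the robot, is traversable ground.
    patch = hsv[int(0.8 * h):, :]
    hist = cv2.calcHist([patch], [0, 1], None, [30, 32], [0, 180, 0, 256])
    cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)

    # Back-project the colour model onto the whole image:
    # bright pixels have colours similar to the reference ground patch.
    backproj = cv2.calcBackProject([hsv], [0, 1], hist, [0, 180, 0, 256], 1)
    _, mask = cv2.threshold(backproj, 50, 255, cv2.THRESH_BINARY)
    return mask  # 255 = ground-like, 0 = potential obstacle
```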

We also developed a novel model, based on probability theory, which enables the robot to learn its classification parameters while it is moving.

This allows VISAR01 to adapt to various previously unknown environments without having to be trained beforehand.
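The probabilistic model itself is not detailed in the article. Purely as an illustration of this kind of online learning, the sketch below keeps a running Gaussian model of the ground colour and blends in evidence from each new frame, so older scenery is gradually forgotten and the classifier adapts as the robot moves. The class name, learning rate and decision threshold are all assumed for the example.

```python
import numpy as np

class OnlineGroundModel:
    """A running Gaussian over ground-pixel colours, updated online with
    exponential forgetting. An illustrative stand-in for the probabilistic
    model described in the article, not its actual formulation."""

    def __init__(self, alpha=0.05):   # learning rate: an assumed value
        self.alpha = alpha
        self.mean = None
        self.var = None

    def update(self, ground_pixels):
        # ground_pixels: an N x 3 array of colours believed to be ground,
        # e.g. sampled from the area the robot has just driven over.
        m = ground_pixels.mean(axis=0)
        v = ground_pixels.var(axis=0) + 1e-6
        if self.mean is None:
            self.mean, self.var = m, v
        else:
            # Blend new evidence with the old model, so the classifier
            # keeps adapting while the robot moves.
            self.mean = (1 - self.alpha) * self.mean + self.alpha * m
            self.var = (1 - self.alpha) * self.var + self.alpha * v

    def is_ground(self, pixels, k=3.0):
        # A pixel counts as ground-like if it lies within k standard
        # deviations of the learned mean in every colour channel.
        z = np.abs(pixels - self.mean) / np.sqrt(self.var)
        return (z < k).all(axis=-1)
```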

The results obtained from this research demonstrated the feasibility of the traversability detection algorithm in various indoor and outdoor environments.

The robot can sometimes be seen roaming the corridors of the University’s engineering faculty and laboratory but is not to be entirely trusted on its own.

Just like a baby taking its first steps, it may still bump into some things due to its restricted field of view and limited understanding of the scene.

When VISAR01 encounters obstacles, it currently prefers to move towards the largest open space. In the future, this reaction may be replaced by higher-level behaviours such as moving towards a particular object or guiding a group of people around the faculty.
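As a rough illustration of the current 'largest open space' rule (the actual control logic is not described in the article), a reactive controller might scan the bottom of the traversability mask for the widest run of free columns and steer towards its centre; the function name and all thresholds below are assumptions.

```python
import numpy as np

def steer_to_open_space(mask):
    """Return a steering value in [-1, 1] (negative = turn left) by
    heading for the widest free gap near the bottom of the traversability
    mask. Illustrative only; all thresholds are assumed."""
    h, w = mask.shape
    # A column is "free" if it is mostly ground-like in the nearest rows.
    near = mask[int(0.7 * h):, :] > 0
    free = near.mean(axis=0) > 0.9

    best_len, best_mid, run_start = 0, w // 2, None
    for x, f in enumerate(np.append(free, False)):  # sentinel closes last run
        if f and run_start is None:
            run_start = x
        elif not f and run_start is not None:
            if x - run_start > best_len:
                best_len, best_mid = x - run_start, (run_start + x) // 2
            run_start = None

    # Steer towards the centre of the widest gap, relative to image centre.
    return (best_mid - w / 2) / (w / 2)
```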

Sapienza’s M.Sc. studies were partially funded by the Strategic Educational Pathways Scholarship (STEPS) programme, which is part-financed by the European Union – European Social Fund (ESF) under Operational Programme II – Cohesion Policy 2007-2013, ‘Empowering People for More Jobs and a Better Quality of Life’.

