Explainable AI

Each photo shows how the AI system classified the image and next to it is the part of the image that the system gave most importance to in order to arrive at the prediction. PHOTO: Why Should I Trust You?: Explaining the Predictions of Any Classifier, by Ribeiro, Singh and Guestrin. ACM SIGKDD 2016


Artificial Intelligence (AI) is the latest buzzword, thrown into almost any conversation about technology. The recent hype stems from technological advances over the last decade: faster computational power made it possible to build better computational models, which systems use to make decisions, predict actions and classify new information.

One of the main concerns for AI practitioners and users alike is explainability. AI systems and their computational models can often be something of a black box. We feed an algorithm data to learn from, and it builds a model by learning to generalise from that data. With some algorithms, we know exactly how the model is learnt and how its answers are arrived at. With others, we might not be able to pinpoint why something was learnt in a certain way, at least not without digging deeper under the hood.

Image classification offers a simple example of this explainability problem. Say we build an image classifier to distinguish pictures of wolves from pictures of huskies. We start from a dataset of images, some of huskies and some of wolves, and the algorithm is simply told which images show huskies and which show wolves.

Computationally, an image is analysed simply in terms of its pixels and colours. The algorithm tries to learn which factors in the images separate the husky images from the wolf images. However, it has no notion of, for instance, what a dog's ear is or what it should look like. The conceptual features we humans use to tell such images apart are not available to these types of algorithms. This also means it is not entirely straightforward to see how the algorithm actually manages to distinguish the images from one another.
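To make the pixels-and-labels view concrete, here is a minimal sketch. Everything in it is invented for illustration: the "images" are random brightness vectors rather than real photos, and the nearest-centroid model is a deliberately simple stand-in for a real classifier, not the system used in the study.

```python
import numpy as np

# Toy "husky vs wolf" data: each image is a flat vector of 64 pixel
# brightness values in [0, 1]. These arrays are invented for
# illustration only; no real photos are involved.
rng = np.random.default_rng(0)
wolves = rng.uniform(0.6, 1.0, size=(20, 64))   # bright (snowy) scenes
huskies = rng.uniform(0.0, 0.4, size=(20, 64))  # darker backgrounds

# "Training": the model sees only pixel values plus a label.
# A nearest-centroid classifier stores one mean pixel vector per class.
centroids = {
    "wolf": wolves.mean(axis=0),
    "husky": huskies.mean(axis=0),
}

def classify(image):
    """Label an image by its closest class centroid in raw pixel space."""
    return min(centroids, key=lambda c: np.linalg.norm(image - centroids[c]))
```

Note that nothing in `classify` knows what an ear or a snout is: the decision rests entirely on raw pixel values, which is exactly why it can latch onto the wrong cue.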

From an AI perspective, this type of image classification system achieves very good results and, in general, will reliably tell a husky from a wolf. However, when we look into why, and at which features actually set the two classes of images apart, we find they had nothing to do with the part of the image containing the husky or the wolf. Rather, the classifier learnt that, among the images it was given, those labelled as wolves contained a large white area (snow in the background), while those labelled as huskies did not, making this white pattern the main distinguishing factor between the two classes.
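One simple way to probe what a model is looking at is occlusion sensitivity: grey out one patch of the image at a time and measure how much the prediction score drops. This is a rough stand-in for, not a reproduction of, the LIME method used by Ribeiro and colleagues. The `wolf_score` function below is a hypothetical model that responds only to overall brightness, mimicking a classifier that latched onto snow.

```python
import numpy as np

def wolf_score(image):
    # Hypothetical model for illustration: it responds only to overall
    # brightness, mimicking a classifier that learnt "snow means wolf".
    return image.mean()

def occlusion_importance(image, patch=4):
    """Importance of each patch = score drop when that patch is greyed out."""
    base = wolf_score(image)
    h, w = image.shape
    importance = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            masked = image.copy()
            masked[i:i + patch, j:j + patch] = 0.5  # neutral grey
            importance[i // patch, j // patch] = base - wolf_score(masked)
    return importance

# An 8x8 "photo": bright snow in the top half, a dark animal below.
img = np.vstack([np.full((4, 8), 0.95), np.full((4, 8), 0.15)])
imp = occlusion_importance(img, patch=4)
```

Here the snowy patches dominate the importance map, while the patches covering the animal barely matter, which is precisely the kind of diagnosis that exposes a classifier relying on the background rather than the subject.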

This is definitely not what we would expect of an intelligent system. It is for this reason that Explainable AI (XAI) is becoming an increasingly important field. Researchers are looking not only at creating computer systems that appear to act intelligently, but also at understanding why predictions and decisions are made and how the information is derived. It is only by achieving this level of understanding that practitioners and users alike can trust AI systems in the future.

Dr Claudia Borg is a lecturer at the Department of Artificial Intelligence at the University of Malta.

Did you know?

• The planet’s average surface temperature has risen about 0.9 degrees Celsius since the late 19th century.

• The five warmest years on record have taken place since 2010.

• The oceans absorb much of this increased heat, with the ocean surface registering an increase in temperature.

• Global sea levels rose about 20cm in the last century. The rate in the last two decades, however, is nearly double that of the last century and is accelerating slightly every year.

For more trivia see: www.um.edu.mt/think

Sound bites

• Heat trapped by greenhouse gases is raising ocean temperatures faster than previously thought. Measurements are collected using a fleet of 4,000 robots drifting in the world’s oceans and diving to a depth of 2,000 metres to measure temperature, pH, salinity and other useful information. The results provide further evidence that earlier claims of a slowdown or ‘hiatus’ in global warming over the past 15 years were unfounded and that climate change remains a serious threat.

https://www.sciencedaily.com/releases/2019/01/190110141811.htm

• Research also continues to measure the loss of ice mass over time as an indicator of the impacts of climate change. Techniques for estimating ice-sheet mass balance use high-resolution aerial photos, satellite radar interferometry and imagery dating back to the early 1970s. The analysis revealed that while the average loss of ice between the 1970s and 1990s was around 48 billion tons annually, the annual ice loss increased to an average of 134 billion tons per year over the last two decades.

https://www.sciencedaily.com/releases/2019/01/190114161150.htm

For more soundbites listen to Radio Mocha on Radju Malta every Monday at 7pm, with a repeat on Thursday at 4pm on Radju Malta 2.

https://www.facebook.com/RadioMochaMalta/
