In the midst of the blockchain frenzy currently gripping Malta’s business community, Prime Minister Joseph Muscat has announced that Malta will now turn its focus to the regulation of ‘artificial intelligence’, or, as it is increasingly referred to, AI.

Speaking at the Delta Summit held earlier this month, the Prime Minister highlighted the need for “new forms of social safety nets and a rethink of basic interactions”.

He said that “not only can we not stop change, but we have to embrace it with anticipation since it provides society with huge opportunities”.

This statement was followed by similar declarations at the Malta Innovation Summit, where Dr Muscat reiterated these intentions and even observed that “in the not too distant future, we may reach a stage where robots may be given rights under the law”.

This latter statement seems to have generated some unease. Comments posted online beneath the articles where the Prime Minister’s announcements were reported were sometimes quite negative. Reading through them, I came to the realisation that for many, the mention of ‘AI’ still conjures up images of the Terminator, apocalyptic outcomes and the words of the late Prof. Stephen Hawking: “The development of full artificial intelligence could spell the end of the human race.”

Despite resistance to the technology, however, the reality is that, although a machine possessing the full range of human cognitive abilities (self-awareness, sentience and consciousness) may take decades to materialise, artificial intelligence is already present in our daily lives and is already affecting our lives both negatively and positively, just as any other human invention does.

We can consider, for example, these familiar systems (which are only the tip of the iceberg):

Speech recognition and ‘intelligent assistants’, e.g. Amazon’s Alexa, Apple’s Siri and Microsoft’s Cortana;

Transactional AI systems, e.g. those used by Amazon and Netflix to predict products or content that a user is likely to be interested in, based on that user’s past behaviour;

‘Intelligent thermostats’, e.g. Nest, which anticipates and adjusts the temperature in a user’s home or office based on past personal patterns;

Self-driving vehicles, e.g. Tesla’s self-driving vehicles, which are proclaimed to “have the hardware needed for full self-driving capability at a safety level substantially greater than that of a human driver”.

Of course, as this technology evolves, there have also been a number of high-profile failures: the Google Photos application, which erroneously tagged photographs in a highly inappropriate manner; certain Google Home Minis, which were apparently occasionally turning on secretly, recording audio from their owners and sending the recordings back to Google; and the Facebook AI-driven chatbots Alice and Bob, which at one point developed their own language and were having private conversations with each other, leading to them being shut down.

In addition, there have already been two well-documented fatal autonomous car accidents in 2018.

In this scenario, where AI is still evolving but at the same time becoming part of our daily lives, we need, as a society, to ask ourselves some important questions. 

What is happening to the data that such systems are collecting about us?

To what extent are these systems taking automated decisions about us without our even being aware of it?

Do we have a right to know the basis upon which such decisions were taken?

Do we have a right to request human intervention in relation to such decisions?

Would this then mean that owners of AI systems would be required to reveal the algorithms upon which these decisions were taken?

Can decisions taken by a machine be explained in a court of law other than by revealing the algorithms?

What happens when AI proprietors themselves do not know the algorithms used, as we reach the stage where AI will itself build AI in ways that might not be transparent to human beings?

If the machine’s ‘intelligence’ is based on big data being fed to it in an automated manner, how do we ensure that the data is free from bias of any kind? Can we do this at all?

If the machine’s decision is flawed, who is liable for this?

Last, but also very importantly, we need to ask ourselves: with machines becoming smarter and perhaps ‘outperforming’ humans in an increasing number of areas, to what extent will human jobs be threatened?

A focus on the regulation of AI is therefore neither misplaced nor secondary: the issues are real and present, and the questions are endless.

The answer, however, is not to turn away from innovation, as it will come our way whether we want it to or not. The answer, as the Prime Minister said, is “to embrace it” – but it is crucial to do so in the most responsible way possible, through appropriate strategy and optimal legislation.

Jackie Mallia, who obtained her Doctorate of Laws at the University of Malta, focuses on the regulation of technology and emerging trends in the sector.

This is a Times of Malta print opinion piece
