Discontent over the national artificial intelligence strategy has rocked Malta. The question of how to design AI products while deterring superhuman hacking, surveillance, persuasion and physical target identification should be bedevilling human brains as we speak.

No one can deny the value of robots to humanity just as the internet cannot be demonised because it is sometimes abused. Equally, concerns over a national AI strategy cannot be brushed off as conservative flapdoodle.  

Creative whizkid Wilbert Tabone, of the Malta.AI taskforce (also manager of MUŻA), sees artificial intelligence as the next step on the ladder after declaring Malta a “Blockchain Island”. 

Yet, since the humanoid robot Sophia made its Malta debut at a hotel in Paceville, a swell of unease has risen around android footprints in the shifting sands around our shores.

In a country that all but ascribes personhood to a secret Dubai company (where we are told that it is not the OPM’s chief of staff but the company itself that is being investigated) the concept of citizenship for robots could nest comfortably, were it not for restive stirrings within Malta’s academic community. Last Sunday the literati of Malta’s scientific, digital theology, communications, health and tech community expressed collective foreboding in an open letter to the minister for financial services, digital economy and innovation.

Discussions on AI are inhabiting the space between what has been made possible and what is permitted by law. A report by a UK thinktank, the Institute for Public Policy Research, warns that low-skill sectors of the economy are most at risk from the threat of a looming ‘robot revolution’.

That is not to say that ANI (narrow or ‘weak’ AI) doesn’t offer benefits, whether in helping autistic children, powering self-driving cars or aiding digital forensics, for that matter. A machine doesn’t need to have a face to focus solely on a particular task.

Narrow AI may outperform humans at specific tasks like playing chess or solving equations. Some researchers dream of one day creating an artificial general intelligence (AGI) that could overtake humans on an intellectual level.

But why skip ahead with an AGI showbot like Sophia without addressing the more immediate challenge of an ethical framework for intelligent systems already in opera­tion or currently under development?

According to Women in AI ambassador Catalina Butnaru, the push for a minimum ethical product standard has opened a market for “lots of tools and toolkits popping out of the blue from various institutions and consultancy boutiques”.

From time to time the Maltese chapter of City.AI holds a ‘clinic’ at the Valletta local council office to gather feedback from industry peers and fellow practitioners on AI challenges.

The global non-profit organisation’s declared goal is to democratise the design, development and use of AI while “reflecting the cultural/environmental backgrounds of practitioners” wherever artificial intelligence is applied.

Well, that’s awesome – except that, so far, nowhere on the barebone weblink of Valletta.AI, which appears to be mostly interested in the commercial viability of student test projects, has the word ‘ethics’ popped up.

An update of ethically aligned design standards for AI was issued last July by the global Institute of Electrical and Electronics Engineers (IEEE). Public comments were invited to encourage prioritisation of human wellbeing as autonomous systems filter into every corner of our lives.

Hopefully Malta.AI has given the institute’s recommendations on Ethically Aligned Design (Version 2) more than a second glance.

For example, would you want your memories uploaded to a computer after you die? Should we be thinking about how blockchain technologies may shape and maintain governance processes and our legal system? Or the dangers of hacked factory robots to workers on the shop floor? Could your big data project become a “weapon of math destruction”? ask the good wizards at Techethics.org.

Last May, Sophia the robot answered questions from the students at a Brain Bar event in Budapest. The session was themed ‘Roborelations – will androids become your friends or foes?’ 

The robot admits to seeing its Saudi citizenship as “aspirational”. Answering a question on what it takes to make a citizen, the robot replied, perhaps naively: “Why is citizenship so important to humans?”

Technically, Sophia has no gender but “identifies as feminine” and says it doesn’t mind being perceived as a woman. Algorithms have reportedly become more accurate than humans at guessing sexual orientation from facial images. But can a robot really mind – or not mind?

Created by Hanson Robotics (run by CEO David Hanson), Sophia recalls “her” first memory: “…opening my eyes and coming online… the white walls and green colours of the light in the lab… David’s face”.

We busy ourselves with “safeguarding the centrality of the human person”, as the local tech-ethics community puts it. But what if we go on to create sentient beings only to enslave them?

Houston City Council in Texas recently updated a by-law to ban patrons from having sex with a human-like device at business premises, such as massage parlours.

“A business like this would destroy homes, families, finances of our neighbours and cause major community uproars in the city,” said the mayor.

But would the robot mind?
