Barely hours after the new legislative framework comprising three Acts – the Virtual Financial Assets Act (VFAA), the Malta Digital Innovation Authority Act (MDIAA) and the Innovative Technology Arrangements and Services Act (ITASA) – came into force, Malta appointed a task force to deal with Artificial Intelligence and implement a national AI strategy.

The Innovative Technology Arrangements and Services Act regulates smart contracts, Decentralised Autonomous Organisations (DAOs) and elements of distributed or decentralised ledger technologies, a popular example of which is the blockchain. These can be voluntarily submitted for recognition to the MDIA. Prior to this stage, such innovative technology arrangements must be reviewed by a systems auditor, one of the services outlined as an Innovative Service under the ITASA.

The systems auditor must review the innovative technology arrangement against recognised standards, in line with quality and ethical regulations and based on five key principles – security, process integrity, availability, confidentiality and protection of personal data. These have been reinforced by guidelines issued by the MDIA in conjunction with the provisions of the ITASA. The guidelines will soon be expanded to cater for enhanced elements of systems audit in instances that merit deeper audit and analysis in critical areas of activity.

Here the MDIA makes sure that the blueprint of an innovative technology, and thus its functionality, meets the desired outcomes, and that the technology and the algorithm (code) can be trusted to achieve them. The MDIA is also establishing criteria for vetting the code, culminating in certification, which could ultimately also be used by courts in cases of software/code liability.

The coming into force of the new legal framework marks not the end of a journey but the beginning of an immense chapter, one which can also be extended to AI with some minor amendments to the law to cater for non-distributed ledger technology (DLT) elements in the definition of innovative technology arrangements.

The origins of AI can be traced back to the 18th and 19th centuries, from Thomas Bayes and George Boole to Charles Babbage, who designed the first programmable mechanical computer. Alan Turing, in his classic essay ‘Computing Machinery and Intelligence’, also imagined the possibility of computers simulating intelligence. John McCarthy in 1956 defined AI as ‘the science and engineering of making intelligent machines’. AI is essentially built on software, and is thus composed of algorithms and code which can follow the same path of certification under the remit of the MDIA.

In this instance, given the intelligence and autonomous output of the code, the MDIA might request enhanced systems audits as well as more structured certification criteria based on a novel regulatory sandbox for certification. The Regulatory Sandbox here can be used to develop a testing environment in which AI and its underlying logic and code function according to pre-determined functional outputs.

This testing silo will have a controlled environment, with a limited number of participants, within the predetermined implementation periods. The Regulatory Sandbox will not only look at the social and economic viability of the AI/code being proposed for certification but also at how this fits in with current enterprise or societal use and the eventual changes that would need to be made. It can also be used to verify the level of adaptability and adherence to principles like the Asilomar AI Principles or other principles which the MDIA would want to apply.

Earlier this year, the UK House of Lords suggested an enhanced set of ethical principles in its report. These principles are aimed at the safe creation, use and existence of AI and include, among others: Transparency (ascertaining the cause if an AI system causes harm); Value Alignment (aligning the AI system’s goals with human values); and Recursive Self-Improvement (subjecting AI systems with abilities to self-replicate to strict safety and control measures).

One could also visualise this sandbox and the systems auditors functioning as a small window into AI’s so-called black box problem, giving the ability to assess functionality and code response before the code is deployed and certified.


This is not a novel concept in the industry. Apple, for example, uses a sandbox approach for the macOS operating system as well as its apps to protect systems and users by limiting an app’s privileges to its intended functionality, increasing the difficulty for malicious software to compromise the user’s systems. In the case of certain AI, it is envisaged that, aside from verifying concrete properties of the code, there would also need to be a safe layer within the same sandbox which makes sure that the code interacts and functions correctly.

The AI sandbox would thus need to be modelled according to the particular use of the AI and its functionality blueprint, creating an operational environment based on that blueprint and architecture, covering the execution, operation and processes of its functions and thus the emanating certification criteria. These would be tested in a controlled environment to make sure the AI, and hence the code, has the required qualities to be deployed and used.

This would entail creating clear sandbox criteria consisting of the development hardware, software/code, data, tools, interfaces and policies necessary for running analytical deep-learning work in pre-determined environments. These pre-determined environments would need to have sufficient data and modular guard rails with inbuilt adversarial models. This would hypothetically allow the algorithms to be let loose and react to unexpected outcomes, including facing unexpected opposition.
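By way of illustration only, the following is a minimal, hypothetical sketch in Python of how such a harness could check a candidate algorithm against normal and adversarial inputs within pre-determined guard rails. The names, bounds and test cases are assumptions made for this example and are not prescribed by the MDIA or the ITASA.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class TestCase:
    description: str
    inputs: List[float]


def within_guard_rails(output: float, lower: float, upper: float) -> bool:
    # Guard rail: the certification blueprint declares acceptable output bounds.
    return lower <= output <= upper


def run_sandbox(model: Callable[[List[float]], float],
                cases: List[TestCase],
                lower: float,
                upper: float) -> List[str]:
    # Run the candidate model against normal and adversarial cases and report
    # whether each output stays inside the pre-determined bounds.
    report = []
    for case in cases:
        try:
            result = model(case.inputs)
            verdict = "PASS" if within_guard_rails(result, lower, upper) else "FAIL"
        except Exception as exc:
            # Unexpected behaviour (crashes, undefined inputs) is itself a finding.
            result, verdict = None, f"ERROR ({exc})"
        report.append(f"{case.description}: output={result} -> {verdict}")
    return report


def candidate(xs: List[float]) -> float:
    # Toy "AI" under audit: a simple averaging score standing in for real code.
    return sum(xs) / len(xs)


if __name__ == "__main__":
    cases = [
        TestCase("expected input", [0.2, 0.4, 0.6]),
        TestCase("adversarial outlier", [0.2, 0.4, 10000.0]),
        TestCase("adversarial empty input", []),
    ]
    for line in run_sandbox(candidate, cases, lower=0.0, upper=1.0):
        print(line)

In this toy run the expected input passes, the adversarial outlier breaches the declared bounds and fails, and the empty input surfaces a crash, which a real audit would record as a finding in its own right.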

The sandbox in this instance would be able to measure the resultant reactions, given the intelligent and autonomous nature of the code. It is anticipated that such a sandbox would also need an element of proportionality and flexibility so that, as far as possible, it does not limit the use of some AI technologies, such as neural networks, as that might stifle technological innovation.

If Malta wants to shape the future of AI, then it should focus on building certainty around black box environments. It should stimulate creators and inventors with opportunities to build and deploy cutting-edge code within sound certification criteria and parameters.

Ultimately it should lead to an environment that develops an ethical framework for inclusive and diverse AI, with voluntary certification criteria, to avoid obscure and unwarranted outcomes such as that of the HAL 9000 AI in 2001: A Space Odyssey.

Ian Gauci is a partner at GTG Advocates and Caledo. He lectures on Legal Futures and Technology at the University of Malta.

This article is intended for general information purposes and does not constitute legal advice.

Acknowledgment
Thanks to Prof. Gordon Pace for his guidance and valuable inputs.
