Artificial Intelligence (AI) is increasingly affecting our lives. Self-driving cars, medical diagnostic tools that catch rare conditions, and automated surveillance techniques are already with us.

Product recommendation systems use pattern recognition software to analyse our needs and optimise our shopping experience; powerful data mining applications sift through a wealth of information in very little time; and AI-enabled decision-making systems using predictive analytics are employed to detect financial fraud, tax evasion, or money laundering.

These new technologies present a variety of commercial opportunities and the potential to change our daily lives. At the same time, their development and use have to be well regulated. In this article, we highlight some of the business implications of regulating AI and advanced machine learning.

The large volumes of data collected and processed by AI systems may create challenges for compliance with laws governing individual privacy and data security. The EU's General Data Protection Regulation (GDPR) defines personal data broadly, such that much of the data processed by AI systems is likely covered.

The GDPR requires data controllers to provide individuals with privacy notices. For example, where the data processing involves "automated decision-making, including profiling", the privacy notice must include "meaningful information about the logic involved". Because it is often difficult to explain in plain terms how a given AI algorithm reaches its results, such an explanation may be hard to provide. Furthermore, the GDPR requires appropriate precautions to avoid discriminatory effects from profiling. It may be challenging for companies to fully account for all unintended biases depending on how AI outputs are used, especially as those uses may not be controlled by the entity that developed a particular AI solution.


The GDPR also grants individuals rights of access, rectification, erasure, restriction of processing, data portability, and objection to certain types of processing. Companies will have to design AI products with these rights in mind, and provide mechanisms for individuals to exercise them where AI outputs may include personal data.

From a cybersecurity perspective, the threats to AI data from attackers or negligent handling are many and varied. It is important to reasonably secure any personal data that AI systems analyse or output, especially where the information reveals sensitive characteristics, such as medical conditions or financial history.

Fast-paced development and innovation can raise interesting product liability challenges for manufacturers and businesses in the product supply chain, including ensuring that new products meet the requirements of relevant regulatory regimes while minimising future litigation risks. Currently, the EU has no specific legislation, regulations, or standards that apply to AI in particular. Instead, a manufacturer would need to look at the wider EU legislative landscape applicable to products.

It has been argued that the most challenging legal issues arise when human intervention is taken out of the equation and AI begins to make its own independent decisions. For example, defects traditionally exist at the time a product is sold. But AI will increasingly be capable of learning on its own, so a defect may only emerge after the sale.

If an AI product learns to become unsafe in response to its external environment, would the capacity to learn unsafe behaviour make it a defective product, bringing it within the scope of product liability regimes? What types of injuries would be the foreseeable consequences of AI continuing to learn? Who would be liable? What about a consumer who programmed the product at home? These are the types of issues manufacturers will need to grapple with when assessing the litigation risks of marketing new AI products.

The twin questions of “Who is the inventor?” and “Who is the author?” bring up interesting and complex questions where patents and copyrights are concerned in the field of AI. For example, in the case of patentable inventions, if the solution to a technical problem is developed by the AI system, yet is obscured by the black box of the AI algorithm, how can the proprietors of the AI system even recognise or determine that the AI has devised a solution that is sufficiently novel to be potentially patentable?

Moreover, can the human developers of the AI system be deemed to be the inventors or authors of the AI system’s output? Would the answer be different when an AI system develops inventions, or art, or music that was not specifically foreseen by the human developers of the AI system?

AI has transformed, or will transform, virtually all industries, including in ways not yet known. With this transformation will come uncertainty as to how existing and new legal frameworks will apply to the new technologies, and what liabilities may follow.

The question marks are many, but these innovations are certain to bring into discussion fundamental issues spanning cross-border jurisdictional concerns, the application of multiple regulations, and strategies on how to navigate the rules in product development, deployment and management.

David Galea is chief executive officer/partner at Beat Limited.
