
Machina delinquere potest?


Addressing artificial intelligence in business calls for an integrated compliance perspective, one that also takes into account the various roles involved. The higher an artificial intelligence system's risk rating in terms of its potential to undermine the fundamental rights on which the European Union is founded, the more stringent the obligations with which it must comply. The same ratio underpins Legislative Decree no. 231/2001, which requires companies to identify, assess and manage the risks of predicate offences arising from their activities through a robust and effective internal control system.

The increasing use of software and AI systems by public and private companies calls for careful reflection on how – and to what extent – such tools may in practice facilitate the commission of specific predicate offences, exposing companies to potential liability.

Consider money laundering, which may be facilitated by automated payment systems allowing anonymous transactions across bank accounts, cryptocurrencies or digital platforms.

Therefore, when setting up an internal control system, companies cannot disregard the implementation of procedural rules that are not only aimed at complying with the AI Act – whose provisions are differentiated according to the risk level attributed to AI systems – but are also suitable for monitoring the offence risks previously identified.

Both regulations thus regard preliminary risk monitoring as the key factor in risk management strategies.

The Italian legislator promptly intervened on this matter: a draft law on artificial intelligence – currently under examination by the Chamber of Deputies – redefines the criteria for attributing liability to an entity in light of the operator's actual degree of control over the AI. This reflects the principle of human oversight, a key element of the AI Act, i.e. designing and developing systems with human supervision measures so as to ensure that AI systems are effectively monitored by people during their use.

To support businesses in promoting a culture of lawfulness, it is necessary to understand and apply the rules with an integrated approach, so as to exploit the synergies among the various regulatory frameworks.
