News

The European Union is setting the course for regulations on artificial intelligence

The COVID-19 pandemic has overshadowed an important event: the publication of the White Paper on Artificial Intelligence by the European Commission[1]. The EU’s priority is to implement legislation responsibly, with particular emphasis on the protection of personal data, non-discrimination, the proper allocation of responsibility, and the leading and supervisory role of human beings. “The future is now, old man.” This sentence seems closer than ever to our vision of a robotic future.

The European Union is also taking part in the race to design quantum computers, although its aim is not to achieve ever higher computing power but to develop a distributed system composed of smaller units[2].

The White Paper on Artificial Intelligence is accompanied by the Report from the Commission to the European Parliament, the Council and the European Economic and Social Committee[3]. The conclusions of that report are discussed in the article below.

THE EUROPEAN UNION AT THE THRESHOLD OF A NEW REALITY – THE COURSE FOR REGULATIONS ON ARTIFICIAL INTELLIGENCE, THE INTERNET OF THINGS AND ROBOTICS

The Internet has led to the dissemination of knowledge on an unprecedented scale. Artificial Intelligence (AI) and the Internet of Things (IoT) are, in a sense, solutions that emerged in the wake of the global network reaching almost every corner of civilisation. Efforts to automate and digitalise everyday life for the better functioning of society continue. The areas in which the implementation and development of AI and IoT can bring the greatest benefits to the largest number of people include health care, public transport, public administration and education. However, these solutions also entail significant risks. Identifying and analysing these risks will not only shape the legal framework for AI and IoT, but will also ensure its smooth adaptation to rapidly changing conditions.

The White Paper on Artificial Intelligence (“White Paper”) and the Report from the European Commission to the European Parliament, the Council and the European Economic and Social Committee (“Report”) set out the general directions for the European Union to develop legislation on AI and IoT.

  1. What are AI and IoT?

For the purposes of the above-mentioned documents, AI has been defined in the following way: Artificial intelligence refers to systems that display intelligent behaviour by analysing their environment and taking actions – with some degree of autonomy – to achieve specific goals. AI-based systems can be purely software-based, acting in the virtual world (e.g. voice assistants, image analysis software, search engines, speech and face recognition systems) or AI can be embedded in hardware devices (e.g. advanced robots, autonomous cars, drones or Internet of Things applications). Another, equally apt definition is the capability of a machine to imitate intelligent human behaviour.

IoT has been defined in the following way: A global infrastructure for the information society, enabling advanced services by interconnecting (physical and virtual) things based on existing and evolving interoperable information and communication technologies.

AI and IoT are conducive to the dynamic development of the economy. The automation and robotisation of manufacturing, production, distribution and sales processes are present in almost every industry, reducing costs and saving time. The European Union is trying to stay a step ahead and create a legal framework for new technologies in each Member State. From the point of view of the structure of the Union and its internal economic relations, this step is necessary. For example, the harmonisation of regulations is likely to ensure the smooth development of the “Horizon Europe” programme, which is intended to significantly stimulate research, development and innovation (R&D&I) in the Member States.

  2. Ideas on how to deal with safety and responsibility

The primary aim of the White Paper and the Report is to ensure that the legal framework makes products placed on the market safe for users. This concerns both the protection of health and the protection of the environment. The aim is to establish, through European standardisation, uniform standards for all products placed on the EU market.

AI is widely used in many applications and software products, processing huge amounts of publicly available data. The algorithms “learn” how to revise and correct errors in order to perform their tasks ever faster and more accurately. It is thanks to artificial intelligence that, within minutes, we can see advertisements on our phones for a product we searched for in our PC browser. Naturally, this entails a number of risks, especially in relation to personal data protection, the right to privacy and competition law (between businesses).

The Report focuses on the following issues:

  1. As a general rule, in existing EU legislation the concept of safety also covers the protection of consumers and users. This provides a good basis for future attempts to harmonise user protection rules and increases legal certainty.
  2. The current product risk assessment procedure could be extended with an additional assessment in cases where the product is modified during its life cycle, e.g. gains new features through a software upgrade. Such requirements should be accompanied by regulations prescribing appropriate warnings for users.
  3. In line with the overriding principle of human supervision over AI, products using AI should have adequate safeguards and risk management tools throughout the product life cycle.
  4. In the case of AI-based humanoid robots, an explicit obligation to take account of non-material damage caused by products should be considered. This applies primarily to vulnerable users.
  5. The current regulations do not explicitly refer to algorithms. Transparent procedures for responsibility for, and supervision of, the results of their use should therefore be considered.
  6. The current principle of a producer’s responsibility for a product placed on the market may prove outdated if the entities responsible for the product’s performance, from its conception to the end of its life cycle, change many times. The following rule should therefore be considered: each operator in the chain provides the necessary information and resources to subsequent operators so that the required level of safety is maintained and responsibility for the product is transferred.

The producer’s liability is liability based on the principle of risk (strict liability) under the Product Liability Directive (Directive 85/374/EEC). The Report points out that it may be difficult to determine whether damage caused by a given product resulted from human behaviour or from the autonomous action of AI or IoT. In such cases, gathering evidence is a fundamental problem which may impede the investigation and enforcement of claims by the injured party. The Report therefore suggests clarifying the definition of a product so that the responsible entity can be identified more easily. For example, it should be clear how to determine the responsibility of a smartphone manufacturer with respect to software updates or additional applications supplied by a third party. The recommendation also calls for a detailed analysis of potential changes to the burden of proof in claims for damage caused by a product using AI or IoT.

With respect to liability, the definition of placing a product on the market is also being reconsidered. During its life cycle, a product may undergo transformations that separate the original manufacturer from the entity actually responsible for the product’s performance. It is therefore suggested that the definition be changed to reflect actual market conditions.

  3. Conclusions

The widespread use of AI and IoT is inevitable and, given the pace of technological advancement, the law has to be constantly revised to keep up with reality. According to the Report, the current EU regulations contain some gaps but, in general, form a good basis for further work on adapting Community law to the challenges of the coming revolution. Manufacturers such as Tesla and Volvo have announced the launch of fully autonomous cars within the next few years. This means that Member States have limited time to adopt a uniform legal standard for safety and responsibility in the use of AI and IoT.

 

Attorney-at-Law Adam Madejski 

 

[1] https://ec.europa.eu/info/sites/info/files/commission-white-paper-artificial-intelligence-feb2020_pl.pdf
[2] https://www.sztucznainteligencja.org.pl/fugaku-ostatni-samuraj/
[3] https://ec.europa.eu/transparency/regdoc/rep/1/2020/PL/COM-2020-64-F1-PL-MAIN-PART-1.PDF