Regarding the scope of civil liability, understood as the obligation to compensate for damage caused with the aim of restoring the victim's economic position prior to the damage, autonomous robots cannot, under the current legal framework, be held responsible for acts or omissions that cause damage to third parties, since it is not possible to identify the party that must bear the compensation, nor to demand that the robot itself repair the damage caused.
The legal status of AI systems is not clearly defined: neither their legal nature nor their membership in one of the existing legal categories (natural persons, legal persons, animals, or objects) has been established, which puts on the table the possibility of creating a new category with its own legal characteristics.
Within the European Union (EU), work is under way on global AI standards aimed at avoiding security risks and maximizing benefits. The EU promotes the exchange of information in multilateral forums in order to formulate standards and build partnerships that address the limits of current multilateral regulation.
The report approved by the European Parliament on February 16, 2017 sets out several lines of work, of which we highlight the following (Fig. 2):
- Creation of a European AI Agency able to advise public authorities thanks to its technical, ethical, and regulatory expertise.
- Development of a code of ethical conduct and responsibility to serve as a basis for determining who is responsible for the social, environmental, and human-health impacts of AI, and for ensuring that AI systems operate in accordance with legal, safety, and ethical standards.
- Creation of an "electronic person" legal status.
Another important point is that, because the AI software that assists radiologists relies on deep learning methods or algorithms whose internal reasoning departs from human mental processing, it is very difficult to determine the cause of an error in these systems and, therefore, to apportion responsibility within the diagnostic or therapeutic decision process.
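To illustrate why error attribution is so hard, the sketch below shows occlusion sensitivity, one common post-hoc explainability technique for black-box models. It is a simplified, hypothetical Python example: `toy_model` and the random `chest_xray` array are stand-ins, not any clinical product. Patches of the input are masked one at a time and the drop in the model's output score is recorded, yielding only an indirect map of which regions influenced the decision.

```python
import numpy as np

def occlusion_sensitivity(model, image, patch=8, stride=8, baseline=0.0):
    """Approximate which regions of `image` drive a black-box model's score.

    `model` is any callable mapping a 2-D array to a scalar score;
    internally it may be an arbitrarily opaque deep network.
    """
    h, w = image.shape
    reference = model(image)
    heatmap = np.zeros(((h - patch) // stride + 1,
                        (w - patch) // stride + 1))
    for i, y in enumerate(range(0, h - patch + 1, stride)):
        for j, x in enumerate(range(0, w - patch + 1, stride)):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch] = baseline  # mask one patch
            # The score drop approximates this region's contribution.
            heatmap[i, j] = reference - model(occluded)
    return heatmap

# Toy stand-in for an opaque classifier: it scores mean pixel intensity.
toy_model = lambda img: float(img.mean())
chest_xray = np.random.default_rng(0).random((64, 64))  # placeholder image
print(occlusion_sensitivity(toy_model, chest_xray).round(3))
```

Even with such tools, the heatmap only correlates image regions with score changes; it does not provide a causal account of why the network erred, which is precisely the evidentiary gap at the heart of the liability debate.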
Depending on the point of view from which we interpret the existence and functioning of an AI, three possible scenarios can be outlined (Fig. 3):
- If we understand an AI as an entity that, for practical purposes, could be regarded as a living being unaware of its actions and therefore not imputable, the possessor (understood as the person or organization that uses and benefits from it) would be liable for the damage caused. This liability would cease only if the damage arose from force majeure or from the fault of the injured party. A helpful analogy is an act by a pet or a very young child that harms a third person.
- If, on the contrary, we perceive AI as an inanimate object, a "machine", it could never be liable for the damage it causes, and liability would fall on the employer, even though the employer neither intervenes in its operation nor gives it instructions, which come pre-programmed or are self-programmed by the AI itself. While this is relatively clear in the business field, in our field of medicine the debate remains clearly open.
- Finally, we can view the performance of AI systems through the lens of a defective product, where the law appears clear and, in principle, protects the consumer, but with one weakness: if a harm is deemed unforeseeable by the manufacturer, the manufacturer would be released from all responsibility. Bearing in mind that medicine is a field in which handling uncertainty is essential and no two patients are exactly alike, we must be alert to this aspect of the law.
Therefore, given the unpredictable nature of some AI systems, which may have a greater or lesser degree of autonomy as well as the capacity to react and learn, we would be facing an incomplete regime that does not fully solve the problem.
So far, deepfake attacks have mainly been used to fabricate messages from politicians declaring support for causes they do not endorse. There is now sufficient evidence that deepfakes can also be used to alter data, even where we most need data to be accurate, as in healthcare. We will have to assess the risk of being affected in our diagnostic imaging environment and establish preventive measures proportional to the danger, for which we must at least be familiar with these potential problems.
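As one illustration of a preventive measure (a minimal sketch, not a protocol mandated by any imaging standard; the file name and registry below are hypothetical), cryptographic hashing can reveal whether an image file was altered after acquisition: a digest recorded when the study is created will no longer match if even a single byte is later modified.

```python
import hashlib
from pathlib import Path

def sha256_of_file(path: Path, chunk_size: int = 1 << 20) -> str:
    """Return the SHA-256 digest of a file, read in 1 MiB chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

def verify_study(path: Path, trusted_registry: dict) -> bool:
    """Compare a file's current hash with the one recorded at acquisition.

    `trusted_registry` stands in for a tamper-evident store (e.g., a
    write-once database or signed log) populated when the study is created.
    """
    recorded = trusted_registry.get(path.name)
    return recorded is not None and recorded == sha256_of_file(path)

# Hypothetical usage: register a study at acquisition, verify it later.
study = Path("CT_0001.dcm")             # placeholder file name
study.write_bytes(b"\x00" * 1024)       # stand-in for real DICOM bytes
registry = {study.name: sha256_of_file(study)}
print(verify_study(study, registry))    # True while the file is unchanged
study.write_bytes(b"\x01" + b"\x00" * 1023)  # simulate tampering
print(verify_study(study, registry))    # False after modification
```

Hashing alone flags modification after the fact; it cannot stop an attack at the acquisition source itself, so in practice it would be combined with signed metadata, audit logs, and access controls.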