Robots (whose name comes from the Czech robota, meaning forced labor) are gaining increasing capability in certain specific human activities. A crucial step consists of endowing the robot with the ability to learn from its experience and, therefore, to perform acts that are not foreseeable in detail by the manufacturer. The conduct of the robot and its relationship with humans are the subject of the Three Laws of Robotics formulated by Isaac Asimov in 1940 and, today, of a new discipline called roboethics. But the law is evidently called upon to deal with the multiplicity of conflicts that these new technological opportunities can create.
It can be said that the blooming of these technologies does not seem, at least for now, to reduce the need for legal regulation, which, indeed, according to the latest figures in Europe, is increasingly demanded by the public. From a strictly legal point of view, it can be seen that there exists, to date, no definition of new technologies as a specific area of conflict in which the law has intervened or feels the need to intervene. For example, the European Court of Human Rights, although it has dealt with several cases that, in the court's own words, concern new technologies, and despite showing some familiarity with them, has neither defined the notion of new technologies, nor can it be said to have developed a consistent position on the matter.
The Charter of Fundamental Rights of the European Union, for its part, regulates various matters which fall within the general field of new technologies (such as those in art. 3) and, in addition, contains an art. 8 (a novelty compared to the ECHR) which explicitly protects personal data, a particularly sensitive issue in the ICT field. However, neither does the Charter contain a criterion for determining what is meant by 'new technology' or 'converging technologies'.
According to the Wikipedia entry on robots, "a robot is a mechanical or virtual intelligent agent (but the latter are usually referred to as bots) which can perform tasks on its own, or with guidance. In practice a robot is usually an electromechanical machine which is guided by computer and electronic programming." Starting from the simpler definition, a robot can also be "an automatic machine that does the work of a human." In this perspective, robotics (the discipline that deals with the design, construction and operation of robots) is connected to the disciplines of engineering, electronics and mechanics, as well as to computer science and artificial intelligence.
There are many different types of robots in the world today, each operating in a number of different fields.
In general terms it can be said that robots are objects, artifacts in the hands of the producer, programmer, owner and user. The legal problems that the use of a robot may raise can be traced back to various macro-areas, such as the safety of new technologies (especially their use in the workplace or in the context of dangerous activities), the placing on the market of the 'robot' product, and market surveillance (ISO standards in the field of robotics are currently still being defined).
A very important distinction for this article is that between an 'automatic' robot, an 'autonomous' robot, and a 'cognitive' one. The first has the capacity to react to certain sensor inputs; the second has the additional capacity to perceive the environment as a whole; and the third (similar in some respects to the second) is based on an internal representation of the external world and is able to adapt even to a partially unknown and changing environment.
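The three categories above can be sketched, very schematically, in code. This is a minimal illustrative sketch, not any real robotics API: all class and method names are hypothetical, and each class simply adds the capability that distinguishes its category.

```python
class AutomaticRobot:
    """Automatic: reacts only to a specific sensor trigger (stimulus-response)."""

    def step(self, sensor_value: float) -> str:
        # Fixed reaction rule; no model of the wider environment.
        return "stop" if sensor_value > 0.5 else "continue"


class AutonomousRobot(AutomaticRobot):
    """Autonomous: additionally perceives the environment as a whole."""

    def step(self, sensors: dict) -> str:
        # Combines several simultaneous sensor readings into one decision.
        return "stop" if any(v > 0.5 for v in sensors.values()) else "continue"


class CognitiveRobot(AutonomousRobot):
    """Cognitive: maintains an internal representation of the external world."""

    def __init__(self):
        self.world_model: dict = {}

    def step(self, sensors: dict) -> str:
        # Remember what has been perceived so far, then decide using the
        # accumulated model rather than only the current reading; behaviour
        # thus adapts as a partially unknown environment is explored.
        self.world_model.update(sensors)
        return "stop" if any(v > 0.5 for v in self.world_model.values()) else "continue"
```

The point of the sketch is only the increasing scope of what each robot takes into account when acting: a single trigger, the whole current environment, or an internal world model built up over time.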
These robots, especially if autonomous and cognitive, can thus also be viewed as agents, that is, as entities that act and react in the environment in which they are immersed. In the latter case, the issue of responsibility for the actions of the robot becomes crucial.
Consider a robot that performs care tasks in a hospital: administering medication, giving first-line advice, calling doctors and human assistants when necessary, checking that patients' respiratory-support and feeding systems are connected and carrying out the repairs and reactivations needed, and more. An attacker enters the hospital and tries to disconnect a breathing tube or a drip delivering an essential drug. The robot detects the fault and intervenes to prevent it from happening or to fix it. The attacker then attacks the robot. The robot responds by activating the alarm system and, in the meantime, by resisting the human aggressor. In the end, the attacker is injured.
The question, in legal terms, is as follows: is there any liability for the attacker's injury? And if so, whose?
The robot reacted to an attacker who was about to cause harm to a helpless third party; we can say that the robot was performing a care task, and was therefore fulfilling a duty. Moreover, the act of defence cannot be attributed to the manufacturer, nor to the user (the hospital), because the incident is the consequence of the decidedly abnormal behavior of the aggressor.
In this perspective, the robot's conduct seems to be justified, and the attacker must therefore suffer the consequences of his own actions without being able to establish the liability of anyone, including the robot.
The fact that rules designed for humans will come to be applied to robots does not, in itself, make robots human, but it should prompt us to "slowly accept the idea that we might not be so distinctly different from other entities and informational intelligent agents, and engineered artifacts" that now populate the world. If, between one exercise and the next, realistic scenarios for the future have been prefigured here, it will be up to the men and women of tomorrow to do even better. In any case, in seeking legal solutions suited to the technologised way of life of today's humans, it is always advisable to use a light touch, to avoid the rule of law simply being replaced by technology.