From responsible robotics towards a human rights regime oriented to the challenges of robotics and artificial intelligence

Publication: Contribution to journal › Journal article › peer-reviewed

As the aim of the responsible robotics initiative is to ensure that responsible practices are inculcated within each stage of design, development and use, this impetus is undergirded by the alignment of ethical and legal considerations towards socially beneficial ends. Yet while every effort should be expended to ensure that issues of responsibility are addressed at each stage of technological progression, a degree of irresponsibility (in the sense of a lack of responsibility) is, from a theoretical perspective, inherent in the nature of robotics technologies, and this threatens to thwart the endeavour. This is because the concept of responsibility, despite often being treated as monolithic, is not: rather, this seemingly unified concept consists of converging and confluent concepts that shape the idea of what we colloquially call responsibility. Depending on which particular concept of responsibility is foregrounded, robotics will therefore be simultaneously responsible and irresponsible, an observation that cuts against the grain of the drive towards responsible robotics. This problem is further compounded by the contrast between responsible design and development on the one hand and responsible use on the other.

From a different perspective, the difficulty in defining the concept of responsibility in robotics stems from the fact that human responsibility is the main frame of reference. Robotic systems are increasingly expected to achieve human-level performance, including the capacities associated with responsibility and the other criteria necessary to act responsibly. This sits within a larger phenomenon in which the difference between humans and non-humans, whether animals or artificial systems, appears increasingly blurred, thereby disrupting orthodox understandings of responsibility.

This paper seeks to supplement the responsible robotics impulse by proposing a complementary set of human rights directed specifically against the harms arising from robotic and artificial intelligence (AI) technologies. The relationship between the responsibilities of the agent and the rights of the patient suggests that a rights regime is the other side of the responsibility coin. The major distinction of this approach is that it inverts the power relationship: while human agents are currently perceived to control robotic patients, the prospect of this relationship being reversed is beginning to emerge. As robotic technologies become ever more sophisticated, and even genuinely complex, asserting human rights directly against robotic harms becomes increasingly important. Such an approach includes developing human rights that not only ‘protect’ humans (in a negative, defensive sense) but also ‘strengthen’ people against the challenges introduced by robotics and AI (in a positive, empowering manner), by emphasising the social and reflective character of the notion of humanness as well as the difference between the human and the nonhuman. This allows the human frame of reference to be treated as constitutive of, rather than merely subject to, robotic and AI technologies, so that it is human rather than technological characteristics that shape the human rights framework in the first place.
Original language: English
Journal: Ethics and Information Technology
Volume: 22
Issue number: 4
Pages (from-to): 321-333
Number of pages: 13
ISSN: 1388-1957
DOI
Status: Published - 29 Nov 2017
