Defining Risk and Promoting Trust in AI Systems

Bibliographic Details
Title: Defining Risk and Promoting Trust in AI Systems
Authors: Chamberlain, Johanna, 1989; Kotsios, Andreas, Senior Lecturer
Source: EU Law in the Digital Age, pp. 105–122
Keywords: AI, risk, uncertainty, trust, EU law, regulatory techniques, new technologies, European law, European (Integration) Law
Description: As artificial intelligence (AI) technologies continue to expand into various (perhaps most) areas of society, many new societal and legal issues are coming to light. Two concepts investigated in this contribution are risk and trust. Both are broad concepts in need of delimitation and, to complicate matters, not necessarily legal concepts, at least not in a narrow sense. Despite this, they have become predominant in the AI discourse and in ongoing legislative processes concerning AI, a tendency that raises questions as to what the concepts mean in the area of AI. The relationship between risk and trust in an AI context can be summed up as follows: in order to foster individuals' trust in new technologies and the free flow of AI services within the EU internal market, the risks that AI systems pose must be controlled. This is where the law enters the equation. In this chapter we aim to nuance this new narrative, elucidating that risk and trust as emerging legal notions can be rather problematic. In the first sections of the following text, the focus is on the risk discourse in law and how it is evolving against the background of rapid technical developments, in particular in the EU general regulation on AI (which is compared with the data protection area). With an emphasis on the distinction between risk and uncertainty, the EU approach to AI is analysed critically. The discussion then proceeds to the concept of trust and its relevance in law, the connection between risk and trust in an AI setting, and the meaning of trust in relation to new technologies. We end the chapter by laying out the problems that arise from the uncertainties sparked by this new way of formulating regulatory goals and techniques.
File description: print
Access URL: https://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-551625
https://doi.org/10.5040/9781509981212.ch-007
Database: SwePub