Existential risk from transformative AI: an economic perspective

Published in: Technological and Economic Development of Economy, Vol. 30, No. 6, pp. 1682–1708
Main author: Growiec, Jakub
Format: Journal Article
Language: English
Published: Vilnius: Vilnius Gediminas Technical University, 6 November 2024
ISSN: 2029-4913, 2029-4921
Description
Summary: The prospective arrival of transformative artificial intelligence (TAI) will be a filter for human civilization – a threshold beyond which it will either strongly accelerate its growth, or vanish. Historical evidence on technological progress in AI capabilities, and the economic incentives to pursue it, suggests that TAI will most likely be developed within just one to four decades. In contrast, the theoretical problems of AI alignment, which must be solved for TAI to be “friendly” towards humans rather than cause our extinction, appear difficult, and impossible to solve by mechanically increasing the amount of compute. This means that transformative AI poses an imminent existential risk to humankind, one that ought to be urgently addressed. Starting from this premise, this paper provides new economic perspectives on discussions surrounding the issue: whether addressing existential risks is cost-effective and fair towards the contemporary poor, whether it constitutes “Pascal’s mugging”, how to quantify risks that have never materialized in the past, how discounting affects our assessment of existential risk, and how to include the prospect of an upcoming singularity in economic forecasts. The paper also suggests possible policy actions, such as ramping up public funding for research on existential risks and AI safety, and improving regulation of the AI sector, preferably within a global policy framework. First published online 10 July 2024.
DOI: 10.3846/tede.2024.21525