Existential risk from transformative AI: an economic perspective

Bibliographic Details
Published in: Technological and Economic Development of Economy, Vol. 30, No. 6, pp. 1682-1708
Main Author: Growiec, Jakub
Format: Journal Article
Language: English
Published: Vilnius: Vilnius Gediminas Technical University, 06.11.2024
ISSN: 2029-4913, 2029-4921
Description
Summary: The prospective arrival of transformative artificial intelligence (TAI) will be a filter for human civilization – a threshold beyond which it will either strongly accelerate its growth or vanish. Historical evidence on technological progress in AI capabilities, and the economic incentives to pursue it, suggests that TAI will most likely be developed within just one to four decades. In contrast, the theoretical problems of AI alignment, which must be solved for TAI to be “friendly” towards humans rather than cause our extinction, appear difficult and impossible to solve by mechanically increasing the amount of compute. This means that transformative AI poses an imminent existential risk to humankind which ought to be urgently addressed. Starting from this premise, the paper provides new economic perspectives on discussions surrounding the issue: whether addressing existential risks is cost-effective and fair towards the contemporary poor, whether it constitutes “Pascal’s mugging”, how to quantify risks that have never materialized in the past, how discounting affects our assessment of existential risk, and how to include the prospect of an upcoming singularity in economic forecasts. The paper also suggests possible policy actions, such as ramping up public funding for research on existential risks and AI safety, and improving regulation of the AI sector, preferably within a global policy framework. First published online 10 July 2024.
DOI: 10.3846/tede.2024.21525