This paper contributes to the economics of AI by exploring three topics neglected by economists: (i) the notion of a Singularity (and Singleton); (ii) the existential risks that AI may pose to humanity, including the risk from an extraterrestrial AI in a Dark Forest universe; and (iii) the relevance of economics' Mythical Agent (homo economicus) for the design of value-aligned AI systems. From the perspective of expected utility maximization, which the fields of AI and economics share, these three topics are interrelated.
Exploring these topics reveals several future avenues for economic research on AI and identifies areas where economic theory may benefit from a greater understanding of AI. Two further conclusions emerge: first, that a Singularity and existential risk from AI remain science fiction, which should not, however, preclude economics from bearing on these issues (it does not deter philosophers); and second, that economists should weigh in more on existential risk, rather than leaving the topic to lose credibility through the Pascalian fanaticism of longtermism.