Ignorance and the regulation of artificial intelligence

Language

English

Abstract

Much has been written about the risks posed by artificial intelligence (AI). This article is interested not only in what is known about these risks, but also in what remains unknown and how that unknowing is and should be approached. By reviewing and expanding on the scientific literature, it explores how social knowledge contributes to the understanding of AI and its regulatory challenges. The analysis is conducted in three steps. First, the article investigates risks associated with AI and shows how social scientists have challenged technically oriented approaches that treat the social instrumentally. It then identifies the invisible and visible characteristics of AI, and argues that the risks attached to the technology are hard to comprehend not only for outsiders but also for developers and researchers. Finally, it asserts the need to better recognise ignorance of AI, and explores what this means for how its risks are handled. The article concludes by stressing that proper regulation demands not only independent social knowledge about the pervasiveness, economic embeddedness and fragmented regulation of AI, but also a social non-knowledge that is attuned to its complexity, and inhuman and incomprehensible behaviour. In properly allowing for ignorance of its social implications, the regulation of AI can proceed in a more modest, situated, plural and ultimately robust manner. © 2021 Informa UK Limited, trading as Taylor & Francis Group.

Subject

Ignorance
Non-knowledge
Artificial intelligence
Risk regulation

Publication Year

2021

Publication Date

2021

Source

Scopus

ISSN

1366-9877

Cite this resource

Ignorance and the regulation of artificial intelligence, in Science & Ignorance, accessed 21 November 2024, https://ignorancestudies.inist.fr/s/science-ignorance/item/4749