AI Alignment and Our Momentous Imperative to Get It Right
We are standing at the hinge of history, where actions taken today can lead towards a long and brilliant future for humanity, or towards our extinction. Foremost among the rapidly developing technologies that we need to get right is AI. As early as 1951, Alan Turing warned that “once the machine thinking method had started, it would not take long to outstrip our feeble powers”, and that “at some stage therefore we should have to expect the machines to take control”. If and when that happens, our future hinges on what these machines’ goals and incentives are, and in particular on whether these are compatible with, and give sufficient priority to, human flourishing. The still small but rapidly growing research field of AI Alignment takes on the momentous task of ensuring that the first AIs with the power to transform our world have goals that are, in this sense, aligned with ours.
Olle Häggström is a professor of mathematical statistics at Chalmers University of Technology and a member of the Royal Swedish Academy of Sciences. His main research background is in probability theory, but in recent years he has turned his attention to issues in global risk and AI safety. He has worked on AI policy at both the national and the EU level, as well as with the World Economic Forum. The most recent of his five books are Here Be Dragons (2016) and Tänkande maskiner (2021).