Global tensions are escalating and the climate crisis is coming to a head. Now artificial intelligence is supposed to help humanity solve its existential problems. The historian and visionary Yuval Noah Harari warns of the existential risks.
Wars, epidemics, climate crisis: the world is in a terrible state. Artificial intelligence is now supposed to help overcome the biggest problems and make life easier and better for humanity. That, at least, is how developers and tech companies tout the AI revolution. But enormous risks are being overlooked, warns Yuval Noah Harari, one of the most influential thinkers of our time. The new technology could put an end to humanity's rule.
How dependent are we already on AI? What dangers do we face? And what can we humans do to protect ourselves while making sensible use of AI's advantages? Yuval Noah Harari, whose new book "Nexus: A Brief History of Information Networks from the Stone Age to AI" is being published worldwide tomorrow, answers these questions in an interview with t-online.
t-online: Professor Harari, crisis after crisis is shaking the world, and with the rapid development of artificial intelligence, humanity faces another challenge whose scale it cannot yet gauge. Are you afraid of the future?
Yuval Harari: I am actually afraid of the future. To a certain extent, at least. There are good reasons to be afraid of powerful new technologies. Artificial intelligence presents us with numerous risks and dangers. But we are not inevitably heading for disaster.
In your current book "Nexus" you warn that, handled badly, AI could put an end to human rule. That sounds dramatic.
I want to warn people about some of the most dangerous scenarios. That’s the point of my book – all in the belief that we can then make better decisions and prevent the worst from happening.
What gives you that confidence in humanity?
I know that humans are capable of the most terrible things. But we are also capable of the most wonderful things. Ultimately, it is up to us what the future will be. Neither natural laws nor celestial forces rule in this matter; it is our responsibility alone to decide what we do with AI. I don’t know what will happen, but I sincerely hope that we will all make sensible decisions in the years to come.
Yuval Noah Harari, born in 1976, teaches history at the Hebrew University of Jerusalem and is considered one of the most important thinkers of our time. His books "Sapiens: A Brief History of Humankind", "Homo Deus" and "21 Lessons for the 21st Century" are international bestsellers. Together with his husband Itzik Yahav, Harari founded the organization "Sapienship" in 2019 to develop answers to global problems. His latest work, "Nexus: A Brief History of Information Networks from the Stone Age to AI", will be published on September 10, 2024.
How much do you trust AI, which is becoming more and more powerful with human help?
I have little faith in AI, because contrary to what is sometimes assumed, AI is not a brilliant, infallible machine. It is new, and it is fallible. It even makes a lot of mistakes, and we know too little about them. Worst of all, we humans are unsure how to control AI. That is a very dangerous combination.
What do you see as the biggest threat posed by AI?
AI is the first technology in history that can make independent decisions. And not only that: AI can develop new ideas on its own. Everyone should understand what that means. AI is not a tool, but an actor.
The development of AI in the 21st century is often compared to the invention of the printing press in the 15th century. Does this comparison trivialize the potential danger posed by AI?
The printing press was designed to spread human ideas. It can copy a book, but it cannot write one. Nor can it decide which book to copy; humans have always made that decision. Another example: the atomic bomb is an enormously powerful weapon, but it does not decide whether it is used in war, and it cannot design better, stronger nuclear weapons. So far, humans have done that too. With AI, however, humanity is now creating competition for itself. Ever more sophisticated artificial intelligences are emerging that develop independently. That is their defining feature: they learn and change on their own. The term AI is applied indiscriminately, but a computer program that lacks these basic abilities is simply not AI. The whole idea behind AI is that human engineers create the initial system, which then learns independently, like a kind of technological baby, through interaction with the world.