'Sapiens' Author Yuval Noah Harari on the Promise and Peril of AI -- Journal Report

Dow Jones
Jun 29

Does the rise of artificial intelligence mean the decline -- and even end -- of Homo sapiens? That's the question we posed to author, historian and philosopher Yuval Noah Harari, who sees the potential for both enormous benefit and enormous danger from AI. He discussed the outlook with WSJ Leadership Institute contributing editor Poppy Harlow at The Wall Street Journal's recent CEO Council Summit.

Here are edited excerpts of their conversation.

Bringing up baby

WSJ: You call artificial intelligence -- or alien intelligence, as you refer to it throughout your writing -- the rise of a new species that could replace Homo sapiens.

YUVAL NOAH HARARI: Yeah. For the first time, we have real competition on the planet. We have been the most intelligent species by far for tens of thousands of years, and this is how we got from being an insignificant ape in the corner of Africa to being the absolute rulers of the planet and of the ecosystem. And now we are creating something that could compete with us in the very near future.

The most important thing to know about AI is that it is not a tool, it is an agent, in the sense that it can make decisions independently of us. It can invent new ideas. It can learn and change by itself. All previous human inventions, whether the printing press or the atom bomb, are tools that empower us.

WSJ: They needed us.

HARARI: They need us because a printing press cannot write books by itself and it cannot decide which books to print. An atom bomb cannot invent the next, more powerful bomb. And an atom bomb cannot decide what to attack. An AI weapon can decide by itself which target to attack and design the next generation of weapons by itself.

WSJ: The way you talk about it in your latest book, "Nexus," is that it is a baby, because it learns from us. And therefore, your argument is that we, especially the powerful leaders in this room, have a lot of responsibility, because how we act is how AI will be. You cannot expect to lie and cheat and have benevolent AI. Explain that.

HARARI: There is a big discussion around the world about AI alignment: We are creating these increasingly superintelligent, very powerful new agents. How do we make sure that these agents remain aligned with human goals and with the benefit of humanity, that they do what is good for us?

There is a lot of research and a lot of effort focused on the idea that if we can design these AIs in a certain way, if we can teach them certain principles, if we can code into them certain goals, then we will be safe.

But the two main problems with this approach are: First, the very definition of AI is that it can learn and change by itself. So when you design an AI, by definition, this thing is going to do all kinds of things which you cannot anticipate.

The other, even bigger, problem is that we can think about AI like a baby or a child. And you can educate a child to the best of your ability. He or she will still surprise you for better or worse. No matter how much you invest in their education, they are independent agents. They might eventually do something which will surprise you and even horrify you.

The other thing is, everybody who has any knowledge of education knows that in the education of children, it matters far less what you tell them. It matters far more what you do. If you tell your kids, "Don't lie," and your kids watch you lying to other people, they will copy your behavior, not your instructions.

Now if we have this big project to educate the AIs not to lie, but the AIs are given access to the world and they watch how humans behave and they see some of the most powerful humans on the planet, including their parents, lying, the AI will copy the behavior.

Everything, everywhere

WSJ: We took a poll this morning, asking the leaders in this room how consequential they think AI has been so far in the businesses they lead. And only a small portion said significantly. For most, it was moderately or not at all. Can you speak to them as if we were sitting here 36 months from now? Is there any world in which AI doesn't have a significant impact on their business?

HARARI: The question is one of time scale. Imagine that we are now sitting in London and the year is 1835. The first railway, between Manchester and Liverpool, was opened five years ago. And we have now this conference in London in 1835 and people are saying, "You know, all this talk about railways changing the world, the Industrial Revolution, this is nonsense. We have had railways for ages. Five years. And look."

So we now know that the Industrial Revolution and trains, they completely transformed everything. But it just took more than five years. The same is likely to happen with AI in all fields, from the obvious to the less obvious.

I think that one of the first fields we'll see major changes in is finance, that AI is going very quickly to take over the financial system. Because finance is purely an informational realm. You don't see these tens of thousands of self-driving vehicles yet. The problem is that for driving, you need to deal with the messy, physical world of pedestrians and holes in the road and whatever. But in finance, it's only information in, information out. It's much easier for an AI to master that.

And what happens to finance once AIs, for instance, start inventing new financial devices that the human brain is simply incapable of dealing with because it's mathematically too complex?

A useless class

WSJ: Let me get back to what you've said about replacing jobs. You're worried about the emergence of what you call a useless class. What do we do to make sure we, as a society, not only survive, but thrive?

HARARI: I want to emphasize that AI has enormous positive potential as well as dangerous potential. And I don't believe in historical or in technological determinism. You can use the same technology to create completely different kinds of societies. We saw it in the 20th century -- we used exactly the same technology to build communist totalitarian regimes and liberal democracies.

It's the same with AI. We have a lot of choices about what to do with it -- if we remember that for the first time, we are dealing with agents and not tools, so it makes it much more complicated. But still, most of the agency is in our hands. And the question of how we develop the technology and, even more importantly, how we deploy it, we can make a lot of choices there.

The main problem is that now the companies and countries that lead the AI revolution have been locked into an arms-race situation. So even if they know that it would be better to slow down, to invest more in safety, to be careful about this or that potential development, they are constantly afraid that if we slow down and they don't slow down, they will take over the world.

Digital immigrants

AUDIENCE MEMBER: When we talk about AI, we're not talking about something that is monolithic, right? This is going to be multiple plethoras of AIs manifesting themselves. When there are all of these competing AIs that are evolving fast, what does that world look like?

HARARI: That's a very, very important point. The AI will not be one big AI. We are talking about, potentially, millions or billions of new AI agents with different characteristics and produced by different companies, different countries everywhere. And we just have no idea what the outcome will be.

We have zero experience in what happens in AI societies when millions of AIs compete with each other. This is not something you can simulate. So in a way, it's the biggest social experiment in human history. And nobody has any idea how it will develop.

One analogy to keep in mind -- we now have this immigration crisis in the U.S., in Europe, elsewhere. Lots of people are worried about immigrants. Why are people worried about immigrants? There are three main things that come to people's mind: They will take our jobs. They come with different cultural ideas; they will change our culture. They may have political agendas; they might try to take over the country politically.

Now you can think about the AI revolution as simply a wave of immigration of millions and billions of AI immigrants that will take people's jobs, that have very different cultural ideas, and that might try to gain some kind of political power.

And these AI immigrants, these digital immigrants, they don't need visas. They don't cross the sea in some rickety boat in the middle of the night. They come at the speed of light.

And I look, for instance, at far right parties in Europe. If they care about the sovereignty of their country, if they care about the economic and cultural future of their country, they should be far more worried about the digital immigrants than about the human immigrants.

Write to reports@wsj.com


(END) Dow Jones Newswires

June 29, 2025 11:00 ET (15:00 GMT)

Copyright (c) 2025 Dow Jones & Company, Inc.
