How Humans Get Hacked: Yuval Noah Harari & Tristan Harris Talk with WIRED

https://www.wired.com/video/watch/yuval-harari-tristan-harris-humans-get-hacked

Yuval Noah Harari, historian and best-selling author of Sapiens, Homo Deus and 21 Lessons for the 21st Century, and Tristan Harris, co-founder and executive director of the Center for Humane Technology, speak with WIRED Editor in Chief Nicholas Thompson.

Hello I’m Nicholas Thompson,

I’m the editor-in-chief of Wired magazine.

I’m here with two of my favorite thinkers.

Yuval Noah Harari.

He’s the author of three number one best-selling books

including 21 Lessons for the 21st Century

which came out this week

and which I just finished this morning

which is extraordinary.

And Tristan Harris,

who’s the man who got the whole world

to talk about time well spent

and has just founded the Center for Humane Technology.

I like to think of Yuval as the man

who explains the past and the future

and Tristan as the man who explains the present.

We’re gonna be here for a little while talking

and it’s a conversation inspired

by one of the things that matters most to me

which is that Wired magazine

is about to celebrate its 25th anniversary.

And when the magazine was founded,

the whole idea was that it was a magazine

about optimism and change,

and that technology was good and change was good.

25 years later you look at the world today,

you can’t really hold the entirety of that philosophy.

So I’m here to talk with Yuval and Tristan.

Hello!

[Yuval] Hello. Thank you.

Good to be here.

Tristan why don’t you tell me a little bit about

what you do and then Yuval you tell me too.

I am a director of the Center for Humane Technology

where we focus on realigning technology

with a clear-eyed model of human nature.

And I was before that a design ethicist at Google,

where I studied the ethics of human persuasion.

I’m a historian and I try to understand

where humanity’s coming from and where we are heading.

Let’s start by hearing about how you guys met each other

’cause I know it goes back a little while,

so when did the two of you first meet?

Funnily enough on an expedition to Antarctica.

(laughing)

Not with Scott and Amundsen,

we were just invited by the Chilean government

to the Congress of the Future,

to talk about the future of humankind

and one part of the congress

was an expedition to the Chilean base in Antarctica

to see global warming with our own eyes.

It was still very cold and it was interesting

and there were so many interesting people on this expedition.

A lot of philosophers,

Nobel Laureates and I think we particularly connected

with Michael Sandel.

He’s a really amazing philosopher in moral philosophy.

It’s almost like a reality show.

I would have loved to be able to see the whole thing.

Let’s get started with one of the things

that I think is one of the most interesting continuities

between both of your work.

You write about different things

you talk about different things

but there are a lot of similarities.

And one of the key themes is the notion

that our minds don’t work the way

that we sometimes think they do.

We don’t have as much agency over our minds

as perhaps we believed.

Or we believed until now.

So Tristan why don’t you start talking about that

and then Yuval,

jump in and we’ll go from here.

[Tristan] I actually learned a lot of this

from one of Yuval’s early talks

where he talks about democracy as the question of,

where should we put authority in a society?

And we should put it in the opinions and feelings of people.

But my whole background,

I actually spent the last 10 years studying persuasion.

Starting when I was a magician as a kid,

where you learn that there are things that work on all human minds.

It doesn’t matter whether they have a PhD

or what education level they have,

whether they’re nuclear physicists,

what age they are.

It’s not like if you speak Japanese

I can’t do this trick on you,

it’s not going to work.

Or if you have a PhD.

It works on everybody.

So somehow there’s this discipline

which is about universal exploits on all human minds.

And then I was at a lab at Stanford

called the Persuasive Technology Lab

that teaches engineering students

how to apply the principles

of persuasion to technology.

Could technology be hacking human feelings,

attitudes, beliefs,

behaviors, to keep people engaged with products?

And I think that’s the thing we both share

is that the human mind is not the total secure enclave

root of authority that we think it is.

And if we want to treat it that way

we’re gonna have to understand

what needs to be protected first,

is my perspective.

Yeah I think that we are now facing

not just a technological crisis

but a philosophical crisis

because we have built our society,

certainly liberal democracy with elections

and the free market and so forth,

on philosophical ideas from the 18th Century

which are simply incompatible

not just with the scientific findings of the 21st Century

but above all with the technology

we now have at our disposal.

Our society’s built on the ideas that the voter knows best,

that the customer is always right,

that ultimate authority as Tristan said

is the feelings of human beings.

And this assumes that human feelings and human choices

are this sacred arena which cannot be hacked,

which cannot be manipulated.

Ultimately my choices,

my desires reflect my free will

and nobody can access that or touch that.

And this was never true

but we didn’t pay a very high cost

for believing in this myth in the 19th or 20th Century

because nobody had the technology to actually do it.

Now some people,

corporations,

governments,

they are gaining the technology to hack human beings.

Maybe the most important fact

about living in the 21st Century

is that we are now hackable animals.

Explain what it means to hack a human being

and why what can be done now is different

from what could be done a hundred years ago

with religion or with the book

or with anything else that influences what we see

and changes the way we think about the world.

To hack a human being

is to understand what’s happening inside you

on the level of the body,

of the brain,

of the mind so that you can predict

what people will do.

You can understand how they feel.

And once you understand and predict

you can usually also manipulate

and control and even replace.

Of course it can’t be done perfectly,

and it was possible to do it to some extent a century ago.

But the difference in the level is significant.

I would say the real key

is whether somebody can understand you

better than you understand yourself.

The algorithms that are trying to hack us,

they will never be perfect.

There is no such thing

as understanding perfectly everything

or predicting everything.

You don’t need perfect.

You just need to be better than the average human being.

And are we there now?

Or are you worried that we’re about to get there?

I think Tristan might be able to answer

where we are right now better than me

but I guess that if we are not there now

we are approaching very very fast.

I think a good example of this is YouTube.

Relatable example.

You open up that YouTube video your friend sends you

after your lunch break.

You come back to your computer.

And you think okay I know those other times

I end up watching two or three videos

and I end up getting sucked into it.

But this time it’s gonna be really different.

I’m just gonna watch this one video

and then somehow that’s not what happens.

You wake up from a trance three hours later

and you say what the hell just happened

and it’s because you didn’t realize

you had a supercomputer pointed at your brain.

So when you open up that video

you’re activating Google Alphabet’s

billions of dollars of computing power.

And they’ve looked at what has ever gotten

two billion human animals to click on another video.

And it knows way more about

what’s gonna be the perfect chess move

to play against your mind.

If you think of your mind as a chessboard

and you think you know the perfect move to play,

I’ll just watch this one video.

But you can only see so many moves ahead on the chessboard.

But the computer sees your mind and it says no no no,

I’ve played a billion simulations of this chess game before

on these other human animals watching YouTube

and it’s gonna win.

Think about when Garry Kasparov loses against Deep Blue.

Garry Kasparov can see so many moves ahead on the chessboard

but he can’t see beyond a certain point.

Like a mouse can see so many moves ahead in a maze,

but a human can see way more moves ahead

and then Garry can see even more moves ahead.

But when Garry loses against IBM Deep Blue,

that’s checkmate against humanity for all time

because he was the best human chess player.

So it’s not that we’re completely losing human agency.

You walk into YouTube and it always addicts you

for the rest of your life

and you never leave the screen.

But everywhere you turn on the internet

there’s basically a supercomputer pointed at your brain

playing chess against your mind

and it’s gonna win a lot more often than not.

[Nicholas] Let’s talk about that metaphor

because chess is a game with a winner and a loser.

So you set up the technology fully as an opponent.

But YouTube is also gonna,

I hope,

please gods of YouTube,

recommend this particular video to people

which I hope will be elucidating and illuminating.

So is chess really the right metaphor?

A game with a winner and a loser?

The question is what is the game that’s being played?

If the game being played was,

Hey Nick go meditate in a room for two hours

then come back to me and tell me

what do you really want right now in your life?

And if YouTube is using two billion human animals

to calculate based on everybody who’s ever wanted

to learn how to play the ukulele,

they can say here’s the perfect video I have

to teach you how to play ukulele.

That could be great.

The problem is it doesn’t actually care about what you want.

It just cares about what will keep you next on the screen.
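
[Editor's note: a toy sketch of the objective mismatch being described here. Nothing below is YouTube's actual system; it's a hypothetical ranker with invented titles and numbers, showing that a recommender which scores only predicted watch time will surface the outrage video over the ones that serve the viewer's goal.]

```python
# Hypothetical engagement-only ranker (illustrative, not YouTube's code).
from dataclasses import dataclass

@dataclass
class Candidate:
    title: str
    predicted_watch_minutes: float  # what an engagement model estimates
    serves_viewer_goal: bool        # what no engagement model ever sees

def recommend(candidates: list[Candidate]) -> Candidate:
    # The only signal consulted is expected time-on-screen.
    return max(candidates, key=lambda c: c.predicted_watch_minutes)

feed = [
    Candidate("Ukulele lesson 2 of 10", 8.0, True),
    Candidate("Outrage conspiracy compilation", 23.0, False),
    Candidate("Calm fact-based news summary", 5.0, True),
]
print(recommend(feed).title)  # the 23-minute outrage video wins every time
```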

And we’ve actually found,

we have an ex-YouTube engineer who works with us,

who’s shown that there’s a systemic bias in YouTube.

So if you airdrop a human animal and they land on,

let’s say a teenage girl and she watches a dieting video,

the thing that works best at keeping that girl

who’s watching a dieting video on YouTube the longest

is to say here’s an anorexia video.

Because that’s between,

here’s more calm stuff and true stuff

and here’s the more insane divisive

outrageous conspiracy intense stuff.

YouTube always if they want to keep your time

they want to steer you down that road.

And so if you airdrop a person on a 9/11 video

about the 9/11 news event,

just a fact-based news video,

the autoplaying video is the Alex Jones Infowars video.

So what happens to this conversation?

What follows us?

Ray Kurzweil?

(laughing)

Yeah I guess it’s gonna really depend.

(laughing)

And the problem is you can also kind of hack these things.

There’s governments who actually can manipulate

the way that the recommendation system works

by throwing thousands of headless browsers,

versions of Firefox to watch one video

and then get it to search for another video

so say we search for Yuval Harari,

we've watched that one video,

then we get thousands of computers

to simulate people going from Yuval Harari

to watching The Power of Putin or something like that.

And then that’ll be the top recommended video.

And so as Yuval says,

these systems are kind of out of control

and algorithms are running

where two billion people spend their time.

70% of what people watch on YouTube

is driven by recommendations from the algorithm.

People think what you’re watching on YouTube is a choice.

People are sitting there,

they think and then they choose.

But that’s not true.

70% of what people are watching

is the recommended videos on the right hand side.

Which means that for 1.9 billion users,

that's more than the number of followers of Islam,

about the number of followers of Christianity,

70% of what they're looking at on YouTube,

for 60 minutes a day, which is the average time people spend on YouTube,

is populated by a computer.

So now the machine is out of control.

Because if you thought 9/11 conspiracy theories

were bad in English,

try asking what the 9/11 conspiracies

in Burmese and Sri Lankan and Arabic are like.

No-one’s looking at that.

And so it’s kind of a digital Frankenstein.

It’s pulling on all these levers

and steering people in all these different directions.

And Yuval we got into this point

by you saying that this scares you for democracy.

It makes you worry whether democracy can survive

or I believe you say,

the phrase you use in your book

is democracy will become a puppet show.

Explain that. [Yuval] Yeah.

If it doesn’t adapt to these new realities

it will become just an emotional puppet show.

If you go on with this illusion

that human choice cannot be hacked,

cannot be manipulated

and we can just trust it completely

and this is the source of all authority

then very soon you end up with an emotional puppet show.

This is one of the greatest dangers that we are facing

and it really is the result of philosophical impoverishment.

Of taking for granted philosophical ideas

from the 18th Century and not updating them

with the findings of science.

It’s very difficult because you go to people,

people don’t want to hear this message

that they are hackable animals.

That their choices,

their desires,

their understanding of who am I?

What are my most authentic aspirations?

This can actually be hacked and manipulated.

To put it briefly,

my amygdala may be working for Putin.

I don’t want to know this.

I don’t want to believe that.

No I am a free agent.

If I am afraid of something this is because of me.

Not because somebody implanted this fear in my mind.

If I choose something this is my free will

and who are you to tell me anything else?

I’m hoping that Putin will soon be working for my amygdala

but that’s a side project I have going.

It seems inevitable from what you wrote in your first book

that we would reach this point

where human minds would be hackable

and where computers and machines and AI

would have a better understanding of us.

But it’s certainly not inevitable

that it would lead us to negative outcomes,

to 9/11 conspiracy theories and to a broken democracy.

Have we reached the point of no return?

How do we avoid the point of no return

if we haven’t reached there?

What are the key decision points along the way?

Nothing is inevitable in that.

The technology itself is going to develop.

You can’t just stop all research in AI

and you can’t stop all research in biotech.

And the two go together.

I think that AI gets too much attention now

and we should put equal emphasis

on what’s happening on the biotech front.

Because in order to hack human beings you need biology.

Some of the most important tools and insights,

they are not coming from computer science.

They’re coming from brain science.

And many of the people who design

all these amazing algorithms,

they have a background in psychology and in brain science.

This is what you’re trying to hack.

But what we should realize is

we can use the technology in many different ways.

For example we’re now using AI

mainly in order to surveil individuals

in the service of corporations and governments

but it can be flipped to the opposite direction.

We can use the same surveillance systems

to monitor the government in the service of individuals.

To monitor, for example,

that government officials are not corrupt.

The technology is able to do that,

the question is whether we’re willing

to develop the necessary tools to do it.

One of Yuval’s major points here

is that the biotech lets you understand,

by hooking up a sensor to someone,

features about that person

that they won’t know about themselves.

And we’re increasingly reverse-engineering the human animal.

One of the interesting things that I’ve been following

is also the ways you can ascertain those signals

without an invasive sensor.

We were talking about this a second ago.

There’s something called Eulerian Video Magnification

where you point a computer camera at a person’s face

and a human being can’t,

I can’t look at your face and see your heart rate.

My intelligence doesn’t let me see that.

You can see my eyes dilating right?

[Tristan] I can see your eyes dilating–

‘Cause I’m terrified of you.

(laughing)

But if I put a supercomputer behind the camera

I can actually run a mathematical equation

and I can find the micropulses of blood to your face

that I as a human can’t see but the computer can see.

So I can pick up your heart rate.

What does that let me do?

I can pick up your stress level

because heart rate variability gives you your stress level.
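
[Editor's note: a minimal sketch of the signal-processing idea behind this, in the family of remote photoplethysmography techniques that Eulerian Video Magnification belongs to. Everything is simplified and the demo data is synthetic; a real system would add face tracking, motion compensation and far more careful filtering.]

```python
import numpy as np

def estimate_heart_rate(frames: np.ndarray, fps: float) -> float:
    """frames: (T, H, W, 3) RGB video of a face region."""
    # Blood absorbs green light most strongly, so average the green
    # channel over the face to get one brightness sample per frame.
    signal = frames[:, :, :, 1].mean(axis=(1, 2))
    signal = signal - signal.mean()  # drop the DC offset

    # Only keep frequencies a human pulse can plausibly have:
    # roughly 40-180 beats per minute, i.e. 0.7-3.0 Hz.
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    spectrum = np.abs(np.fft.rfft(signal))
    band = (freqs >= 0.7) & (freqs <= 3.0)

    # The dominant frequency in that band is the pulse estimate.
    return 60.0 * freqs[band][np.argmax(spectrum[band])]

# Synthetic demo: a 72 bpm "pulse" as a faint green flicker plus noise.
fps, seconds = 30.0, 10
t = np.arange(int(fps * seconds)) / fps
flicker = 2.0 * np.sin(2 * np.pi * 1.2 * t)  # 1.2 Hz = 72 bpm
frames = np.full((len(t), 8, 8, 3), 128.0)
frames[:, :, :, 1] += flicker[:, None, None] + np.random.randn(len(t), 8, 8)
print(estimate_heart_rate(frames, fps))  # ~72 bpm recovered from "video"
```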

There’s a woman named Poppy Crum

who gave a TED Talk this year

about the end of the poker face.

We have this idea that there can be a poker face.

We can actually hide our emotions from other people.

But this talk is about the erosion of that.

That we can point a camera at your eyes

and see when your eyes dilate

which actually detects cognitive strain,

when you’re having a hard time understanding something

or an easy time understanding something.

We can continually adjust this based on your heart rate,

your eye dilation.

One of the things with Cambridge Analytica,

which is all about the hacking of Brexit

and Russia and the US elections,

is the idea that if I know your big five personality traits,

if I know Nick Thompson's personality

through his OCEAN,

openness,

conscientiousness,

extraversion,

agreeableness and neuroticism,

that gives me your personality

and based on your personality

I can tune a political message to be perfect for you.

Now the whole scandal there was that Facebook

let go of this data, which was taken by a researcher

who had people fill in questionnaires to figure out

what Nick's big five personality traits are.

But now there’s a woman named Gloria Mark at UC Irvine

who has done research showing

you can actually get people’s big five personality traits

just by their click patterns alone with 80% accuracy.
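
[Editor's note: a sketch of the general shape of such an inference, not Gloria Mark's actual method or data. The features, the synthetic "ground truth" and the planted correlation are all invented here; the point is how little machinery it takes once you have behavioral logs.]

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical per-user click features: clicks/minute, mean dwell time,
# night-time activity share, fraction of "like" actions, tab switches/hour.
n_users = 2000
X = rng.normal(size=(n_users, 5))

# Invented ground truth: pretend "high neuroticism" correlates with fast
# clicking and night-time activity (a made-up relationship for the demo).
y = (1.5 * X[:, 0] + 1.0 * X[:, 2] + rng.normal(size=n_users) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.0%}")  # ~80% on this toy data
```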

So again the end of the poker face,

the end of the hidden parts of your personality.

We’re gonna be able to point AIs at human animals

and figure out more and more signals from them

including their microexpressions,

when you smirk and all these things.

We’ve got face ID cameras on all of these phones.

So now if you have a tight loop

where I can adjust the political messages

in real time to your heart rate and to your eye dilation

and to your political personality,

that’s not a world we want to live in.

It’s a kind of dystopia.

There are many contexts you can use that.

It can be used in class to figure out

that the student isn’t getting the message,

that the student is bored which can be a very good thing.

It can be used by lawyers, like when you negotiate a deal

and if I can read what’s behind

your poker face and you can’t

that’s a tremendous advantage for me.

So it can be done in a diplomatic setting

like two prime ministers are meeting to,

I don’t know,

resolve the Israeli-Palestinian conflict

and one of them has an earbud

and the computer is whispering in his ear

what is the true emotional state.

What’s happening in the brain,

in the mind of the person on the other side of the table.

And what happens when two sides have this?

And you have kind of an arms race

and we have absolutely no idea how to handle these things.

I’ll give a personal example

when I talked about this in Davos.

For me maybe my entire approach to these issues

is shaped by my experience of coming out.

That I realized that I was gay when I was 21

and ever since then I’m haunted by this thought,

what was I doing for the previous five or six years?

How is it possible,

I’m not talking about something small

that you don’t know about yourself.

For everybody, there is something you don't know about yourself.

But how can you possibly not know this about yourself?

And then the next thought is,

a computer,

an algorithm could have told me that when I was 14

so easily just by something as simple

as following the focus of my eyes.

I don’t know,

I walk on the beach or I even watch television and there is,

what was in the 1980s?

Baywatch or something,

and there is a guy in a swimsuit

and there is a girl in a swimsuit

and which way my eyes are going,

it’s as simple as that.

And then I think,

what would my life have been like,

first if I knew when I was 14,

secondly if I got this information from an algorithm.

There is something incredibly deflating for the ego

that this is the source of this deep wisdom about myself?

An algorithm that followed my eye movement?

[Nicholas] And there’s an even creepier element

which you write about in your book,

what if Coca-Cola had figured it out first and

[Yuval] was selling you Coke Exactly!

with shirtless men when you didn’t even know you were gay.

Exactly although Coca-Cola versus Pepsi,

Coca-Cola knows this about me

and shows me a commercial with a shirtless man,

Pepsi doesn’t know this about me

because they are not using these sophisticated algorithms.

They go with the normal commercials

with the girl in the bikini.

And naturally enough I buy Coca-Cola

and I don’t even know why.

Next morning when I go to the supermarket

I buy Coca-Cola and I think,

this is my free choice!

I chose Coke!

But no I was hacked.

[Nicholas] And so this is inevitable.

[Tristan] This is the crux of the whole issue.

This is everything we're talking about.

And how do you trust something

that can pull these signals off of you?

If the system is asymmetric,

if you know more about me than I know about myself,

we usually have a name for that in law.

For example when you deal with a lawyer,

you hand over your very personal details to a lawyer

so they can help you.

But then they have this knowledge of the law

and they know about your vulnerable information

so they could exploit you with that.

Imagine a lawyer who took all the personal information

and then sold it to somebody else.

But they’re governed by a different relationship

which is the fiduciary relationship.

They can lose their license

if they don’t actually serve your interests.

It’s similar to a doctor or psychotherapist.

There’s this big question of

how do we hand over information about us

and say I want you to use that to help me.

On whose authority can I guarantee

that you’re going to help me?

There is no moment when we are handing over the information.

With the lawyer there is this formal setting like,

okay I hire you to be my lawyer,

this is my information and we know this.

But I’m just walking down the street,

there is a camera looking at me,

I don’t even know that,

and they are hacking me through that.

So I don’t even know it’s happening.

That’s the most duplicitous part.

We often say it’s like imagine a priest,

if you want to know what Facebook is,

imagine a priest in a confession booth

and they listen to two billion people’s confessions

but they also watch you around your whole day,

what you click on,

which ads,

Coca-Cola or Pepsi,

the shirtless men and the shirtless women,

and all your conversations that you have

with everybody else in your life

’cause they have Facebook Messenger,

they have that data too.

But imagine that this priest in a confession booth,

their entire business model is to sell access

to the confession booth to another party.

So someone else can manipulate you.

Because that’s the one way

this priest makes money in this case.

They don’t make money any other way.

There are two giant entities that will have,

I mean there are a million entities that will have this data

but there’s large corporations,

you mentioned Facebook,

and there will be governments.

Which do you worry about more?

It’s the same.

Once you reach beyond a certain point

it doesn’t matter how you call it.

This is the entity that actually rules.

Whoever has the data.

Whoever has this kind of data.

Even in a setting where you still have a formal government

but this data is in the hands of some corporation

then the corporation if it wants

can decide who wins the next elections.

So it’s not really a matter for choice.

There is choice.

We can design a different political and economic system

in order to prevent this immense concentration

of data and power in the hands

of either government or corporations that use it

without being accountable

and without being transparent about what they are doing.

The message is not okay it’s over,

humankind is in the dustbin of history.

That’s not the message. No that’s not the message.

Eyes have stopped dilating,

let’s keep this going.

(laughing)

The real question is,

we need to get people to understand this is real,

this is happening,

there are things we can do.

And you have midterm elections in a couple of months

so in every debate,

every time a candidate goes to meet the potential voters

in person or on television,

ask them this question.

What is your plan,

what is your take on this issue?

What are you going to do if we are going to elect you?

If they say I don’t know what you’re talking about,

that’s a big problem.

I think the problem is most of them

have no idea what we’re talking about

and one of the issues is

I think policy makers as we’ve seen

are not very educated on these issues.

They’re doing better.

They’re doing so much better this year than last year.

Watching the Senate hearings,

the last hearings with Jack Dorsey and Sheryl Sandberg

versus watching the Zuckerberg hearings

or watching the Colin Stretch hearings,

there’s been improvement.

[Tristan] It’s true.

There’s much more, though.

I think these issues open up a whole space of possibility.

We don’t even know yet the kinds of things

we’re gonna be able to predict.

We’ve mentioned a few examples that we know about

but if you have a secret way of knowing something

about a person by pointing a camera at them and AI,

why would you publish that?

So there’s lots of things that can be known about us

to manipulate us right now that we don’t even know about.

How do we start to regulate that?

I think the relationship we want to govern is,

when a supercomputer is pointed at you

that relationship needs to be protected

[Nicholas] and governed by some terms. Okay.

So there’s three elements in that relationship.

There’s the supercomputer.

What does it do,

what does it not do.

There’s the dynamic of how it’s pointed.

What are the rules over what I can collect?

What are the rules over what I can’t collect

and what I can store?

And there’s you.

How do you train yourself to act?

How do you train yourself to have self-awareness?

Let’s talk about all three of those areas

maybe starting with the person.

What should the person do in the future

to survive better in this dynamic?

One thing I would say about that

is I think self-awareness is important.

It’s important that people know the thing

we’re talking about and they realize

that we can be hacked but it’s not a solution.

You have millions of years of evolution

that guide your mind to make

certain judgments and conclusions.

A good example of this is if I put on a VR helmet

and now suddenly I’m in a space where there’s a ledge.

I’m at the edge of a cliff.

I consciously know I’m sitting here

in a room with Yuval and Nick.

I know that consciously.

I’ve got this self-awareness.

I know I’m being manipulated.

But if you push me I’m gonna not want to fall right?

‘Cause I have millions of years of evolution that tell me

you are pushing me off of a ledge.

So in the same way you can say,

Dan Ariely makes this joke actually,

the behavioral economist,

that flattery works on us

even if I tell you I’m making it up.

Like Nick I love your jacket right now.

[Nicholas] It’s a great jacket on you. It is.

It’s a really amazing jacket.

I actually picked it out ’cause I knew

from studying your carbon dioxide exhalation yesterday

that you would like this.

Exactly.

(laughing)

We’re manipulating each other now.

The point is that even if you know

that I’m just making that up,

it still actually feels good.

The flattery feels good.

And so it’s important,

I think of this as a new era,

kind of a new Enlightenment

where we have to see ourselves in a very different way.

And that doesn’t mean that’s the whole answer.

It’s just the first step.

We have to all walk around–

So the first step is recognizing

that we’re all vulnerable.

[Tristan] Hackable.

Vulnerable.

But there are differences.

Yuval is way less hackable than I am

because he meditates two hours a day

and doesn’t use a smartphone.

(laughing)

I’m super hackable.

The last one’s probably key.

(laughing)

What are the other things

that a human can do to be less hackable?

You need to get to know yourself as best as you can.

It’s not a perfect solution,

but somebody’s running after you,

you run as fast as you can.

It’s a competition.

Who knows you best in the world?

So when you’re two years old it’s your mother.

Eventually you hope to reach a stage in life

when you know yourself even better than your mother.

And then suddenly you have this corporation

or government running after you,

and they are way past your mother and they are at your back.

This is the critical moment.

They know you better than you know yourself.

So run a little.

Run a little faster.

There are many ways you can run faster,

meaning getting to know yourself a bit better.

Meditation is one way,

there are hundreds of techniques of meditation,

different ones work for different people.

You can go to therapy,

you can use art,

you can use sport,

whatever works for you.

But it’s now becoming much more important than ever before.

It’s the oldest advice in the book.

Know yourself. Yeah.

But in the past you did not have competition.

If you lived in Ancient Athens

and Socrates came along and said know yourself,

it’s good for you,

and you say nah I’m too busy,

I have this olive grove,

I don’t have time.

Okay you didn’t get to know yourself better

but there was nobody else who was competing with you.

Now you have serious competition.

So you need to get to know yourself better.

This is the first maxim.

Secondly as an individual,

if we talk about what’s happening to society,

you should realize you can’t do much by yourself.

Join an organization.

If you are really concerned about this,

this week join some organization.

50 people who work together are a far more powerful force

than 50 individuals each of whom is an activist.

It’s good to be an activist,

it’s much better to be a member of an organization.

Then there are other tested and tried methods of politics.

We need to go back to this messy thing

of making political regulations and choices.

Politics is about power

and this is where power is right now.

[Tristan] Out of that,

I think there’s a temptation to say,

okay how can we protect ourselves.

And when this conversation shifts into

how do I keep my smartphone from hacking me,

you get things like,

oh I’ll set my phone to grayscale,

oh I’ll turn off notifications.

But what that misses is that

you live inside of a social fabric.

When we walk outside my life depends

on the quality of other people’s thoughts,

beliefs and lives.

So if everyone around me believes a conspiracy theory

because YouTube is taking 1.9 billion human animals

and tilting the playing field so everyone watches Infowars,

by the way, YouTube has driven 15 billion recommendations

of Alex Jones' Infowars, and that's just recommendations,

which turned into two billion views.

If only one in a thousand people

believed those 2 billion views,

[Yuval] that’s still two million? Two million.

Mathematics is not as strong as…

(laughing)

We’re philosophers.

And so if that’s two million people

that’s still two million new conspiracy theorists.

So if everyone else in the world is walking around

believing these things, you don't get to opt out of that.

If you say hey I’m a kid,

I’m a teenager and I don’t wanna care

about the number of likes I get

so I’m gonna stop using Snapchat or Instagram.

I don’t want to be hacked

for my self-worth in terms of likes.

If I’m a teenager and I’m using Snapchat or Instagram

and I don’t want to be hacked for my self-worth

in terms of the number of likes I get,

I can say I don’t wanna use those things

but I still live in a social fabric

where all my other sexual opportunities,

social opportunities,

homework transmission where people talk about that stuff.

If they only use Instagram

I have to participate in that social fabric.

So I think we have to elevate the conversation from

how do I make sure I’m not hacked,

it’s not just an individual conversation.

We want society to not be hacked.

Which goes to the political point

in how do we politically mobilize

as a group to change the whole industry.

For me I think about the tech industry.

Alright so that’s step one in this three step question.

What can individuals do,

know yourself,

make society more resilient,

make society less able to be hacked.

What about the transmission

between the supercomputer and the human?

What are the rules and how should we think about

how to limit the ability of the supercomputer to hack you?

That’s a big one. That’s a big question.

That’s why we’re here!

In essence I think that we need to come to terms

with the fact that we can’t prevent it completely.

It’s not because of the AI, it’s because of the biology.

It’s just the type of animals that we are

and the type of knowledge that now we have

about the human body,

about the human brain.

We have reached a point when this is really inevitable.

You don’t even need a biometric sensor,

you can just use a camera in order to tell

what is my blood pressure,

what’s happening now,

and through that what’s happening to me emotionally.

I would say that we need to

reconceptualize completely our world

and this is why I began by saying

that we suffer from philosophical impoverishment.

That we are still running on the ideas of the 18th Century.

Which were good for two or three centuries,

which were very good but which are simply not adequate

to understanding what’s happening right now.

Which is why I also think that

with all the talk about the job market

and what should I study today that will be relevant

to the job market in twenty,

thirty years.

I think philosophy is one of the best bets maybe.

I sometimes joke,

my wife studied philosophy and dance in college.

Which at the time seemed like the two worst professions

’cause you can’t really get a job in either.

But now they’re like the last two things

that will get replaced by robots.

I think Yuval is right and I think

this conversation usually makes people conclude

that there’s nothing about human choice

or the human mind’s feelings that’s worth respecting.

And I don’t think that is the point.

I think the point is we need a new kind of philosophy

that acknowledges a certain kind of thinking

or cognitive process or conceptual process

or social process,

that we want that.

For example James Fishkin is a professor at Stanford

who’s done work on deliberative democracy

and shown that if you get a random sample of people

in a hotel room for two days

and you have experts come in

and brief them about a bunch of things

they change their minds about issues,

they go from being polarized to less polarized,

they can come to more agreement.

And there’s a process there that you can put in a bin

and say that’s a social cognitive sense-making process

that we might want to be sampling from that one

as opposed to an alienated lonely individual

who’s been shown photos of their friends

having fun without them all day

and then we’re hitting them with Russian ads.

We probably don’t want to be

sampling a signal from that person to be thinking about,

not that we don’t want it from that person,

but we don’t want that process

to be the basis of how we make collective decisions.

So I think we’re still stuck in a mind-body meat suit.

We’re not getting out of it.

So we better learn how do we use it in a way

that brings out the higher angels of our nature.

And the more reflective parts of ourselves.

So I think what technology designers need to do

is ask that question.

A good example just to make it practical,

let’s take YouTube again.

What’s the difference between a teenager,

let’s take an example of you watch a ukulele video.

It’s a very common thing on YouTube.

There’s lots of ukulele videos.

How to play ukulele.

What’s going on in that moment

when it recommends other ukulele videos?

There’s actually a value if someone wants to learn

how to play the ukulele.

But the computer doesn’t know that.

It’s just recommending more ukulele videos.

But if it really knew that about you,

instead of just saying

here’s infinite more ukulele videos to watch,

it might say here’s your ten friends

who know how to play ukulele that you didn’t know

knew how to play ukulele

and you can go and hang out with them.

It could put those choices at the top of life’s menu.

The problem is when you watch,

like a teenager watches that dieting video,

the computer doesn’t know that the thing you’re really after

in that moment isn’t that you want to be anorexic.

It just knows that people who watch those

tend to fall for these anorexia videos.

It can’t get at this underlying value,

this thing that people want.

You can even think about it this way,

the system in itself can do amazing things for us,

we just need to turn it around

so that it serves our interests, whatever those are,

and not the interests of the corporation or the government.

Actually, to build on that,

okay now that we realize that our brains can be hacked,

we need an antivirus for the brain.

Just as we have one for the computer.

And it can work on the basis of the same technology.

Let’s say you have an AI sidekick

who monitors you all the time,

24 hours a day,

what you write,

what you’ve seen,

everything.

But this AI is serving you.

It has this fiduciary responsibility.

And it gets to know your weaknesses

and by knowing your weaknesses it can protect you

against other agents trying to hack you

and to exploit your weaknesses.

So if you have a weakness for funny cat videos

and you spend an enormous amount of time,

an inordinate amount of time just watching,

you know it’s not very good for you

but you just can’t stop yourself clicking,

then the AI will intervene

and whenever this funny cat video tries to pop up,

the AI says no no no no.

And it will just show maybe a message

that somebody just tried to hack you.

Just like you get these messages that

somebody just tried to infect your computer with a virus.
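
[Editor's note: a minimal sketch of the "antivirus for the brain" idea as Harari describes it, with every name and rule invented here: a user-side agent, loyal only to the user, that screens an incoming feed against weaknesses the user has asked it to guard.]

```python
from dataclasses import dataclass, field

@dataclass
class FeedItem:
    title: str
    tags: set[str]

@dataclass
class SidekickAgent:
    # Declared by, and serving only, the user: a fiduciary, not an advertiser.
    guarded_weaknesses: set[str] = field(default_factory=set)

    def screen(self, item: FeedItem) -> str:
        # Flag any item that targets a weakness the user asked to be guarded.
        hits = item.tags & self.guarded_weaknesses
        if hits:
            return f"blocked: somebody just tried to hack you via {', '.join(sorted(hits))}"
        return f"ok: {item.title}"

agent = SidekickAgent(guarded_weaknesses={"funny-cats", "outrage-bait"})
for item in [FeedItem("Kitten does a backflip", {"funny-cats"}),
             FeedItem("Ukulele lesson 3", {"music", "tutorial"})]:
    print(agent.screen(item))
# blocked: somebody just tried to hack you via funny-cats
# ok: Ukulele lesson 3
```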

The hardest thing for us is to admit

our own weaknesses and biases and it can go all ways.

If you have a bias against Trump or against Trump supporters

then you very easily believe any story,

however farfetched and ridiculous.

So I don’t know,

Trump thinks that the world is flat.

Trump is in favor of killing all the Muslims.

You would click on that.

This is your bias.

And the AI will know that so it’s completely neutral,

it doesn’t serve any entity out there.

It just gets to know your weaknesses and biases

and tries to protect you against them.

[Nicholas] But how does it learn

that it’s a weakness and a bias and not something you like?

How come it knows when you click the ukulele video,

that’s good,

and when you click the Trump–

[Tristan] This is where I think we need

a richer philosophical framework because if you have that

then you can make that understanding.

So if a teenager’s sitting there in that moment,

watching the dieting video

then they’re shown the anorexia video,

imagine that instead of a 22-year-old male engineer

who went to Stanford,

a computer scientist thinking about

what can I show them that's the perfect thing,

you had an 80-year-old child developmental psychologist

who studied under the best child developmental psychologists,

who thought about how in those kinds of moments

the thing that's usually going on for a teenager aged 13

is a feeling of insecurity,

identity development,

experimentation,

and who asked what would be best for them?

So the way we think about this,

the whole framework of humane technology,

is that this is the thing:

we have to hold up the mirror to ourselves

to understand our vulnerabilities first,

and you design starting from a view

of what we’re vulnerable to.

I think from a practical perspective,

I totally agree with this idea of an AI sidekick.

But if we’re imagining,

we live in the scary reality

that we’re talking about right now.

It’s not like this is some sci-fi future,

this is the actual state.

So if we’re actually thinking about how do we navigate

to an actual state of affairs that we want,

we probably don’t want an AI sidekick

to be this kind of optional thing

that some people who are rich can afford

and other people who don’t can’t.

We probably want it to be baked in

to the way technology works in the first place

so that it does have a fiduciary responsibility

to our best subtle compassionate vulnerable interests.

So we will have government-sponsored AI sidekicks?

We will have corporations that sell us AI sidekicks

but subsidize them so it’s not just the affluent

that have really good AI sidekicks?

This is a business model conversation but…

One thing is to change the way that,

if you go to university or college

and learn computer science

then an integral part of the course

is to learn about ethics.

About the ethics of coding.

I think it’s extremely irresponsible

that you can finish,

you can have a degree in computer science,

in coding and you can design all these algorithms

that now shape people’s lives

and you just don’t have any background

in thinking ethically and philosophically

about what you’re doing.

You’re just thinking in terms of pure technicality

or in economic terms.

So this is one thing which kind of bakes it

into the cake in the first place.

Let me ask you something that has come up a couple times

I’ve been been wondering about.

So when you were giving the ukulele example,

you talked about well maybe you should

go see ten friends who play ukulele,

you should visit them offline.

And in your book you say that one of the crucial moments

for Facebook will come when an engineer realizes

that the thing that is better for the person

and for community is for them to leave their computer.

And then what will Facebook do with that?

So it does seem from a moral perspective that a platform,

if it realizes it would be better for you to go offline,

they should encourage you to do that.

But then they will lose their money

and they will be out-competed.

[Yuval] Mm-hmm. Yep.

So how do you actually get to the point where the algorithm,

the platform, pushes somebody in that direction?

This is where this business model conversation comes in.

It’s so important.

And also why Apple and Google’s role is so important.

Because they are before the business model of all these apps

that want to steal your time and maximize attention.

Apple doesn’t need to–

Google’s before and after and during

but it is also before. But anyway.

Specifically the Android case.

So Android and iOS,

not to make this too technical

or an industries-focused conversation,

but they should theoretically,

that layer,

you have just the device.

Who should that be serving?

Whose best interest are they serving?

Do they want to make the apps as successful as possible?

And make them addictive, maximizing loneliness

and alienation and social comparison,

all that stuff?

Or should that layer be a fiduciary,

as the AI sidekick to our deepest interests,

to our physical embodied lives,

to our physical embodied communities.

We can’t escape this instrument.

It turns out that being inside of community

and having face-to-face contact is,

there’s a reason why solitary confinement

is the worst punishment we give human beings.

And we have technology that’s basically maximizing isolation

because it needs to maximize

that time we spend on the screen.

So I think one question is

how can Apple and Google move their entire businesses

to be about embodied local fiduciary

responsibility to society.

And this is what we think of as humane technology.

That’s the direction that it can go.

Facebook could also change its business model

to be more about payments and people transacting

based on exchanging things,

which is something they’re looking into

with the blockchain stuff

that they’re theoretically working on.

And also Messenger payments.

If they move from an advertising-based

business model to micropayments,

they could actually shift the design of some of those things

and there could be whole teams of engineers working on News Feed

that are just thinking about what’s best for society

and then people would still ask these questions of,

well who’s Facebook to say what’s good for society?

But you can’t get out of that situation

because they do shape what two billion human animals

will think and feel every day.

So this gets me to one of the things

I most want to hear your thoughts on which is,

Apple and Google have both done this

to some degree in the last year

and Facebook has,

I believe every executive at every tech company has said

time well spent at some point in the last year.

We’ve had a huge conversation about it

and people have bought 26 trillion of these books.

Do you actually think that we are

heading in the right direction at this moment

because change is happening and people are thinking?

Or do you feel like we’re still

going in the wrong direction?

[Yuval] I think that in the tech world

we are going in the right direction in the sense

that people are realizing the stakes.

People are realizing the immense power

that they have in their hands.

I’m talking about the people in the tech world.

They are realizing the influence they have on politics,

on society,

and most of them react I think not in the best way possible

but certainly they react in the responsible way.

In understanding yes we have this huge impact on the world.

We didn’t plan that maybe but this is happening

and we need to think very carefully what we do with that.

They still don’t know what to do with that.

Nobody really knows.

But at least the first step has been accomplished

of realizing what is happening

and taking some responsibility.

The place where we see a very negative development

is on the global level because all this talk so far

has really been internal,

Silicon Valley,

California USA talk.

But things are happening in other countries.

All the talk we’ve had so far

relied on what’s happening in

liberal democracies and in free market.

In some countries maybe you have got no choice whatsoever.

You just have to share all your information and have to do

what the government-sponsored algorithm tells you to do.

So it’s a completely different conversation.

And another complication

is the AI arms race

where five years ago,

even two years ago,

there was no such thing.

And now it’s maybe the number one priority

in many places around the world,

that there is an arms race going on in AI

and our country needs to win this arms race.

And when you enter an arms race situation,

then it becomes very quickly a race to the bottom.

Because you very often hear this,

okay it’s a bad idea to do this,

to develop that but they’re doing it

and it gives them some advantage

and we can’t stay behind.

We’re the good guys!

We don’t want to do it!

But we can’t allow the bad guys to be ahead of us

so we must do it first.

And you ask the other people,

they will say exactly the same thing.

They don’t want to do it but they have to.

Yeah and this is an extremely dangerous development

in the last two years.

It’s a multipolar trap

No-one wants to build slaughterbot drones

but if I think you might be doing it

even though I don’t want to I have to build it

and you build it and we both hold them.

Even at a deeper level,

if you want to build some ethics

into your slaughterbot drones,

it'll slow you down by one week,

and in one week the intelligence doubles.

This is actually one of the things I think

we talked about when we first met

was the ethics of speed,

of clockrate.

We’re in essence competing on

who can go faster to make this stuff

but faster means more likely to be dangerous,

less likely to be safe so it’s basically

we’re racing as fast as possible

to create the things we should probably be going

as slow as possible to create.

And I think that much like

high-frequency trading in the financial markets,

if we had this open-ended thing of

who can beat who by trading a microsecond faster.

What that turns into,

this has been well documented,

is people blowing up whole mountains

so they can lay these copper cables

so they can trade a microsecond faster.

You’re not even competing based on

an Adam Smith version of what we value or something.

We’re competing based on who can blow up mountains

and make transactions faster.

When you add high-frequency trading to

who can trade hackable, programmable human beings faster

and who’s more effective at manipulating

culture wars across the world,

that just becomes this race to the bottom

of the brain stem of total chaos.

I think we have to say how do we slow this down

and create a sensible pace

and I think that’s also about a humane technology.

Instead of a child developmental psychologist,

ask someone like a psychologist,

what are the clockrates of human decision making

where we actually tend to make good thoughtful choices?

We probably don’t want a whole society revved-up

to making a hundred choices per hour

about something that really matters.

So what is the right clockrate?

I think we have to actually have technology

steer us towards those kinds of decision-making processes.

[Nicholas] So back to the original question,

you’re somewhat optimistic about some of the small things

that are happening in this very small place

but deeply pessimistic about

the complete obliteration of humanity?

I think Yuval’s point is right.

There’s a question about US tech companies,

which are bigger than many governments.

Facebook controls 2.2 billion people’s thoughts.

Mark Zuckerburg’s editor-in-chief

of 2.2 billion people’s thoughts.

But then there’s also world governments

or national governments

that are governed by a different set of rules.

I think the tech companies are

very very slowly waking up to this.

And so far with the time well spent stuff for example,

it’s let’s help people,

because they’re vulnerable to how much time they spend,

set a limit on how much time they spend.

But that doesn’t tackle any of these bigger issues

about how you can program the thoughts of a democracy,

how mental health and alienation

can be rampant among teenagers leading to

doubling the rates of teen suicide

for girls in the last eight years.

We’re going to have to have a much more comprehensive view

and restructuring of the tech industry

to think about what’s good for people.

There’s gonna be an uncomfortable transition.

I use this metaphor it’s like climate change when…

There’s certain moments in history

when an economy is propped up by something we don’t want.

The biggest example of this is slavery in the 1800s.

There is a point at which slavery

was propping up the entire world economy.

You couldn’t just say we don’t wanna do this anymore,

let’s just suck it out of the economy.

The whole economy would collapse if you did that.

But the British Empire when they decided to abolish slavery,

they had to give up 2% of their GDP every year for 60 years.

And they were able to make that transition

over a transition period.

I’m not equating advertising

or programming human beings to slavery.

I’m not.

But there’s a similar structure of the entire economy now,

if you look at the stock market,

a huge chunk of the value is driven by

these systems based on advertising and programming human animals.

If we wanted to suck out that model,

the advertising model,

we actually can’t afford that transition.

But there could be awkward years

where you’re basically in that long transition path.

I think in this moment we have to do it much faster

than we’ve done it in other situations

because the threats are more urgent.

Yuval do you agree that that’s one of the things

we have to think about as we think about trying to

fix the world system over the next decades?

It’s one of the things but again

the problem of the world,

of humanity is not just the advertising model.

The basic tools were designed,

you had the brightest people in the world

10 or 20 years ago cracking this problem

of how do I get people to click on ads.

Some of the smartest people ever,

this was their job.

To solve this problem.

And they solved it.

And then the methods that they initially used

to sell us underwear and sunglasses and vacations

in the Caribbean and things like that,

they were hijacked and weaponized

and are now used to sell us all kinds of things

including political opinions and entire ideologies.

It’s now no longer under the control

of the tech giants in Silicon Valley

that pioneered these methods.

These methods are now out there.

So even if you get Google and Facebook to

completely give it up, the cat is out of the bag.

People already know how to do it.

There is an arms race in this arena.

So yes we need to figure out this advertising business,

it’s very important but it won’t solve the human problem.

Now the only really effective way to do it

is on the global level and for that

we need global cooperation and regulating AI,

regulating the development of AI and of biotechnology

and we are of course heading in the opposite direction,

of global cooperation.

I agree in that there’s this notion of the game theory.

Sure Facebook and Google could do it

but that doesn’t matter because the cat’s out of the bag

and governments are gonna do it

and other tech companies are gonna do it

and Russia’s tech infrastructure’s gonna do it.

So how do you stop it from happening?

Not to equate this with slavery, but in a similar way,

when the British Empire decided to abolish slavery

and subtract their dependence on that for their economy,

they actually were concerned that if we do this

France’s economy is still gonna be powered by slavery

and they’re gonna soar way past us.

So from a competition perspective we can’t do this.

But the way they got there was by turning it into

a universal global human rights issue

that took a longer time but I think like Yuval says

this is a global conversation

about human nature and human freedom,

if there is such a thing,

but at least kinds of human freedom

that we want to preserve.

That I think is something that is actually

in everyone’s interest and it’s not necessarily

equal capacity to achieve that end

because governments are very powerful

but we’re gonna move in that direction

by having a global conversation about it.

Let’s end this with giving some advice

to someone who is watching this video.

They’ve just watched an Alex Jones video

and the YouTube algorithm has changed

and it sent 'em here and they somehow got to this point.

They’re 18 years old,

they want to devote their life to making sure

that the dynamic between machines and humans

does not become exploitative and becomes one in which

we continue to live our rich fulfilled lives.

What should they do or what advice could you give them?

I would say get to know yourself much better

and have as little illusions about yourself as possible.

If a desire pops in your mind don’t just say,

well this is my free will,

I chose this therefore it’s good,

I should do it.

Explore much deeper.

Secondly as I said join an organization.

There is very little you can do

just as an individual by yourself.

These are the two most important pieces of advice I could give

an individual who is watching us now.

[Tristan] And I think, per your earlier suggestion,

we have to understand that the philosophy of

simple rational human choice is outdated.

We have to move from an 18th Century model

of how human beings work

to a 21st Century model of how human beings work.

Speaking personally our work is trying to coordinate

a global movement towards fixing some of these issues

around humane technology and I think like Yuval says

you can’t do it alone.

It’s not a let me turn my phone grayscale

or let me petition my Congress member by myself.

This is a global movement.

The good news is no-one kind of wants the dystopic end point

of the stuff that we’re talking about.

It’s not like someone says no I’m really excited

about this dystopia.

I just wanna keep doing what we’re doing!

No-one wants that so it’s really a matter of,

can we all unify in the thing that we do want

and it’s somewhere in this vicinity

of what we’re talking about

and no-one has to capture the flag but we have to move away

from the direction that we’re going.

And I think everyone should be on the same page on that.

We started this conversation by talking about

whether we’re optimistic and I am certainly optimistic

that we have covered some of the hardest questions

facing humanity and that you have offered brilliant insights

into them so thank you for talking

and thank you for being here.

Thank you Tristan,

thank you Yuval.

Thank you. Thanks.