Watch – Yuval Noah Harari on the Rise of Homo Deus

Yuval Noah Harari on the Rise of Homo Deus

https://youtu.be/JJ1yS9JIJKs

“Studying history aims to loosen the grip of the past… It will not tell us what to choose, but at least it gives us more options.” – Yuval Noah Harari

Yuval Noah Harari is the star historian who shot to fame with his international bestseller ‘Sapiens: A Brief History of Humankind’. In that book Harari explained how human values have been continually shifting since our earliest beginnings: once we placed gods at the centre of the universe; then came the Enlightenment, and from then on human feelings have been the authority from which we derive meaning and values. Now, using his trademark blend of science, history, philosophy and every discipline in between, Harari argues in his new book ‘Homo Deus: A Brief History of Tomorrow’ that our values may be about to shift again – away from humans, as we transfer our faith to the almighty power of data and the algorithm.

In conversation with Kamal Ahmed, the BBC’s economics editor, Harari examined the political and economic revolutions that look set to transform society, as technology continues its exponential advance. What will happen when artificial intelligence takes over most of the jobs that people do? Will our liberal values of equality and universal human rights survive the creation of a massive new class of individuals who are economically useless? And when Google and Facebook know our political preferences better than we do ourselves, will democratic elections become redundant? As the 21st century progresses, not only our society and economy but our bodies and minds could be revolutionised by new technologies such as genetic engineering, nanotechnology and brain-computer interfaces. After a few countries master the enhancement of bodies and brains, will they conquer the planet while the rest of humankind is driven to extinction?

As AI develops, engineers and software developers will be forced to bring philosophers into their product development, just as they already do with biology experts.

https://youtu.be/JJ1yS9JIJKs?t=1597

Watch – How Thomas Friedman and Yuval Noah Harari Think About The Future of Humanity

How Thomas Friedman and Yuval Noah Harari Think About The Future of Humanity

Two of the great thought leaders of the 21st century, Yuval Noah Harari and Thomas L. Friedman, discuss the future of humanity on March 19, 2018, with moderator Rachel Dry of The New York Times. “How To Understand Our Times” is an event series collaboration between The New York Times and How To Academy, bringing together New York Times journalists and leading figures in diverse fields to examine pressing issues in a changing world, including gender equality, artificial intelligence and alternatives to fossil fuels, among others. For upcoming events, visit timesevents.nytimes.com.

From the beginning: https://youtu.be/5chp-PRYq-w

(31:26) “…Mom, Dad, never ask your kids today what they want to be when they grow up, because whatever it is won’t be here, unless it’s policemen or firemen. Only ask your kid today how they want to be when they grow up: will you have an agile learning mindset, will you be predisposed to be a lifelong learner, long after you’ve left home and Mom and Dad are not there to say, ‘Yuval, have you done your homework?’ And that leads to what I think is really roiling societies today, and Yuval, you’ve touched on this, with these people who might be out of work, which is something I learned from Marina Gorbis, who runs the Institute…”

Full Transcript

Moderator (Rachel Dry): Yuval Noah Harari, of course, is a best-selling author and thinker whose work engages us in the history of humanity and where we’re heading. Thomas Friedman is also a best-selling author, and a columnist who for decades has been a guide to the world for readers of his columns and his books. We’re in very good hands for the evening. Without further ado, please welcome to the stage Yuval Noah Harari and Thomas Friedman.

[Applause]

Moderator: So Yuval, we’re going to begin with you. Obviously we think about the future, we think about what’s happening in the world and what is setting the global agenda. Could you speak about the global agenda?

Harari: Yeah. I think the first thing to say about the global agenda is that it exists. There is a global agenda, which is not self-evident these days, because with all the talk about the rise of nationalism and tribalism and the clash of civilizations and so forth, we sometimes tend to forget that in a very deep sense all of humanity today constitutes a single civilization. Yes, we have a lot of conflicts, but every civilization, every community, every family has a lot of conflicts; the people you fight most with are your family members, not strangers, because they are there. So the fact that the world is full of conflict doesn’t mean that we are not a single community or a single civilization.

And I think that in a deep sense almost all humans today, or at least almost all countries today, understand the fundamentals of reality in the same way. They understand politics in the same way. If you think about China, the USA, Iran or Israel, they understand the basics of politics in the same way, the basics of economics in the same way, and the basics of nature in the same way. They argue about a lot of things, but when it comes time to build a hospital or an economy or a nuclear bomb, they do it in the same way.

And just as we have a set of similar ideas and practices, we also, all of humanity, have a set of common problems, global problems that can only be solved on a global level. Of these global problems, the three most important are nuclear war, climate change and technological disruption. Now, the first two are quite familiar by now; the third, technological disruption, is the most mysterious. Most people don’t really understand what’s coming; even most experts cannot really say what kinds of threats, what kinds of dangers, the new technologies, especially AI, artificial intelligence, and bioengineering, will create.

There are a lot of scenarios, scary scenarios. If you think about artificial intelligence, one scary scenario is that it will lead to the emergence, to the rise, of a global useless class. Just as the Industrial Revolution of the 19th century created the urban working class, so the automation revolution of the 21st century might create the useless class, and much of the political and social history of the coming decades might revolve around the problems and the hopes and the fears of this new class.

Another danger is that new technologies might lead to the collapse of liberal democracy, especially if you think about the combination, the merger, of biotech and infotech. They might very soon reach the point where they create systems, where they create algorithms, that understand us better than we understand ourselves. And once you have an external algorithm that understands you better than you understand yourself, liberal democracy as we have known it for the last century or so is doomed. It will have to adapt to the new conditions, it will have to reinvent itself in a radical new form, or it will collapse. You can say that the Achilles’ heel of liberal democracy is the heart. Liberal democracy trusts in the feelings of human beings, and that worked as long as nobody could understand your feelings better than you yourself, or your mother. But if there is an algorithm out there that understands your feelings better than your mother, and can press your emotional buttons better than your mother, and you won’t even understand that this is happening, then liberal democracy will become an emotional puppet show. We have these slogans, you know, “listen to your heart”, “follow your heart”; but what happens if your heart is a foreign agent, a double agent, serving somebody else who knows how to press your emotional buttons, who knows how to make you angry, how to make you bold, how to make you joyful? This is the kind of threat that we are already beginning to see emerging today, for example in elections and referendums.

So really, I would say that the three big challenges, the three top items on our global agenda, are how to prevent nuclear war, how to prevent climate change, and how to learn to control the new technology before it learns to control us.

Moderator: Thank you. As we think about the future, the future of humanity, we obviously have to think about our understanding of the world. Tom, I wondered if you could talk a little bit about how you understand the world today.

Friedman: Well, first of all, Rachel, it’s great to be with you and with Yuval, and thank you all for coming out; this is a real treat. So, as a columnist, one of the things I’m always asking myself is: how does the machine work? What are the biggest gears and pulleys shaping or reshaping the world today? In my last book, Thank You for Being Late, picking up really on some of the themes Yuval spoke about, I argued that what is shaping more things in more places in more ways on more days is that we’re in the middle of three nonlinear accelerations in the three largest forces on the planet, which I call the market, Mother Nature and Moore’s Law. Mother Nature, for me, is climate change, biodiversity loss and population growth in the developing world; if you put that on a graph, it actually looks like a giant hockey stick. The market, for me, is globalization, but not your grandfather’s globalization. That was containers on ships and planes, and that’s actually flat to going down right now. I mean digital globalization: everything is being digitized and globalized. Put that on a graph, whether it’s measuring data consumed per month or cellphones, and it looks like a hockey stick. And lastly, Moore’s Law, coined in 1965 by Gordon Moore, the co-founder of Intel, who argued that the speed and power of microchips would double every 24 months. It’s closer to 30 months now, but never mind: Moore’s Law has held up for 53 years. Put it on a graph and it looks like a giant hockey stick.
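For a sense of the scale behind that last hockey stick, here is a back-of-the-envelope calculation (my illustration, not a figure from the talk), compounding 53 years of doubling every 24 months:

\[ \text{growth factor} = 2^{t/T} = 2^{53/2} = 2^{26.5} \approx 9.5 \times 10^{7} \]

That is roughly a hundred-million-fold increase; even at the slower 30-month doubling Friedman mentions, the factor would still be \( 2^{53/2.5} \approx 2.4 \times 10^{6} \), which is why any linear-scale graph of it looks like a hockey stick.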

So we’re actually in the middle of three hockey-stick accelerations all at the same time, and I believe it’s the interaction between them that is not just changing our world, it’s reshaping our world. And it’s reshaping five realms in particular: politics, geopolitics, ethics, the community and the workplace.

So as I think about politics right now, and some of this is on everybody’s mind, one of the things you really see is that political parties all over the world, here in the UK, in the United States, are blowing up. Some are in power, so they think they’re alive, but they’re all basically dead. And that’s because, in my view, they were all born of an industrial-age model where the central theme was capitalism versus labor, or big government versus small government, and the axis of politics was left to right and right to left. What I would argue, and this gets to how I think about the world today, is that that model is no longer relevant. I think the way to think about politics today is through the model of climate change, and I think we’re in the middle of three climate changes at once.

First, a change in the climate of the climate: we’re going from what I call “later” to “now”. When I was growing up in Minnesota in the ’50s and ’60s, “later” was when I could clean that lake, repair that river, save that forest, rescue that orangutan. I could do it now, or I could do it later. Well, today, later is officially over. Later will now be too late, so whatever you’re going to save, please save it now. That’s a climate change.

We’re going through a change in the climate of globalization: I think we’re going from an interconnected world to an interdependent world, and in an interdependent world you get a kind of geopolitical inversion, where, first of all, your friends start to be able to kill you faster than your enemies. If Greek and Italian banks go under tonight, this room is half-full. Greece, Italy? Wait a minute, they’re in NATO, they’re in the EU; in an interdependent world they can kill us. And in an interdependent world your rivals falling is actually more dangerous than your rivals rising. If China takes six more islands in the South China Sea tonight, don’t quote me on this, I couldn’t care less. If China loses 6% growth tonight, this room is empty. That’s a climate change.

And lastly, we’re going through a change in the climate of business and technology. I’m a big believer, and it’s one reason I focus on technology so much, that whatever can be done will be done. The only question in business is: will it be done by you or to you? Just don’t think it won’t be done. So I ask, what can be done? When you look at AI and some of the themes Yuval talked about, I think every company can, and therefore must, analyze, optimize, prophesize, customize, socialize and digitize/automate virtually any job, product or service. They can analyze: thanks to big data, they can find the needle in the haystack of their data as the norm, not the exception. They can optimize: I flew here on British Airways, on Rolls-Royce engines; those engines are actually connected by sensor to Rolls-Royce, and they can tell BA exactly what altitude to fly at every mile to optimize their energy efficiency. They can prophesize: you may have seen the IBM Watson ad where the IBM Watson repairman comes to a high-rise building and says, “I’m here to fix the elevator,” and the doorman says, “The elevator’s not broken,” and he says, “I know, but it will be, in six weeks, two, three days.” OK, you can do predictive analytics on anything now. They can socialize: you can now connect to your customers, your suppliers, your employees in a horizontal way like never before. They can customize: just for guys from Minnesota with brown eyes and a mustache. And they can digitize/automate virtually any job, product or service. You put all those together, and every business today finds itself in the middle of a climate change.

So as I thought about that, I thought: what do you want when the climate changes? I think you want two things. You want resilience, the ability to take a blow, because you get disruptive behavior when the climate changes. But you also want propulsion: you want to be able to move ahead; you don’t want to be curled up in a ball under your bed waiting for the climate change to pass. So as I thought about that, I said: who do I go to to find out how you get resilience and propulsion when the climate changes? Then I realized I knew this woman. She was 3.8 billion years old, her name was Mother Nature, and she’d dealt with more climate changes than anybody. So I called her up, made an appointment, went out to see her, and I sat down and said, “Mother Nature, how do you produce resilience and propulsion when the climate changes?” She said, “Well, Tom, everything I do, I have to tell you, I do unconsciously, but these are my strategies. First of all, I’m incredibly adaptive. In my world it’s not the smartest that survive, it’s not the strongest, it’s actually the most adaptive that survive, and I do that through a rather brutal mechanism I call natural selection.” Second, she said, “I’m incredibly entrepreneurial. Where I see an opening in nature, a blank space, I fill it with a plant or animal perfectly adapted for that niche.” Third, she said, “I’m incredibly pluralistic. Oh, Tom, I’m the most pluralistic person you’ve ever met; I try 20 different species of everything and see who wins.” And she did tell me something interesting: she told me her most diverse ecosystems are her most resilient and propulsive ecosystems. Fourth, of course, she told me she’s totally sustainable, in a circular way: everything is food; eat food, poop seed, eat food, poop seed; nothing is wasted. Fifth, she said, “I’m incredibly hybrid and heterodox in my thinking; nothing dogmatic about me. I’ll try any trees with any soils, any bees with any flowers.” And lastly, she did mention that she does believe in the laws of bankruptcy: she kills all her failures, returns them to the great manufacturer in the sky, and takes their energy to nourish her successes.

Well, my argument is that the community, the country, the government and the business that most closely mirrors Mother Nature’s strategies for building resilience and propulsion when the climate changes is the one that will thrive in this age of accelerations. And since I was writing my book during the 2016 election, I actually imagined: what if Mother Nature were running against Donald Trump and Hillary Clinton in 2016? So I created Mother Nature’s political party, based on these strategies. I won’t go into all of it; I’ll just close by saying that on some issues Mother Nature is out there on the left with Bernie Sanders, because she believes in universal health care and in making lifelong learning completely tax-free, because she understands that the world Yuval is describing is going to be too damn fast for a lot of people, so she wants to strengthen our safety nets to bounce people back into the game and protect them. But at the same time Mother Nature would be out there on the right with the Wall Street Journal editorial page: she’d actually be for abolishing all corporate taxes, only, unlike our Republican Party, she’d replace them with a carbon tax, a tax on sugar, a tax on bullets and a small financial-transaction tax. She would get radically entrepreneurial over here to pay for our safety nets over here. Unfortunately, in our old industrial-age model of politics, if you’re for stronger safety nets you’re almost never for radical entrepreneurship, and if you’re for radical entrepreneurship you’re almost never for stronger safety nets. What would Mother Nature call that? Stupid. That’s what she’d call it, because she would understand that you will never produce resilience unless you’re a hybrid of these two. And because our current political parties are not built on that model, I think they’re all struggling now to find a way to talk about politics.

Moderator: And we’ll also be hearing from Mother Nature this evening; so there are the three of us on stage, and a variety of perspectives.

Harari: Her problems are also not our problems. As you mentioned, she is quite keen on extinction; she does believe in that, and she wouldn’t care if we were unable to cope with our problems and went extinct. Also, she wouldn’t care very much if humankind split, and, say, a small percentage became a new species, better adapted to the new conditions, while a couple of billions just went the way of the Neanderthals and the mammoths and all that. So it’s very good to learn from Mother Nature, but copying her methods too closely would be, I think, very bad news for a lot of people.

Friedman: My view is that you want to get the best out of her and cushion the worst. Because, and I do agree with you about Mother Nature, one of my science teachers talked about this: she is just chemistry, biology and physics; that’s all she is. You can’t talk her up, you can’t talk her down. You can’t say, “Mother Nature, we’re having a recession this year, could we take a year off on the climate?” She’s going to do whatever chemistry, biology and physics dictate. To put it in American baseball terms, Mother Nature always bats last, and she always bats a thousand. So do not mess with Mother Nature, which is exactly what we’re doing.

Moderator: I wonder, obviously we’re talking on a long-term framework here, but of course I imagine many of you came here tonight thinking about what’s immediately in front of you: what news alerts are on your phones; what, Tom, I believe you’ve referred to the American president as a brain-eating disease, perhaps what he might be up to; what else is going on. Could you both speak to how we deal with what is unrelenting in front of us while thinking about the broader challenges that you’ve outlined? How do we do both at once? How do we adapt to do both at once?

Harari: First of all, can we have a bit more light on the audience? It’s very difficult to see who I’m talking to; it’s just a sea of darkness, and it’s nice to see some faces. After all, it’s really about you, not about us; you will have to deal with the future.

Yeah, it’s very difficult for people. I mean, humans have proven throughout history that they are very good when it comes to short-term problems and solutions, but it’s extremely difficult to foresee the long-term consequences. And one of the things that has happened, if we talk about these various climate changes, is that time is accelerating. Thousands of years ago, something like the Agricultural Revolution took centuries, even thousands of years, and the consequences of our decision today to start growing wheat, we will see, or not we, somebody, our descendants, will see the consequences of this decision in a couple of centuries, maybe even in thousands of years. But now time is accelerating, so the long term is not 2,000 years or 200 years; the long term is 20 years. We are really in an unprecedented situation in history, when nobody knows the basics of how the world will look in 20 or 30 years. Not just the basics of geopolitics, who will be the big superpowers in 20 or 30 years, or what the major alliances in the world will be; we don’t know much more basic stuff, such as what the job market will look like, what kinds of skills people will need, what family structure will look like, what gender relations will look like. It’s really the first time in history when we have no idea what human society will be like in a couple of decades, and this means, among other things, that for the first time in history we have no idea what to teach in schools.

And so we focus on the short term, and not just on the short term; actually, we then go back and focus on the past. Connecting to what you said about the crisis of most political parties, which still think in terms of the 20th century, and right versus left and capitalism versus socialism and all that: I think that politics and government in most of the world today are doing a far better job than ever before in running the day-to-day business of the country. It may not look like it, but I’m a medievalist, so I constantly compare the governments of today to the government of some medieval king, Saint Louis or the like, and it’s wonderful; the world we’re living in is really wonderful. So they are doing an excellent job in the day-to-day business of the country, but what they have almost completely lost is the ability to have a long-term plan for the future, because they can’t see; they have no realistic vision of basic things like the job market in 30 years. So what you see in more and more countries is that they look to the past instead of to the future, and instead of formulating meaningful visions for where humankind will be in 2050, they repackage nostalgic fantasies about the past. And there is a kind of competition: who can look back farthest? You have Donald Trump wanting to go back to the 1950s or something like that; you have Putin basically wanting to go back to the Tsarist Empire, a century after the Bolshevik Revolution; you have ISIS, which wants to go back to seventh-century Arabia; and in my country, in Israel, they beat everybody: they want to go back 2,500 years, to the age of the Bible. So we win; we have the longest-term vision, backwards.

As a historian, I can tell you two things about the past. First, the past wasn’t a very good time; you don’t really want to go back there. And secondly, it is not coming back; no matter what you do, you can’t bring it back. So we are really facing a crisis of the inability of the political system to produce meaningful visions for the future. Maybe the only place in the world where there is serious work on producing a meaningful vision for the future is China. Whether it’s a good vision or a bad vision is a different question, but this is the one place, I think, where the government is seriously thinking in future terms and in the long term, in terms of decades, not in terms of one or two years, and certainly not in terms of going back decades and centuries.

Friedman: So, just to pick up on what Yuval said, and Rachel, since we’re starting with Trump: I described Trump as a brain-eating disease because, as a columnist, you’re in this position every day where he says or does something so outrageous that you feel that if you don’t write about it you’re normalizing him, but if you do write about it, he’s stolen your brain for a day. Now, if you do that twice a week, four or eight times a month, you’ll wake up after a year and discover that all you’ve written about is that knucklehead, and he’s actually sucked your brains out. So it’s a real challenge.

So, you know, the subtitle of my book is “An Optimist’s Guide to Thriving in the Age of Accelerations”: everything has sped up. And the reason it’s called Thank You for Being Late: the title comes from meeting people in Washington, DC for breakfast over the years. Every once in a while someone would come 15 or 20 minutes late and say, “Tom, I’m really sorry: it was the weather, the traffic, the subway, the dog ate my homework.” And one day, three and a half years ago, an energy entrepreneur, Peter Corsell, came 15 minutes late and said, “I’m really sorry: the weather, the traffic, the subway, the dog ate my homework,” and I just spontaneously said to him, “Actually, Peter, thank you for being late. Because you were late, I’ve been eavesdropping on their conversation: fascinating. I’ve been people-watching the lobby: fantastic. And best of all, best of all, I just connected two ideas I’ve been struggling with for a month. So thank you for being late.” People started to get into it; they’d say, “Well, you’re welcome,” because they understood I was actually giving them permission to pause, to slow down. In fact, my favorite quote from the front of the book is from my teacher and friend Dov Seidman, who says: when you press the pause button on a computer, it stops; but when you press the pause button on a human being, it starts. That’s when it starts to reflect, rethink and reimagine. And boy, don’t we need to do a lot of that right now.

Now, to pick up on Yuval’s point about leadership: when the world is fast, small errors in navigation can have huge consequences. When we just needed to go fifty miles at five miles an hour, if you had a bad president or prime minister or governor or mayor, you’d get off track, but the pain of getting back on track was fairly tolerable. But when you feel like you’re going fifty thousand miles at five thousand miles an hour, and you have a bad leader, now you can get so far off track. It’s like a 747 pilot changing just two digits as he enters the navigation settings of his jet: suddenly you’re halfway across the world in the wrong direction. And so leadership really matters more right now.

Now, I think I would agree with what Yuval said about China in this sense: I think China’s leaders do wake up every day, more than the average leader in the world, and start the day by asking, “What world am I living in? What are the biggest trends in this world, and how do I align myself with those trends?”, unlike, I think, a lot of leaders in the world. But I would tell you, Yuval, I’m seeing amazing leadership in America today in two places: one is at the corporate level, and the other is at the local level.

So, at the corporate level: as I think about the workplace challenge, the way I put it is that our central challenge is how we turn AI into IA, how we take artificial intelligence and turn it into intelligent assistance (-ance), intelligent assistants (-ants) and intelligent algorithms, so more people can learn faster and govern smarter. I’ll give you an example of intelligent assistance: the HR department, the human resources department, at AT&T, our giant telecom. You know what’s interesting about AT&T? Three hundred and thirty thousand employees, in one of the most competitive businesses in the world, global telecom: there’s a pretty good chance that whatever is going on in their HR department is coming to a neighborhood near you. So what’s going on in HR at AT&T? Well, they begin their year now with their leader, Randall Stephenson, giving a pretty radically transparent speech about where the company is going, what businesses they’re going to be in, and what skills you need as a worker at AT&T that year; that filters down through the company. Then they put all their managers, a hundred and ten thousand people, on their own in-house LinkedIn system. So I’m there: it’s Tom Friedman, with my academic background and the jobs I’ve had in the company. Then they match that up with the skill sets, and I’m making up the number because I don’t remember it exactly, but it’s probably ten skill sets, you need that year to be a rising employee at AT&T. They’ve got my CV there on LinkedIn, and they see I’ve got seven of the ten but I’m missing three. Then they partnered with Sebastian Thrun of Udacity, the online learning university, and he created nanodegrees for all ten skill sets. Then they came to me and said, “Tom, here’s the deal: we will give you up to eight thousand dollars a year to take the nanodegrees for the skill sets you’re missing. We heard that you’re interested in computer science; we just created an online computer science degree, for six thousand dollars a year, with Georgia Tech. In fact, we heard you’re interested in history: you can take an online course from that guy, Yuval Harari; we’ll pay for that as well. Just one condition, Mr. Tom: you have to take these courses at home, at night, on your own time, not on company time.” Now, if I say to them, “You know what, Mr. AT&T, I’ve actually climbed up one too many telephone poles, I’m just not into this anymore,” they now have a wonderful severance package for me, but I will not be working there much longer. So they flush out now about thirty thousand people, they take in about thirty thousand people, and they advance about ten thousand every year. What is AT&T’s social contract with their employees today? You can still be a lifelong employee at AT&T, but now only if you’re a lifelong learner. If you are not ready to be a lifelong learner, you can no longer be a lifelong employee at AT&T. And that is the social contract coming to a neighborhood near you.

And that’s why one of my teachers, Heather McGowan, an education expert, and this picks up on something Yuval said, likes to say: Mom, Dad, never ask your kids today what they want to be when they grow up, because whatever it is won’t be here, unless it’s policemen or firemen. Only ask your kid today how they want to be when they grow up: will you have an agile learning mindset, will you be predisposed to be a lifelong learner, long after you’ve left home and Mom and Dad are not there to say, “Yuval, have you done your homework?” And that leads to what I think is really roiling societies today, and Yuval, you’ve touched on this with these people who might be out of work, which is something I learned from Marina Gorbis, who runs the Institute for the Future. If we were having this conversation 15 years ago, one of the themes we’d be talking about is the digital divide: London’s got Internet, Manchester doesn’t; Europe’s got it, Africa doesn’t. The digital divide was huge. I believe that digital divide is rapidly disappearing. I don’t know when it’ll be gone, but I’m sure in a decade it’ll be gone, and when it is, the most important divide in the world is going to be the self-motivation divide: whose kids have the self-motivation to be lifelong learners long after they’ve left home and Mom and Dad are not there to ask them to do their homework? What you learned in your first year of college now could be outdated by your fourth year. The idea that you can get a four-year degree and dine out on that for 30 years is, like, so 1950s. And that has a lot of people really unnerved, because a lot of people were actually born and bred to do what they were told, and God bless them, they built your country and mine, and Yuval’s. But just doing what you’re told now will not bring you an average income and an average lifestyle, and I think that has a lot of people really frightened.

Harari: I think what you’re describing is extremely stressful. I mean, I just hear you, and, you know, there is so much stress in it. Reinventing yourself again and again throughout your life sounds terrible to most people. When you’re 15 or 16, you’re inventing yourself, and it’s stressful even then, but it’s still doable. When you reach 40 or 50, you don’t want to change. Yes, I want to keep on learning new things, and to gain experience, and to go to new places and so forth, but to really change the deep structures of my personality, of my professional skills, to learn things afresh: it sounds very exciting and it sounds very good, but it’s actually extremely difficult. And if this is where we are heading, and we are heading in this direction, we will be facing a stress epidemic even far worse than today’s.

And there is another thing, with all these algorithms that, again, are watching us all the time and learning our abilities and our problems and whether we are self-motivated or not: once the algorithms reach the conclusion that you are not going to make it, you will not be able to make it. I mean, we are used to the problem of discrimination against people based on wrong statistics. In the 20th century, discrimination usually took the form of discriminating against entire groups, based either on faulty statistics or just on religious biases and racism and so forth. If you were gay, there was discrimination against all gays; if you were a woman, against all women. And one of the things about it is that you could actually do something about it, because most of the time the biases were not true, and because many people suffered from them, they could join together and take political action against the discrimination. Now, in the coming years, in the coming decades, we will face individual discrimination, and it might actually be based on a good assessment of who you are. I mean, if AT&T, if the algorithms, the big-data algorithms, of AT&T follow you around, they look up your Facebook profile, your DNA, your records from kindergarten until today, and they will be able to figure out quite accurately who you are. And if they, for example, find out that I lack motivation, that on the X scale, on the Harari scale or the Friedman scale of self-motivation, from 0 to 10, he is just 7.1, “and we don’t want to accept into our company people of less than 8.2, and we know from experience that, yes, we can give you a little push, but you just lack what we need”: you will not be able to do anything, or almost anything, about this discrimination. First of all, because it’s just you: they don’t discriminate against you or me because we’re Jewish or gay or black or whatever, but because you are you. And the worst thing is that it will be true. I mean, they got me: I really lack self-motivation; they really got me. So what do I do about it? It sounds funny in a way, but if you think about it deeply, it’s terrible. Everybody has something, and you will not be able to do much about it.

Friedman: So let me give you the flip side of that, because everything about these systems, Yuval, is everything and its opposite. You’ve just described the downside, so let me talk about intelligent assistants for a second. The example I give in the book is the janitorial staff at Qualcomm, the big American tech company in San Diego. They have a 64-building campus; they build the inside of your iPhone, not Apple, which is why Apple is always suing them over patents. Three years ago they took six of their buildings and put sensors on everything: every door, window, light, pipe, faucet, drain and computer. They beamed all that data up to the cloud, and now they beam it down onto an iPad with an incredibly user-friendly interface for their janitorial staff. So if you leave your computer on, or a pipe bursts above my head, the janitor knows it before you or I do, and they just swipe down to see who to call, or how to fix it themselves. They’ve actually turned their janitors into technologists; their janitors now give tours to foreign visitors. What do you think that does for the dignity of a janitor, who now has an intelligent assistant enabling them to learn faster and work smarter?

I’ll give you another example, an intelligent algorithm. Those of you who are American students here know that in 11th grade we take the PSAT exam, the practice SAT exam, and then the SAT exam, to measure our math and verbal skills to get into the college of our choice. We also know that in America a lot of parents go out in 11th grade and hire a tutor at $200 an hour to goose their kid’s scores in math and verbal: a completely rigged game, because if you come from a family or neighborhood where you can’t afford that, you’re really at a disadvantage. So three years ago the College Board, which administers the PSAT and SAT exams, your A-levels and O-levels, partnered with Khan Academy, the online learning platform, to create free PSAT and SAT prep. The way it works now is: I take my PSAT in 11th grade and get the results back. I did really well in verbal; it says, “Tom, you could be a journalist,” actually. But it says I have a problem with math. It actually says that I, Tom Friedman, personally, because it knows me, have a problem with fractions and right angles. Then it takes me to a practice site just for fractions and right angles, so it doesn’t waste any time. If I do well there, it takes me to another site that says, “Tom, you could be in AP math.” Wow, AP math? No one in my family has done AP math, no one in my neighborhood. “Yeah, you could be in AP math.” If I do well there, it takes me to another site with 180 college scholarships. Last year, 3 million American kids got free PSAT and SAT prep on this intelligent algorithm.

And I’ll give you another one that’s very relevant to the point you raised. We have about 32 million people who started college but never finished. They go one year, two years, two and a half, three and a half years, and they drop out to go get a job, or do it online, and the algorithm says: you have no BA, no job. So a whole new set of intelligent algorithms has emerged. One I profiled is Opportunity@Work. What they do is: you can go to them with your one year, two years, two and a half years of knowledge; they will badge what you actually know and what you can do with what you know, and they partner with companies to slot you in without a BA. I profiled a young African-American woman, LaShonda Lewis. She went to Michigan Tech for three and a half years and studied computer science, but had to drop out for family reasons. She went back home and was driving a school bus to and from a computer school, you couldn’t make that up, and working at a law firm on the help desk, helping lawyers recover their lost passwords. OK? She was discovered by Opportunity@Work; they measured her knowledge, partnered with MasterCard, and slotted her in as a systems engineer at MasterCard. She’s now a senior systems engineer at MasterCard, and as she says in the last line of her interview: “And Mr. Friedman, I still don’t have a BA.” So that’s intelligent help; that’s the other side of this. And what I found is that there is enormous innovation going on on the other side of this. You’re absolutely right on the downside, but for every downside of this, somebody has invented an upside.

I would just add one other point. Do you know what was the fastest-growing restaurant chain in America, according to Entrepreneur magazine, in 2015? You’ll never guess it. It’s actually called Paint Nite: the fastest-growing restaurant chain in America. What is Paint Nite? It’s paint-by-numbers for adults, in bars. It turns out adults like to get together in a bar, have an artist draw a design for them, paint by numbers together according to that design, and have a drink. It’s amazing how many adults like to paint by numbers in bars. OK? Who knew? There are all these jobs out there, and that’s why I would close by saying: if you really want to blow your mind, go to Airbnb’s website. You’ll notice now there are two icons on the front page. One is Homes: that’s because I’m coming to London, like my sister did this week, and I want to get an apartment here; we all know that. But the other one is called Experiences, and if you want to have some fun, click Experiences. It’s people monetizing their passions. “I will give you a tour of three-man basketball games in Havana at night, with a mojito at the end.” Read that one: the American mother who said, “I sent my 18-year-old on this; he didn’t come back till 2:00 in the morning, he was having so much fun.” “I’ll teach you how to make falafel.” Is this full-time employment? Maybe, maybe not. It’s the fastest-growing part of Airbnb’s website, and I predict that in five years it’ll be the biggest job site in the world: people monetizing their passions.

Moderator: Sticking with this theme, we’ve been talking a lot about individuality: we’ll be able to learn, individually, just how unmotivated we are, or perhaps how motivated to go paint by numbers. So we’ll know so much more about ourselves as individuals. How is that going to affect how we all live together? Tom, you’ve written about this; I believe you called yourself a pluralism supremacist. How does increased knowledge of our individuality, of exactly how well-suited we are for a job, or how poorly suited for any job, change how we all live together? Are we moving more inward in this moment, or where do you see pluralism going?

Harari: It’s very hard to say. Of course, as you said, every technology has good potential and bad potential. This is what is different about disruptive technologies compared to nuclear war and climate change. Nuclear war is obviously terrible; nobody wants it; the question is just how to prevent it. With disruptive technology, the danger in a way is far greater, because it has some wonderful potential: there are a lot of forces that, for some very good reasons, are pushing us faster and faster to develop and adopt these disruptive technologies, and it’s very difficult to know in advance what the consequences will be, in terms of community, in terms of relations between people, in terms of politics. Twenty years ago, in the high days of internet optimism, you had all these extremely optimistic, and today we would say naive, dreams and visions: that the internet would bring everybody closer together, that you could have friends from all over the world, that in the end there would be freedom of expression, and all the dictators would fall, and the world would turn into one big, happy, peaceful community. And this didn’t happen. We look back today and say: oh, this was extremely naive; people forgot about human nature; did we learn nothing from history? And the answer is: yes, we learn very little from history. Does that mean that every new technology will just make things worse? No, obviously not. But it is extremely difficult to know which way it will go.

I think that history is just not deterministic. Again, when you look at the past, when you look at the 20th century and what people could do with new technologies: you could use the trains and the radio to build Nazi Germany, or you could use the same technology to build a liberal democracy, and it was kind of touch and go who would win. I don’t think there is any predetermined or preordained winner in these competitions. So again, with AI, we can sit here all evening, and a couple more evenings, and spin all kinds of scenarios, all possible, of what will happen: some very good, some very bad, and some in between. And we just don’t know. I think, as a historian, the best thing, the most important thing, we need to realize is that there is no predetermined story, which is, in a way, very frightening. You know, we are now living with the collapse of the last story of inevitability. In the 1990s, in the same era as the extremely optimistic vision of the internet, we also had this story, this idea, that history is over: that we know who won the great ideological battle of the 20th century, that liberal democracy and free-market capitalism came out on top, and that now it’s just a question of time until they spread and take over the whole world. And this now seems extremely naive. The moment we are at now is a moment of extreme disillusionment and bewilderment, because we have no idea where things will go from here. This is why I think it’s very important to be aware of the downside, of the dangerous scenarios, of the new technologies. Obviously the corporations, the engineers, the people in the laboratories naturally focus on all the enormous benefits that these technologies might bring us, and it falls to historians and to philosophers and to social scientists to think about all the ways in which things can go wrong.

the same time wrote a book called Lexus

48:38

and the olive tree and the argument of

48:40

the book was that I think what is going

48:42

to shape the future is a tension between

48:44

all of these things that are old faith

48:47

community religion sect tribe all things

48:50

that anchor us in the world olive trees

48:52

and the interaction between them and

48:54

technology and I still believe that that

48:57

is that’s certainly for me a helpful

48:59

framework that it’s a because what we do

49:01

with those passions how we govern them

49:03

how we mobilize them it can be for good

49:05

or for ill and that for it for me

49:09

you know it’s a good segue to talk about

49:11

the ethics question and one you wrote a

49:14

whole book about Homo dias

49:16

you know so uh III just did a little

49:18

chapter on it and and let me give mine

49:21

and then you give yours because I think

49:23

to be an interesting contrast between

49:25

the two so my version of the argument

49:29

you made the chapter on it is called is

49:33

God in cyberspace he’s God in cyberspace

49:37

best question ever got on book tour 1990

49:41

I was selling Lexus the Ala tree in

49:42

Portland Oregon question time came young

49:44

man stood up in the balcony said mr.

49:46

Friedman I have a question he is god in

49:48

cyberspace I said I have no idea

49:59

I felt like an idiot so I got home I

50:03

called my spiritual teacher he was a

50:05

rabbi I got to know at the Hartman

50:06

Institute in Jerusalem when I was the

50:08

New York Times correspondent there great

50:09

tome u2 scholar three marks

50:11

now there’s an Amsterdam married to a

50:12

Dutch priest interesting character and

50:14

um I called him up in Amsterdam I said

50:19

see I got a question I’ve never had

50:20

before is God in cyberspace

50:23

what should I said and I he said well

50:26

Tom in our faith tradition we actually

50:28

have two concepts of the Almighty a

50:29

biblical concept and a post biblical

50:31

concept so the biblical concept is that

50:33

the almighty is almighty he smites evil

50:37

and rewards good and if that’s your view

50:39

of God he sure isn’t in cyberspace which

50:43

is full of pornography gambling cheating

50:44

lying people smearing one another and

50:46

Twitter and now we know fake news so um

50:49

fortunately though he said we have a

50:51

post biblical view of God and the post

50:54

biblical view of God is that God

50:55

manifests himself by how we behave so if

50:58

we want God to be in cyberspace we have

51:00

to bring him there by how we behave

51:02

there I really like this answer I put it

51:04

into the paperback edition of Lexus the

51:06

olive tree in 2000 where none of you saw

51:08

it and it sat there for 16 years

51:09

anyways I started working on this book

51:11

and I found myself

51:13

spontaneously retelling that story I

51:15

said why are you retelling that story

51:17

and it became obvious to me for two

51:18

reasons and one just happened I think in

51:21

the last couple of years in the

51:23

developed world we began living 51

51:25

scent of our lives in cyberspace it’s

51:28

not where you go to find a date find us

51:29

out spouse buy a house buy a car write a

51:31

book buy a book get a mortgage give

51:34

alone get your news generate your news

51:36

we’re now living do your banking your

51:38

brokerage we’re now living 51% of our

51:41

lives in cyberspace and my definition of

51:44

cyberspace is that it’s a realm where

51:45

we’re all connected and no one’s in

51:47

charge so there are no courts in

51:50

cyberspace no lovely ceman no stoplights

51:52

no no 1-800 please stop Putin from

51:56

hacking my election but that’s where

51:59

we’re living our lives another way to

52:01

describe it we’re living 51% of our

52:03

lives in a realm that is fundamentally

52:06

God free at the same time because of

52:09

these accelerations you and I both have

52:11

talked about I think we’re standing at a

52:13

moral intersection we have never stood

52:15

at before as a species in 1945 we

52:18

entered the world where one country

52:20

could kill all of us possi regime and

52:23

that was the United States I’m glad it

52:25

had to be one country but it was the

52:26

United States I think we’re entering a

52:29

world where one person can kill all of

52:30

us and at the same time at the same time

52:33

where all of us could actually fix

52:36

everything because these accelerated

52:38

powers for the first time are creating

52:40

world where one of us could kill all of

52:41

us and all of us now if we actually put

52:43

our minds to it we have the tools to

52:45

feed house clothe and educate every

52:48

person on the planet we have never been

52:50

to this intersection before where one of

52:53

us can kill all of us and all of us

52:54

could fix everything and what does that

52:57

mean means we’ve never been more godlike

52:59

as a species than we are today well put

53:02

those two together we’ve never lived

53:03

more of our lives in a realm that’s

53:05

Godfrey and we have never been more

53:08

godlike and what that means is that what

53:11

every person thinks feels and believes

53:13

really matters it means everyone needs

53:17

to be in the grip of sustainable values

53:18

it means at a minimum everyone needs to

53:22

be in the embrace of the Golden Rule and

53:24

every faith and culture has their

53:25

version of it doing to others as you

53:27

wish them to do unto you because you now

53:28

live in a world where more people can do

53:30

unto you farther faster deeper cheaper

53:33

than ever before Putin did unto us in

53:35

our election and we can do unto others

53:37

farther faster deeper cheaper than ever

53:39

for everyone needs to be in the embrace

53:42

of the golden rule I know what you’re

53:45

thinking actually gave this thing as a

53:48

commencement address at Olin College of

53:50

Engineering two years ago and I said to

53:52

the parents there I know what you’re

53:55

thinking

53:55

you paid two hundred thousand dollars

53:58

for your kid to get an engineering

54:00

degree and who do they bring us the

54:02

commencement speaker but a knucklehead

54:05

promoting the golden rule is there

54:08

anything more naive and what I told them

54:12

is what I would say again tonight I

54:14

think in this age of acceleration

54:16

naivete is the new realism because

54:19

what’s really naive is thinking we’re

54:21

gonna be okay in a world that is this

54:24

interdependent we’re men women and

54:26

machines get this super empowered if

54:29

everyone is not in the embrace of the

54:32

golden rule where does the golden rule

54:34

come from I think two places primarily

54:36

strong families and healthy communities

54:39

and that’s why my focus and my work

54:42

today is so much on healthy communities

54:45
Harari: But I would say that maybe the big problem is not so much morality as it is causality — I mean the ability to understand the chains of causes and effects in the world. I think there is no lack of values in the world today, but to really act well it's not enough to have good values; you need a good understanding of the chains of causes and effects. Think about the commandment "don't steal." Okay, everybody agrees it's not good to steal. But the big problem today is not that somebody says, "Hey, I want to steal — what will you do to me?" The big problem is that stealing has become so complicated that I'm stealing all the time and I'm not even aware of it. The commandment "don't steal" was formulated in an era when stealing meant breaking into somebody's house and snatching some gold coins or a goat or whatever, and it was easy to understand what I'm doing and what the potential consequences are for the owner of the gold coins or the goat.

But how do I steal today? Well, I have a pension fund, and ten thousand dollars out of my pension fund are invested in some big oil corporation or chemical corporation that brings profits of, say, four or five percent every year — a very good investment. And how does the corporation make such huge profits? For example, by dumping toxic waste into a river, polluting the entire water resources of the area and hurting the health of the local population and the wildlife and so forth. But the corporation is so rich that it can retain an army of lawyers that protects it against all lawsuits, and also a small brigade of people in the capital who block any attempt to pass stronger environmental regulations. Now, am I guilty of stealing a river? I'm not even aware that part of my pension fund is invested in this corporation, and even if I am aware, I don't know how the corporation makes its money. It would take me months, maybe years, to find out what my money is doing, and during that time I will be guilty of so many other crimes which I know nothing about.

Really, the problem is that our sense of morality, our sense of justice, like our other senses, evolved in the ancient African savannah, when you had just one pension fund, which was your kids, and you knew what your pension fund was doing — it was playing in the mud or something. So the problem is not agreeing on basic morality; the problem is understanding the extremely complicated chains of cause and effect in the world. And again, my fear is that maybe Homo sapiens is just not up to it: we have created such a complicated world that we are no longer able to make sense of what is happening. If I look at politics in the US — again, from the vantage point of a medievalist — Republicans and Democrats seem almost identical. I just don't understand what the difference is; maybe you can enlighten me on this. What's the big difference between them in their ethical view, in their view of the world? They have a big difference in their understanding of cause-and-effect relations, but when it comes down to basic values, I think the difference is not big. And again, the problem is that maybe we are no longer able — like the engineers you gave the talk to: they could all agree, yes, we should keep the Golden Rule, but then when they go to design some, I don't know, bridge or piece of software, they don't understand what the consequences of what they are doing will be. So how can they act morally without this understanding?

59:27
Friedman: Well, you just described why we need a free press. I think that's one role the free press really plays today. And again, the upside of this age of acceleration is that now an individual can go take a picture of that waste dumping by that factory, put it up on the internet, and it'll go around the world in thirty minutes — competing against funny cat videos, okay. But actually, if you're in my business, you'll find that if I take a picture of General Electric doing that and put it up in The New York Times, General Electric will stop doing that. I can assure you that will not compete with the cat videos. So there's an upside to all of this — I think, Yuval, we're playing a very useful function here; I'll do the upside.

When people ask me what I do for a living, I tell them I am a translator from English to English. That's what I do: I try to take complex things and break them down, first so I can understand them and then, hopefully, explain them to others. My motto I've adopted from Marie Curie, who once said: now is the time to understand more, so that we may fear less. And truly, I think good journalism, as practiced by The New York Times and many others, has never been more important — to understand more so people will fear less — because we now have a president who is actually in the fear business, backed up by a Pravda-like network called Fox television that's in the business of making people stupid. You put those two together and it's really dangerous. And the good news is what we're finding at The New York Times — you know, Donald Trump likes to call us the failing New York Times; I assure you we are anything but that today — because so many people are coming not just to The New York Times but to trusted news sites, because they want to understand more so they may fear less. And so many individuals now can go out and actually be citizen journalists like never before.

I would say the political side of that is this: if you want to be an optimist about America today, I tell people, stand on your head, because the country looks so much better from the bottom up than from the top down. As we go into this age of acceleration, national governments, with a few exceptions, are really too slow — certainly the big democracies are, because we're too tribalized, too partisanized now. They can't move at the pace of change, because government moves at the pace of trust, and there's no trust. The single individual, the single family, are way too weak against these forces. So I think it's the healthy community that is going to be the proper governing unit of the 21st century. And if you want to know what makes me an optimist about America, it's this. The cliché about America is that we're divided between two coasts, where everyone is pluralizing, diversifying, globalizing and modernizing, and in between them is flyover America, where everyone's high on opioids, voted for Trump and is waiting for 1950. That's the cliché. Well, you only have to be from Minnesota — you only have to be from flyover America — to know that is not true. America is actually a checkerboard today of communities that are collapsing from the bottom down and communities that are rising from the bottom up.

So I did a trip a year ago. I was invited to give a talk at our national lab at Oak Ridge, Tennessee. I got the map out: Oak Ridge, Tennessee — hey, it's down here at the southern tip of Appalachia. I hadn't been to Appalachia; I thought, I'll do a car trip across Appalachia — I'd been reading about all these people who voted for Trump. I started the trip in Austin, Indiana — southern Indiana, the northern tip of Appalachia. I had read about the town: 4,400 people, and 5 percent of the town is HIV positive, which is just the worst possible level of epidemic you can imagine. What was the story? Two factories in the town; one closed, the other got automated. A lot of white working-class men and women became unemployed very quickly, they couldn't adapt, and they fell into drug use — you had son, father, grandfather all shooting up together. It's a terrible story, and I went there to interview the one doctor in the town. Then I got in my car and drove forty minutes south on I-65 to Louisville, Kentucky. Louisville, Kentucky has 30,000 open jobs — anybody looking for a job? Louisville, Kentucky. So what's going on there?

Which organisms thrive when the climate changes? They're called complex adaptive organisms. And what's happening at the community level is that the communities that are rising are creating complex adaptive coalitions. What you see in Louisville — and I can show you communities all over the country with these complex adaptive coalitions — is the business community now plugging directly into the public school system, K-12, community college, four-year college, translating in real time their skills needs and demands, not waiting for the schools to figure it out. Then you have the philanthropic community coming in and supplementing it with scholarships, after-school programs, supplemental learning opportunities. Then you have the local government catalyzing it all and hiring global recruiters to go out into the world and find global investors for their local attributes. In the case of Louisville: Louisville happens to be the capital of bourbon tourism — Louisville is to bourbon what Napa Valley is to red wine — and there are now distilleries and bed-and-breakfasts across the city; they've created a tourism industry. Louisville happens to be the air hub of UPS, so you fly into Louisville Airport and all you see are factories everywhere, because when Jeff Bezos of Amazon.com says you'll get that product in twenty-four hours, it's because he's doing end-of-runway assembly and manufacturing now in Louisville. And Louisville is the headquarters of Humana, the wellness company, so the mayor has equipped any young person in the town who wants one with a cloud-connected inhaler — the city is trying to create citizen scientists — and kids go out in the morning, map the air quality in their neighborhood and feed it all into a website. The city has created a complex adaptive coalition, and this is happening all over the country. So we've got communities like Austin — the opioid crisis is real, and they're collapsing — but those where you get this leadership together are creating complex adaptive coalitions. Come to my hometown of Minneapolis: two and a half percent unemployment, really thriving. They're not waiting for Washington, D.C., because there's much higher trust there. My teacher Dov Seidman always says trust is the only legal performance-enhancing drug: where there's trust in the room you can go really fast, and where there's no trust, like in Washington, D.C. right now, you can't move two inches.
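
A note from the editor: the Louisville story Friedman tells is, at bottom, a small civic data pipeline — many citizen sensors, one shared neighborhood map. As a rough illustration of that idea only (nothing from the talk or from the actual Louisville program; the reading format and neighborhood names below are invented assumptions), here is a minimal sketch in Python:

# Minimal sketch of a citizen-science aggregation step (editor's illustration;
# the (neighborhood, PM2.5) reading format is an invented assumption).
from collections import defaultdict
from statistics import mean

def neighborhood_air_quality(readings):
    """readings: iterable of (neighborhood, pm25) tuples from citizen sensors.
    Returns {neighborhood: average PM2.5} - the shape a city dashboard needs."""
    by_hood = defaultdict(list)
    for hood, pm25 in readings:
        by_hood[hood].append(pm25)
    return {hood: round(mean(vals), 1) for hood, vals in by_hood.items()}

# Example: three kids' morning walks across two (hypothetical) neighborhoods.
sample = [("Shawnee", 14.2), ("Shawnee", 17.8), ("Highlands", 8.1),
          ("Highlands", 9.3), ("Shawnee", 16.0)]
print(neighborhood_air_quality(sample))   # {'Shawnee': 16.0, 'Highlands': 8.7}

The point is not the code but the coalition around it: the civic value comes from aggregation, which any city IT shop could stand up in an afternoon.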

66:30
Harari: So how do you make sense of this extremely complex and checkered reality? I mean, my job is much easier than yours, because as a historian I look mainly at the past, and at long periods — centuries and thousands of years — so the main trends jump out at you. But how do you manage to make sense of such a complicated and contradictory reality, and how do you know that you're not just following your biases and seeing what you want to see?

Friedman: It's a very good question. It's a combination of data — I can show you the employment statistics, the economies of these towns, and I can show you the proliferation of them — and then, obviously, reporting. Beyond that, anything is going to be a guess. If I look at the country, I see the national statistics of what's going on; to me the question — and this I can't answer, I can only report on what's happening — is what the balance between these two trends is. But I'm not a historian, I'm a journalist, and what I'm trying to do by highlighting the positive trend is this: I think one good example is worth a thousand theories, and people will follow examples when they see people like them doing it. So my idealism is to say: here's what's working, and these people are just like you, so you can do it just like them. The Israeli general Uzi Dayan once said to me, "Tom, I know why you're an optimist." I said, "Why?" He said, "It's because you're short." I said, "I'm not that short." He said, "You can only see the part of the glass that's half-full." I'm actually not that short, but I do believe what Amory Lovins — the physicist who helped me with all the physics in my book, Yuval — likes to say. When people ask, "Amory, are you an optimist or a pessimist?" he says, "I'm neither, because they're just two different forms of fatalism: everything will be great, everything will be awful. I believe in applied hope." I don't know if it's going to work, but I believe in applied hope.

68:49
Dry: I'm very interested in how you, Yuval, have interrogated your optimism — and optimism, of course, would be the natural note to end on, but I want to hear a tiny bit more about your pessimism, and hopefully we can all think about how to walk out of here holding both of those ideas in our minds. You wrote in Sapiens that there is no proof that human well-being inevitably improves as history rolls along — just a cheery thought for all of us as we wind down our time together. I wonder if you could help us think about that, given what you've discussed this evening and Tom's very convincing, data-rich argument that when you're doing yoga and standing on your head, you really can see the roots of communities pulling together even in this disorienting moment. So help us leave here both pessimists and optimists.

Harari: Well, I try not to think in terms of pessimism and optimism. History just doesn't unfold in such a way. Usually you have terrible things and wonderful things happening at the same time — maybe in different places, but at the same time, and usually from the same revolution, the same development. It's very rare to have a big revolution in history which does only good, or which does only bad. And of course you have the added problem that those who lose the most, those who go extinct, those who disappear, are not there to tell their story. So in history there is always a certain bias towards the optimistic side: here we are, so it couldn't have been that bad. The people for whom it was very bad are just not here.

And also, as somebody who tries to see the big picture, the global picture, there is always the danger that you are going to notice the agenda and the opinions and the interests of the hegemonic powers — of the more powerful people and societies and classes — because they dominate the conversation. Even if you oppose them, even if you think they're wrong, you're not going to miss their ideas; you might object to their ideas, you might fight against them, but you're not going to ignore them. The problem of the people who are pushed to the side or pushed down is that they are very often simply ignored — not that you don't agree with what they say, not that you think their interests don't count; you just don't remember to even notice their point of view or their interests. So the question of pessimism and optimism is always also a question of: who are you talking about?

I think one of the main problems in talking about the global agenda, or the problems of humanity — the kind of thing that I try to do — is that maybe there is no single future for the whole of humankind. Maybe the basic understanding of the world is that different groups are going to have very different futures. I mentioned earlier the question of what to teach your kids. If you live in one place and belong to a particular community or a particular group, you teach your kids to be resilient, you teach your kids computer code, you teach your kids to play the violin. And if you live in another place, maybe not very far away, the best thing to teach your kids is how to shoot a Kalashnikov. And it's happening on the same planet at the same time. Which is more true, or which is more important? It's kind of an empty question; it really boils down to a question of perspective. So this, I think, is a kind of historical law, or historical truth: there is never just a single story going around. And part of the responsibility, part of the difficulty, of being a journalist or being a historian is: how do you bring at least some justice to this situation, and how do you give at least some attention to all the different viewpoints, and not just to the dominant one?

74:01
Dry: Before we close, Yuval, just talk a little bit about your next book and give us a little tease — I want to hear it.

Harari: I'm going to be very sad for a second, and then I'll do my tease. My next book is coming in August–September. It's called 21 Lessons for the 21st Century, but it's not really a book of concrete lessons — do this, go there, whatever. It's more an invitation to take part in the major debates and discussions of the world at the current moment. Continuing what I said earlier, I think one of the problems most people today face is that they just don't have the time and the energy to be part of the global debate, the debate about the future of humanity. There are all these big questions of climate change and artificial intelligence and bioengineering, and they are going to have an impact on the life of every single individual on the planet, but most people are too busy going to work and feeding their kids and taking care of elderly parents and so forth. It's a luxury to be able to think about these issues, to investigate them, to engage in the debate. And one of the problems, again, with history is that history never makes any concessions and never gives any discounts just because you're in difficulty, or just because you're poor, or just because you're too busy taking care of your kids. If you don't have the time and the energy — really, the luxury — to be part of the debate, it doesn't mean that you won't suffer from the consequences. In this sense history is completely unfair. So I see my job as a historian as trying to help at least a few more people take part in the debate, and this is the main purpose of the coming book.

Friedman: I guess I see my job as, obviously, reporting whatever situation I'm assigned to report on, but I am always looking for examples of what's working and sharing them with people, because I think there's a power in that. That's my version of idealism; it's why I went into journalism. Young people often come to me and say, "I want to do what you do — what do I need to know?" And I say, "You'd better type fast." I can type real fast — I actually went to a London secretarial school to learn how to type, back in my day. But I think the most important thing you need as a journalist today is to be a good listener, and for two reasons — and the second reason is more important than the first. The first is what you learn when you listen. The second is what you say when you listen: listening is a sign of respect. The method to my madness, if you travel with me, is that I really do try to listen to people — whether I'm a little Jewish guy from Minnesota in the Arab world, or in Russia, or here — because I find that if you just listen to people, it's amazing what they'll let you say back. And if you don't listen to them, it's amazing — you cannot even tell them it's dark outside.

77:56
That's why I've often said that before I retire I'm going to change my business card. It now says "Thomas L. Friedman, New York Times foreign affairs columnist," and I want to change it to "Thomas L. Friedman, New York Times humiliation and dignity correspondent," because I've basically spent my whole career covering people acting out on their humiliation — whether it's in the Middle East (we all know the stories there), or Russians feeling humiliated, or Chinese questing for dignity. But I may also add "diversity correspondent," and that's where I would end, Rachel. As a columnist, sometimes you're in the right place at the right time and sometimes you're in the wrong place at the wrong time, especially when you're a once-a-week columnist, as I am now. Last summer the head of the US Air Force invited me to join him on a tour of all America's air bases in the Middle East — a great opportunity to see that perspective of the world from inside the military. And I found myself at Al Udeid Air Base in Qatar the night Donald Trump was giving his press conference about the Charlottesville disturbances, talking about how there were good white supremacists and bad white supremacists — and that's all the world, or at least America, was talking about. There I was at Al Udeid Air Base in Qatar, my column due in a few hours, staring at a blank screen, thinking about what to write. And then it just popped into my head. I looked around at my traveling party: the head of the US Air Force, Dave Goldfein — he's Jewish. We were traveling with the US Air Force Secretary — she's a woman, Heather Wilson. Her executive officer is an African-American woman, an Air Force lieutenant colonel. Their bodyguard's name was Juan. The head of the air base was an Armenian-American, his deputy was a Lebanese-American, and our intelligence briefer's name was Yang. Mr. Trump, which part of this sentence don't you understand? That is the real strength of America: our ability to make out of many, one. In a world where we're all getting so mixed up, I believe that virtue, that strength, is more important for every society than ever. And so I pray this man will be a one-term president, because we can take four years of him; we cannot take eight years of him — he will destroy institutions in eight years. But I know that underneath there's still a really powerful idea of America and of diversity out there, one that I think even Donald Trump cannot crush. And that's why I —

80:59
Harari: Is it shared also by the average Trump voter? I mean, are you able also to listen to them?

Friedman: I don't think there is an average Trump voter; I think people came to him for so many reasons. Some came because they were humiliated: Hillary Clinton said you're deplorable? I'm deplorable? Then I'm going to wear a T-shirt that says "I'm a deplorable," okay? Some came because of things you've talked about, Yuval — they want a wall to stop the pace of change. Some came for many other reasons. But my way of approaching them — because I'm a Wednesday columnist, which means I write on Tuesday for Wednesday, which means I have the first column after every election — actually, the week before he won, I wrote my last column before the election, and it was addressed to Trump voters, and it began, "Dear fellow Americans." Treat people with respect: it's amazing, if you start there, how much you can peel back. Just listen to people. We have so many people broadcasting now and not listening, particularly in politics, and I think that, truly, is the source of my optimism.

Dry: I don't feel we should go too deep into 2016. Yuval?

Harari: Two comments about it, actually. One: I think the Trump voters are still the future of America. If you don't have them, then America is going nowhere, so if you need to be optimistic about something, you need to be optimistic about them as well — they are people you could take somewhere with a different message. Not all, but many of them. And secondly, I would say about journalism: I agree that it is immensely important, especially today, especially for the viability of liberal democracies. Because, you know, democracy is to some extent based on Lincoln's maxim that you can fool some of the people all of the time, and all of the people some of the time, but you cannot fool all of the people all of the time. And this is really just wishful thinking. You can fool people — not for eternity, nothing is for eternity, but you can fool all the people for a very, very long time, and the way to do it is to control the information they get. The basic idea of democracy is: okay, we elect a bunch of people to govern the country, and if they do a bad job, if they fail, then sooner or later enough people will realize it and they will change the government. And this works fine as long as you have a free press and free journalism. If the government controls the media in some way or other, directly or indirectly — if it controls journalism — then it can always blame somebody else for its failures, it can always direct attention towards all kinds of enemies, real or imaginary, and there will never be a day of reckoning. So in this sense there is no future to democracy without strong and free journalism.

Dry: I was going to say, on behalf of The New York Times, that a rousing defense of a strong and free press works in very nicely to remind you that what we heard this evening — what a luxury, as Yuval called it, to engage in this debate, and to listen, as Tom described — is so important as we figure out and make our way toward the future. We are going to call it an evening here. I want to thank all of you for joining us, to thank The New York Times and how to: Academy for putting on this event, and please, of course, thank Yuval Harari and Thomas Friedman.

[Applause]
[Music]
[Applause]


Watch – Why fascism is so tempting — and how your data could power it | Yuval Noah Harari

Why fascism is so tempting — and how your data could power it | Yuval Noah Harari

https://youtu.be/xHHb7R3kx40

Politics becomes the struggle to control the flows of data.
And dictatorship now means that too much data is being concentrated in the hands of the government or of a small elite.
The greatest danger that now faces liberal democracy is that the revolution in information technology will make dictatorships more efficient than democracies.
In the 20th century, democracy and capitalism defeated fascism and communism because democracy was better at processing data and making decisions. Given 20th-century technology, it was simply inefficient to try and concentrate too much data and too much power in one place.
But it is not a law of nature that centralized data processing is always less efficient than distributed data processing. With the rise of artificial intelligence and machine learning, it might become feasible to process enormous amounts of information very efficiently in one place, to take all the decisions in one place, and then centralized data processing will be more efficient than distributed data processing.
And then the main handicap of authoritarian regimes in the 20th century — their attempt to concentrate all the information in one place — will become their greatest advantage.
Another technological danger that threatens the future of democracy is the merger of information technology with biotechnology, which might result in the creation of algorithms that know me better than I know myself. And once you have such algorithms, an external system, like the government, cannot just predict my decisions, it can also manipulate my feelings, my emotions. A dictator may not be able to provide me with good health care, but he will be able to make me love him and to make me hate the opposition. Democracy will find it difficult to survive such a development because, in the end, democracy is not based on human rationality; it's based on human feelings. During elections and referendums, you're not being asked, "What do you think?" You're actually being asked, "How do you feel?" And if somebody can manipulate your emotions effectively, democracy will become an emotional puppet show.
So what can we do to prevent the return of fascism and the rise of new dictatorships? The number one question that we face is: Who controls the data? If you are an engineer, then find ways to prevent too much data from being concentrated in too few hands. And find ways to make sure the distributed data processing is at least as efficient as centralized data processing. This will be the best safeguard for democracy. As for the rest of us who are not engineers, the number one question facing us is how not to allow ourselves to be manipulated by those who control the data.
The enemies of liberal democracy, they have a method. They hack our feelings. Not our emails, not our bank accounts — they hack our feelings of fear and hate and vanity, and then use these feelings to polarize and destroy democracy from within. This is actually a method that Silicon Valley pioneered in order to sell us products. But now, the enemies of democracy are using this very method to sell us fear and hate and vanity. They cannot create these feelings out of nothing. So they get to know our own preexisting weaknesses. And then use them against us. And it is therefore the responsibility of all of us to get to know our weaknesses and make sure that they do not become a weapon in the hands of the enemies of democracy.
in the hands of the enemies of democracy.
Entire Transcript

Hello, everyone. It's a bit funny, because I did write that humans will become digital, but I didn't think it will happen so fast and that it will happen to me. But here I am, as a digital avatar, and here you are, so let's start.

And let's start with a question. How many fascists are there in the audience today? (Laughter) Well, it's a bit difficult to say, because we've forgotten what fascism is. People now use the term "fascist" as a kind of general-purpose abuse. Or they confuse fascism with nationalism. So let's take a few minutes to clarify what fascism actually is, and how it is different from nationalism.

The milder forms of nationalism have been among the most benevolent of human creations. Nations are communities of millions of strangers who don't really know each other. For example, I don't know the eight million people who share my Israeli citizenship. But thanks to nationalism, we can all care about one another and cooperate effectively. This is very good. Some people, like John Lennon, imagine that without nationalism, the world will be a peaceful paradise. But far more likely, without nationalism, we would have been living in tribal chaos. If you look today at the most prosperous and peaceful countries in the world, countries like Sweden and Switzerland and Japan, you will see that they have a very strong sense of nationalism. In contrast, countries that lack a strong sense of nationalism, like Congo and Somalia and Afghanistan, tend to be violent and poor.

So what is fascism, and how is it different from nationalism? Well, nationalism tells me that my nation is unique, and that I have special obligations towards my nation. Fascism, in contrast, tells me that my nation is supreme, and that I have exclusive obligations towards it. I don't need to care about anybody or anything other than my nation. Usually, of course, people have many identities and loyalties to different groups. For example, I can be a good patriot, loyal to my country, and at the same time, be loyal to my family, my neighborhood, my profession, humankind as a whole, truth and beauty. Of course, when I have different identities and loyalties, it sometimes creates conflicts and complications. But, well, who ever told you that life was easy? Life is complicated. Deal with it.

Fascism is what happens when people try to ignore the complications and to make life too easy for themselves. Fascism denies all identities except the national identity and insists that I have obligations only towards my nation. If my nation demands that I sacrifice my family, then I will sacrifice my family. If the nation demands that I kill millions of people, then I will kill millions of people. And if my nation demands that I betray truth and beauty, then I should betray truth and beauty.

For example, how does a fascist evaluate art? How does a fascist decide whether a movie is a good movie or a bad movie? Well, it's very, very, very simple. There is really just one yardstick: if the movie serves the interests of the nation, it's a good movie; if the movie doesn't serve the interests of the nation, it's a bad movie. That's it. Similarly, how does a fascist decide what to teach kids in school? Again, it's very simple. There is just one yardstick: you teach the kids whatever serves the interests of the nation. The truth doesn't matter at all.

Now, the horrors of the Second World War and of the Holocaust remind us of the terrible consequences of this way of thinking. But usually, when we talk about the ills of fascism, we do so in an ineffective way, because we tend to depict fascism as a hideous monster, without really explaining what was so seductive about it. It's a bit like these Hollywood movies that depict the bad guys — Voldemort or Sauron or Darth Vader — as ugly and mean and cruel. They're cruel even to their own supporters. When I see these movies, I never understand — why would anybody be tempted to follow a disgusting creep like Voldemort? The problem with evil is that in real life, evil doesn't necessarily look ugly. It can look very beautiful. This is something that Christianity knew very well, which is why in Christian art, as [opposed to] Hollywood, Satan is usually depicted as a gorgeous hunk. This is why it's so difficult to resist the temptations of Satan, and why it is also difficult to resist the temptations of fascism.

Fascism makes people see themselves as belonging to the most beautiful and most important thing in the world — the nation. And then people think, "Well, they taught us that fascism is ugly. But when I look in the mirror, I see something very beautiful, so I can't be a fascist, right?" Wrong. That's the problem with fascism. When you look in the fascist mirror, you see yourself as far more beautiful than you really are. In the 1930s, when Germans looked in the fascist mirror, they saw Germany as the most beautiful thing in the world. If today, Russians look in the fascist mirror, they will see Russia as the most beautiful thing in the world. And if Israelis look in the fascist mirror, they will see Israel as the most beautiful thing in the world.

This does not mean that we are now facing a rerun of the 1930s. Fascism and dictatorships might come back, but they will come back in a new form, a form which is much more relevant to the new technological realities of the 21st century. In ancient times, land was the most important asset in the world. Politics, therefore, was the struggle to control land. And dictatorship meant that all the land was owned by a single ruler or by a small oligarchy. And in the modern age, machines became more important than land. Politics became the struggle to control the machines. And dictatorship meant that too many of the machines became concentrated in the hands of the government or of a small elite. Now data is replacing both land and machines as the most important asset. Politics becomes the struggle to control the flows of data. And dictatorship now means that too much data is being concentrated in the hands of the government or of a small elite.

The greatest danger that now faces liberal democracy is that the revolution in information technology will make dictatorships more efficient than democracies. In the 20th century, democracy and capitalism defeated fascism and communism because democracy was better at processing data and making decisions. Given 20th-century technology, it was simply inefficient to try and concentrate too much data and too much power in one place. But it is not a law of nature that centralized data processing is always less efficient than distributed data processing. With the rise of artificial intelligence and machine learning, it might become feasible to process enormous amounts of information very efficiently in one place, to take all the decisions in one place, and then centralized data processing will be more efficient than distributed data processing. And then the main handicap of authoritarian regimes in the 20th century — their attempt to concentrate all the information in one place — will become their greatest advantage.

Another technological danger that threatens the future of democracy is the merger of information technology with biotechnology, which might result in the creation of algorithms that know me better than I know myself. And once you have such algorithms, an external system, like the government, cannot just predict my decisions, it can also manipulate my feelings, my emotions. A dictator may not be able to provide me with good health care, but he will be able to make me love him and to make me hate the opposition. Democracy will find it difficult to survive such a development because, in the end, democracy is not based on human rationality; it's based on human feelings. During elections and referendums, you're not being asked, "What do you think?" You're actually being asked, "How do you feel?" And if somebody can manipulate your emotions effectively, democracy will become an emotional puppet show.

So what can we do to prevent the return of fascism and the rise of new dictatorships? The number one question that we face is: Who controls the data? If you are an engineer, then find ways to prevent too much data from being concentrated in too few hands. And find ways to make sure the distributed data processing is at least as efficient as centralized data processing. This will be the best safeguard for democracy. As for the rest of us who are not engineers, the number one question facing us is how not to allow ourselves to be manipulated by those who control the data.

The enemies of liberal democracy, they have a method. They hack our feelings. Not our emails, not our bank accounts — they hack our feelings of fear and hate and vanity, and then use these feelings to polarize and destroy democracy from within. This is actually a method that Silicon Valley pioneered in order to sell us products. But now, the enemies of democracy are using this very method to sell us fear and hate and vanity. They cannot create these feelings out of nothing. So they get to know our own preexisting weaknesses. And then use them against us. And it is therefore the responsibility of all of us to get to know our weaknesses and make sure that they do not become a weapon in the hands of the enemies of democracy.

Getting to know our own weaknesses will also help us to avoid the trap of the fascist mirror. As we explained earlier, fascism exploits our vanity. It makes us see ourselves as far more beautiful than we really are. This is the seduction. But if you really know yourself, you will not fall for this kind of flattery. If somebody puts a mirror in front of your eyes that hides all your ugly bits and makes you see yourself as far more beautiful and far more important than you really are, just break that mirror. Thank you. (Applause)

Chris Anderson: Yuval, thank you. Goodness me. It's so nice to see you again. So, if I understand you right, you're alerting us to two big dangers here. One is the possible resurgence of a seductive form of fascism, but close to that, dictatorships that may not exactly be fascistic, but control all the data. I wonder if there's a third concern that some people here have already expressed, which is where, not governments, but big corporations control all our data. What do you call that, and how worried should we be about that?

Yuval Noah Harari: Well, in the end, there isn't such a big difference between the corporations and the governments, because, as I said, the question is: Who controls the data? This is the real government. If you call it a corporation or a government — if it's a corporation and it really controls the data, this is our real government. So the difference is more apparent than real.

CA: But somehow, at least with corporations, you can imagine market mechanisms where they can be taken down. I mean, if consumers just decide that the company is no longer operating in their interest, it does open the door to another market. It seems easier to imagine that than, say, citizens rising up and taking down a government that is in control of everything.

YNH: Well, we are not there yet, but again, if a corporation really knows you better than you know yourself — at least that it can manipulate your own deepest emotions and desires, and you won't even realize — you will think this is your authentic self. So in theory, yes, in theory, you can rise against a corporation, just as, in theory, you can rise against a dictatorship. But in practice, it is extremely difficult.

CA: So in "Homo Deus," you argue that this would be the century when humans kind of became gods, either through development of artificial intelligence or through genetic engineering. Has this prospect of political-system shift, or collapse, impacted your view on that possibility?

YNH: Well, I think it makes it even more likely, and more likely that it will happen faster, because in times of crisis, people are willing to take risks that they wouldn't otherwise take. And people are willing to try all kinds of high-risk, high-gain technologies. So these kinds of crises might serve the same function as the two world wars in the 20th century. The two world wars greatly accelerated the development of new and dangerous technologies. And the same thing might happen in the 21st century. I mean, you need to be a little crazy to run too fast, let's say, with genetic engineering. But now you have more and more crazy people in charge of different countries in the world, so the chances are getting higher, not lower.

CA: So, putting it all together, Yuval, you've got this unique vision. Roll the clock forward 30 years. What's your guess — does humanity just somehow scrape through, look back and say, "Wow, that was a close thing. We did it!" Or not?

YNH: So far, we've managed to overcome all the previous crises. And especially if you look at liberal democracy and you think things are bad now, just remember how much worse things looked in 1938 or in 1968. So this is really nothing, this is just a small crisis. But you can never know, because, as a historian, I know that you should never underestimate human stupidity. (Laughter) (Applause) It is one of the most powerful forces that shape history.

CA: Yuval, it's been an absolute delight to have you with us. Thank you for making the virtual trip. Have a great evening there in Tel Aviv. Yuval Harari!

YNH: Thank you very much. (Applause)

Watch – Will AI Enhance or Hack Humanity? – Fei-Fei Li & Yuval Noah Harari in Conversation with Nicholas Thompson

Will AI Enhance or Hack Humanity? – Fei-Fei Li & Yuval Noah Harari in Conversation with Nicholas Thompson

https://www.wired.com/video/watch/will-artificial-intelligence-enhance-of-hack-humanity#intcid=_wired-video-watch-page-playlist-default_5e728c94-7aaf-4f7a-b4f4-57a928172776_cral2-2

In a discussion that covers ethics in technology, hacking humans, free will, and how to avoid potential dystopian scenarios, historian and philosopher Yuval Noah Harari speaks with Fei-Fei Li, renowned computer scientist and Co-Director of Stanford University’s Human-Centered AI Institute — in a conversation moderated by Nicholas Thompson, WIRED’s Editor-in-Chief.

Transcript

My name is Rob Reich, I’m delighted to welcome you here to Stanford University for an evening of conversation with Yuval Harari, Fei-Fei Li, and Nick Thompson.

I’m a professor of political science here

and the Faculty Director of

the Stanford Center for Ethics and Society,

which is a co-sponsor of tonight’s event,

along with the Stanford Institute

for Human Centered Artificial Intelligence

and the Stanford Humanities Center.

Our topic tonight is a big one.

We’re going to be thinking together

about the promises and perils of artificial intelligence.

The technology quickly reshaping our economic,

social, and political worlds, for better or for worse.

The questions raised by the emergence of AI

are by now familiar, at least to many people

here in Silicon Valley but, I think it’s fair

to say that their importance is only growing.

What will the future of work look like

when millions of jobs can be automated?

Are we doomed or perhaps blessed to live in a world

where algorithms make decisions instead of humans?

And these are smaller questions in the big scheme of things.

What, might you ask you’re the large ones?

Well, here are three.

What will become of the human species

if machine intelligence approaches

or exceeds that of an ordinary human being?

As a technology that currently relies

on massive centralized pools of data,

does AI favor authoritarian centralized governments

over more decentralized democratic governance?

And are we at the start now of an AI arms race?

And what will happen if powerful systems of AI,

especially when deployed for purposes

like facial recognition, are in the hands

of authoritarian rulers?

These challenges only scratch the surface when it comes

to fully wrestling with the implications of AI,

as the technology continues to improve

and its use cases continue to multiply.

I want to mention the format of the evening event.

First, given the vast areas of expertise

that Yuval and Fei-Fei have,

when you ask questions via Slido,

those questions should pertain

or be limited to the topics under discussion tonight.

So, this web interface that we’re using,

Slido allows people to upvote and downvote questions.

So, you can see them now if you have

an internet communication device.

If you don’t have one, you can take one of these postcards,

which hopefully you got outside

and on the back you can fill in a question you might have

about the evening event; it will be collected at the end,

and the Stanford Humanities Center

will try to foster some type of conversation

on the basis of those questions.

Couple housekeeping things,

if you didn’t purchase one already,

Yuval’s books are available for sale

outside in the lobby after the event.

A reminder to please turn your cell phone ringers off.

And we will have 90 minutes

for our moderated conversation here

and will end sharply after 90 minutes.

Now, I’m going to leave the stage in just a minute

and allow a really amazing undergraduate student

here at Stanford to introduce our guests.

Her name is Anna-Sofia Lesiv,

let me just tell you a bit about her.

She’s a junior here at Stanford majoring in Economics

with a minor in Computer Science

and outside the classroom, Anna-Sofia is a journalist

whose work has been featured in The Globe and Mail,

Al Jazeera, The Mercury News, The Seattle Times,

and this campus's paper of record, The Stanford Daily.

She’s currently the Executive Editor of The Daily

and her Daily magazine article

from earlier in the year called CS Plus Ethics,

examined the history of computer science

and ethics education at Stanford

and it won the student prize for best journalism of 2018.

She continues to publish probing examinations

of the ethical challenges faced by technologists here

and elsewhere so, ladies and gentlemen

I invite you to remember this name

for you’ll be reading about her

or reading her articles, or likely both,

please welcome Stanford junior, Anna-Sofia Lesiv.

[audience clapping]

Thank you very much for the introduction, Rob.

Well it’s my great honor now,

to introduce our three guests tonight,

Yuval Noah Harari, Fei-Fei Li, and Nicholas Thompson.

Professor Yuval Noah Harari is a historian,

futurist, philosopher, and professor at Hebrew University.

The world also knows him for authoring some of

the most ambitious and influential books of our decade.

Professor Harari’s internationally best-selling books,

which have sold millions of copies worldwide,

have covered a dizzying array of subject matter

from narrativizing the entire history

of the human race in Sapiens,

to predicting the future awaiting humanity,

and even coining a new faith called Dataism, in Homo Deus.

Professor Harari has become a beloved figure

in Silicon Valley, whose readings are assigned

in Stanford’s classrooms and whose name

is whispered through the hallways

of the comparative literature

and computer science departments, alike.

His most recent book is 21 Lessons for the 21st Century,

which focuses on the technological,

social, political, and ecological challenges

of the present moment.

In this work, Harari cautions

that as technological breakthroughs

continue to accelerate, we will have less

and less time to reflect upon the meaning

and consequences of the changes they bring.

And this urgency is what charges

Professor Fei-Fei Li’s work everyday,

in her role as the Co-Director of Stanford’s

Human-Centered AI Institute.

This institute is one of the first

to insist that AI is not merely the domain of technologists

but a fundamentally interdisciplinary

and ultimately human issue.

Her fascination with the fundamental questions

of human intelligence is what piqued her interest

in neuroscience, as she eventually became

one of the world’s greatest experts

in the fields of computer vision, machine learning,

and cognitive and computational neuroscience.

She’s published over a hundred scientific articles

in leading journals and has had research supported

by the National Science Foundation, Microsoft,

and the Sloan Foundation.

From 2013 to 2018, Professor Fei-Fei Li served as

the Director of Stanford’s AI lab

and between January, 2017 and September, 2018,

Professor Fei-Fei Li served as Vice President at Google

and Chief Scientist of AI and Machine Learning

at Google Cloud.

Nicholas Thompson is the Editor-In-Chief of Wired magazine,

a position he’s held since January, 2017.

Under Mr. Thompson’s leadership,

the topic of artificial intelligence

has come to hold a special place at the magazine.

Not only has Wired assigned more feature stories

on AI than on any other subject,

but it is the only specific topic

with a full-time reporter assigned to it.

It’s no wonder then, that Professors Harari

and Li are no strangers to its pages.

Mr. Thompson has led discussions

with the world’s leaders in technology and AI,

including Mark Zuckerberg on Facebook and Privacy,

French President, Emmanuel Macron on France’s AI strategy,

and Ray Kurzweil on the ethics and limits of AI.

Mr. Thompson is a Stanford University graduate

who earned his BA, double majoring

in earth systems and political science

and impressively even completed a third degree in economics.

Of course, I would be remiss if I did not mention

that Mr. Thompson cut his journalistic teeth

in the opinions section of the Stanford Daily so,

Nick, that makes two of us.

Like all our guests today, I’m at once fascinated

and worried by the challenges

that artificial intelligence poses for our society.

One of my goals at Stanford has been

to write about and document the challenge

of educating a generation of students whose lives

and workplaces, will eventually be transformed by AI.

Most recently, I published an article

called Complacent Valley, with the Stanford Daily.

In it I critiqued our propensity

to become overly comfortable with the technological

and financial achievements that Silicon Valley

has already reached, lest we become complacent

and lose our ambition and momentum

to tackle the greater challenges the world has in store.

Answering the fundamental questions

of what we should spend our time on,

how we should live our lives,

has become much more difficult,

particularly on the doorstep of the AI revolution.

I believe that the kind of crisis of agency

that Author JD Vance wrote of in Hillbilly Elegy,

for example, is not confined to Appalachia

or the de-industrialized Midwest

but is emerging even at elite institutions like Stanford.

So conversations like ours this evening,

hosting speakers that aim to re-center

the individual at the heart of AI,

will show us how to take responsibility

in a moment when most decisions

can seemingly be made for us, by algorithms.

There are no narratives to guide us through a future

with AI, no ancient myths or stories

that we may rely on to tell us what to do.

At a time when Humanity is facing

its greatest challenge yet,

somehow we could not be more at a loss for ideas or direction.

It’s this momentous crossroads in human history

that pulls me towards journalism and writing in the future.

And it’s why I’m so eager to hear

our three guests discuss exactly such a future, tonight.

So, please join me in giving them

a very warm welcome this evening.

[audience applause]

Wow, thank you so much Anna-Sofia, thank you, Rob.

Thank you, Stanford for inviting us all here.

I’m having a flashback to the last time

I was on a stage at Stanford,

which was playing guitar at the CoHo

and I didn’t have either Yuval or Fei-Fei with me

so, there were about six people in the audience,

one of whom had her headphones on but, I did meet my wife.

[audience croons] Isn’t that sweet?

All right so, a reminder, housekeeping,

questions are going to come in, in Slido.

You can put them in, you can vote up questions,

we’ve already got several thousand

so please vote up the ones you really like.

If someone can program an AI that can get

a really devastating question in

and stump Yuval, I will get you

a free subscription to Wired.

[audience laughs]

I want this conversation to kind of have three parts.

First, lay out where we are,

then talk about some of the choices

we have to make now, and last talk about some advice

for all the wonderful people in the halls.

So, those are the three general areas,

I’ll feed in questions as we go.

We may have a specific period for questions

at the end but, let’s get cracking.

Yuval.

[Yuval] Yeah.

So, the last time we talked you said many,

many brilliant things but one that stuck out,

it was a line where you said,

We are not just in a technological crisis,

we are in a philosophical crisis.

So, explain what you meant, explain how it ties to AI,

and let’s get going with a note of existential angst.

[all laughing]

Yes, I think what’s happening now

is that the philosophical framework of the modern world

that was established in the 17th and 18th centuries,

around ideas like human agency and individual free will,

is being challenged like never before.

Not by philosophical ideas but by practical technologies.

And we see more and more questions,

which used to be, you know, the bread and butter

of the philosophy department, being moved

to the engineering department.

And that’s scary, partly because, unlike philosophers,

who are extremely patient people,

they can discuss something for thousands of years

without reaching any agreement and they are fine with that,

[light audience laughter] the engineers won’t wait

and even if the engineers are willing to wait,

the investors behind the engineers, won’t wait.

So, it means that we don’t have a lot of time

and in order to encapsulate what the crisis is,

I know that, you know engineers,

especially in a place like Silicon Valley,

they like equations so, maybe I

can try to formulate an equation [laughing]

to explain what’s happening.

And the equation is: B times C times D equals HH.

Which means, biological knowledge

multiplied by computing power multiplied by data

equals the ability to hack humans.
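
Written out as the formula Harari is gesturing at, with HH standing for the ability to hack humans as he defines it above:

```latex
% Harari's shorthand from the conversation above:
% B  = biological knowledge
% C  = computing power
% D  = data
% HH = the ability to hack humans
B \times C \times D = HH
```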

And the AI revolution, or crisis, is not just AI,

it’s also biology, it’s biotech.

We haven’t seen anything yet

because the link is not complete.

There is a lot of hype now around AI in computers

but that is just half the story.

The other half is the biological knowledge

coming from brain science and biology,

and once you link that to AI,

what you get is the ability to hack humans.

And maybe I’ll explain what it means,

the ability to hack humans: to create an algorithm

that understands me better than I understand myself

and can therefore manipulate me, enhance me, or replace me.

And this is something that our philosophical baggage

and all our belief in, you know, human agency,

and free will, and the customer is always right,

and the voter knows best, this just falls apart

once you have this kind of ability.

Once you have this kind of ability

and it’s used to manipulate or replace you,

not if it’s used to enhance you?

Also when it’s used to enhance you,

the question is, who decides what is a good enhancement

and what is a bad enhancement.

So, our immediate fallback position

is to fall back on the traditional humanist ideas

that the customer is always right,

the customers will choose the enhancement,

or the voter is always right.

The voters will vote.

There will be a political decision about enhancement,

or if it feels good, do it.

We’ll just follow our heart, we’ll just listen to ourselves.

None of this works when there is a technology

to hack human on a large scale.

You can’t trust your feelings,

or the voters, or the customers on that.

The easiest people to manipulate

are the people who believe in free will

because they think they cannot be manipulated.

So, how do you decide what to enhance?

And this is a very deep ethical and philosophical question

that philosophers have been debating

for thousands of years.

What is good?

What are the good qualities we need to enhance?

So, if you can’t trust the customer,

if you can’t trust the voter,

if you can’t trust your feelings, who do you trust?

What do you go by?

All right Fei-Fei, you have a PhD,

you have a CS degree, you’re Professor at Stanford.

Does B times C times D equal HH? [laughing]

Is Yuval's theory the right way

to look at where we’re headed?

Wow, what a beginning, thank you, Yuval.

Well, one of the things, I’ve been reading Yuval’s book

for the past couple of years, and talking to you,

and I’m very envious of philosophers now,

because they can propose questions

and crises but they don't have to answer them.

[laughing loudly]

Now, as an engineer and scientist,

I feel like we have to now solve the crisis.

So, honestly I think I’m very thankful.

I mean, personally I’ve been reading your book

for two years and I’m very thankful

that Yuval, among other people,

have opened up this really important question

for us and it’s also quite a…

When you said the AI crisis

and I was sitting there thinking,

this is a field I loved, and felt passionate about,

and researched for 20 years,

and that was just a scientific curiosity

of a young scientist entering a PhD in AI.

What happened, that 20 years later, it has become a crisis?

And it actually speaks to the evolution of AI

that got me where I am today

and got my colleagues at Stanford where we are today

with Human-Centered AI,

is that this is a transformative technology.

It’s a nascent technology, it’s still a budding science

compared to physics, chemistry, biology but,

with the power of data, computing,

and the kind of diverse impact AI is making,

it is, like you said, touching human lives

and business in broad and deep ways.

And responding to that kind of questions

and crises that are facing humanity,

I think one of the proposed solutions,

or if not a solution at least an attempt

that Stanford is making an effort about,

is can we reframe the education,

the research, and the dialogue of AI

and technology in general, in a human centered way.

We’re not necessarily gonna find solution today but,

can we involve the humanists, the philosophers,

the historians, the political scientists,

the economists, the ethicists, the legal scholars,

the neuroscientists, the psychologists,

and many more other disciplines,

into the study and development of AI

in the next chapter, in the next phase.

Don’t be so certain we’re not gonna get an answer today.

I’ve got two of the smartest people in the world

glued to their chairs and I’ve got Slido

for 72 minutes so, let’s give it a shot.

But he said we have thousands of years.

[all laughing]

But let me go a little bit further in Yuval’s questions.

So, there are lots, or Yuval’s opening statement,

there are a lot of crises about AI

that people talk about, right?

They talk about AI becoming conscious

and what will that mean,

they talk about job displacement,

They talk about biases. But Yuval has very clearly laid out

what he thinks is the most important one,

which is the combination of biology plus

computing plus data leading to hacking.

He’s laid out a very specific concern.

Is that specific concern, what people

who are thinking about AI should be focused on?

So, absolutely.

So, any technology humanity has created,

starting from fire, is a double-edged sword.

So, it can bring improvements to life and to work

and to society, but it can bring perils,

and AI has perils, you know?

I wake up every day worried

about the diversity and inclusion issue in AI.

We worry about fairness or the lack of fairness,

privacy, the labor market so,

absolutely we need to be concerned

and because of that we need to expand the study,

the research, and the development of policies,

and the dialogue of AI beyond just the codes

and the products into these human realms,

into these societal issues.

So, I absolutely agree with you on that,

that this is the moment to open the dialogue,

to open the research on those issues.

Okay. I would just say that again,

part of my fear is that the dialogue,

I don’t fear AI experts talking with philosophers,

I’m fine with that, historians good,

literary critics wonderful, I fear the moment

you start talking with biologists.

[crowd chatter]

That’s my biggest fear.

When you and the biologists say,

Hey, we actually have a common language

and we can do things together,

that's when the really scary things start, I think.

Can you elaborate on what is scaring you

about us talking to biologists?

That’s the moment when you can really hack human beings,

not by collecting data about our search words,

or our purchasing habits, or where we go about town,

but you can actually start peering inside

and collect data directly

from our hearts and from our brains.

Okay, can I be specific?

First of all, the birth of AI was AI scientists

talking to biologists, specifically neuroscientists.

Right, the birth of AI is very much inspired

by what the brain does.

Fast-forward sixty years:

today's AI is making great improvements in healthcare.

There’s a lot of data from our physiology

and pathology being collected

and using machine learning to help us but,

I feel like you’re talking about something else.

That’s part of it, I mean,

if there wasn’t a great promise in the technology,

there would also be no danger

because nobody would go along that path.

I mean, obviously, there are enormously beneficial things

that AI can do for us, especially

when it is linked with biology.

We are about to get the best health care in the world,

in history, and the cheapest,

and available for billions of people via smartphones,

people who today have almost nothing.

And this is why it is almost impossible to resist

the temptation, even with all the issues now around privacy.

If you have a big battle between privacy and health,

health is likely to win hands down.

So, I fully agree with that and, you know,

my job as a historian, as a philosopher,

as a social critic, is to point out the dangers in that

because especially in Silicon Valley,

people are very much familiar with the advantages

but they don’t like to think so much

about the dangers and the big danger

is what happens when you can hack the brain

and that can serve not just your healthcare provider,

that can serve so many things from a crazy dictator, to–

Let’s focus on that, what it means to hack the brain.

Like what, right now in some ways,

my brain is hacked, right?

There’s an allure of this device,

it wants me to check it constantly.

Like, my brain has been a little bit hacked.

Yours hasn’t because you meditate two hours a day

but mine has and probably [laughter]

most of these people have.

But what exactly is the future brain hacking

going to be, that it isn’t today?

Much more of the same, but on a much larger scale.

I mean, the point when for example,

more and more of your personal decisions in life

are being outsourced to an algorithm

that is just so much better than you.

So, you know we have two distinct dystopias

that kind of mesh together.

We have the dystopia of surveillance capitalism

in which there is no like, Big Brother dictator

but more and more of your decisions

are being made by an algorithm

and it’s not just decisions about what to eat,

or what to shop, but decisions like,

where to work, and where to study, and whom to date,

and whom to marry, and whom to vote for.

It’s the same logic and I would be curious to hear

if you think that there is anything in humans,

which is by definition un-hackable,

that we can’t reach a point when the algorithm

can make that decision better than me.

So, that’s one line of dystopia

which is a bit more familiar in this part of the world

and then you have the full-fledged dystopia

of a totalitarian regime

based on a total surveillance system.

Something like the totalitarian regimes

that we have seen in the twentieth century

but augmented with biometric sensors

and the ability to basically track

each and every individual, 24 hours a day.

And you know, which in the days of,

I don’t know, Stalin or Hitler, was absolutely impossible

because it didn’t have the technology

but it might be possible in 20 years or 30 years.

So, we can choose which dystopia to discuss

but they are very close in–

Let’s choose the liberal democracy dystopia.

Fei-Fei, do you want to answer Yuval's specific question,

which is, in this dystopia,

the liberal democracy dystopia, is there something endemic

to humans that cannot be hacked?

So, when you ask me that question just two minutes ago,

the first word that came to my mind is love.

Is love hackable?

Ask Tinder, I don’t know.

[crowd and panel laughing]

Dating–

It depends–

Dating is not the entirety of love, I hope.

The question is which kind of love are you referring to?

If you are referring to this, you know I don’t know,

Greek philosophical love or the loving kindness of Buddhism,

that’s one question,

which I think it’s much more complicated.

If you are referring to the

biological mammalian courtship rituals,

then I think yes.

I mean, why not?

But humans– Why is it different

from anything else that is happening in the body?

But humans are humans because there’s some part of us

that is beyond the mammalian courtship, right?

So, is that part hackable?

That’s the question?

I mean, you know, in most science fiction books

and movies, they give you the answer.

When the extra-terrestrial evil robots

are about to conquer planet Earth

and nothing can resist them, resistance is futile,

at the very last moment,

Humans win It’s just one thing,

Because the robots don’t understand love.

Last moment there’s one heroic white dude that saves us.

[audience cheering and applause] [laughter]

Why do we do this?

No, no, it was a joke, don’t worry.

[audience and panel laughter]

But, okay so, the two dystopias,

I do not have answers to the two dystopias

but what I want to keep saying is

this is precisely why this is the moment

that we need to seek solutions.

This is precisely why this is the moment

that we believe the new chapter of AI needs to be written

by cross-pollinating efforts from humanists,

social scientists, to business leaders,

to civil society, to governments, to come to the same table

to have that multilateral and cooperative conversation.

I think you really bring out the urgency

and the importance and the scale of this potential crisis

but I think in the face of that, we need to act.

Yeah, and I agree that we need cooperation,

that we need much closer cooperation

between engineers and philosophers

or engineers and historians

and also from a philosophical perspective,

I think there is something wonderful

about engineers, philosophically.

Thank you. [laughing]

That you really cut the bullshit.

I mean, philosophers can talk and talk you know,

in cloudy and flowery metaphors

and then the engineers can really focus the question.

Like, I just had a discussion the other day

with an engineer from Google about this

and he said, Okay, I know how to maximize

people’s time on the website.

If somebody comes to me and tells me,

Look, your job is to maximize time on this application.

I know how to do it because I know how to measure it.

But if somebody comes along and tells me,

Well you need to maximize human flourishing

or You need to maximize universal love,

I don’t know what it means.

So, the engineers go back to the philosophers

and ask them, what do you actually mean.

Which, you know, a lot of philosophical theories

collapse around that because they can’t really explain

what they mean, and we need this kind of collaboration.

Yeah.

We need an equation for that, in order to move forward.
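
The engineer's complaint is concrete: an objective you can measure is an objective you can optimize, and "flourishing" has no agreed-upon measurement. A minimal Python sketch of that asymmetry, with an invented session log and a deliberately unimplementable placeholder:

```python
# Hypothetical session log, invented purely for illustration.
sessions = [
    {"user": "a", "seconds_on_app": 310},
    {"user": "b", "seconds_on_app": 145},
    {"user": "c", "seconds_on_app": 620},
]

def avg_time_on_app(sessions):
    # Measurable, so an engineer can maximize it.
    return sum(s["seconds_on_app"] for s in sessions) / len(sessions)

def human_flourishing(sessions):
    # The philosopher's assignment: nobody has told the engineer
    # what number to compute here.
    raise NotImplementedError("no agreed-upon metric")

print(avg_time_on_app(sessions))  # 358.33... an optimizable target
```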

But then Yuval, is Fei-Fei right?

If we can’t explain and we can’t code love,

can artificial intelligence ever recreate it

or is it something intrinsic to humans

that the machines will never emulate?

I don’t think that machines will feel love

but you don’t necessarily need to feel it

in order to be able to hack it,

to monitor it, to predict it, to manipulate it.

I mean, machines don’t like to play candy crush.

But you think they can– But they can still–

This device, in some future

where it’s infinitely more powerful

than it is right now, could make me fall in love

with somebody in the audience?

Hmm, that goes to the question of consciousness

and mind.

Let’s go there. I don’t think that we have

the understanding of what consciousness is

to answer the question, whether a non-organic consciousness

is possible or is not possible.

I think we just don’t know but again

the bar for hacking humans is much lower.

The machines don’t need to have consciousness of their own

in order to predict our choices

and manipulate our choices, they just need…

If you accept that something like love is,

in the end, a biological process in the body.

If you think that AI can provide us

with wonderful health care

by being able to monitor and predict

something like the flu or something like cancer,

what’s the essential difference between flu and love?

[audience applause]

In the sense of, is this biological

and is this something else, which is so separated

from the biological reality of the body,

that even if we have a machine

that is capable of monitoring and predicting flu,

it still lacks something essential

in order to do the same thing with love.

Fei-Fei.

So, I want to make two comments

and this is where my engineering,

you know, personality is speaking.

We’re making two very important assumptions

in this part of the conversation.

One is that AI is so omnipotent

that it’s achieved to a state

that it’s beyond predicting anything physical,

its guarding to the consciousness level

and getting to the, even the ultimate,

the love level of capability

and I do want to make sure that we recognize

that we’re very, very, very far from that.

This technology is still very nascent.

Part of the concern I have about today’s AI

is the super-hyping of its capability so,

I’m not saying that, that’s not a valid question

but I think that part of this conversation

is built upon that assumption that this technology

has become that powerful and there’s,

I don’t even know how many decades we are from that.

A second, related assumption: I feel

our conversation is based on the idea

that we're talking about a world or state of the world

in which that powerful AI exists,

or in which that small group of people

who have produced the powerful AI

and intend to hack humans exist.

But in fact our human society is so complex

there’s so many of us, right?

I mean, humanity in its history

has faced so many technologies;

if we had left them in the hands of a bad player,

alone without any regulation, multinational collaboration,

rules, laws, moral codes, those technologies could have,

maybe not hacked humans, but destroyed

or hurt humans in massive ways.

It has happened but by and large,

our society in a historical view

is moving to a more civilized and controlled state.

So, I think it’s important to look at that greater society

and bringing other players and people into this dialogue

so we don’t talk like there is only this omnipotent AI,

you know, deciding it’s gonna hack everything to the end.

And that brings me to your topic: in addition

to hacking humans at the level that you're talking about,

there are some very immediate concerns already.

Diversity, privacy, labor, legal changes,

you know, international geopolitics

and I think it’s critical to tackle those now.

I love talking to AI researchers

because five years ago, all the AI researchers were like,

It’s much more powerful than you think and now

they’re all like, It’s not as powerful as you think.

[audience and panel laughter]

All right so,

Let me ask, It’s because five years ago

you have no idea what AI is,

I’m not saying it’s wrong Now, you’re extrapolating

too much. [laughs]

I didn’t say it was wrong, I just said it was a thing.

I want to go into what you just said

but before you do that I want to take one question here

from the audience because once we move

into the second section, we won’t be able to answer it.

So, the question is, it’s for you Yuval,

this is from Mara and Lucini, How can we avoid

the formation of AI-powered digital dictatorships?

So, how do we avoid dystopia number two?

Let’s answer that and then let’s go Fei-Fei,

into what we can do right now,

not what we can do in the future.

The key issue is how to regulate the ownership of data

because we won’t stop research in biology

and we won’t stop research in computer science and AI.

So, for the three components of biological knowledge,

computing power, and data, I think data is the easiest

and it’s also very difficult but still the easiest,

kind of, to regulate or to protect.

Place some protections there and there are efforts

now being made and they are not just political efforts but,

also philosophical efforts to really conceptualize,

what does it mean to own data

or to regulate the ownership of data

because we have a fairly good understanding

what it means to own land,

we had thousands of years of experience with that,

we have a very poor understanding

of what it actually means to own data

and how to regulate it.

But this is the very important front

that we need to focus on in order to prevent

the worst dystopian outcomes

and I agree that AI is not nearly as powerful

as some people imagined, but this is why,

and again, I think we need to place the bar low:

to reach a critical threshold,

we don’t need the AI to know us perfectly,

which will never happen, we just need the AI

to know us better than we know ourselves.

Which is not so difficult because most people

don’t know themselves very well

and often make [laughter and audience applause]

huge mistakes in critical decisions.

So, whether it’s finance, or career, or love life,

to have this shift in authority

from humans to algorithms; they can still be terrible

but as long as they are a bit less terrible

than us, the authority will shift to them.

Yuval, in your book you tell a very illuminating story

about yourself and your own coming to terms

with who you are and how you could be manipulated.

Will you tell that story here,

about coming to terms with your sexuality

and the story you told about Coca-Cola

in your book, because I think that will make it clear

what you mean here, very well.

Yes so, I said that I only realized

that I was gay when I was 21.

And I look back at the time when I was,

I don’t know, 15, 17 and it should’ve been so obvious.

And it’s not like a stranger like,

I’m with myself 24 hours a day [laughter]

and I just don’t notice any, of like,

the screaming signs that saying,

There, you were gay and I don’t know how

but the fact is, I missed it.

Now, an AI, even a very stupid AI,

today, will not miss it.

[audience and panel laughing] I’m not so sure.

So imagine, this is not like, you know,

a science fiction scenario of a century from now,

this can happen today, that you can write

all kinds of algorithms that, you know,

they are not perfect but they are still better,

say than the average teenager

and what does it mean to live in a world

in which you learn about something so important

about yourself, from an algorithm.

What does it mean?

What happens if the algorithm doesn’t

share the information with you

but it shares the information

with advertisers or with governments?

So, if you want to, and I think we should,

go down from the cloudy heights of,

you know, the extreme scenarios

to the practicalities of day-to-day life,

this is a good example because this is already happening.

Yeah, all right well let’s take the elevator

down to the more conceptual level

of this particular shopping mall

that we’re shopping in today

and Fei-Fei, let’s talk about what we can do today

as we think about the risks of AI, the benefits of AI,

and tell us you know, sort of your punch list,

of what you think the most important things

we should be thinking about with AI are.

Wow, boy there are so many things we could do today

and I cannot agree more with Yuval,

that this is such an important topic.

Again I’m gonna try to speak about all the efforts

that’s being made at Stanford

because I think this is a good representation

of what we believe, there are so many efforts we can do.

So, human-centered AI,

this is the overall theme: we believe

that the next chapter of AI should be human-centered.

We believe in three major principles.

One principle is to invest in the next generation

of AI technology that reflects more

of the kind of human intelligence we would like.

I was just thinking about your comment

about AI’s dependence on data and how that the policy

and governance of data should emerge

in order to regulate and govern the AI impact.

Well, we should be developing technology

that can explain AI; in the technical field

we call it explainable AI or AI interpretability studies.

We should be focusing on technology that has

a more nuanced understanding of human intelligence.

We should be investing in the development

of less data-dependent AI technology

that would take into consideration intuition, knowledge,

creativity, and other forms of human intelligence.

So, that kind of human intelligence inspired AI

is one of our principles.

The second principle is to, again welcome in

the kind of multidisciplinary study

of AI cross-pollinating with economics,

with ethics, with law, with philosophy,

with history, cognitive science, and so on

because there is so much more we need to understand

in terms of AI’s social, human,

anthropological, ethical impact

and we cannot possibly do this alone as technologists.

Some of us shouldn’t even be doing this,

it’s the ethicist, philosophers should participate

and work with us on these issues.

So, that’s the second principle and the third principle…

Oh, and within this we work with policymakers,

we convene the kind of dialogues

of multilateral stakeholders.

Then the third, the last but not the least,

I think Nick, you said that at the very beginning

of this conversation that we need to promote

the human-enhancing and collaborative

and augmentative aspects of this technology.

You have a point, even there it can become manipulative

but we need to start with that sense of alertness,

understanding, but still promote

that kind of benevolent applications

and design of this technology.

At least these are the three principles

the Stanford’s Human-Centered AI Institute is based on

and I just feel very proud, within a short few months

of the birth of this Institute,

there are more than 200 faculty involved on this campus

in this kind of research, dialogue, you know,

study, and education, and that number is still growing.

Wow.

Of those three principles let’s start digging into them.

So, let’s go to number one, explainability,

'cause this is a really interesting debate

in artificial intelligence so,

there are some practitioners who say

you should have algorithms that can explain

what they did and the choices they made.

It sounds eminently sensible but how do you do that?

I make all kinds of decisions that I can’t entirely explain

like, why did I hire this person over that person?

I can tell a story about why I did it

but I don’t know for sure.

Like, we don’t know ourselves well enough

to always be able to truthfully

and fully explain what we did.

How can we expect a computer using AI, to do that?

And, if we demand that here in the West

then there are other parts of the world

that don’t demand that, who may be able to move faster.

So, why don’t we start, why don’t I ask you

the first part of that question,

Yuval the second part of that question.

So, the first part is, can we actually get explainability

if it’s super hard even within ourselves?

Well, it’s pretty hard for me to multiply two digits

but you know, computers can do that.

Yeah.

So, the fact that something is hard for humans

doesn’t mean we shouldn’t try to get the machines to do it,

especially, after all, these algorithms

are based on very simple mathematical logic.

Granted, we’re dealing with newer networks these days

of millions of nodes and billions of connections so,

explainability is actually tough, it's ongoing research.

But I think this is such a fertile ground

and it’s so critical when it comes to health care decisions,

financial decisions, legal decisions,

there’s so many scenarios where this technology

can be potentially, positively useful

but with that kind of explainable capabilities so,

we’ve gotta try and I’m pretty confident

with a lot of smart minds out there,

this is a crackable thing

and on top of that– Got 200 professors on it.

Right, not all of them doing AI algorithms.

On top of that, I think you have a point that

if we have technology that can explain

the decision making process of algorithms,

it makes it harder for it to manipulate and cheat, right?

It’s a technical solution, not the entirety of the solution,

that will contribute to the clarification

of what this technology is doing.

But because, presumably, the AI

makes decisions in a radically different way than humans,

then even if the AI explains its logic

the fear is it will make absolutely no sense to most humans.

Most humans, when they are asked to explain a decision

they tell a story in a narrative form,

which may or may not reflect

what is actually happening within them,

in many cases it doesn’t reflect.

It’s just a made-up rationalization and not the real thing.

Now, in AI it could be much better than a human

in telling me like, I applied to the bank for a loan

and the bank says no and I ask why not

and the bank says, Okay, we’ll ask our AI

and the AI gives this extremely long,

statistical analysis based,

not on one or two salient features of my life

but on 2,517 different data points

which it took into account and gave different weights

and why did you give this, this weight

and why did you give that, oh, there is another book about that

and most of the data points would seem,

to a human, completely irrelevant.

You applied for a loan on Monday

and not on Wednesday and the AI discovered that

for whatever reason, it’s after the weekend, whatever,

people who apply for loans on a Monday

are 0.075 percent less likely to repay the loan.

So, it goes into the equation

and I get this book of the real explanation,

finally I get a real explanation.

It’s not like sitting with a human banker

that just bullshit’s me [audience laughing]

What do I do with a book? Are you rooting for AI?

Are you saying AI’s good in this case?

In many cases, yes I mean, I think in many cas…

I mean, it’s two sides of the coin.

I think that in many ways the AI in this scenario

will be an improvement over the human banker

because for example, you can really know

what the decision is based on presumably,

but it’s based on something that I,

as a human being, just cannot grasp.

I know how to deal with simple narrative stories.

I didn’t give you a loan because you’re gay,

that’s not good or because you didn’t repay

any of your previous loans.

Okay, I can understand that.

But my mind doesn’t know what to do

with the real explanation that the AI will give,

which is just this crazy statistical thing, which–

Okay so, there are two layers to your comment.

One, is how do you trust

and be able to comprehend AI’s explanation?

Second, is actually, can AI be used

to make humans more trustable

or be more trustable than the human’s?

On the first point, I agree with you.

If AI gives you two thousand dimensions

of potential features with probability,

it’s now human understandable

but the entire history of science in human civilization

is to be able to communicate the results of science

in better and better ways, right?

Like, I just had my annual physical

and a whole bunch of numbers came to my cell phone

and well, first of all, my doctor,

the expert, can help me to explain these numbers.

Now, even Wikipedia can help me

to explain some of these numbers.

But the technological means

of explaining these will improve.

It’s our failure as AI technologists

if we just throw two hundred or two thousand dimensions

of probability numbers at you.

But I mean, this is the explanation and I think

that the point you raise

is very important but, I see it differently.

I think science is getting worse and worse

in explaining its theories and findings to the general public.

Which is the reason for things like,

doubting climate change and so forth

and it’s not really even the fault of the scientists

because the science is just getting more

and more complicated and reality is extremely complicated

and the human mind wasn’t adapted

to understanding the dynamics of climate change

or the real reasons for refusing to give somebody a loan.

That’s the point when you have…

Again, let’s put aside the whole question of manipulation

and how can I trust.

Let’s assume the AI is benign

and let’s assume that there are no hidden biases,

everything is okay but, still I can’t understand,

the decision of the AI. That’s why Nick,

people like Nick, the storyteller, says to expla…

What I’m saying, you’re right it’s very complex

but there are people like–

I’m gonna lose my job to computer like, next week

but I’m happy to have your confidence with me.

But that’s the job of the society collectively

to explain the complex science.

I’m not saying we’re doing a great job, at all but,

I’m saying there is hope if we try.

But my fear is that we just really can’t do it

because the human mind is not built

for dealing with these kinds of explanations

and technologies and it’s true for,

I mean, it’s true for the individual customer

who goes to the bank

and the bank refused to give them a loan

and it can even be on the level, I mean,

how many people today on earth

understand the financial system?

[silence followed by light laughter]

How many presidents and prime ministers

understand the financial system?

In this country, zero? [audience laughter and applause]

So, what does it mean to live in a society

where the people who are supposed

to be running the business, and again,

it’s not the fault of a particular politician

it’s just the financial system has become so complicated

and I don’t think that economies

are trying on purpose to hide something for general public,

it’s just extremely complicated.

You had some of the wisest people in the world

go into the finance industry

and creating these enormously complex models

and tools, which, objectively, you just can't explain

to most people unless, first of all,

they study economics and mathematics

for 10 years or whatever so, I think this is a real crisis.

And this, again, is part of

the philosophical crisis we started with

and the undermining of human agency.

That’s part of what’s happening,

that we have these extremely intelligent tools

that are able to make, perhaps better decisions

about our health care, about our financial system,

but we can’t understand what they are doing

and why they are doing it and this undermines our autonomy

and our authority and we don’t know

as a society, how to deal with that.
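
A toy version of the "book" Harari describes: a linear scoring model whose honest explanation is nothing but a list of weighted data points, most of them individually meaningless to a human. Every feature name, weight, and applicant value below is invented purely for illustration:

```python
# Toy linear loan model: the "real explanation" is just
# weight * value for every data point it consulted.
# All names and numbers are invented for illustration.
weights = {
    "years_at_job": 0.42,
    "missed_payments": -1.30,
    "applied_on_monday": -0.00075,  # the sort of signal no human would call relevant
    "income_thousands": 0.05,
}
applicant = {
    "years_at_job": 3,
    "missed_payments": 1,
    "applied_on_monday": 1,
    "income_thousands": 55,
}

contributions = {f: weights[f] * applicant[f] for f in weights}
score = sum(contributions.values())

# The full explanation: one line per data point, sorted by influence.
for feature, c in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
    print(f"{feature:>20}: {c:+.5f}")
print("approve" if score > 0 else "decline", f"(score {score:+.3f})")
```

With 2,517 features instead of four, the printout becomes exactly the unreadable book Yuval is worried about.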

Well, ideally, Fei-Fei’s Institute will help that.

Before we leave this topic though,

I want to move to a very closely related question,

which I think is one of the most interesting,

which is the question of bias in algorithms,

which is something you’ve spoken eloquently about

and let’s stay with the financial systems.

So, you can imagine an algorithm used by a bank

to determine whether somebody should get a loan

and you can imagine training it on historical data

and historical data is racist and we don’t want that,

so let’s figure out how to make sure the data isn’t racist

and that it gives loans to people regardless of race.

And we probably all, everybody in this room agrees that,

that is a good outcome but let’s say that

analyzing the historical data suggests

that women are more likely to repay their loans than men.

Do we strip that out or do we allow that to stay in?

If you allow it to stay in,

you get a slightly more efficient financial system.

If you strip it out,

you have a little more equality between men and women.

How do you make decisions about

what biases you want to strip

and which ones are okay to keep?

That’s a excellent question Nick, I mean,

I’m not gonna have the answers personally

but I think you touched on the really important question.

It’s, first of all, a machine learning system bias

is a real thing you know, like you said.

It starts with data, it probably starts

with the very moment we’re collecting data

and the type of data we're collecting

all the way through the whole pipeline

and then all the way to the application

but biases come in very complex ways.

At Stanford, we have machine learning scientists

studying the technical solutions of bias like,

you know de-biasing data

and normalizing certain decision-making

but we also have humanists debating about what is bias,

what is fairness, when is bias good,

when is bias bad so, I think you

just opened up a perfect topic for research

and debate and conversation

and I also want to point out that Yuval,

you already used a very closely related example,

a machine learning algorithm has the potential

to actually expose bias, right?

Like, one of my favorite studies was a paper

a couple of years ago analyzing Hollywood movies

and using a machine learning face recognition algorithm,

which is a very controversial technology these days,

to show that Hollywood systematically gives more screen time

to male actors than female actors.

No human being can sit there

and count all the frames of faces

and gender bias and this is a perfect example

of using machine learning to expose bias.

So, in general there’s a rich set of issues

we should study and again, bring the humanists,

bring the ethicists, bring the legal scholars,

bring the gender study experts.

Agree though, standing up for humans,

I knew Hollywood was sexist

even before that paper but yes, agreed.

You are a smart human. [light laughter]
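
Nick's trade-off can be made concrete. A minimal sketch on invented loan records (the data and the repay_rate helper are hypothetical): keep the gender column and a model learns the repayment gap, which is more "efficient" but less equal; strip it and every applicant gets the pooled rate:

```python
# Invented historical loan records, purely for illustration.
records = [
    {"gender": "f", "repaid": True},  {"gender": "f", "repaid": True},
    {"gender": "f", "repaid": True},  {"gender": "f", "repaid": False},
    {"gender": "m", "repaid": True},  {"gender": "m", "repaid": False},
    {"gender": "m", "repaid": False}, {"gender": "m", "repaid": True},
]

def repay_rate(rows):
    return sum(r["repaid"] for r in rows) / len(rows)

women = [r for r in records if r["gender"] == "f"]
men   = [r for r in records if r["gender"] == "m"]

# Keeping the attribute: the model scores women higher (0.75 vs 0.50),
# a gain in predictive efficiency at the cost of unequal treatment.
print("with gender:", repay_rate(women), repay_rate(men))

# Stripping the attribute: one pooled rate (0.625) for everyone,
# equal treatment at some cost in accuracy.
print("gender stripped:", repay_rate(records))
```

Which gaps to close and which to keep is, as Fei-Fei says, not a purely technical call.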

Yuval, on that question of the loans,

do you strip out the racist data,

do you strip out the gender data,

what biases do you get rid of,

what biases do you not?

I don’t think there is a one-size-fits-all.

I mean, it’s a question…

we need this day-to-day collaboration

between engineers, and ethicists,

and psychologists, and political scientists–

But not biologists, right?

[laughter] But not biologists? and increasing– [laughter]

And increasingly, also biologists.

It goes back to the question, what should we do?

So, we should teach ethics

to coders as part of their curriculum.

The people in the world today

who most need a background in ethics

are the people in the computer science departments,

so it should be an integral part of the curriculum

and it’s also in the big corporations,

which are designing these tools,

they should be embedded within the teams,

people with background in things like ethics,

like politics, that they always think

in terms of what biases might we inadvertently

be building into our system.

What could be the cultural or political implications

of what we are building?

It shouldn’t be a kind of afterthought

that you create this neat technical gadget,

it goes into the world, something bad happens,

and then you start thinking,

Oh, we didn’t see this one coming. What do we do now?

From the very beginning, it should be clear

that this is part of the process.

Yep, I do want to give a shout out to Rob Reich

who just introduced this whole event.

He and my colleague, Mehran Sahami,

and a few other Stanford professors have opened this course

called Ethics Computation and sorry Rob,

I’m abusing the title of your course

but this is exactly the kind of class it is…

I think this quarter, the offering

has more than 300 students signed up for it.

Fantastic.

I wish the course had existed when I was a student here.

Let me ask an excellent question

from the audience, it ties into this.

This is from Yu Jin Lee:

how do you reconcile the inherent trade-offs

between explainability and efficacy

and accuracy of algorithms?

Great question.

This question seems to be assuming if you can explain it,

you’re less good or less accurate.

Well, you can imagine that if you require explainability

you lose some level of efficiency,

you’re adding a little bit of complexity to the algorithm.

So okay, first of all,

I don’t necessarily believe in that,

there’s no mathematical logic to this assumption.

Second let’s assume there is a possibility

that an explainable algorithm suffers efficiency.

I think this is a societal decision we have to make.

You know, when we put the seatbelt on in our car,

that's a little bit of an efficiency loss when driving,

'cause I have to do that seatbelt movement

instead of just hopping in and driving,

but as a society we decided

we can afford that loss of efficiency

because we care more about human safety.

So, I think AI is the same kind of technology

as we make these kind of decisions going forward

in our solutions, in our products,

we have to balance human wellbeing

and societal well-being with efficiency.

So Yuval, let me ask you,

the global consequences of this is something

that a number of people have asked about

in different ways and we’ve touched on

but we haven’t hit head-on.

There are two countries, an imaginary country A,

and you have country B.

Country A says all of you AI engineers,

you have to make it explainable,

you have to take ethics classes,

you have to really think about

the consequences of what you’re doing,

you got to have dinner with biologists,

you have to think about love,

and you have to like, read you know, John Locke.

That’s group A.

Country B says just go build some stuff, right?

These two countries, at some point,

are gonna come in conflict and I’m gonna guess

that country B’s technology might be ahead of country A’s.

Is that a concern?

Yeah, that’s always the concern with arms races,

which become a race to the bottom

in the name of efficiency and domination

and we are in, I mean…

What is extremely problematic or dangerous

about the situation now with AI

is that more and more countries are waking up

to the realization that this could be

the technology of domination in the 21st century.

So, you’re not talking about just any economic competition

between the different textile industries

or even between different oil industries,

like one country decides, we don’t care

about environment at all, we’ll just go full gas ahead

and the other country is much more environmentally aware.

The situation with AI is potentially much worse

because it could be really, the technology of domination

in the 21st century and those left behind

could be dominated, exploited,

conquered by those who forge ahead.

So, nobody wants to stay behind

and I think the only way to prevent

this kind of catastrophic arms race to the bottom

is greater global cooperation around AI.

Now this sounds utopian because we are now moving

in exactly the opposite direction,

of more and more rivalry and competition

but this is part of, I think, of our job

like with the nuclear arms race,

to make people in different countries realize that

this is an arms race, that whoever wins, humanity loses.

And it’s the same with AI, if AI becomes an arms race

then this is extremely bad news for all the humans

and it’s easy for say, people in the US,

to say we are the good guys in this race,

you should be cheering for us

but this is becoming more and more difficult

in a situation when the motto of the day is, America first.

I mean, how can we trust the USA

to be the leader in AI technology

if ultimately it will serve only American interests

and American economic and political domination.

So it’s really, I think most people

when they think arms race in AI,

they think USA versus China

but there are almost 200 other countries in the world

and most of them are far, far behind

and when they look at what is happening

they are increasingly terrified and for a very good reason.

The historical example you’ve made is a little unsettling.

If I heard your answer correctly,

it’s that we need global cooperation

and if we don’t we’re gonna lead to an arms race.

In the actual nuclear arms race

we tried for global cooperation from,

I don’t know, roughly 1945 to 1950

and then we gave up and then we said

we’re going full-throttle the United States

and then why did the Cold War end the way it did?

Who knows, but one argument would be that the United States,

you know, build up and it’s relentless build up

of nuclear weapons helped to keep the peace

until the Soviet Union collapsed.

So, if that is the parallel, then what might happen here

is we’ll try for global cooperation in 2019,

2020, 2021, and then we’ll be off in an arms race.

A, is that likely and, B if it is,

would you say, well then the US,

it needs to really move full-throttle in AI

because it would be better for the liberal democracies

to have artificial intelligence than the totalitarian states?

Well, I’m afraid it is very likely

that cooperation will break down

and we will find ourselves in an extreme version

of an arms race and in a way,

it’s worse than the nuclear arms race

because with nukes, at least until today,

countries develop them but never use them.

AI will be used all the time.

It’s not something you have on the shelf

for some doomsday war.

It will be used all the time to create

potentially, total surveillance regimes

in extreme totalitarian systems,

in one way or the other.

From this perspective, I think the danger is far greater.

You could say that the nuclear arms race

actually saved democracy, and the free market,

and you know, rock and roll,

and Woodstock, and then the hippies.

They all owe a huge debt to nuclear weapons [smirking]

because if nuclear weapons weren’t invented,

there would have been a conventional arms race

and conventional military buildup

between the Soviet bloc and the American bloc

and that would have meant total mobilization of society.

If the Soviets have total mobilization,

the only way the Americans can compete is to do the same.

Now, what actually happened

was that you had an extreme totalitarian mobilized society

in the communist bloc but thanks to nuclear weapons

you didn’t have to do it in the United States,

or in western Germany, or in France

because you relied on nukes.

You don’t need millions of conscripts in the army

and with AI it's going to be just the opposite,

that the technology will not only be developed,

it will be used all the time

and that’s a very scary scenario.

[Nick] So–

Wait, can I just add one thing?

I don’t know history like you do

but you said AI is different from nuclear technology.

I do want to point out, it is very different

because at the same time as you are talking

about these scarier situations,

this technology has a wide

international scientific collaboration basis

that is being used to make transportation better,

to improve healthcare, to improve education, and

so it’s a very interesting, new time

that we haven’t seen before because while we have this,

kind of, competition we also have

massive international scientific community collaboration

on these benevolent uses

and democratization of this technology.

I just think it’s important to see both side of this.

You’re absolutely right, there also,

as I said, there are also enormous benefits

to this technology.

And in a global collaborative way,

especially among the scientists.

The global aspect is more complicated

because the question is, what happens

if there is a huge gap in abilities

between some countries and most of the world?

Would we have a re-run of the 19th century

Industrial Revolution, when the few industrial powers

conquered, and dominated, and exploited the entire world,

both economically and politically?

What’s to prevent that from repeating?

So, even in terms of, you know,

without this scary war scenario

we might still find ourselves

with a global exploitation regime

in which the benefits, most of the benefits,

go to a small number of countries

at the expense of everybody else.

Have you heard of arXiv.org?

ArXiv.org? [light laughs]

So, students in the audience might laugh at this

but we are in a very different scientific research climate,

in that the kind of globalization of technology

and technique happens in a way

that the 19th century even 20th century never saw before.

Any paper that is a basic science research paper

in AI today, or any technical technique that is produced,

let's say, this week at Stanford,

easily gets globally distributed

through this thing called arXiv, or GitHub, or a repository.

The information is out there, yeah.

Globalization of this scientific technology

travels in a very different way

from the 19th and 20th centuries.

I mean, I don’t doubt there are,

you know, confined development of this technology,

maybe by regimes but we do have to recognize

that this global, the differences is pretty sharp now

and we might need to take that into consideration

that the scenario you’re describing is harder.

I’m not say impossible, but harder to happen.

So, you think that the way–

Just say that it’s not just the scientific papers.

Yes, the scientific paper’s out there

but if I live in Yemen, or in Nicaragua,

or in Indonesia, or in Gaza,

yes I can connect to the internet and download the paper.

What will I do with that?

I don’t have the data.

I don’t have the infrastructure.

I mean, you look at

where the big corporations are coming from

that hold all the data of the world,

they are basically coming from just two places.

I mean even Europe is not really in the competition.

There is no European Google,

or European Amazon, or European Baidu,

or European Tencent and if you look beyond Europe,

you think about Central America,

you think about most of Africa,

the Middle East, much of Southeast Asia,

it’s yes, the basic scientific knowledge is out there

but this just one of the components

that go to creating something that can compete

with Amazon or with Tencent or with the abilities

of governments like the US government

or like the Chinese government.

So, I agree that the dissemination of information

and basic scientific knowledge,

we’re at completely different place,

than in the 19th century.

Let me ask you about that

’cause it’s something three or four people

have asked in the questions which is,

it seems like there could be a centralizing force

of artificial intelligence, that it will make

whoever has the data and the best compute,

more powerful and that it could then accentuate

income inequality both within countries

and within the world, right?

You can imagine the countries you’ve just mentioned:

The United States, China, Europe lagging behind,

Canada somewhere behind, way ahead of Central America.

It could accentuate global income inequality.

A, do you think that’s likely

and B, how much does it worry you?

We have about four people who’ve asked a variation on that.

As I said, it’s very, very likely.

It’s already happening and it’s extremely dangerous

because the economic and political consequences

could be catastrophic.

We are talking about the potential collapse

of entire economies and countries.

Countries that depend say, on cheap manual labor

and they just don’t have the educational capital

to compete in a world of AI,

so what are these countries going to do?

I mean if, say you shift back

most production from say, Honduras or Bangladesh,

to the USA or to Germany because,

the human salaries are no longer part of the equation

and it’s cheaper to produce the shirt in California

than in Honduras, so what will the people there do?

And you can say, okay but there will be many more jobs

for software engineers but we are not teaching

the kids in Honduras to be software engineers so,

maybe a few of them could somehow immigrate to the US

but most of them won’t and what will they do?

And at present, we don't have the economic answers

and the political answers to these questions.

Fei-Fei, you wanna jump in here?

I think that’s fair enough.

I think Yuval definitely has laid out

some of the critical pitfalls of this

and that’s why we need more people to be studying

and thinking about this.

One of the things we noticed over and over,

even in this process of building a community

of human-centered AI and also talking to people,

both internally and externally,

is that there are opportunities

for business around the world

and governments around the world

to think about their data and AI strategy.

There are still many opportunities

for, you know, outside of the big players

in terms of companies and countries,

to really come to the realization

it’s an important moment for their country,

for their region, for their business,

to transform into this digital age

and I think when you talk about these potential dangers

and lack of data in parts of the world

that hasn’t really caught up

with this digital transformation,

the moment is now and we hope to,

you know, raise that kind of awareness

and then encourage that kind of transformation.

Yeah, I think it’s very urgent.

I mean, what we are seeing at the moment

is on the one hand, what you could call

some kind of data colonization,

that the same model that we saw in the 19th century,

where you have the imperial hub

where they have the advanced technology,

they grow the cotton in India or Egypt,

they send the raw materials to Britain,

they produce the shirts,

the high-tech industry of the 19th century in Manchester,

and they send the shirts back, to sell them in India

and out-compete the local producers.

And in a way, we might be beginning to see the same thing now,

with the data economy, that they harvest the data

in places also like Brazil and Indonesia

but they don’t process the data there.

The data from Brazil and Indonesia

goes to California or goes to Eastern China,

is processed there to produce

the wonderful new gadgets and technologies,

which they sell back as finished products

to the provinces or to the colonies.

Now, it’s not a one-to-one,

it’s not the same, there are differences

but I think we need to keep this analogy in mind

and another thing that maybe we need to keep in mind

in this respect, I think, is the re-emergence of stone walls

that I’m kind of, you know…

Originally my specialty was medieval military history.

This is how I began my academic career

with the Crusades and castles and knights

and so forth and now I’m doing all these cyborgs

and AI stuff but suddenly there is something

that I know from back then, the walls are coming back.

And I try to kind of figure out, what's happening here?

I mean, we have virtual realities, we have 3G, AI,

and suddenly the hottest political issue

is building a stone wall.

Like, the most low-tech thing you can imagine [applause]

and what is the significance of a stone wall

in a world of interconnectivity and all that?

And it really frightens me that

there is something very sinister there,

the combination of data is flowing around everywhere

so easily but more and more countries,

and also my home country of Israel, it’s the same thing.

You have the, you know, the startup nation

and then the wall and what does it mean, this combination?

Fei-Fei, you wanna answer that?

[audience and panel laughing]

Maybe you can look at the next question.

[loud laughing]

You know what, let’s go to the next question

which is tied to that and the next question is,

you have the people there at Stanford

who will help build these companies,

who will either be furthering the process

of data colonization or reversing it,

or who will be building you know,

the efforts to create a virtual wall.

A world based on artificial intelligence

is being created, or funded at least,

by Stanford graduates so,

you have all these students here, in the room,

how do you want them to be thinking

about artificial intelligence

and what do you want them to learn?

Let’s spend the last 10 minutes of this conversation

talking about what everybody here should be doing.

So, if you’re a computer science or engineering student,

take Rob’s class.

If you’re humanists, take my class.

And all of you read Yuval’s books.

Are his books on your syllabus?

Not on mine, sorry.

I teach hard-core, deep learning.

His book doesn’t have equations.

I don’t know B plus C plus D equalls H.

But seriously, you know what I meant to say

is that Stanford students, you have a great opportunity.

We have a proud history of bringing this technology to life.

Stanford was at the forefront of the birth of AI,

in fact our very Professor John McCarthy

coined the term artificial intelligence

and came to Stanford in 1963 and started

one of the two oldest AI labs in this country

and since then, Stanford’s AI research

has been at the forefront of every wave of AI changes

and in 2019, we're also at the forefront

of starting the human-centered AI revolution

or the writing of the new AI chapter

and we did all this for the past 60 years, for you guys.

For the people who come through the door

and who will graduate and become practitioners,

leaders, and part of the civil society,

and that’s really what the bottom line is about.

Human-centered AI needs to be written

by the next generation of technologists

who have taken classes like Rob’s class,

to think about the ethical implications,

the human well-being, and it's also gonna be written

by those potential future policymakers

who came out of Stanford’s humanity studies

and Business School, who are versed

in the details of the technology,

who understand the implications of this technology,

and who have the capability to communicate

with the technologists.

No matter how we agree and disagree,

that’s the bottom line, is that we need

these kinds of multilingual leaders

and thinkers and practitioners and that is

what Stanford’s Human-Center AI Institute is about.

Yuval, how do you wanna answer that question?

Well, on the individual level,

I think it’s important for every individual,

whether at Stanford, whether an engineer or not,

to get to know yourself better

because you are now in a competition.

You know, it’s the all the old advice in the book,

in philosophy, is know yourself.

We’ve heard it from Socrates,

from Confucius, from Buddha, get to know yourself.

But there is a difference,

which is that now, you have competition.

In the day of Socrates or Buddha,

if you didn’t make the effort, so okay,

so you missed out on enlightenment but

still the king wasn’t competing with you.

They didn’t have the technology.

Now you have competition, you’re competing

against these giant corporations and governments.

If they get to know you better than you know yourself,

the game is over.

So you need to buy yourself some time

and the first way to buy yourself some time

is to get to know yourself better

and then they have more ground to cover.

For engineers and students I would say,

I’ll focus on engineers maybe,

the two things that I would like

to see coming out from the laboratories

and the engineering departments is first,

tools that inherently work better

in a decentralized system than in a centralized system.

I don’t know how to do it but if you…

I hope this is something that engineers can work on.

I heard that blockchain is like the big promise,

in that area, I don’t know.

But whatever it is, part of when you start designing a tool,

part of the specification of what this tool should be like,

I would say, this tool should work better

in a decentralized system than in a centralized system.

That’s the best defense of democracy.

The second thing that I would like to see coming out–

I don’t want to cut you off

’cause I want you to get to this second thing,

how do you make a tool work better in a democracy than–

I’m not an engineer, I don’t know. [laughter]

Okay.

All right, well then go to part two.

Take that, someone in this room, figure that out

’cause it’s very important, whatever it means.

I can think about it and then…

I can give you historical examples

of tools that work better in this way

or in that way but I don’t know how to translate it

into present-day technological terms.

Go to part two ’cause I got a few more questions

to ask from the audience.

Okay so, the other thing that I would like to see coming

is an AI sidekick that serves me

and not some corporation or government.

We can’t stop the progress of this kind of technology

but I would like to see it serving me.

So yes, it can hack me but it hacks me

in order to protect me.

Like, my computer has an anti-virus

but my brain hasn’t, it has a biological antivirus

against the flu or whatever

but not against hackers and fraud and so forth.

So, one project to work on is to create an AI sidekick

which I paid for, maybe a lot of money,

and it belongs to me, and it follows me,

and it monitors me, and what I do,

and my interactions, but everything it learns,

it learns in order to protect me from manipulation

by other AI’s, by other outside influencers.

This is something that I think,

with the present day technology,

I would like to see more effort in that direction.

Not to get into overly technical terms,

I think you would feel comforted to know that

budding efforts in this kind of research are happening,

you know, trustworthy AI, explainable AI,

and security-motivated AI,

so I’m not saying we have the solution

but a lot of technologists around the world

are thinking along that line

and trying to make that happen.

It’s not that I want an AI that belongs to Google

or to the government, that I can trust,

I want an AI that I’m its master, it’s serving me,

And it’s powerful, it’s more powerful than my AI

because otherwise my AI could manipulate your AI.

[audience and panel laughter]

It will have the inherent advantage

of knowing me very well, so it might not be able to hack you

but because it follows me around

and it has access to everything I do and so forth,

it gives it an edge in the specific realm of just me.

So, this is a kind of counterbalance

to the danger that the people–

But even that would have a lot of challenges

in our society.

Who is accountable? Are you accountable

for your actions, or is your sidekick?

Oh, good question. This is going to be

a more and more difficult question

that we will have to deal with.

The sidekick defense. [light laughter]

All right, Fei-Fei,

let’s go through a couple questions quickly.

We often talk of, this is from Regan Pollock,

we often talk about top-down AI from the big companies,

how should we design personal AI

to help accelerate our lives and careers?

The way I interpret that question is

so much of AI is being done at the big companies.

If you want to have AI at a small company

or personally, can you do that?

So, well first of all, one solution

is what Yuval just said [laughing]

But probably, those things will be built by Facebook.

So, first of all, it’s true

there’s a lot of investment and efforts putting

and resource putting big companies in AI research

and development but it’s not that

all the AI is happening there.

I want to say that academia continues to play a huge role

in AI’s research and development,

especially in the long term exploration of AI

and what is academia?

Academia is a worldwide network

of individual students and professors

thinking very independently and creatively

about different ideas.

So, from that point of view,

it’s a very grassroot kind of effort in AI research

that continues to happen and small businesses

and independent research institutes,

also have a role to play, right?

There are a lot of publicly available data sets,

it’s a global community that is very open about sharing

and disseminating knowledge and technology,

so yes, please, by all means,

we want global participation in this.

All right here’s my favorite question.

This is from anonymous, unfortunately.

If I am in eighth grade, do I still need to study?

[loud laughter and applause]

As a mom, I will tell you yes.

Go back to your homework.

All right Fei-Fei, what do you want

Yuval’s next book to be about?

Wow, I didn’t know this, I need to think about that.

All right well, while you think about that,

Yuval, what area of machine learning

do you want Fei-Fei to pursue next?

The sidekick project. [laughing]

Yeah, I mean, just what I said, an AI,

can we create a kind of AI which can serve individual people

and not some kind of big network?

I mean, is that even possible

or is there something about the nature of AI

which inevitably will always lead back

to some kind of network effect

and winner-takes-all and so forth?

All right, we’re gonna wrap with Fei-Fei,

Okay, his next book is gonna be a science fiction book

between you and your sidekick. [all laughing]

All right, one last question for Yuval

’cause we’ve got two of the top voted questions are this,

without the belief in free will,

what gets you up in the morning?

Without the belief in free will…

I don’t think that the question of, I mean, is very

interesting, or very central.

It has been central in Western civilization

because of some kind of basically,

theological mistake made thousands of years ago [laughing]

but really it’s a misunderstanding of the human condition.

The real question is,

how do you liberate yourself from suffering?

And one of the most important steps in that direction

is to get to know yourself better

and for that, you need to just push aside

this whole, I mean, for me the biggest problem

with the belief in free will is that

it makes people incurious about themselves

and about what is really happening inside themselves

because they basically say, I know everything

I know why I make decisions, this is my free will.

And they identify with whatever thought

or emotion pops up in their mind

because hey, this is my free will

and this makes them very incurious

about what is really happening inside

and what is also the deep sources

of the misery in their lives.

And so, this is what makes me wake up in the morning

to try and understand myself better,

to try and understand the human condition better,

and free will is, it’s just irrelevant for that.

And if we lose it, your sidekick can get you up

in the morning. [light laughter]

Fei-Fei, 75 minutes ago

you said we weren’t gonna reach any conclusions.

Do you think we got somewhere?

Well, we opened a dialogue between the humanist

and the technologists and I want to see more of that.

Great, all right, thank you so much.

Thank you Fei-Fei, thank you Yuval Noah Harari.

It was wonderful to be here, thank you to the audience.

Interesting quote from “21 Lessons for the 21st Century”

Hi – I’m reading “21 Lessons for the 21st Century” by Yuval Noah Harari and wanted to share this quote with you.

“The Political Challenge: The merger of infotech and biotech threatens the core modern values of liberty and equality. Any solution to the technological challenge has to involve global cooperation. But nationalism, religion, and culture divide humankind into hostile camps and make it very difficult to cooperate on a global level. California is used to earthquakes, but the political tremor of the 2016 U.S. elections still came as a rude shock to Silicon Valley. Realizing that they might be part of the problem, the computer wizards reacted by doing what engineers do best: they searched for a technical solution. Nowhere was the reaction more forceful than in Facebook’s headquarters in Menlo Park. This is understandable. Since Facebook’s business is social networking, it is most attuned to social disturbances. After three months of soul-searching, on February 16, 2017, Mark Zuckerberg published an audacious manifesto on the need to build a global community, and on Facebook’s role in that project. In a follow-up speech at the inaugural Communities Summit on June 22, 2017, Zuckerberg explained that the sociopolitical upheavals of our time—from rampant drug addiction to murderous totalitarian regimes—result to a large extent from the disintegration of human communities. He lamented the fact that “for decades, membership in all kinds of groups”

Start reading this book for free: https://a.co/12jrFSb

How Humans Get Hacked: Yuval Noah Harari & Tristan Harris Talk with WIRED

How Humans Get Hacked: Yuval Noah Harari & Tristan Harris Talk with WIRED


https://www.wired.com/video/watch/yuval-harari-tristan-harris-humans-get-hacked


Yuval Noah Harari, historian and best-selling author of Sapiens, Homo Deus and 21 Lessons for the 21st Century, and Tristan Harris, co-founder and executive director of the Center for Humane Technology, speak with WIRED Editor in Chief Nicholas Thompson.


Hello I’m Nicholas Thompson,

I’m the editor-in-chief of Wired magazine.

I’m here with two of my favorite thinkers.

Yuval Noah Harari.

He’s the author of three number one best-selling books

including 21 Lessons for the 21st Century

which came out this week

and which I just finished this morning

which is extraordinary.

And Tristan Harris,

who’s the man who got the whole world

to talk about time well spent

and has just founded the Center for Humane Technology.

I like to think of Yuval as the man

who explains the past and the future

and Tristan as the man who explains the present.

We’re gonna be here for a little while talking

and it’s a conversation inspired

by one of the things that matters most to me

which is that Wired magazine

is about to celebrate its 25th anniversary.

And when the magazine was founded,

the whole idea was that it was a magazine

about optimism and change,

and technology was good and change is good.

25 years later you look at the world today,

you can’t really hold the entirety of that philosophy.

So I’m here to talk with Yuval and Tristan.

Hello!

[Yuval] Hello. Thank you.

Good to be here.

Tristan why don’t you tell me a little bit about

what you do and then Yuval you tell me too.

I am a director of the Center for Humane Technology

where we focus on realigning technology

with a clear-eyed model of human nature.

And I was before that a design ethicist at Google,

where I studied the ethics of human persuasion.

I’m a historian and I try to understand

where humanity’s coming from and where we are heading.

Let’s start by hearing about how you guys met each other

’cause I know it goes back a little while,

so when did the two of you first meet?

Funnily enough on an expedition to Antarctica.

(laughing)

Not with Scott and Amundsen,

just we were invited by the Chilean government

to the congress of the future,

to talk about the future of humankind

and one part of the congress

was an expedition to the Chilean base in Antarctica

to see global warming with our own eyes.

It was still very cold and it was interesting

and so many interesting people on this expedition.

A lot of philosophers,

Nobel Laureates and I think we particularly connected

with Michael Sandel.

He’s a really amazing philosopher in moral philosophy.

It’s almost like a reality show.

I would have loved to be able to see the whole thing.

Let’s get started with one of the things

that I think is one of the most interesting continuities

between both of your work.

You write about different things

you talk about different things

but there are a lot of similarities.

And one of the key themes is the notion

that our minds don’t work the way

that we sometimes think they do.

We don’t have as much agency over our minds

as perhaps we believed.

Or we believed until now.

So Tristan why don’t you start talking about that

and then Yuval,

jump in and we’ll go from here.

[Tristan] I actually learned a lot of this

from one of Yuval’s early talks

where he talks about democracy as the,

where should we put authority in a society?

And we should put it in the opinions and feelings of people.

But my whole background,

I actually spent the last 10 years studying persuasion.

Starting when I was a magician as a kid where you learned

that there’s things that work on all human minds.

It doesn’t matter whether they have a PhD

or what education level they have,

whether they’re nuclear physicists,

what age they are.

It’s not like if you speak Japanese

I can’t do this trick on you,

it’s not going to work.

Or if you have a PhD.

It works on everybody.

So somehow there’s this discipline

which is about universal exploits on all human minds.

And then I was at a lab called the Persuasive Technology Lab

at Stanford that teaches engineering students

how do you apply the principles

of persuasion to technology.

Could technology be hacking human feelings,

attitudes, beliefs,

behaviors to keep people engaged with products.

And I think that’s the thing we both share

is that the human mind is not the total secure enclave

root of authority that we think it is.

And if we want to treat it that way

we’re gonna have to understand

what needs to be protected first,

is my perspective.

Yeah I think that we are now facing

not just a technological crisis

but a philosophical crisis

because we have built our society,

certainly liberal democracy with elections

and the free market and so forth,

on philosophical ideas from the 18th Century

which are simply incompatible

not just with the scientific findings of the 21st Century

but above all with the technology

we now have at our disposal.

Our society’s built on the ideas that the voter knows best,

that the customer is always right,

that ultimate authority as Tristan said

is the feelings of human beings.

And this assumes that human feelings and human choices

are this sacred arena which cannot be hacked,

which cannot be manipulated.

Ultimately my choices,

my desires reflect my free will

and nobody can access that or touch that.

And this was never true

but we didn’t pay a very high cost

for believing in this myth in the 19th or 20th Century

because nobody had the technology to actually do it.

Now some people,

corporations,

governments,

they are gaining the technology to hack human beings.

Maybe the most important fact

about living in the 21st Century

is that we are now hackable animals.

Explain what it means to hack a human being

and why what can be done now is different

from what could be done a hundred years ago

with religion or with the book

or with anything else that influences what we see

and changes the way we think about the world.

To hack a human being

is to understand what’s happening inside you

on the level of the body,

of the brain,

of the mind so that you can predict

what people will do.

You can understand how they feel.

And once you understand and predict

you can usually also manipulate

and control and even replace.

Of course it can’t be done perfectly,

and it was possible to do it to some extent a century ago.

But the difference in the level is significant.

I would say the real key

is whether somebody can understand you

better than you understand yourself.

The algorithms that are trying to hack us,

they will never be perfect.

There is no such thing

as understanding perfectly everything

or predicting everything.

You don’t need perfect.

You just need to be better than the average human being.

And are we there now?

Or are you worried that we’re about to get there?

I think Tristan might be able to answer

where we are right now better than me

but I guess that if we are not there now

we are approaching very very fast.

I think a good example of this is YouTube.

Relatable example.

You open up that YouTube video your friend sends you

after your lunch break.

You come back to your computer.

And you think okay I know those other times

I end up watching two or three videos

and I end up getting sucked into it.

But this time it’s gonna be really different.

I’m just gonna watch this one video

and then somehow that’s not what happens.

You wake up from a trance three hours later

and you say what the hell just happened

and it’s because you didn’t realize

you had a supercomputer pointed at your brain.

So when you open up that video

you’re activating Google Alphabet’s

billions of dollars of computing power.

And they’ve looked at what has ever gotten

two billion human animals to click on another next video.

And it knows way more about

what’s gonna be the perfect chess move

to play against your mind.

If you think of your mind as a chessboard

and you think you know the perfect move to play,

I’ll just watch this one video.

But you can only see so many moves ahead on the chessboard.

But the computer sees your mind and it says no no no,

I’ve played a billion simulations of this chess game before

on these other human animals watching YouTube

and it’s gonna win.

Think about when Garry Kasparov loses against Deep Blue.

Garry Kasparov can see so many moves ahead on the chessboard

but he can’t see beyond a certain point.

Like a mouse can see so many moves ahead in a maze,

but a human can see way more moves ahead

and then Garry can see even more moves ahead.

But when Garry loses against IBM Deep Blue,

that’s checkmate against humanity for all time

because he was the best human chess player.

So it’s not that we’re completely losing human agency.

You walk into YouTube and it always addicts you

for the rest of your life

and you never leave the screen.

But everywhere you turn on the internet

there’s basically a supercomputer pointed at your brain

playing chess against your mind

and it’s gonna win a lot more often than not.

[Nicholas] Let’s talk about that metaphor

because chess is a game with a winner and a loser.

So you set up the technology fully as an opponent.

But YouTube is also gonna,

I hope,

please gods of YouTube,

recommend this particular video to people

which I hope will be elucidating and illuminating.

So is chess really the right metaphor?

A game with a winner and a loser?

The question is what is the game that’s being played?

If the game being played was,

Hey Nick go meditate in a room for two hours

then come back to me and tell me

what do you really want right now in your life?

And if YouTube is using two billion human animals

to calculate, based on everybody who's ever wanted

to learn how to play ukulele,

they can say here’s the perfect video I have

to teach you how to play ukulele.

That could be great.

The problem is it doesn’t actually care about what you want.

It just cares about what will keep you next on the screen.

And we’ve actually found,

we have an ex-YouTube engineer who works with us,

who’s shown that there’s a systemic bias in YouTube.

So if you airdrop a human animal and they land on,

let’s say a teenage girl and she watches a dieting video,

the thing that works best at keeping that girl

who’s watching a dieting video on YouTube the longest

is to say here’s an anorexia video.

Because that’s between,

here’s more calm stuff and true stuff

and here’s the more insane divisive

outrageous conspiracy intense stuff.

YouTube, if they want to keep your time,

will always steer you down that road.

And so if you airdrop a person on a 9/11 video

about the 9/11 news event,

just a fact-based news video,

the autoplaying video is the Alex Jones Infowars video.

So what happens to this conversation?

What follows us?

Ray Kurzweil?

(laughing)

Yeah I guess it’s gonna really depend.

(laughing)

And the problem is you can also kind of hack these things.

There’s governments who actually can manipulate

the way that the recommendation system works

by throwing thousands of headless browsers,

versions of Firefox to watch one video

and then get it to search for another video

so that we search for Yuval Harari,

we’ve watched that one video,

then we get thousands of computers

to simulate people going from Yuval Harari

to watching The Power of Putin or something like that.

And then that’ll be the top recommended video.

And so as Yuval says,

these systems are kind of out of control

and algorithms are running

where two billion people spend their time.

70% of what people watch on YouTube

is driven by recommendations from the algorithm.

People think what you’re watching on YouTube is a choice.

People are sitting there,

they sit there,

they think and then they choose.

But that’s not true.

70% of what people are watching

is the recommended videos on the right hand side.

Which means 70% of what 1.9 billion users,

that's more than the number of followers of Islam,

about the number of followers of Christianity,

are looking at on YouTube for 60 minutes a day.

That’s the average time people spend on YouTube.

60 minutes and 70% is populated by a computer.

So now the machine is out of control.

Because if you thought 9/11 conspiracy theories

were bad in English, try,

what are 9/11 conspiracies

in Burmese and Sri Lankan and Arabic.

No-one’s looking at that.

And so it’s kind of a digital Frankenstein.

It’s pulling on all these levers

and steering people in all these different directions.
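
(Taking Tristan's figures at face value, all of them quoted above and none independently verified here, the aggregate scale is simple arithmetic:)

```python
# Back-of-the-envelope scale check on the numbers quoted above.
users = 1.9e9            # YouTube users, as stated
minutes_per_day = 60     # average daily watch time, as stated
recommended_share = 0.7  # share of watch time driven by recommendations

steered_hours_per_day = users * minutes_per_day * recommended_share / 60
print(f"{steered_hours_per_day:,.0f} human-hours per day")
# ≈ 1,330,000,000: on these figures, roughly 1.3 billion human-hours
# of attention are allocated by one recommender system every day.
```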

And Yuval we got into this point

by you saying that this scares you for democracy.

It makes you worry whether democracy can survive

or I believe you say,

the phrase you use in your book

is democracy will become a puppet show.

Explain that. [Yuval] Yeah.

If it doesn’t adapt to these new realities

it will become just an emotional puppet show.

If you go on with this illusion

that human choice cannot be hacked,

cannot be manipulated

and we can just trust it completely

and this is the source of all authority

then very soon you end up with an emotional puppet show.

This is one of the greatest dangers that we are facing

and it really is the result of philosophical impoverishment.

Of taking for granted philosophical ideas

from the 18th Century and not updating them

with the findings of science.

It’s very difficult because you go to people,

people don’t want to hear this message

that they are hackable animals.

That their choices,

their desires,

their understanding of who am I?

What are my most authentic aspirations?

This can actually be hacked and manipulated.

To put it briefly,

my amygdala may be working for Putin.

I don’t want to know this.

I don’t want to believe that.

No I am a free agent.

If I am afraid of something this is because of me.

Not because somebody implanted this fear in my mind.

If I choose something this is my free will

and who are you to tell me anything else?

I’m hoping that Putin will soon be working for my amygdala

but that’s a side project I have going.

It seems inevitable from what you wrote in your first book

that we would reach this point

where human minds would be hackable

and where computers and machines and AI

would have a better understanding of us.

But it’s certainly not inevitable

that it would lead us to negative outcomes,

to 9/11 conspiracy theories and to a broken democracy.

Have we reached the point of no return?

How do we avoid the point of no return

if we haven’t reached there?

What are the key decision points along the way?

Nothing is inevitable in that.

The technology itself is going to develop.

You can’t just stop all research in AI

and you can’t stop all research in biotech.

And the two go together.

I think that AI gets too much attention now

and we should put equal emphasis

on what’s happening on the biotech front.

Because in order to hack human beings you need biology.

Some of the most important tools and insights,

they are not coming from computer science.

They’re coming from brain science.

And many of the people who design

all these amazing algorithms,

they have a background in psychology and in brain science.

This is what you’re trying to hack.

But what we should realize is

we can use the technology in many different ways.

For example we’re now using AI

mainly in order to surveil individuals

in the service of corporations and governments

but it can be flipped to the opposite direction.

We can use the same surveillance systems

to monitor the government in the service of individuals.

To monitor for example government officials,

that they are not corrupt.

The technology is willing to do that,

the question is whether we’re willing

to develop the necessary tools to do it.

One of Yuval’s major points here

is that the biotech lets you understand,

by hooking up a sensor to someone,

features about that person

that they won’t know about themselves.

And we’re increasingly reverse-engineering the human animal.

One of the interesting things that I’ve been following

is also the ways you can ascertain those signals

without an invasive sensor.

We were talking about this a second ago.

There’s something called Eulerian Video Magnification

where you point a computer camera at a person’s face

and a human being can’t,

I can’t look at your face and see your heart rate.

My intelligence doesn’t let me see that.

You can see my eyes dilating right?

[Tristan] I can see your eyes dilating–

‘Cause I’m terrified of you.

(laughing)

But if I put a supercomputer behind the camera

I can actually run a mathematical equation

and I can find the micropulses of blood to your face

that I as a human can’t see but the computer can see.

So I can pick up your heart rate.

What does that let me do?

I can pick up your stress level

because heart rate variability gives you your stress level.

There’s a woman named Poppy Crum

who gave a TED Talk this year

about the end of the poker face.

We have this idea that there can be a poker face.

We can actually hide our emotions from other people.

But this talk is about the erosion of that.

That we can point a camera at your eyes

and see when your eyes dilate

which actually detects cognitive strain,

when you’re having a hard time understanding something

or an easy time understanding something.

We can continually adjust this based on your heart rate,

your eye dilation.
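
(The remote heart-rate trick Tristan is describing reduces to a small signal-processing loop: average the skin color of a face region per frame, bandpass-filter that sequence to the human heart-rate band, and read off the dominant frequency. A sketch of that core idea, with a simulated signal standing in for real video frames; all constants are illustrative.)

```python
# Sketch of camera-based pulse detection (the family of techniques that
# includes Eulerian Video Magnification). A real system would average the
# green channel of a face region per frame; here we simulate that signal.
import numpy as np
from scipy.signal import butter, filtfilt

np.random.seed(0)
fps = 30.0                      # camera frame rate
t = np.arange(0, 20, 1 / fps)   # 20 seconds of frames
true_bpm = 72
# Faint 72 BPM ripple in skin brightness, buried in sensor noise.
signal = 0.05 * np.sin(2 * np.pi * (true_bpm / 60) * t) \
       + 0.1 * np.random.randn(t.size)

# Keep only the plausible heart-rate band: 0.7-4.0 Hz (42-240 BPM).
b, a = butter(3, [0.7, 4.0], btype="band", fs=fps)
filtered = filtfilt(b, a, signal)

# The dominant in-band frequency is the pulse estimate.
freqs = np.fft.rfftfreq(filtered.size, d=1 / fps)
spectrum = np.abs(np.fft.rfft(filtered))
band = (freqs >= 0.7) & (freqs <= 4.0)
est_bpm = 60 * freqs[band][np.argmax(spectrum[band])]
print(f"estimated pulse: {est_bpm:.0f} BPM")  # ~72
```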

One of the things with Cambridge Analytica,

which is all about the hacking of Brexit

and Russia and all the US elections,

is the idea that,

if I know your big five personality traits,

if I know Nick Thompson’s personality

through his OCEAN,

openness,

conscientiousness,

extraversion,

agreeableness and neuroticism.

That gives me your personality

and based on your personality

I can tune a political message to be perfect for you.

Now the whole scandal there was that Facebook

let go of this data to be stolen by a researcher

who had people fill in questionnaires to figure out

what are Nick's big five personality traits.

But now there’s a woman named Gloria Mark at UC Irvine

who has done research showing

you can actually get people’s big five personality traits

just by their click patterns alone with 80% accuracy.

So again the end of the poker face,

the end of the hidden parts of your personality.

We’re gonna be able to point AIs at human animals

and figure out more and more signals from them

including their microexpressions,

when you smirk and all these things.

We’ve got face ID cameras on all of these phones.

So now if you have a tight loop

where I can adjust the political messages

in real time to your heart rate and to your eye dilation

and to your political personality,

that’s not a world we want to live in.

It’s a kind of dystopia.

There are many contexts you can use that.

It can be used in class to figure out

that the student isn’t getting the message,

that the student is bored which can be a very good thing.

It can be used by lawyers, like when you negotiate a deal

and if I can read what’s behind

your poker face and you can’t

that’s a tremendous advantage for me.

So it can be done in a diplomatic setting

like two prime ministers are meeting to,

I don’t know,

resolve the Israeli-Palestinian conflict

and one of them has an earbud

and the computer is whispering in his ear

what is the true emotional state.

What’s happening in the brain,

in the mind of the person on the other side of the table.

And what happens when two sides have this?

And you have kind of an arms race

and we have absolutely no idea how to handle these things.

I’ll give a personal example

when I talked about this in Davos.

For me maybe my entire approach to these issues

is shaped by my experience of coming out.

That I realized that I was gay when I was 21

and ever since then I’m haunted by this thought,

what was I doing for the previous five or six years?

How is it possible,

I’m not talking about something small

that you don’t know about yourself.

For everybody, there is something you don't know about yourself.

But how can you possibly not know this about yourself?

And then the next thought is,

a computer,

an algorithm could have told me that when I was 14

so easily just by something as simple

as following the focus of my eyes.

I don’t know,

I walk on the beach or I even watch television and there is,

what was in the 1980s?

Baywatch or something,

and there is a guy in a swimsuit

and there is a girl in a swimsuit

and which way my eyes are going,

it’s as simple as that.

And then I think,

what would my life have been like,

first if I knew when I was 14,

secondly if I got this information from an algorithm.

There is something incredibly deflating for the ego

that this is the source of this deep wisdom about myself?

An algorithm that followed my eye movement?

[Nicholas] And there’s an even creepier element

which you write about in your book,

what if Coca-Cola had figured it out first and

was selling you Coke with shirtless men

when you didn't even know you were gay. [Yuval] Exactly!

Exactly although Coca-Cola versus Pepsi,

Coca-Cola knows this about me

and shows me a commercial with a shirtless man,

Pepsi doesn’t know this about me

because they are not using these sophisticated algorithms.

They go with the normal commercials

with the girl in the bikini.

And naturally enough I buy Coca-Cola

and I don’t even know why.

Next morning when I go to the supermarket

I buy Coca-Cola and I think,

this is my free choice!

I chose Coke!

But no I was hacked.

[Nicholas] And so this is inevitable.

[Tristan] This is the crux of the whole issue.

This is everything we're talking about.

And how do you trust something

that can pull these signals off of you?

If the system is asymmetric,

if you know more about me than I know about myself,

we usually have a name for that in law.

For example when you deal with a lawyer,

you hand over your very personal details to a lawyer

so they can help you.

But then they have this knowledge of the law

and they know about your vulnerable information

so they could exploit you with that.

Imagine a lawyer who took all the personal information

and then sold it to somebody else.

But they’re governed by a different relationship

which is the fiduciary relationship.

They can lose their license

if they don’t actually serve your interests.

It’s similar to a doctor or psychotherapist.

There’s this big question of

how do we hand over information about us

and say I want you to use that to help me.

On whose authority can I guarantee

that you’re going to help me?

There is no moment when we are handing over the information.

With the lawyer there is this formal setting like,

okay I hire you to be my lawyer,

this is my information and we know this.

But I’m just walking down the street,

there is a camera looking at me,

I don’t even know that,

and they are hacking me through that.

So I don’t even know it’s happening.

That’s the most duplicitous part.

We often say it’s like imagine a priest,

if you want to know what Facebook is,

imagine a priest in a confession booth

and they listen to two billion people’s confessions

but they also watch you round your whole day,

what you click on,

which ads,

Coca-Cola or Pepsi,

the shirtless men and the shirtless women,

and all your conversations that you have

with everybody else in your life

’cause they have Facebook Messenger,

they have that data too.

But imagine that this priest in a confession booth,

their entire business model is to sell access

to the confession booth to another party.

So someone else can manipulate you.

Because that’s the one way

this priest makes money in this case.

They don’t make money any other way.

There are two giant entities that will have,

I mean there are a million entities that will have this data

but there’s large corporations,

you mentioned Facebook,

and there will be governments.

Which do you worry about more?

It’s the same.

Once you reach beyond a certain point

it doesn’t matter how you call it.

This is the entity that actually rules.

Whoever has the data.

Whoever has this kind of data.

Even in a setting where you still have a formal government

but this data is in the hands of some corporation

then the corporation if it wants

can decide who wins the next elections.

So it’s not really a matter for choice.

There is choice.

We can design a different political and economic system

in order to prevent this immense concentration

of data and power in the hands

of either government or corporations that use it

without being accountable

and without being transparent about what they are doing.

The message is not okay it’s over,

humankind is in the dustbin of history.

That’s not the message. No that’s not the message.

Eyes have stopped dilating,

let’s keep this going.

(laughing)

The real question is,

we need to get people to understand this is real,

this is happening,

there are things we can do.

And you have midterm elections in a couple of months

so in every debate,

every time a candidate goes to meet the potential voters

in person or on television,

ask them this question.

What is your plan,

what is your take on this issue?

What are you going to do if we are going to elect you?

If they say I don’t know what you’re talking about,

that’s a big problem.

I think the problem is most of them

have no idea what we’re talking about

and one of the issues is

I think policy makers as we’ve seen

are not very educated on these issues.

They’re doing better.

They’re doing so much better this year than last year.

Watching the Senate hearings,

the last hearings with Jack Dorsey and Sheryl Sandberg

versus watching the Zuckerberg hearings

or watching the Colin Stretch hearings,

there’s been improvement.

[Tristan] It’s true.

There’s much more, though.

I think these issues open up a whole space of possibility.

We don’t even know yet the kinds of things

we’re gonna be able to predict.

We’ve mentioned a few examples that we know about

but if you have a secret way of knowing something

about a person by pointing a camera and an AI at them,

why would you publish that?

So there’s lots of things that can be known about us

to manipulate us right now that we don’t even know about.

How do we start to regulate that?

I think the relationship we want to govern is,

when a supercomputer is pointed at you

that relationship needs to be protected

and governed by some terms. [Nicholas] Okay.

So there’s three elements in that relationship.

There’s the supercomputer.

What does it do,

what does it not do.

There’s the dynamic of how it’s pointed.

What are the rules over what I can collect?

What are the rules over what I can’t collect

and what I can store?

And there’s you.

How do you train yourself to act?

How do you train yourself to have self-awareness?

Let’s talk about all three of those areas

maybe starting with the person.

What should the person do in the future

to survive better in this dynamic?

One thing I would say about that

is I think self-awareness is important.

It’s important that people know the thing

we’re talking about and they realize

that we can be hacked but it’s not a solution.

You have millions of years of evolution

that guide your mind to make

certain judgments and conclusions.

A good example of this is if I put on a VR helmet

and now suddenly I’m in a space where there’s a ledge.

I’m at the edge of a cliff.

I consciously know I’m sitting here

in a room with Yuval and Nick.

I know that consciously.

I’ve got this self-awareness.

I know I’m being manipulated.

But if you push me I’m gonna not want to fall right?

‘Cause I have millions of years of evolution that tell me

you are pushing me off of a ledge.

So in the same way you can say,

Dan Ariely makes this joke actually,

the behavioral economist,

that flattery works on us

even if I tell you I’m making it up.

Like Nick I love your jacket right now.

[Nicholas] It’s a great jacket on you. It is.

It’s a really amazing jacket.

I actually picked it out ’cause I knew

from studying your carbon dioxide exhalation yesterday

that you would like this.

Exactly.

(laughing)

We’re manipulating each other now.

The point is that even if you know

that I’m just making that up,

it still actually feels good.

The flattery feels good.

And so it’s important,

I think of this as a new era,

kind of a new Enlightenment

where we have to see ourselves in a very different way.

And that doesn’t mean that’s the whole answer.

It’s just the first step.

We have to all walk around–

So the first step is recognizing

that we’re all vulnerable.

[Tristan] Hackable.

Vulnerable.

But there are differences.

Yuval is way less hackable than I am

because he meditates two hours a day

and doesn’t use a smartphone.

(laughing)

I’m super hackable.

The last one’s probably key.

(laughing)

What are the other things

that a human can do to be less hackable?

You need to get to know yourself as best as you can.

It’s not a perfect solution,

but somebody’s running after you,

you run as fast as you can.

It’s a competition.

Who knows you best in the world?

So when you’re two years old it’s your mother.

Eventually you hope to reach a stage in life

when you know yourself even better than your mother.

And then suddenly you have this corporation

or government running after you,

and they are way past your mother and they are at your back.

This is the critical moment.

They know you better than you know yourself.

So run a little.

Run a little faster.

There are many ways you can run faster,

meaning getting to know yourself a bit better.

Meditation is one way,

there are hundreds of techniques of meditation,

different works for different people.

You can go to therapy,

you can use art,

you can use sport,

whatever works for you.

But it’s now becoming much more important than ever before.

It’s the oldest advice in the book.

Know yourself. Yeah.

But in the past you did not have competition.

If you lived in Ancient Athens

and Socrates came along and said know yourself,

it’s good for you,

and you say nah I’m too busy,

I have this olive grove,

I don’t have time.

Okay you didn’t get to know yourself better

but there was nobody else who was competing with you.

Now you have serious competition.

So you need to get to know yourself better.

This is the first maxim.

Secondly as an individual,

if we talk about what’s happening to society,

you should realize you can’t do much by yourself.

Join an organization.

If you are really concerned about this,

this week join some organization.

50 people who work together are a far more powerful force

than 50 individuals, each of whom is an activist.

It’s good to be an activist,

it’s much better to be a member of an organization.

Then there are other tested and tried methods of politics.

We need to go back to this messy thing

of making political regulations and choices.

Politics is about power

and this is where power is right now.

[Tristan] Out of that,

I think there’s a temptation to say,

okay how can we protect ourselves.

And when this conversation shifts into,

how do I keep my smartphone from hacking me,

you get things like,

oh I’ll set my phone to grayscale,

oh I’ll turn off notifications.

But what that misses is that

you live inside of a social fabric.

When we walk outside my life depends

on the quality of other people’s thoughts,

beliefs and lives.

So if everyone around me believes a conspiracy theory

because YouTube is taking 1.9 billion human animals

and tilting the playing field so everyone watches Infowars,

by the way YouTube has driven 15 billion recommendations

of Alex Jones’ Infowars and that’s recommendations

and then two billion views.

If only one in a thousand people

believed those 2 billion views,

[Yuval] that’s still two million? Two million.

Mathematics is not as strong as…

(laughing)

We’re philosophers.

And so if that’s two million people

that’s still two million new conspiracy theorists.

So if everyone else is walking around in the world

you don’t get to do that.

If you say hey I’m a kid,

I’m a teenager and I don’t wanna care

about the number of likes I get

so I’m gonna stop using Snapchat or Instagram.

I don’t want to be hacked

for my self-worth in terms of likes.

If I’m a teenager and I’m using Snapchat or Instagram

and I don’t want to be hacked for my self-worth

in terms of the number of likes I get,

I can say I don’t wanna use those things

but I still live in a social fabric

where all my other sexual opportunities,

social opportunities,

homework transmission where people talk about that stuff.

If they only use Instagram

I have to participate in that social fabric.

So I think we have to elevate the conversation from

how do I make sure I’m not hacked,

it’s not just an individual conversation.

We want society to not be hacked.

Which goes to the political point

in how do we politically mobilize

as a group to change the whole industry.

For me I think about the tech industry.

Alright so that’s step one in this three step question.

What can individuals do,

know yourself,

make society more resilient,

make society less able to be hacked.

What about the transmission

between the supercomputer and the human?

What are the rules and how should we think about

how to limit the ability of the supercomputer to hack you?

That’s a big one. That’s a big question.

That’s why we’re here!

In essence I think that we need to come to terms

with the fact that we can’t prevent it completely.

It’s not because of the AI, it’s because of the biology.

It’s just the type of animals that we are

and the type of knowledge that now we have

about the human body,

about the human brain.

We have reached a point when this is really inevitable.

You don’t even need a biometric sensor,

you can just use a camera in order to tell

what is my blood pressure,

what’s happening now,

and through that what’s happening to me emotionally.

I would say that we need to

reconceptualize completely our world

and this is why I began by saying

that we suffer from philosophical impoverishment.

That we are still running on the ideas of the 18th Century.

Which were good for two or three centuries,

which were very good but which are simply not adequate

to understanding what’s happening right now.

Which is why I also think that

with all the talk about the job market

and what should I study today that will be relevant

to the job market in twenty,

thirty years.

I think philosophy is one of the best bets maybe.

I sometimes joke,

my wife studied philosophy and dance in college.

Which at the time seemed like the two worst professions

’cause you can’t really get a job in either.

But now they’re like the last two things

that will get replaced by robots.

I think Yuval is right and I think

this conversation usually makes people conclude

that there’s nothing about human choice

or the human mind’s feelings that’s worth respecting.

And I don’t think that is the point.

I think the point is we need a new kind of philosophy

that acknowledges a certain kind of thinking

or cognitive process or conceptual process

or social process,

that we want that.

For example James Fishkin is a professor at Stanford

who’s done work on deliberative democracy

and shown that if you get a random sample of people

in a hotel room for two days

and you have experts come in

and brief them about a bunch of things

they change their minds about issues,

they go from being polarized to less polarized,

they can come to more agreement.

And there’s a process there that you can put in a bin

and say that’s a social cognitive sense-making process

that we might want to be sampling from that one

as opposed to an alienated lonely individual

who’s been shown photos of their friends

having fun without them all day

and then we’re hitting them with Russian ads.

We probably don’t want to be

sampling a signal from that person to be thinking about,

not that we don’t want it from that person,

but we don’t want that process

to be the basis of how we make collective decisions.
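
Tristan’s idea of sampling from the deliberative process rather than from an alienated, ad-targeted moment can be made concrete: weight opinion signals by the context in which they were collected. A toy sketch; the contexts and weights are invented purely for illustration:

```python
# Weight an opinion signal by the setting it was gathered in.
CONTEXT_WEIGHT = {
    "deliberative-panel": 1.0,       # briefed, face-to-face, two days
    "town-hall": 0.7,
    "survey": 0.4,
    "late-night-feed-scroll": 0.05,  # lonely, ad-targeted moment
}

def aggregate_opinion(signals):
    """signals: list of (value in [-1, 1], context) pairs.
    Returns a context-weighted average, or None if no usable signal."""
    pairs = [(v * CONTEXT_WEIGHT.get(ctx, 0.0), CONTEXT_WEIGHT.get(ctx, 0.0))
             for v, ctx in signals]
    total_weight = sum(w for _, w in pairs)
    if total_weight == 0:
        return None
    return sum(x for x, _ in pairs) / total_weight
```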

So I think we’re still stuck in a mind-body meat suit.

We’re not getting out of it.

So we better learn how do we use it in a way

that brings out the higher angels of our nature.

And the more reflective parts of ourselves.

So I think what technology designers need to do

is ask that question.

A good example just to make it practical,

let’s take YouTube again.

What’s the difference for a teenager?

Let’s take an example: you watch a ukulele video.

It’s a very common thing on YouTube.

There’s lots of ukulele videos.

How to play ukulele.

What’s going on in that moment

when it recommends other ukulele videos?

There’s actually a value if someone wants to learn

how to play the ukulele.

But the computer doesn’t know that.

It’s just recommending more ukulele videos.

But if it really knew that about you,

instead of just saying

here’s infinite more ukulele videos to watch,

it might say here’s your ten friends

who know how to play ukulele,

friends you didn’t know could play,

and you can go and hang out with them.

It could put those choices at the top of life’s menu.
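
A minimal sketch of what putting choices at the top of life’s menu could look like. Everything here, the User model and the recommend function, is hypothetical and for illustration only; it is not YouTube’s actual system:

```python
from dataclasses import dataclass, field

@dataclass
class User:
    name: str
    skills: set = field(default_factory=set)
    friends: list = field(default_factory=list)

def recommend(user, topic, candidate_videos):
    """Rank embodied, social options above 'more of the same'."""
    # Friends who already have the skill the user seems to be learning.
    mentors = [f for f in user.friends if topic in f.skills]
    offline_options = [f"Practice {topic} with {f.name}" for f in mentors]
    # Infinite-scroll material goes below the embodied choices.
    more_videos = [v for v in candidate_videos if topic in v]
    return offline_options + more_videos

# Example: Alice watched a ukulele tutorial; her friend Bob already plays.
bob = User("Bob", skills={"ukulele"})
alice = User("Alice", friends=[bob])
print(recommend(alice, "ukulele", ["ukulele tutorial #47", "guitar basics"]))
# -> ['Practice ukulele with Bob', 'ukulele tutorial #47']
```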

The problem is when you watch,

like a teenager watches that dieting video,

the computer doesn’t know that the thing you’re really after

in that moment isn’t that you want to be anorexic.

It just knows that people who watch those

tend to fall for these anorexia videos.

It can’t get at this underlying value,

this thing that people want.

You can even think about it that we just need,

I mean the system in itself can do amazing things for us,

we just need to turn it around

so that it serves our interests, whatever those are,

and not the interests of the corporation or the government.

Actually to build,

okay now that we realize that our brains can be hacked,

we need an antivirus for the brain.

Just as we have one for the computer.

And it can work on the basis of the same technology.

Let’s say you have an AI sidekick

who monitors you all the time,

24 hours a day,

what you write,

what you’ve seen,

everything.

But this AI is serving you.

It has this fiduciary responsibility.

And it gets to know your weaknesses

and by knowing your weaknesses it can protect you

against other agents trying to hack you

and to exploit your weaknesses.

So if you have a weakness for funny cat videos

and you spend an enormous amount of time,

an inordinate amount of time just watching,

you know it’s not very good for you

but you just can’t stop yourself clicking,

then the AI will intervene

and whenever a funny cat video tries to pop up, the AI says

no no no no.

And it will just show maybe a message

that somebody just tried to hack you.

Just like you get these messages that

somebody just tried to infect your computer with a virus.

The hardest thing for us is to admit

our own weaknesses and biases and it can go all ways.

If you have a bias against Trump or against Trump supporters

then you very easily believe any story,

however farfetched and ridiculous.

So I don’t know,

Trump thinks that the world is flat.

Trump is in favor of killing all the Muslims.

You would click on that.

This is your bias.

And the AI will know that so it’s completely neutral,

it doesn’t serve any entity out there.

It just gets to know your weaknesses and biases

and tries to protect you against them.
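
To make the antivirus-for-the-brain analogy concrete, here is a toy sketch of such a sidekick. The weakness profile and content tags are assumptions for illustration; as Yuval says, a real sidekick would have to learn them from your behaviour:

```python
# The sidekick's fiduciary duty runs to the user, not to any platform.
WEAKNESS_PROFILE = {
    "funny-cats": "compulsive clicking",
    "outrage-politics": "confirmation bias",
}

def screen_item(item_tags, stated_goal=None):
    """Pass, warn about, or block an incoming piece of content."""
    for tag in item_tags:
        if tag in WEAKNESS_PROFILE:
            # Mirror antivirus UX: name the attempt instead of hiding it.
            return ("block",
                    f"Somebody just tried to hack you via {WEAKNESS_PROFILE[tag]}.")
    if stated_goal and stated_goal not in item_tags:
        return ("warn", f"This doesn't serve your stated goal: {stated_goal}.")
    return ("pass", None)

# Example: a cat video arrives while you're trying to learn ukulele.
print(screen_item(["funny-cats"], stated_goal="ukulele"))
# -> ('block', 'Somebody just tried to hack you via compulsive clicking.')
```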

[Nicholas] But how does it learn

that it’s a weakness and a bias and not something you like?

How come it knows when you click the ukulele video,

that’s good,

and when you click the Trump–

[Tristan] This is where I think we need

a richer philosophical framework because if you have that

then you can make that distinction.

So if a teenager’s sitting there in that moment,

watching the dieting video

then they’re shown the anorexia video,

imagine instead of a 22-year-old male engineer

who went to Stanford,

computer scientist thinking about

what can I show them that’s the perfect thing?

You had an 80-year-old child developmental psychologist

who studied under the best child developmental psychologists

and thought about in those kinds of moments

the thing that’s usually going on for a teenager aged 13

is a feeling of insecurity,

identity development,

experimentation and what would be best for them?

So the way we think about this,

the whole framework of humane technology,

is that this is the thing:

we have to hold up the mirror to ourselves

to understand our vulnerabilities first,

and you design starting from a view

of what we’re vulnerable to.

I think from a practical perspective,

I totally agree with this idea of an AI sidekick.

But if we’re imagining,

we live in the scary reality

that we’re talking about right now.

It’s not like this is some sci-fi future,

this is the actual state.

So if we’re actually thinking about how do we navigate

to an actual state of affairs that we want,

we probably don’t want an AI sidekick

to be this kind of optional thing

that some people who are rich can afford

and other people can’t.

We probably want it to be baked in

to the way technology works in the first place

so that it does have a fiduciary responsibility

to our best subtle compassionate vulnerable interests.

So we will have government-sponsored AI sidekicks?

We will have corporations that sell us AI sidekicks

but subsidize them so it’s not just the affluent

that have really good AI sidekicks?

This is a business model conversation but…

One thing is to change the way that,

if you go to university or college

and learn computer science

then an integral part of the course

is to learn about ethics.

About the ethics of coding.

I think it’s extremely irresponsible

that you can finish,

you can have a degree in computer science,

in coding and you can design all these algorithms

that now shape people’s lives

and you just don’t have any background

in thinking ethically and philosophically

about what you’re doing.

You’re just thinking in terms of pure technicality

or in economic terms.

So this is one thing which kind of bakes it

into the cake in the first place.

Let me ask you something that has come up a couple times

I’ve been wondering about.

So when you were giving the ukulele example,

you talked about well maybe you should

go see ten friends who play ukulele,

you should visit them offline.

And in your book you say that one of the crucial moments

for Facebook will come when an engineer realizes

that the thing that is better for the person

and for community is for them to leave their computer.

And then what will Facebook do with that?

So it does seem from a moral perspective that a platform,

if it realizes it would be better for you to go offline,

they should encourage you to do that.

But then they will lose their money

and they will be out-competed.

[Yuval] Mm-hmm. Yep.

So how do you actually get to the point where the algorithm,

the platform, pushes somebody in that direction?

This is where this business model conversation comes in.

It’s so important.

And also why Apple and Google’s role is so important.

Because they are before the business model of all these apps

that want to steal your time and maximize attention.

Apple doesn’t need to–

Google’s before and after and during

but it is also before. But anyway.

Specifically the Android case.

So Android and iOS,

not to make this too technical

or an industries-focused conversation,

but they should theoretically,

that layer,

you have just the device.

Who should that be serving?

Whose best interest are they serving?

Do they want to make the apps as successful as possible?

And make them addictive, maximizing loneliness

and alienation and social comparison,

all that stuff?

Or should that layer be a fiduciary,

as the AI sidekick to our deepest interests,

to our physical embodied lives,

to our physical embodied communities?

We can’t escape this instrument.

It turns out that being inside of community

and having face-to-face contact is,

there’s a reason why solitary confinement

is the worst punishment we give human beings.

And we have technology that’s basically maximizing isolation

because it needs to maximize

that time we spend on the screen.

So I think one question is

how can Apple and Google move their entire businesses

to be about embodied local fiduciary

responsibility to society.

And this is what we think of as humane technology.

That’s the direction that it can go.

Facebook could also change its business model

to be more about payments and people transacting

based on exchanging things,

which is something they’re looking into

with the blockchain stuff

that they’re theoretically working on.

And also Messenger payments.

If they move from an advertising-based

business model to micropayments,

they could actually shift the design of some of those things

and there could be whole teams of engineers on News Feed

that are just thinking about what’s best for society

and then people would still ask these questions of,

well who’s Facebook to say what’s good for society?

But you can’t get out of that situation

because they do shape what two billion human animals

will think and feel every day.

So this gets me to one of the things

I most want to hear your thoughts on which is,

Apple and Google have both done this

to some degree in the last year

and Facebook has,

I believe every executive at every tech company has said

“time well spent” at some point in the last year.

We’ve had a huge conversation about it

and people have bought 26 trillion of these books.

Do you actually think that we are

heading in the right direction at this moment

because change is happening and people are thinking?

Or do you feel like we’re still

going in the wrong direction?

[Yuval] I think that in the tech world

we are going in the right direction in the sense

that people are realizing the stakes.

People are realizing the immense power

that they have in their hands.

I’m talking about the people in the tech world.

They are realizing the influence they have on politics,

on society,

and most of them react I think not in the best way possible

but certainly they react in a responsible way.

In understanding yes we have this huge impact on the world.

We didn’t plan that maybe but this is happening

and we need to think very carefully what we do with that.

They still don’t know what to do with that.

Nobody really knows.

But at least the first step has been accomplished

of realizing what is happening

and taking some responsibility.

The place where we see a very negative development

is on the global level because all this talk so far

has really been internal,

Silicon Valley,

California USA talk.

But things are happening in other countries.

All the talk we’ve had so far

relied on what’s happening in

liberal democracies and in free markets.

In some countries maybe you have got no choice whatsoever.

You just have to share all your information and have to do

what the government-sponsored algorithm tells you to do.

So it’s a completely different conversation.

And another complication

is the AI arms race.

Five years,

even two years ago,

there was no such thing.

And now it’s maybe the number one priority

in many places around the world,

that there is an arms race going on in AI

and our country needs to win this arms race.

And when you enter an arms race situation,

then it becomes very quickly a race to the bottom.

Because you very often hear this,

okay it’s a bad idea to do this,

to develop that but they’re doing it

and it gives them some advantage

and we can’t stay behind.

We’re the good guys!

We don’t want to do it!

But we can’t allow the bad guys to be ahead of us

so we must do it first.

And you ask the other people,

they will say exactly the same thing.

They don’t want to do it but they have to.

Yeah and this is an extremely dangerous development

in the last two years.

It’s a multipolar trap.

No-one wants to build slaughterbot drones

but if I think you might be doing it

even though I don’t want to I have to build it

and you build it and we both hold them.

Even at a deeper level,

if you want to build some ethics

into your slaughterbot drones

but it’ll slow you down by one week

and in one week you double the intelligence.

This is actually one of the things I think

we talked about when we first met

was the ethics of speed,

of clockrate.

We’re in essence competing on

who can go faster to make this stuff

but faster means more likely to be dangerous,

less likely to be safe so it’s basically

we’re racing as fast as possible

to create the things we should probably be going

as slow as possible to create.

And I think that much like

high-frequency trading in the financial markets,

if we had this open-ended thing of

who can beat who by trading a microsecond faster.

What that turns into,

this has been well documented,

is people blowing up whole mountains

so they can lay these copper cables

so they can trade a microsecond faster.

You’re not even competing based on

an Adam Smith version of what we value or something.

We’re competing based on who can blow up mountains

and make transactions faster.

When you add high-frequency trading to

who can hack and program human beings faster

and who’s more effective at manipulating

culture wars across the world,

that just becomes this race to the bottom

of the brain stem of total chaos.

I think we have to say how do we slow this down

and create a sensible pace

and I think that’s also about humane technology.

Just as with the child developmental psychologist,

ask someone like a psychologist:

what are the clockrates of human decision making

where we actually tend to make good thoughtful choices?

We probably don’t want a whole society revved-up

to making a hundred choices per hour

about something that really matters.

So what is the right clockrate?

I think we have to actually have technology

steer us towards those kinds of decision-making processes.
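
The “right clockrate” maps onto a familiar engineering pattern, rate limiting, applied to choice prompts rather than network requests. A toy sketch; the five-minute reflection interval is a placeholder assumption:

```python
import time

class DecisionThrottle:
    """Enforce a minimum reflection gap between consequential prompts,
    instead of soliciting a hundred choices per hour."""

    def __init__(self, min_gap_seconds=300):
        self.min_gap = min_gap_seconds
        self.last_decision = None

    def may_prompt(self):
        """True if enough reflective time has passed to ask for a choice."""
        now = time.monotonic()
        if self.last_decision is None or now - self.last_decision >= self.min_gap:
            self.last_decision = now
            return True
        return False  # defer: queue the prompt for a calmer moment
```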

[Nicholas] So back to the original question,

you’re somewhat optimistic about some of the small things

that are happening in this very small place

but deeply pessimistic about

the complete obliteration of humanity?

I think Yuval’s point is right.

There’s a question about US tech companies,

which are bigger than many governments.

Facebook controls 2.2 billion people’s thoughts.

Mark Zuckerberg is editor-in-chief

of 2.2 billion people’s thoughts.

But then there’s also world governments

or national governments

that are governed by a different set of rules.

I think the tech companies are

very very slowly waking up to this.

And so far with the time well spent stuff for example,

it’s let’s help people,

because they’re vulnerable to how much time they spend,

set a limit on how much time they spend.

But that doesn’t tackle any of these bigger issues

about how you can program the thoughts of a democracy,

how mental health and alienation

can be rampant among teenagers leading to

doubling the rates of teen suicide

for girls in the last eight years.

We’re going to have to have a much more comprehensive view

and restructuring of the tech industry

to think about what’s good for people.

There’s gonna be an uncomfortable transition.

I use this metaphor it’s like climate change when…

There’s certain moments in history

when an economy is propped up by something we don’t want.

The biggest example of this is slavery in the 1800s.

There is a point at which slavery

was propping up the entire world economy.

You couldn’t just say we don’t wanna do this anymore,

let’s just suck it out of the economy.

The whole economy would collapse if you did that.

But the British Empire when they decided to abolish slavery,

they had to give up 2% of their GDP every year for 60 years.

And they were able to make that transition

over a transition period.

I’m not equating advertising

or programming human beings to slavery.

I’m not.

But there’s a similar structure of the entire economy now,

if you look at the stock market,

a huge chunk of the value is driven by

these advertising systems built on programming human animals.

If we wanted to suck out that model,

the advertising model,

we actually can’t afford that transition.

But there could be some awkward years

where you’re basically in that long transition path.

I think in this moment we have to do it much faster

than we’ve done it in other situations

because the threats are more urgent.

Yuval do you agree that that’s one of the things

we have to think about as we think about trying to

fix the world system over the next decades?

It’s one of the things but again

the problem of the world,

of humanity is not just the advertising model.

The basic tools were designed,

you had the brightest people in the world

10 or 20 years ago cracking this problem

of how do I get people to click on ads.

Some of the smartest people ever,

this was their job.

To solve this problem.

And they solved it.

And then the methods that they initially used

to sell us underwear and sunglasses and vacations

in the Caribbean and things like that

were hijacked and weaponized

and are now used to sell us all kinds of things

including political opinions and entire ideologies.

It’s now no longer under the control

of the tech giants in Silicon Valley

that pioneered these methods.

These methods are now out there.

So even if you get Google and Facebook to

completely give it up, the cat is out of the bag.

People already know how to do it.

There is an arms race in this arena.

So yes we need to figure out this advertising business,

it’s very important but it won’t solve the human problem.

Now the only really effective way to do it

is on the global level and for that

we need global cooperation and regulating AI,

regulating the development of AI and of biotechnology

and we are of course heading in the opposite direction,

of global cooperation.

I agree, in that there’s this notion of game theory.

Sure Facebook and Google could do it

but that doesn’t matter because the cat’s out of the bag

and governments are gonna do it

and other tech companies are gonna do it

and Russia’s tech infrastructure’s gonna do it.

So how do you stop it from happening?

Not to equate this to slavery, but in a similar way,

when the British Empire decided to abolish slavery

and subtract their economy’s dependence on it,

they actually were concerned that if we do this

France’s economy is still gonna be powered by slavery

and they’re gonna soar way past us.

So from a competition perspective we can’t do this.

But the way they got there was by turning it into

a universal global human rights issue

that took a longer time but I think like Yuval says

this is a global conversation

about human nature and human freedom,

if there is such a thing,

but at least kinds of human freedom

that we want to preserve.

That I think is something that is actually

in everyone’s interest, though there’s not necessarily

equal capacity to achieve that end

because governments are very powerful

but we’re gonna move in that direction

by having a global conversation about it.

Let’s end this with giving some advice

to someone who is watching this video.

They’ve just watched an Alex Jones video

and the YouTube algorithm has changed

and it sent ’em here and they somehow got to this point.

They’re 18 years old,

they want to devote their life to making sure

that the dynamic between machines and humans

does not become exploitative and becomes one in which

we continue to live our rich fulfilled lives.

What should they do or what advice could you give them?

I would say get to know yourself much better

and have as little illusions about yourself as possible.

If a desire pops into your mind don’t just say,

well this is my free will,

I chose this therefore it’s good,

I should do it.

Explore much deeper.

Secondly as I said join an organization.

There is very little you can do

just as an individual by yourself.

These are the two most important pieces of advice I could give

an individual who is watching us now.

[Tristan] And I think your earlier suggestion of

understanding that the philosophy of

simple rational human choice is outdated.

We have to move from an 18th Century model

of how human beings work

to a 21st Century model of how human beings work.

Speaking personally our work is trying to coordinate

a global movement towards fixing some of these issues

around humane technology and I think like Yuval says

you can’t do it alone.

It’s not a let me turn my phone grayscale

or let me petition my Congress member by myself.

This is a global movement.

The good news is no-one kind of wants the dystopic end point

of the stuff that we’re talking about.

It’s not like someone says no I’m really excited

about this dystopia.

I just wanna keep doing what we’re doing!

No-one wants that so it’s really a matter of,

can we all unify in the thing that we do want

and it’s somewhere in this vicinity

of what we’re talking about

and no-one has to capture the flag but we have to move away

from the direction that we’re going.

And I think everyone should be on the same page on that.

We started this conversation by talking about

whether we’re optimistic and I am certainly optimistic

that we have covered some of the hardest questions

facing humanity and that you have offered brilliant insights

into them so thank you for talking

and thank you for being here.

Thank you Tristan,

thank you Yuval.

Thank you. Thanks.


Interesting quote from “21 Lessons for the 21st Century”

“It’s very dangerous to be redundant. The future of the masses will then depend on the goodwill of a small elite. Maybe there is goodwill for a few decades. But in a time of crisis—like climate catastrophe—it would be very tempting and easy to toss the superfluous people overboard. In countries such as France and New Zealand, with a long tradition of liberal beliefs and welfare-state practices, perhaps the elite will go on taking care of the masses even when”

Interesting quote from “21 Lessons for the 21st Century”

“Nevertheless, ancient hunter-gatherer bands were still more egalitarian than any subsequent human society, because they had very little property. Property is a prerequisite for long-term inequality. Following the Agricultural Revolution, property multiplied and with it inequality. As humans gained ownership of land, animals, plants, and tools, rigid hierarchical societies emerged, in which small elites monopolized most wealth and power for generation after generation. Humans came to accept this arrangement as natural and even divinely ordained. Hierarchy was not just the norm but also the ideal. How could there be order without a clear hierarchy between aristocrats and commoners, between men and women, or between parents and children? Priests, philosophers, and poets all over the world patiently explained that just as in the human body not all members are equal—the feet must obey the head—so also in human society equality would bring nothing but chaos. In the late modern era, however, equality became an ideal in almost all human societies. This was partly due to the rise of the new ideologies of communism and liberalism. But it was also due to the Industrial Revolution, which made the masses more important than ever before. Industrial economies relied on masses of common workers, while industrial armies relied on masses of common soldiers.”

Interesting quote from “21 Lessons for the 21st Century”

“If we are not careful, we will end up with downgraded humans misusing upgraded computers to wreak havoc on themselves and on the world. Digital dictatorships are not the only danger awaiting us. 

Alongside liberty, the liberal order has also set great store in the value of equality. Liberalism has always cherished political equality, and it gradually came to realize that economic equality is almost as important. For without a social safety net and a modicum of economic equality, liberty is meaningless. But just as Big Data algorithms might extinguish liberty, they might simultaneously create the most unequal societies that ever existed. 

All wealth and power might be concentrated in the hands of a tiny elite, while most people will suffer not from exploitation but from something far worse—irrelevance.”

Interesting quote from “21 Lessons for the 21st Century”

“Democracy in its present form cannot survive the merger of biotech and infotech. Either democracy will successfully reinvent itself in a radically new form or humans will come to live in “digital dictatorships.” 

This will not be a return to the days of Hitler and Stalin. Digital dictatorships will be as different from Nazi Germany as Nazi Germany was different from ancien régime France. 
Louis XIV was a centralizing autocrat, but he did not have the technology to build a modern totalitarian state. He suffered no opposition to his rule, yet in the absence of radios, telephones, and trains, he had little control over the day-to-day lives of peasants in remote Breton villages, or even of townspeople in the heart of Paris. He had neither the will nor the ability to establish a mass party, a countrywide youth movement, or a national education system.30 It was the new technologies of the twentieth century that gave Hitler both the motivation and the power to do such things. We cannot predict the motivations and powers of digital dictatorships in 2084, but it is very unlikely that they will just copy Hitler and Stalin. Those gearing themselves up to refight the battles of the 1930s might be caught off guard by an attack from a totally different direction.”
