
tv   Cade Metz Genius Makers  CSPAN  June 1, 2021 6:30am-8:00am EDT

6:30 am
c-span2. >> today we continue our focus on ai innovation with a fresh exploration of ai history, as well as the choices that we face as digital citizens. before ai began to change our world, for better or for worse, a small group of researchers spent decades trying to build neural networks, often in the face of enormous skepticism. from talking digital assistants to self driving cars and automated
6:31 am
healthcare, they were pulled into a world they did not expect, along with the rest of us, by google, facebook and the wider tech industry. the reporter cade metz uncovers this tale. i think it is an often dramatic telling of this history, with some of the untold stories of the key people, communities and companies shaping the future of ai: national interest, shareholder value and the pursuit of tech innovation; how these decisions are made and who is benefiting. it is my pleasure to welcome cade. he brings rich experience as a senior staff writer and the u.s.
6:32 am
editor of the register. he is now based in the san francisco bay area, where he is a technology correspondent with the new york times, covering ai, driverless cars and other areas. here are his five numbers: nine years covering ai. $44 million that google paid. 37. eighty photos of a black woman tagged as a gorilla by google. and one book telling the whole story. welcome, cade. our moderator for today's program is an editor for the wall street
6:33 am
journal who has written about ai across multiple industries. she has written for a few other publications as well. so glad you're here with us. >> so nice to see you again. it has been 16, 18 months since we last hung out, so it is nice to reconnect. this is a book that is fundamentally about people, people who were toiling in obscurity, maligned and mocked for some of their ideas.
6:34 am
what captivated you the most about this cast of characters? >> there were two moments i think that really inspired the book. the first was at the four seasons hotel in seoul. a machine had been built to play this game of go, often called the eastern version of chess, by a lab that was based in london and had been bought by google. most people, including ai experts, thought a machine that could beat the best players at that game was still decades away. yet here it was, beating one of the very top players, at the four seasons hotel.
6:35 am
there was this moment when the people that built this machine, who had spent years cultivating the ideas behind it, could not understand what the machine was doing. they were confused and caught unaware by their own machine. these individual people, including demis, who as the leader of this lab became a focus of this book. i knew that demis would be a character. then i met geoff, who is a generation older than demis, and he became the central thread. he, in his own way, is a fascinating character. he had worked on many of the same ideas as demis and demis's colleagues for decades.
6:36 am
for anyone who knows geoff, he is a fascinating, engaging, strange character who has endured serious hardship over the years, among other things. i thought to myself, if i can just get geoff onto the page, maybe this book will work. >> i actually wanted to ask you about a moment with geoff, who is kind of the connector of this group. about halfway through, in the chapter that deals with medical uses of ai, geoff lets out a small sigh. he says early diagnosis is not a trivial problem; we can do better, so why not let machines help us? this was especially important to him because of his
6:37 am
life experience with pancreatic cancer. that moment comes back at the end of the book. you see him immersed in this. he has also, at that moment -- >> i love that you pinpoint those. you see the emotion that geoff shows, that side that he lets out: the moment when he receives the award, he does not talk about himself. he talks about his wife. those are the kinds of moments i tried to capture. they show the personal struggle that geoff faced to bring these ideas to the fore. it is just part of his personal struggle. certainly the most powerful is at the end, when he talks about his wife.
6:38 am
this is someone who had two wives die of cancer while experiencing his own physical hardship, as you learn in the first sentence of the book: he is someone who literally does not sit down because of a back problem. this plays into the story as well. with this extreme back problem he has to make these pilgrimages across north america, and sometimes across the atlantic, to realize these ideas. those are the types of moments that i feel show not only what he faced, but what people like him often face when they are trying to realize their work. these are not just technical problems; there are personal obstacles that have to be overcome as well. >> i felt the humanity.
6:39 am
some sections of the book just jump off the page. some of these people are really loving, but sometimes struggle. there are also the jokes, almost a defense mechanism. others were a little bit more aggressive or adamant about their ideas, in ways that captured a little bit of their insecurity. it reminds everybody there is a humanity there. you talked to many other people for the book. how did their lives, dreams, opportunities and obstacles affect the work? >> what i love is that it was different for each person. we are talking about essentially a single idea.
6:40 am
it dates back to the 50s, and the book is about all of these different people trying to push the idea forward. the way they dealt with it was so different, because they are humans, and we are all different. the personality comes out in so many different ways. you are right, sometimes it is humor. geoff is an incredibly funny person as well, and i like how that humor would attract those around him. as he is struggling to get this idea out, he needs the help of others. the humor is not only entertaining; it's a way of convincing people to work on this project. as i got to know geoff and, more importantly, as i interviewed his students who knew him well, you saw him as this magnet, often
6:41 am
drawing people to him through that humor and helping him realize this idea. other people pushed this idea forward in other ways. some people are so adamant about their idea, and so upset by the obstacles, that they really lash out. you see that as well. that can work in some ways and it can backfire in others. what i wanted to do was show all of that, and also how, as these folks pushed the idea forward, it would behave in ways they did not expect and surprise even them. they reached these moments where
6:42 am
they do not necessarily know what to do. that is part of it as well. >> what surprised people like that about the way their technology caught on? >> the great example, and it is almost inevitable, is geoff. born in london, he eventually made his way to the united states and started to explore this idea, first on the west coast and then as a professor at carnegie mellon in pittsburgh. there is this moment in the mid-80s where geoff and his wife at the time realize that he cannot do this ai work without taking money from ronald reagan's defense department. that is not something that he wants to do at the height of the iran-contra
6:43 am
affair. he and his wife have very firm beliefs on this. they do not want this work to be used by the military, so they actually leave the country. he believes in this stance to the point where he goes to canada and sets up shop at the university of toronto as a professor. it would have real implications for the whole field: years on, when this idea finally started to work, most of the talent was centered around geoff and others in canada, not in the u.s. but as that idea starts to work and geoff is pulled into google, in short order google starts to work with the defense department. there are protests at the company. some people believe this is absolutely the right thing for google to do. there are others who are really upset by it, who did not expect to be working
6:44 am
for a defense contractor. there is a moment where geoff himself struggles with this. he was against it, but he was not sure how much he should speak out. he ended up lobbying one of the founders to push back on this, but he was not as public with his concerns as some others, and there was a moment where an employee criticized him for that. again, you feel the humanity of the situation. we can all relate on some level to having our own beliefs on one hand and the motivations of the company we work for on the other. how do you balance those? it is a hard thing. >> i want to go back to the topic of the military. silicon valley has a long history there. can you talk a little bit about
6:45 am
that? it goes way back to the inception. >> it really does, and i think this is important for silicon valley to remember. we are at this moment where we often see news stories asking if silicon valley is opposed to working with the military. there is certainly a portion of the valley that believes that. but the valley was built on military money. google was built on defense department funding. the internet came out of a dod project. one of the founders of hp served in nixon's cabinet. there is this mixed history. silicon valley is largely a liberal place, and there have been times in recent years where there have been protests against that kind of work. but that is only one side of the equation. i think that it is a good way of
6:46 am
thinking about this technology that i write about, not only when it comes to military uses, but all sorts of uses. there are dual uses for this technology. it can be used for good, and it can be used in many ways that you may not expect. a lot of it is about point of view, struggling to figure out what is right and what is wrong. it is not always black and white. >> that is really important here. going back to that first match that you mentioned earlier in our conversation, in korea, where there was this national excitement: go is like chess and some of the other games where ai are trained.
6:47 am
they are wargames. they are violent games. >> it is interesting. they are the games of choice because they are hard. it is as simple as that. they are chosen for technical reasons, but they are also chosen for historical reasons. it is about the conquest. that is a real thing, and you can see this in the book: demis himself is fundamentally a game player. games are about competition. they are often metaphors for war. the intensity of the ambition is palpable. you could feel in korea that he wanted to win. that is really what drives him, as well as the technological
6:48 am
aspects of this. the other thing is that games are something we all relate to. i think there is a reason people paid such attention to the machine winning that event. all of us play games as children. we can understand on some level the way games work: there is a winner and a loser. the idea of a machine beating us is not only interesting, it is scary. you talk about the excitement in korea. it was palpable. what i often say, because it is true, is that it was one of the most amazing weeks i've ever experienced. you could feel the excitement of an entire country focused on this. you could also feel the sadness when the korean, very human
6:49 am
player was getting beat. you could feel the sadness and that fear and that concern. it really brought out those emotions. that is why it was such an inflection point. there is a dark side to this as well as the light. >> one of the things that i found interesting about the people story in the book: there was this quest to improve humanity, almost like upgrading an outdated pc. where does that come from? >> my father was an engineer. we talked about this a lot. it is very easy, when you are focused on the technology, to see it as somehow separate from humanity, an attitude that you see in silicon valley often.
6:50 am
the technology is sort of boxed off from everything else happening. when you do that, it is easy to see only the positive things that it will bring, and not see all of the other consequences and effects. i think what we really need to do, and what i wanted to do with the second part of the book, is show that any technology is bigger than itself. anyone who has lived through the past four years can recognize that, and those were relatively simple technologies; facebook is not a complicated technology. these technologies that i am writing about in the book are far more complex, in part because, and this gets back to your first question and the way i
6:51 am
answered it, we do not really understand how these things operate in some cases. they literally learn skills on their own by analyzing data, more data than we can wrap our heads around. as those technologies get more powerful and more pervasive, their effect on our world will be greater than that of the relatively simple technology that we use today. >> based on what you learned writing this book, do you think machines will have greater than human-level general intelligence within the next 25 years? please explain what you think about that. >> the book goes into this. what i do want to do in the book is make a clear distinction between the technologies of
6:52 am
today and this idea that we will have a machine that can do anything that your brain can do. sometimes it is called agi, artificial general intelligence. that is something that we do not know how to get to. we have two labs now that say this is their stated mission, including openai in san francisco, and they, including demis, don't know how to get there. it is really hard to say when we might get there. this is certainly not a near-term thing. today we are talking about a machine that can recognize what you say, and recognize faces and other objects in photos.
6:53 am
that image recognition can help us build self driving cars, or other forms of robotics that respond to some situations, or tools for health and healthcare, an area that you know well. that is different from a machine that can really reason the way we as humans can. i don't like to make a call on when that will happen, because it is impossible; it is in the future, and people will argue about when it will happen. it is not going to happen soon. >> the deepmind lab: some of those folks actually went to grad school with me. i think that they would be among the first to tell you that we do not understand intelligence,
6:54 am
real intelligence. how do you simulate something that you do not understand? >> this is why i always liked working with you. this is the key thing to understand, and it may not be obvious when you read stories about ai: we don't understand how the brain works. re-creating the brain, almost from the get-go, is a daunting task. the brain is such a mysterious thing. what you do see is that it provides a certain inspiration to the field. he was inspired as a student at cambridge by the idea that you could re-create the brain, really nurturing this idea of a neural network.
6:55 am
he is driven by this notion that he can take the way the brain works and apply it to machines, and he is now taking it to another level. he does have a team of neuroscientists who study the brain and the way that the brain works, and how that may be re-created. some people see this as a circle: as you better understand the way the brain works, that can help you build machines that work like it; as the machines improve and you figure out ways that they can mimic human behavior, that can help you better understand the brain. that holds sometimes, but not always. >> beyond deep learning based ai,
6:56 am
modern recommendation systems in social media also incorporate an enormous amount of ai. can we talk about the impact of that ai? >> a problem with the ai field is that you have to define your terms. the term ai is applied to everything. what i meant by simpler is that the algorithms choosing what is in your social media feed are relatively simple compared to these neural network style systems. the reason those are more complicated is that they learn their task in this really intense way. the example i used earlier in the conversation: you take thousands of cat photos and you feed them into a neural network,
6:57 am
which is a mathematical system analyzing those photos and looking for patterns. it defines for itself what a cat looks like. that is one thing with cat photos. now apply it to medical images: it learns the same way when it comes to medical images, but we do not see the flaws in those systems. we do not know what mistakes they will make. these machines learn from everything we have posted to the internet, and all sorts of other stuff. we all know that the internet can be biased. there is bias against people of color. there is hate speech, needless to say, on the
6:58 am
internet. these giant systems, which are learning natural language now, are starting to be used in chatbots, and they are learning those biases and those flaws, and, by the way, other things that we may not see. even the creators of this technology may not see them, because of the way the systems are built. >> on the bias side, the technology may be escalating or reproducing bias at scale. about two thirds of the way into the book, we come to the chapters that deal with these very issues, starting with the interview meg gave to bloomberg, if i'm not mistaken, warning about the problem.
6:59 am
can you talk about that? >> absolutely. that quote is interesting on many levels, one of which is that it came so early. another scene in that chapter, and this was alluded to at the top of this call, is that moment when google identifies photos posted by a software engineer in brooklyn as gorillas. that is 2015, and we are still struggling to deal with that issue, years after meg said that in the pages of bloomberg, years after that incident. even though you have people like meg, and so many others, who have not only noticed this problem but called attention to it, the fundamental thing to realize is that it is endemic to the technology. the technology has to be built
7:00 am
in this way if it is going to work the way these large companies want it to. it requires enormous amounts of data. what that means is you cannot just remove those flaws. you can try to start over, but
7:01 am
how do you get data that doesn't have those flaws in it? that's such a difficult thing, and it exemplifies this moment we are going through now, not only at google and microsoft but the tech industry as a whole, which is struggling to deal with that conundrum. >> a couple of years ago i had a conversation with kate crawford, and she mentioned that we have this bias to think of machines as being infallible; what they say goes, because how could a machine make a mistake? but i think what you pointed out is really thoughtful, because in a way the machines are just really an extension of us, right? and we come with our own biases, all of us. are people starting to think more about that since the 2015
7:02 am
quote, and even before that, when others were speaking out about it? >> the good news is they are thinking about it more, but there are other forces at work. what is so interesting to me is that you do have this moment in my book where meg and timnit called attention to this. they were both hired by google and, led by meg, they created the ethical ai team at google, which is designed to fight this. recently, since my book was put to bed, they have both been ousted from google, and i've written about this in the pages of the times. as you have on the one side people calling attention to this and a lot of people taking notice, inside companies as well as out, you have the other corporate forces in various ways pushing against it, right? these companies have
7:03 am
their own aims. companies are complicated and driven by the profit motive among other things, and that often comes into conflict with these other efforts, even when the industry thinks that it will not. you know, you could describe these people as activists, but these are all scientists. i think that you make a great analogy. in the face of pushback from all of these people around her, she is saying: you need to pay attention to this. thankfully, she is not alone. >> you know, change is hard. the other thing is, the companies are designed in certain ways. that is another thing that you see in the book.
7:04 am
these companies develop almost individual personalities. they respond to situations differently, in particular ways that are formed by their histories. they promote themselves and tell the world that what they are doing is positive. even if on one level they acknowledge the problems of the world, they don't want those problems pinned on them. that is the heart of the clash at google, where researchers tried to publish a paper that called attention to these problems. that is part of the issue there. again, this is not just a google problem. this is something that all of these companies will have to deal with. >> this is an audience question.
7:05 am
[inaudible] >> it is very, very hard. it is endemic to the technology. these natural language systems literally spend months analyzing text from the internet: thousands of books and thousands of articles and blog posts and everything. it works because of volume. it works because you want to throw as much at it as you can. you cannot just weed out all the stuff that may be problematic. the industry as a whole is still struggling with how to deal with that. right now you have to put band-aids on it, put filters on. that is not always ideal. people have talked about whether we can develop synthetic data, so to
7:06 am
speak. that is still an open question as well. technically, it is such a hard problem. >> there is a scene in the book where a woman is trying to understand why the data does not work. can you walk us through that scene? >> clarifai is a startup in new york. they were trying to build an image recognition system, a content moderation system. she herself is a black woman from ottawa. to her, the problem was obvious: the company had taken all of these stock photos, which had been floating around the
7:07 am
internet for ages, and used them to train the system. the majority of the photos were of white men. so when she sees that, it is a problem. it is not necessarily obvious to others. that is part of what we are dealing with here. the question of diversity has obviously been a problem for a while, and there is an added level to that problem here. me, as a white man, i have a certain perspective on the world, and that will inform the data that i choose. that is one of the reasons you need a diverse population working on this, so they can see these issues the way she could see them. it was completely obvious to her.
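the kind of skew cade describes in that scene can be made visible with a very small amount of code. this is a minimal sketch, not anything clarifai actually ran; the `composition` function, the subject labels, and the counts are all hypothetical, invented only to illustrate counting the makeup of a training set:

```python
from collections import Counter

def composition(metadata, attribute):
    """Return each value's share of the dataset for a given attribute.

    `metadata` is a list of dicts, one per training example.
    """
    counts = Counter(example[attribute] for example in metadata)
    total = sum(counts.values())
    return {value: count / total for value, count in counts.items()}

# Hypothetical stock-photo metadata, skewed the way the scene describes.
photos = (
    [{"subject": "white man"}] * 70
    + [{"subject": "white woman"}] * 20
    + [{"subject": "black woman"}] * 5
    + [{"subject": "black man"}] * 5
)

shares = composition(photos, "subject")
print(shares["white man"])    # 0.7
print(shares["black woman"])  # 0.05
```

a model trained on a set like this sees more than ten times as many examples of one group as another, which is exactly the kind of flaw that becomes obvious once someone thinks to count.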
7:08 am
>> you mentioned that -- there was a different scenario there. can you talk about that? two other people got let go in the wake of that. >> it is true. this is the project that i mentioned earlier with geoff. google started working with the department of defense on image recognition for drone footage. that can lead to a lot of uses, including surveillance and autonomous weapons, eventually. that was a real concern to people. i think that there are a lot of echoes here with the situation involving bias. so many of the people who
7:09 am
protested that, many of whom, you are right, are women, are no longer with the company. this is the way the personalities sometimes exert themselves. at google, at least in the early years, employees were encouraged to speak their minds and push back when they wanted to push back. that is one of the reasons you see these things bubble over into the public sphere there in a way they have not at other companies. people have really pushed back about this, and the company ended up pushing out a lot of people who protested. there is a pattern here in this respect, even though it is a very different situation technically.
7:10 am
>> an audience question here related to government and ai. governments are using ai for surveillance and defense tactics. what should be the government's role? >> yeah, i do think these are issues that we have to deal with as a society as a whole. that means technology companies, maybe it means individuals, but it means government as well. we are starting to see government at least wake up to these ideas. even the dod has laid out ethical guidelines for the use of this technology. but we need to keep thinking about this. i need to keep writing about it, and you do, too: keep looking at what is really happening. it's easy to say, and we've seen this from government agencies,
7:11 am
we are thinking about it, we have a framework in place. but how much will it really do? when push comes to shove, is it really going to affect things? these are all questions that we have to deal with, and they will only become more important. >> how do you audit an ai system when, as you mentioned earlier, it is hard to understand what goes in and the answers that come out? >> there is a lot of disagreement on this. part of the problem is that we do not have the data that we need to really audit it. ultimately, at this point, what it is about is really testing the system: see where it works and where it does not. that is not always the way
7:12 am
internet technology has worked; it has been more about getting it out into the world and patching it after the fact. with the way these systems are built now, you have to really test them, see where they go wrong, and fix those flaws. that is a hard thing to do, but that is really what needs to be done nowadays. >> what is the importance of -- and do you see this growing in the future? >> i think it is interesting. we have these giant networks that train on all of this internet data. can we develop tools that allow us to really understand what is learned? again, that is a really interesting area of research, and you see in the industry that a lot
7:13 am
of people are working on it. at the same time, these networks are getting bigger and bigger and bigger, taking in more and more data, and it is harder to understand what it is that they are learning. fundamentally, these machines are learning at a scale that we as humans cannot. it is just a reality. these machines are powerful. they can learn from more data than we could ever learn from, and they can learn skills to a level where you could never hardcode all their behavior as an engineer. i think that, although interesting, it is not something that will pay off any time soon, if ever. we really need to realize that. >> you mentioned the way we
7:14 am
learn. a baby will learn words, or how to handle something with her hands, or how to walk, even falling over along the way. what about that? >> you are exactly right. machines have been superior to us in certain ways for a long time, and we keep developing new ways in which they are superior. but the machine is not good at so many things that we are still good at. a machine can analyze thousands of cat photos and pinpoint patterns that we could never define on our own, but a child can pick things up in a moment. it is a great point. >> human intelligence also has a
7:15 am
lot to do with emotions: what we learn, how we learn it, how fast we acquire that knowledge or forget it. we depend on human emotions. an audience question: a drive to win. are there artificial machine equivalents to emotion, whether intended or not? and then i will add my own question: would that help the quest for artificial intelligence? >> what i often say is that we tend to see emotions in machines. they will exhibit a little piece of behavior, and that will elicit something in us that reminds us of what we see in humans. we typically project those emotions onto the machine.
7:16 am
machines do not feel that, so to speak, among their many other flaws. there are efforts to re-create that, but that is also hard to do. what i will say, though, is that i think we need to understand the way that these machines are affected by our own emotions. you should not build technology for technology's sake. we need to think about how it affects us emotionally, how it affects us historically. there are so many things to consider when it comes to building these machines. would it help if they had emotions? i don't know. >> how has your relationship with technology changed?
7:17 am
>> what i try to bring to my reporting, and to the book as well, is a healthy skepticism and objectivity.
7:18 am
as i said to you earlier, my father was an engineer at ibm, and one of the things he worked on was the universal product code, the bar code that's on all your groceries, something that everybody can relate to. he had these amazing stories about the creation of the technology. it was technically interesting how they did this, but then he had a great story about when they actually deployed it and something happened that they didn't expect. they put this in grocery stores and there were literally protests on the sidewalks, because people thought this was the sign of the beast come true from the bible, the book of revelation. so i always ask: does the person who is talking to me now have the full picture? do they need to talk to someone else and see what their
7:19 am
perspective is? it is about constantly widening your net to new people who understand new things about the technology, and also about how that technology will affect the world. >> we talked about facebook earlier. a lot of the technology on that platform and others is about recommendations. we have become these big bubbles where we don't talk to people who don't share the same values. how are we amplifying that, and what are the consequences for the future? >> it is another great point. you do see this in our daily lives when you use these services. they are designed to give you what you want, whether it's a social network or a chat bot.
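the "give you what you want" dynamic cade describes can be sketched in a few lines. this is a toy model, not any real platform's algorithm; the `recommend` function, the catalog, and the topics are all hypothetical, but they show how suggesting only what resembles past clicks narrows the feed:

```python
def recommend(history, catalog):
    """Toy recommender: suggest only items sharing a topic with past clicks."""
    seen_topics = {item["topic"] for item in history}
    return [item for item in catalog if item["topic"] in seen_topics]

catalog = [
    {"id": 1, "topic": "sports"},
    {"id": 2, "topic": "sports"},
    {"id": 3, "topic": "politics"},
    {"id": 4, "topic": "science"},
]

# One click on a sports story, and the feed becomes all sports.
history = [catalog[0]]
feed = recommend(history, catalog)
print([item["topic"] for item in feed])  # ['sports', 'sports']
```

real systems are far more sophisticated, but the feedback loop has the same shape: the feed optimizes for what you already engaged with, so the bubble reinforces itself.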
7:20 am
i wrote another story during quarantine about these chat bots that use the ai systems we have talked about at length here. it is a powerful thing at this moment, when we need interaction, and people are starting to use them. what the system does, the reason people respond to it, is that it gives people what they want. it tells them positive things. but we need both positive reinforcement and the negative. we need to be taken out of those bubbles. how is technology going to do that, whether it's a social network or a chat bot? how do you get people to use a chat bot that, like a good sibling, is going to
7:21 am
point out your flaws, as my sisters did, and where you need to be better? that is an important part of our lives, whether it comes from a sibling or a therapist. you do not need someone only telling you the good things. you don't want people only creating that bubble for you. how do we step out of that? >> are there enough people working on that problem, to give you a balanced outlook rather than a very personalized one? >> i guess people are at least recognizing the problem. as i wrote about the chat bots, for instance, i talked to experts in all sorts of fields: in therapy, in technology. sherry turkle, who is well known in this field, is adamant about this.
7:22 am
these systems are only reinforcing what you already think. they only give you the positive. we need to step back and think about it. people are at least calling attention to it. >> for people in quarantine, has that experience of being isolated, with technology as part of the solution but not the entire story, changed the focus areas that you talk about with people? what has been the effect of this? >> the effect is huge. i think about it a lot, and i think we all need to think about this. i think about it as i raise my two daughters. you rely on technology more and more. there are some things that are
incredibly positive. i have a 13-year-old daughter who has actually developed some of her relationships through facetime. she will live on facetime with her cousins on another coast. that has been a real positive thing. you can see it. but in the long term, this can also drag us down. you can rely on technology as a crutch in a way that maybe we should not. it can be easy to stay at home and do zoom like this. it is easier as a reporter to get on zoom. but the better stories come, the better questions come, you find these things when you get out into the world, and not just as a journalist.
those are the things that i think we all need. it may be easier to say, you know, things are working well for everybody at home, let's just keep it that way. maybe that is not the best thing. i am just hoping we can get back to a world, you know, where i am meeting people on the street and others are going back to the office, for all of these very basic human reasons we are talking about. >> how did you come to choose the main characters in the book? >> well, what i eventually realized was there was this common thread with geoff hinton. there is this tiny circle around him. what i have been fascinated by is this idea of the neural network and its various stages.
it was so strange, even, you know, in this ai field. not a lot of people believed in it. that circle was teeny tiny. that makes for a really good story on two levels. it's a fundamental story: someone who keeps believing in an idea despite all the skepticism. and you have this tiny group whose paths would cross each other in these really surprising ways. there were moments during this reporting when i realized demis hassabis came out of this program at university college london, this blend of neuroscience and ai, founded by geoff hinton. and there were all of these moments that i discovered in my
reporting where geoff was there. we talked about the moment when he and two students really showed what neural networks could do for image recognition. geoff, two years earlier, had been instrumental in making this idea work for speech recognition, in a completely different part of the world, at microsoft. there's a great story about him traveling by train, because he does not sit down, to make this work. and on top of all of that, this tiny group suddenly became enormously valuable when the idea started to work. you see this at the opening of the book. he literally auctions his services to the highest bidder, and it set the price for the talent. you had this tiny group of people and each, you know, they
are people. they are interesting in their own particular ways. and then there is certainly a demand, and they move into these companies. that sort of became the center of the book: this tiny group that moved into the industry. that will leave out a lot of interesting and important people, but that is part of any story that you do, any book that you write. >> let me back up. geoff was certainly the hub of this network. i had a chance to sit in at one of their workshops. one of the things that i noticed at the time, in 2012, was that it was all
guys. part of the group was not there; i do not know why. how does that trickle down to now? >> well, you walk into a room like that -- and we were actually working together when you wrote that story -- and you can see it. and you can see it historically. all of these people that we are talking about, the people who were instrumental, the people in that room when you were visiting that meeting, are the people who built these technologies. a tiny group, and they built it. that is a fundamental problem that is there as the technology starts to work. and then you have people, similar to your experience, walking into a conference and seeing hundreds of people, you
see this, too, who are there for a lecture. she realizes that there is no one who looks like her. out of hundreds of people, she counted five, and they were all men. this was in barcelona. this is a global community that we are talking about, and this issue is in many ways global. >> i was doing an interview with one of the first employees of facebook. she said that she came to technology and found out -- is that still playing out today? >> it absolutely is playing out. you see it time and again with
the situation. what i think is positive, though, is that people are more willing to call it out. what you and i saw firsthand was that it was often so hard to convince people to call it out. there were consequences, and there still are, for people who are willing to say the obvious. you know, on some level we have seen some progress, some people willing to stand up and say we need to think about this. that is so hard to do. we are starting to see that. we also need to listen when that happens. that can be hard, too. you have to be willing to put
yourself in uncomfortable situations. as a reporter and a white man, i need to listen when those moments come up, when people are critical of the way i'm doing things or the way i'm building a story. that is one thing that i have learned. you may have a certain way of doing things, but you have to be willing to be challenged on that, even if there are really good reasons why you do things and why you build a technology or you build a story. you have to be willing to step back, over and over and over again, and say there's something i need to think about. >> how do you want people to remember your book? >> what i wanted to do, you know -- and there came a moment when i thought, wow, this is a really bad idea, it is never going to happen -- was to
build a book that told the definitive story at this moment in time, what has happened, you know, on so many different levels. that means roping in so many of the things we have talked about: the development of the idea and all of these areas where it started to work, this single idea driving it, area after area after area. i wanted to have it read like a novel and show all of these questions that it has raised. the bias question, the autonomous weapons question. the big question we have not talked about is the disinformation question. that is one of the other huge problems we are going to face. these systems can generate images, videos, blog posts, tweets, as well as conversations that look like the real thing.
once the machines perfect that, once we build a machine that can do that 100% of the time, that changes the way we look at the world. and then there are all the geopolitical issues. this is a global thing, not an american thing. a lot of the talent was outside of the u.s., and the u.s. companies jumped on it. by the way, at the beginning of the book, when geoff auctions his services, there is a chinese player right there. they were right there. all of these geopolitical issues to consider. i wanted to rope all of that into one book and kind of level-set us for everything to come. this is what has happened. these are the questions that we
are facing, including the agi question. how do we think about that? hopefully, it is a good read for people who want a human story, but, hopefully, i can layer the bigger ideas on top of all of that. >> yeah. outside of the u.s. -- it has been very u.s. focused here. how is it playing out in china and asia and in europe? can you talk about that? >> absolutely. one of the things that i am fascinated by, and maybe this is surprising to some people: geoff hinton was an academic, a professor at the university of toronto. when he moved into google, one of the stipulations was that he wanted to keep his professorship and he wanted to keep behaving like an academic. you saw this
with one of his old colleagues who followed him later. he would not have been able to publish otherwise. ....
and the currency becomes who has the data, right, to train the systems, and who has the processing power, and who has the talent. in some ways, a lot of people think china has the advantage there. they have a huge population; that means they are going to create more data, and in theory they are going to create more talented ai researchers who are going to build the systems, and that's -- that's really what's
important there. so you have this new landscape that you have to think about differently. this is not a 1950s cold war landscape. we should not think about this in terms of, you know, export controls or, you know, sealing off our borders to certain immigrants. it's not that kind of world. we in the u.s. are relying on chinese talent. if we bar our borders and we are not exporting anything to china, what is that going to do? they have access to all the research papers, right? so we need to think about this world differently.
certainly we would worry about espionage from other countries. there are concerns, particularly when it comes to, you know, military applications and the like, but this is not the world of absolutes that people might have thought our world was in the past. >> going back to the misinformation point a moment ago, before it played out here, it played out elsewhere, in the philippines, for example. are there things happening outside our borders that are powered by ai that we should keep an eye on in the future? >> absolutely. the prime example is in china, where this technology, a neural network which can identify
faces in photos, among so many other things, is being used to target an ethnic minority. that's the type of thing that really raises concerns in this area. this same type of technology is being deployed here in the states, and, you know, luckily we are starting to raise questions about it. china is an extreme example, where you see it play out in extreme ways. as we see it play out outside our borders, we need to think about how we are going to deploy these things, and we are starting to. you see this in my book, where there starts to be a rollout of face recognition technology, and really because of people like timnit gebru these companies start to wake up to it, and some of them start to respond and at least say, you know, we need to think about legislation, you
know, as usual the companies only go so far, but at least they're publicly recognizing these types of issues. i think you're exactly right. we live, you know, we live in a world where this technology is developed everywhere and deployed everywhere. we can't think about what we are doing solely within our borders. >> we are also doing that here, right? there's a company that doesn't come up in the book, but its technology has been used for years to track undocumented immigrants. >> that's exactly right. you might call that technology simpler. it's not the sort of ai that we are talking about, but it's often about, you know, the way the things are deployed, and as the technology becomes more powerful, those very issues become bigger and bigger. >> what can private citizens do if they are worried about the use of ai by the local police
department or the local government? >> well, i think one of the lessons here, and i said this earlier, is that these are problems that we all need to deal with, right? the companies aren't going to deal with it on their own; we know that. the governments aren't going to deal with it on their own. and particularly, you know, if we have situations where the corporations are pushing out people who were calling attention to this, that becomes our problem. we need to individually, you know, speak up about this. journalists need to write about it, but also people need to call attention to it. >> often the problems that ai causes affect groups that are disenfranchised or marginalized
already, and so the problems that it causes may seem to be at arm's length, far away, someone else's problem. why should people like you and me who are privileged care about that? >> well, again, we all have to step outside of our bubbles, right, and realize it's easy in silicon valley to forget everything else that is going on. it's easy to only look at what is happening in your bubble. we need to remember that these are not technologies just for the privileged; these are technologies that are creeping into the daily lives of everyone, sometimes in unexpected ways. we absolutely have to keep an eye on that.
>> what i really appreciated about the book is that in a way it was a history of this community but also a story of us. >> absolutely. what i often say, you know, is that any good story is about people, right? and that includes stories about technology. technology writing doesn't always work that way, but fundamentally what i wanted to do was tell a story about people and then, if i could, build the bigger ideas on top of that. >> if you were going to write a sequel, what would you think about, a new technology, a new community that is going to push us forward even more? >> well, we will see. my inclination is always to do something completely different,
so, you know, i may surprise you. i don't know. one of the things i've been thinking about is quantum computing, which is another fascinating area that is coming to the fore. but this ai story is only really just beginning, and we are still understanding how the systems work and how they are being deployed. i think there's so much left to cover, and certainly i will keep covering it at the times. >> there are a lot of high-flying stories in your book, some quite literally. [laughter] >> can you talk about that particular story and then we can move on to your word after that? >> that's a good place to end, because we go back to geoff
hinton, right? you do learn in the first part of the book that he has this back problem and literally does not sit down. as a teenager he was lifting a space heater for his mother and slipped a disk. by his late 50s, the disk would slip so often that it was hard for him to function. i love all the stories from his students where they are trying to defend a thesis and he's either lying on a desk or standing by the wall. he does not drive, and he does not fly, because the commercial airlines make you sit during takeoff and landing. and this is another moment that i was just floored by as it came up in my research: you know, geoff had moved to google, and google is thinking about acquiring deep mind, and they want geoff to go to
london to vet the company and help them decide if they are going to spend what ended up being $650 million on this company. he says, well, i don't fly. alan eustace, the head of engineering, devised this incredible contraption, inspired by his own feats as a skydiver, that basically strapped geoff in place on a makeshift bed on a gulfstream jet, and this is how geoff made it to london and ended up walking into the offices of deep mind. >> that's pretty wild. who told you the story? >> well, you know, one of the great things and the very hard things about our job is that, you know, you do it piece by piece. you get a little hint that this has happened, and you get a little bit more. and once you have enough, then you can go to the source and say
i've got this much, you might as well tell me the rest. that's the way the book often worked. >> to follow up on deep mind here. one of your numbers was $44 million, what google paid for geoff and two students. a crazy number at the time, but, you know, in retrospect, not even 2 years later, i think, that would seem like a bargain-basement deal given how much was paid for deep mind. why the huge explosion in the price for this talent? one researcher you talked about ended up being paid like an nfl quarterback. >> right. the $44 million figure that geoff was acquired for, that was the hardest thing to pinpoint in the book. i was worried that i wouldn't be
able to back that up. that's how much they paid. that ended up being $44 million for 3 people. that's a whole lot of money, but then the prices would explode, and it was basically supply and demand. it was a tiny group of people who specialize in this field, and we can argue about whether or not the companies behaved rationally, but that meant the prices went sky high. everyone wanted their own geoff hinton, and you see this most interestingly with microsoft, where facebook had yann lecun and google had hinton and microsoft wanted their own. this incredible guy that we talked about, qi lu, one of their top executives, ends up going to montreal to try to
get their own. you had this frenzy, and when that happens, there are these moments. it may happen in other areas soon, where the price just suddenly skyrockets for the talent. >> is that still true for the engineers looking for a job who are on this call? >> yes, because it bleeds into other areas, self-driving cars, for instance, and self-flying drones, which are becoming big. it's definitely an area that people who are looking for a career should -- should look at. these are skills that are in demand, and it is a change in the way we think about technology and the way technology is built.
>> anything that you left out of the book because there wasn't space for it that you had wanted in? >> i always say everything is in there. there's one great anecdote that i cannot share. sometimes that's just the way it works. one thing -- one anecdote that i wished was in there, which i think is interesting because it parallels the whole story, involves fei-fei li, who was a professor at stanford -- exactly. she's in the book. one anecdote that people don't realize, that i think is really -- is really powerful: in order for neural networks to work, you needed the data and the processing power.
imagenet is a giant collection of labeled photos used to train systems to recognize everything in the photos, and that was the brainchild of fei-fei. i was talking to her at one point and ended up talking to her adviser as well, and there was a moment between them where fei-fei says, i want to do imagenet. she wanted to bet her career on this idea. he told her that was not the way to go, that wasn't the thing to bet her career on, and she did it anyway, right? that's another piece that had to be in place for this type of thing to work. >> why isn't that in the book? it's such a powerful thing that drove all the men's success. >> like i said, it's complicated, but i wished it was in there, right? and sometimes you make choices for narrative and -- and for
flow, and so you make the hard decisions. but, you know, again, we need to step back and think about those decisions we've made, right? whoever we are, we need to rethink them and say, you know, did i make a mistake there? does that, you know, does that need to go in next time? >> how quickly did you write the book, and what have you learned from writing it? >> i wrote it over the course of about 2 and a half, 3 years. i made the mistake of agreeing to write a book and agreeing to join "the new york times" in the same week. never ever do that. that's a mistake. you end up making your life far too complicated.
[laughter] >> what i learned is that the next time i'm taking a book leave and concentrating on the book, rather than trying to do it in the mornings and late at night and while standing in line at the grocery store, you know, thumbing through google docs on my phone. that's not the way to go. >> so you learned more about the actual -- taking time off to write your project rather than doing it as part of your full-time job? >> well, i mean, i think you also -- in writing a book, you learn more about your field, and i think you also learn more about the ways that we tell stories and -- and the way you give people a real idea of what's going on. it's a different skill, you know, from -- from, you know, daily reporting and, you know,
i've learned a lot about things, and like i said, you know, you also make, you know, make choices that, you know, you regret, and you try to learn from that in the future. >> i think that's a perfect segue into the computer history museum's one word initiative. so you were asked to write down a single word that would give a young person some advice, and i know that it's a very personal word to you. can you show us what that word is and tell us the story behind it? >> absolutely. this is going to go back to my father again, who was an engineer at ibm but an amateur philosopher. my word is truth. and there are some symbols below, which i will explain. my father -- he loved the philosopher mortimer adler, who wrote this book called six great ideas.
it was about, you know, many of the ideas that you should live your life by. my father always talked about -- and he would echo this book -- that there were three ideas that you should strive for in your daily life: truth, goodness and beauty. and he -- he believed this to the point where he put symbols for each of these ideas on keepsake boxes made of oak for his grandchildren. he believed this is the way you should live, and truth was the most important. he would represent truth with the theorem. truth was so important because it informed what we understood to be goodness and beauty. to understand those concepts, you need to understand what is true, and truth is something that you
do have to struggle with every day. i certainly struggle with it as a reporter. it's about constantly talking to new people and reevaluating what you've heard the previous day, taking in what you're hearing from everyone, and sitting down at the end of the day and thinking, what do i believe is true given everything that i've learned, given everyone i've talked to. it's a personal decision, but it's informed by everyone you speak to on a daily basis. >> that's great. and you dedicate the book to your dad. >> absolutely. and he -- he died a few years ago, so what i often say is, the one person who would enjoy the book the most isn't around to read it, and that makes me sad. but otherwise i really do think this embodies the book in many ways,
a lot of the things he stood for. he was an engineer, but he believed in the bigger, bigger ideas. and he often raised those concerns that we've talked about, about how technology can affect our world in ways that we might not expect. >> i think we will leave it at that. i don't want to cut into more of this awesome conversation here, so i'm going to -- excuse me -- i'm going to turn it back. thank you so much for doing this. it's been a pleasure to talk about the book, and i hope it's very successful. >> thank you for doing this. a lot of fun. >> i will turn it over to you. >> thank you so much, daniela and cade. this was a terrific conversation, absolutely on point as we think about how technology is redefining what it means to be human and changing how we interact with one another and, frankly, the planet. chm has established its mission to be
the premier institution and trusted source for preserving and communicating the history of computing and its impact on the human experience. and while we serve a diverse array of audiences, the key point here is that live events are a vital part of public education and of conversations that help all of us become better citizens in a modern world where technology is ever present. these conversations are preserved and available as full-length video on our youtube channel, and key takeaways have been parsed and will be available on our website. they are preserved as a permanent part of our collection for publication, research, exhibits and ongoing education. so in quick summary, it's a beautiful thing to be able to think about history, because everything has it. and as one of our previous guests, a noted historian, pointed out, it's an imperfect but indispensable
guide to the future and the only mirror and measure that we have for the present. thank you, cade and daniela. it was a great conversation about where the world is today in ai, and we will be back in a few years to reassess. thanks so much. bye, bye, everyone.

