
tv   Samuel Woolley The Reality Game  CSPAN  February 29, 2020 2:31pm-3:16pm EST

2:31 pm
husband. >> i just have to add we're out of time, dick was the other opponent of his being chosen. >> yeah. >> and argued strenuously against it for many of the same reasons karl did. and the president finally leaned on him down in texas on the hot porch of the ranch house until, you know, dick was sweating and the president was sweating and finally dick said, okay, i'll do it. >> yeah. and you know what? he never held my bluntness against me. he could not have been a better colleague and, more importantly, a better mentor. >> i'm terribly sorry that we're out of time. i hope you'll agree with me this was a terrific opportunity to hear from -- [applause] >> and now on c-span2's booktv, more television for serious readers. [inaudible conversations] >> i'd like to welcome you all to city lights booksellers and publishers. we are thrilled to have samuel
2:32 pm
woolley with us here tonight celebrating a very, very important new book. it's called "the reality game: how the next wave of technology will break the truth." it is from our friends at public affairs books. dr. woolley is a writer and researcher specializing in the study of a.i., emergent technology, politics, persuasion and social media. he is an assistant professor in the school of journalism and program director for computational propaganda research at the center for media engagement at the university of texas at austin. professor woolley founded and directed the digital intelligence lab at the institute for the future which is a 50-year-old think tank based in the heart of silicon valley. he also directed the research team at the computational propaganda project at the university of oxford. he has written on political manipulation of technology for a variety of publications
2:33 pm
including wired, the atlantic monthly, vice, tech crunch, the guardian and many, many others. his research has been featured in publications such as "the new york times," "the washington post," "the wall street journal." he's also made appearances on the today show, "60 minutes" and "frontline". his work has been presented to members of nato, the u.s. congress and the u.k. parliament. it is such a great honor to have him here with us tonight doing this incredible and very important research. please give him a warm welcome. [applause] >> hi, everyone. it's great to be here. every time i hear my bio, it sounds kind of fake. it's been crazy. this is the last talk on my tour, and i'm really happy to be ending it here in san francisco, specifically at city lights. thank you to everyone here at the store for having me. i couldn't think of a better place to end this tour and to talk about really what is a book on democracy, at the end of the day,
2:34 pm
and the ways in which we reimagine and rebuild democracy in the technological age. people assume when they talk to me about my work that i am a computer scientist, and that's actually not true. nothing could be further from the truth. for a long time i kind of thought that maybe i should, you know, try to play the game and be like, yeah, i took a few classes, i know a little bit about html. but at the end of the day, i'm the kind of person that studies what i study by talking to people. i spend time in places, i spend time with people, and i go deep. i go deep on subjects. and so for the better part of the last decade, i've been going deep on the subject of what i call computational propaganda. and it's a fancy term for the ways in which automation and computer code, algorithms and things like that get used to manipulate public opinion. what we've seen in the last four or five years, 2016 during the
2:35 pm
u.s. election, during the brexit election, in india with recent problems caused arguably by whatsapp that have led to offline violence, we've seen social media become used as a tool for manipulation, as a tool for disinformation. a lot has changed, right? in the early 2000s we had a perspective that social media was going to be something that would be the savior of democracy in many ways, and that was best shown through google's kind of catchphrase of do no evil, it was also showcased by a lot of the work talking about digital utopia and sort of cyber libertarianism. that's not where we are now. but we're not lost. everything's not lost yet. this book is not just a book about how screwed up everything is and how scary the world is. it's actually a book about solutions. every single chapter in this book ends with a solution.
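the mechanics behind that definition of computational propaganda can be made concrete with a small sketch. this is purely illustrative -- the field names and thresholds below are hypothetical, not drawn from woolley's research or any real platform api -- but it shows the kind of crude signal-counting heuristic that automated-account detection builds on:

```python
# toy heuristic for flagging a possibly automated amplifier account.
# all field names and cutoffs here are invented for illustration.

def likely_bot(account: dict) -> bool:
    """count a few crude automation signals; two or more is suspicious."""
    score = 0
    # humans rarely sustain hundreds of posts per day for weeks on end
    if account["posts_per_day"] > 144:  # more than one post every 10 minutes
        score += 1
    # pure amplifiers mostly retweet rather than write original posts
    if account["retweet_ratio"] > 0.9:
        score += 1
    # a default avatar is a weak but common tell
    if account["default_avatar"]:
        score += 1
    return score >= 2

amplifier = {"posts_per_day": 500, "retweet_ratio": 0.98, "default_avatar": True}
person = {"posts_per_day": 6, "retweet_ratio": 0.40, "default_avatar": False}
print(likely_bot(amplifier), likely_bot(person))  # True False
```

real detection tools combine many more signals and have to be updated constantly, because operators adapt as soon as any single cutoff becomes known.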
2:36 pm
the conclusion is a solutions-oriented chapter, and i realize that there are a lot of things that we can do. and i'm going to end the talk today on those things. so first let's talk about storytelling and what it means to be someone who studies technology by talking to the people who make and build technology. tonight i'm going to introduce you to four people, four places and four ideas that i learned about in the last several years. and these four people, places and ideas have been instrumental in how i wrote this book and how i've been thinking about technology. the first person, his name's phil, and phil was my adviser. he's who the book is dedicated to. he's now the director of the oxford internet institute at the university of oxford, and phil really took me under his wing when i came to do my ph.d. at the university of washington in seattle. phil, at the time, had been studying the arab spring. he had been in tunisia, in tunis
2:37 pm
studying the people who were using technology in attempts to communicate about democracy, to organize protests, to do all of these sorts of things. he had written a book with oxford press called "the digital origins of dictatorship and democracy." and the discussion in this book was all about the ways in which the internet had played a role from the beginning of the internet going public in countries, for helping people to realize freedom but also for helping people to realize control. and so phil, obviously, was thinking about these kinds of things very early on. and i had just come from being a fellow on the obama campaign in 2012, and i had become enthralled when i was working on the campaign with the way that they were making use of data. i was, like, blown away by how sophisticated the data campaign was. there was a lot of excitement
2:38 pm
about the community organizing aspects of the campaign and the personal storytelling aspects, but really none of that would have been anything without the connection to the amount of data the obama campaign had on independent or undecided voters. so what they did was they married the data, the massive amounts of data and the massive amounts of work, with personal stories, humanizing sort of the data in a way that was able to really reach people. and with a resounding of -- [inaudible] when i met phil, phil taught me something very important, and it was something i had kind of known but it really bears saying to all of you, which is that technology and politics today are inherently connected. you really can't have one without the other. and to some extent, if you think of technology similarly as tools, if we think about media and the ways that media gets used to communicate with people or to people about information on behalf of others, then this has always been the case. but in today's world, technology
2:39 pm
and politics are very much intertwined. the campaigns that tend to do the best are the campaigns that have the most technological savvy, because the reality is, not to overuse the phrase on the front cover, but the reality is that if you have a lot of data and if you can marry it to a sophisticated a.i. system, then you can do very, very hyperspecific, individualized ad targeting or targeting to people and speak to them in a way that they'd like to be spoken to. that's the thing we realize now. so phil, in seattle -- university of washington -- taught me that people and technology are intertwined. the next person i want to introduce you to is a person named andrew. i met andrew in england when i had taken a job at the university of oxford. what ended up happening is we'd gotten grant money to study computational propaganda in about 2013 from the national science foundation in the u.s., and then
2:40 pm
the european research council followed suit, and they wanted to know the ways in which russia and other countries were using social media to try to influence public opinion in democracies. so he said, hey, do you want to come to oxford with me? i'm like, yeah, twist my arm. of course i want to come to oxford with you. one day i was at a conference, actually at lse, and i was standing around, didn't know anyone, kind of scared about it -- still am -- and this guy approaches me. and when you study propaganda, a lot of times conspiracy theorists want to talk to you. when random people know who i am after i've given a talk, i'm always like, are we going to start talking about aliens or antivaccine stuff, am i going to have to carry on with you? the fact of the matter is i don't really know how to talk with you about that stuff. andrew said, hey, i actually make and build automated profiles on social media for the
2:41 pm
labour party in england. i was like, what? yeah, you know, i control several hundred or thousand accounts on twitter. i do it on behalf of the labour party. they don't pay me to do it, but i'm doing it because i'm a member and believe in it and stuff. and i said, wow, this is pretty crazy. yeah, let's talk. and we got to talking and struck up an unlikely friendship, and he really taught me a lot about the ways in which people can use technology to amplify their voices online. a lot of what i talk about in this book is about social media bots, the use of campaigns built to look like people -- sorry, profiles built to look like people on social media but aren't people, they're automated profiles. and one person can manage many, many, many accounts online, and you can use those accounts to drive up likes, to retweet messages, and you can also use them now, with the evolution of machine learning and a.i., to talk to people in a more sophisticated fashion. and andrew taught me that
2:42 pm
there's always, always a person behind the technology. the technology doesn't exist on its own. social media firms, i think, today would have you believe that the algorithms are apolitical, that they don't have values, that they make decisions in a way that no one could have figured out or decided upon. but if you follow the work of people like noble or the work of microsoft research, which actually has something called the social media collective that does some surprisingly open work, you'll know that the algorithms and software and technology always have human values in them. the people that build these things encode them with their own beliefs. for instance, if you train a machine learning tool, what you have to do is go through a process of tagging the data, and you have people do that. if all the people that tag the data are white men, then the
2:43 pm
algorithm may end up being racist, especially if it's prioritizing where bus lines should go in a certain neighborhood, hypothetically. that's something we've actually seen. suddenly the poorer neighborhoods or neighborhoods of color stop getting bus lines. they end up not getting as many buses coming through, and that is an algorithm or tool encoded with human values. bots are the same thing. when you use social media, when companies build algorithms that prioritize certain types of information, there's politics there. there are decisions that go into that process. if i had a dollar for every time i heard a representative of a social media company say we're not the arbiters of truth, i'd have $10,000, you know? because they don't want you to think that they arbitrate truth. but i'm here to tell you today that's not the case. trending algorithms that prioritize certain kinds of
2:44 pm
information to people curate information. they prioritize the things that you see. for the longest time, and even today, organizations like google, facebook, and twitter have made decisions about how to prioritize news to people. think about that. that matters. and this book is about that. and andrew taught me that you always need to look at the person behind the tool. it's not enough just to download a ton of data and say we're going to do big data analysis. we need to know why they're building it and who they're doing it for. you might think that it's, you know, savvy political campaigns a lot of times that are doing this work, but it turns out when you dig down deep, you start finding shadowy pr firms that actually say i can build your social media profile and add 10,000 accounts in the next few weeks, and surprise, surprise, what they're using is fake profiles, fake information. and it's a whole weird world out there. the third person i want to introduce you to is a woman
2:45 pm
named marina. and marina was my boss at the institute for the future, and i met marina a day or two after trump won the 2016 election. it was the first time i'd ever been to the institute for the future, in palo alto, and they flew me out with a bunch of politicians from the state department who were concerned with the weaponization of a.i., and at the time, like, a.i. hadn't been weaponized in the way a lot of people thought it had. there weren't bots talking to people getting them to change their minds about politics. it was more that the a.i. used behind the algorithms had been more subtly manipulating opinion. she listened to the experts and very astutely in her way said at the end of the talk, this is a continuation of kgb tactics and russian stuff. she grew up in ukraine, in the former soviet union. and she said maybe what we need
2:46 pm
to think about isn't that -- we don't need to think that this propaganda's new, because the tactics aren't new. the things we're seeing are a continuation of things we've seen for a very long time. it's the technology and the way technology's being leveraged that's making it so much more potent. we see automation, anonymity, we see these things, the problems of scale, quantum computing, all of these issues are things we need to be considering. but while we consider them, we also need to consider the way the people who are behind them are using them. so marina taught me we need to look to history. we have to think about how propaganda's been used in the past and have a deep understanding of the terminology that we use to discuss this stuff. because what we say matters. right now there's a sort of epidemic in this country of using the term fake news. and the term fake news has been weaponized by the people who spread fake news. i challenge you from here on
2:47 pm
out, if you want to ask me what you can do as an individual, don't use the term fake news. use misinformation, which means accidentally spread false information. or you could say malinformation, which just means bad, stupid stuff, junk news. and so, you know, the terminology matters, history matters, and it's no surprise to me or researchers who have studied this stuff, people like caroline jack who wrote a great piece called the lexicon of lies, it's no surprise that the people who spread these lies are taking on the terminology and making the same exact arguments: when you point the finger at them, they say, no, you're doing this. that's exactly the playbook. the playbook isn't necessarily to change people's minds, it's to create confusion, to generate apathy, to make people mad and polarized. and that's something we miss a
2:48 pm
lot of the time. we think there's sophistication in the sense that these bots are coming and talking to us and making us suddenly become interested in owning a gun. the bots are there to make you not want to vote and not engage in democracy, to think that the system is broken. so we've got to look at history, and marina is absolutely right that this stuff comes out of the soviet playbook. one thing that you all should know is that the russians aren't the only people that do computational propaganda. and, in fact, i think the russians benefit a lot from us thinking they are the only people that do computational propaganda and saying this is a thing that only happens from them. computational propaganda and information operations happen in nearly every country around the world. there's a great report from my old team at oxford that suggests that during elections in over 70 or 80
2:49 pm
countries, i'm not sure of the specific number, this stuff has been weaponized by governments and by campaigns. so it also happens domestically. domestic actors do this. in some sense there's been a democratization of computational propaganda. and, in fact, my next book is going to be coming out with yale press, it'll probably be even more boring -- [laughter] but my next book is called "manufacturing consensus," and the idea behind the book is that we use these technologies to create the illusion of popularity, that the more you make something look popular, the more you make it seem like a viable idea. okay. one more person. the last person, the last place and the last idea. the last person is kathleen. she's my boss now at the university of texas at austin. she was formerly at "the new york times," she was the dining editor, and before that she was a sports reporter. i don't know how you make that transition, you'll have to ask kathleen. but, yeah, she's fantastic.
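the arithmetic behind that illusion of popularity is simple. as a purely hypothetical sketch -- every number below is made up for illustration, not taken from woolley's research -- a few hundred automated accounts run by one operator can swamp the organic engagement of a much larger human audience:

```python
# made-up numbers showing how bot amplification can manufacture
# the appearance of consensus around a message.

organic_users = 10_000
organic_rate = 0.02            # suppose 2% of human viewers share the message

bots = 200                     # one operator can run hundreds of accounts
bot_shares_each = 50           # each bot reshares the message 50 times

organic_total = int(organic_users * organic_rate)  # 200 organic shares
bot_total = bots * bot_shares_each                 # 10,000 automated shares

bot_fraction = bot_total / (bot_total + organic_total)
print(f"automated fraction of apparent popularity: {bot_fraction:.0%}")
```

with these invented numbers the message looks roughly fifty times more popular than it organically is, which is exactly the "viable idea" effect described above.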
2:50 pm
and when i went to ut, i had kind of lost a little bit of hope because of everything that's going on, and i'd been writing this damn book and spending time thinking about the ways in which the information system is broken. but i work at the school of journalism at ut, and kathleen's the director. and kathleen has taught me that we need to place faith in the institutions that we already have. we don't need to create brand new things. we have the federal communications commission, we don't need a federal disinformation commission, we don't need one more commission, we need to invest in those. but more specifically, we need to invest in journalism. journalism in this country has done amazing things. there are so many people that work for great publications around the united states that want to do good work and want to protect democracy, but they're still having to learn on the fly. and, in fact, in the book i talk
2:51 pm
a lot about the ways in which journalism has been not just challenged by the digital era -- it's not like they're feckless individuals or organizations that can't handle the digital era. it's that organizations like google, google news, youtube, facebook, and twitter massively benefit off the work of journalists without giving any remuneration or money to these folks. the same can be said for organizations like wikipedia. when youtube faced the crisis of disinformation, what did it do? it started linking to wikipedia articles. wikipedia is a nonprofit, and youtube was using it as the resource that youtube sent people to. the same thing goes for journalists. google news, for the longest time, gave snippets of articles, and when people started researching it -- the research showed that no one actually read the full article. no one actually clicked through.
2:52 pm
they just read the little piece, and so the journalists put all their work into doing the investigation, writing the article, google posts a snippet, and then no one actually reads it. and we wonder why the news industry is failing, why it's having a hard time. i mean, maybe not failing, maybe that's the wrong word. what i think is we can reinvigorate journalism, and i'm looking at an operation right now. i'm going to tell you exactly what my argument is. the argument is that the technology firms around the country should have to put, i don't know, $10 billion or $20 billion into a public trust in the united states and let it be overseen by civil society groups, people that have a stake in making sure the money is spent wisely and well. google news labs has committed $350 million or so to the google news initiative. google gives out that money. they make partnerships with organizations. they make decisions about who gets it and who doesn't. and for a long time what google's done when they've
2:53 pm
experienced backlash about their policies or their algorithms not prioritizing full articles, they've deprioritized the news sites that have complained. and so that's not good enough. the technology companies have helped create this problem, and they have admitted to it. there's been a big mea culpa moment. we all saw mark zuckerberg sitting before congress saying, shit, i know that cambridge analytica did some bad stuff, we had a hand in that, that's kind of our fault. but they haven't really given back. they haven't really systematized the response to this problem. they've done some things, and they've been working hard in many ways, but it's not enough. it's really important that we remember these are multibillion-dollar companies, some of the richest companies in the world. they get treated more like nation-states these days than they get treated like regular companies. so kathleen taught me to reinvest in journalism and to be skeptical of what we see today
2:54 pm
and to not think, like i said earlier, that journalism's failed. with all these things in mind, all these things taken together, we have a really interesting picture, and we have this book. and this book is actually a book about the future. i spent a lot of time talking about what we've been through, but this book looks at the next wave of technology. it's about deepfake video, about a.i., about virtual reality, about automated voice systems that sound just like a person, like google assistant. and it thinks a lot about the ways in which this next wave of technology will make for more potent artificial disinformation. and the subtitle is provocative for a reason. it's supposed to scare people. but in the best case scenario, you'll prove me wrong. you will not let technology break the truth. this is supposed to be a warning and a provocation. i'm going to do a little reading for a few minutes, and then i'm going to end with solutions and we'll do q&a.
2:55 pm
conclusion. designing with human rights in mind. finding solutions to the problems posed by online disinformation and political manipulation is a daunting task. the disinformation landscape is vast, and it extends beyond our current ability to track it or contain it effectively. moreover, the internet grows larger every day. according to a 2017 report on the state of the net from the software firm domo, we create 2.5 quintillion bytes of data every day. 2.5 quintillion. i don't even know what that number means. moreover, the number of internet users grew by one billion to a total of 3.7 billion active users in the five years previous to that report. so from 2012 to 2017, the internet grew by a billion users. a "forbes" article asserted that 90% of the online data was generated in the previous two
2:56 pm
years. let me let that sink in. 90% of the online data available in the world was generated in the previous two years. this means that people working to game public opinion or exert political oppression have almost unimaginable amounts of data available on potential targets, with new information beaming out to them every millisecond. they also have access to a lot of potential targets and leverage online anonymity, automation and the sheer scale of the internet to make them nearly untrackable. important ethical and legal considerations, along with the difficulty of finding a skillful operative, make prosecution a poor strategy for stamping out computational propaganda. instead, we've got to fix the ecosystem. it's time to build, design and redesign the next wave of technology with human rights at the forefront of our minds. thinking about responses to the rising tide of computational propaganda, i find it helpful to
2:57 pm
break them down into the short term, the medium term and the long term. many of these efforts are band-aid approaches focused on triaging help for the most egregious issues and oversights associated with the infrastructure of web 2.0. such amendments include little tweaks to social media news algorithms or software patches to existing tools. they include several new applications for identifying junk news or tracking and cataloging political advertisements. these efforts are useful, but online manipulation tactics are constantly evolving. what works today may not be useful a year from now. in fact, many of the applications that have been built quickly become defunct, owing to code-level changes made by the social media firms, a lack of funding or upkeep, or propaganda agents finding a
2:58 pm
simple way to game them. there are useful products of this kind, though, like botcheck and surfsafe from robhat labs, used to detect computational propaganda on twitter and check for fake news sites using one's browser. but these programs need to be constantly updated. they present a promising start for tools that alert users to disinformation threats, but they must be combined with others in order to be truly effective. another example of a propaganda tracker is -- [inaudible] a project from the alliance for securing democracy and the german marshall fund, gmf, that was built to track russian twitter accounts. although it's important to identify nefarious traffic and equally important to notify users they may be encountering false news reports, these efforts are too passive and too focused on user-based fixes. also, it's important to remember
2:59 pm
that a good deal of research shows that post hoc fact checks do not work, and the social media firms are engaged in a constant battle to catch the latest cyborg and human-based cyber operations. more than anything, i want to communicate that everything's not lost. people are fighting to stem the tide of digital propaganda. employees at facebook have managed to dismantle disinformative advertisements on topics from payday loans to which candidate should get a vote. googlers have stood firm against military drone research and manufacturing. it's also clear to me that today's large tech firms have to get real with themselves. they are now media companies: purveyors of news, curators of information and, yes, arbiters of the truth. they owe a debt to both democracy and the free market,
3:00 pm
and their allegiance to the latter doesn't mean they can ignore the former. so one of the things that you might not know, and you probably don't know, because why would you, is that the english version of the book is subtitled "how the next wave of technology will break the truth and what we can do about it." for some reason the "what we can do about it" didn't make the american version due to an editorial decision, but i really like the what we can do about it. and so i'm going to tell you a few things. i talked about the short term, the medium term and the long term. one of my closest friends
3:01 pm
at the university of washington, who is brilliant, got a message saying they -- i won't out them -- had shared a known piece of russian disinformation on tumblr from the internet research agency. this is a person who studies this stuff and knows it really, really well. if they can be fooled by it, if i can be fooled by it, then we can all be fooled by it. so we have to read the whole article, and we have to think very carefully before we share what we share. what we are seeing right now is the proliferation of cheap fakes. it's not sophisticated ai video that
3:02 pm
makes donald trump look like he's saying something that he's not saying. there's a whole chapter on this, and it's coming. we will start to see more of it. what we are seeing is regular people sharing videos that are edited on imovie. videos of joe biden that make him look like he's a racist because they're edited. videos of nancy pelosi that make her look like she's drunk. videos making it look like high school students did something completely different than what actually happened. also, a video of jim acosta was sped up so that it looked like he was abusing a white house intern. he got his press credentials revoked and then subsequently reinstated through some strange episode of mr. magoo. so we have to be careful about what we share. the other thing that people can do is talk to the people that they love and care about. i just wrote a report for the national endowment for democracy in d.c. it's called the demand for misinformation. the main takeaway i got from that report -- i wrote it with my friend katie joseph, who gets credit for the work -- the big takeaway i had was that the only way people really change their minds when it comes to these kinds of issues, in the polarized united states and other countries, is to talk to the people they care about most. because psychologically you don't change your mind on the
3:03 pm
conversation you have on facebook, or based upon an argument that you have with someone that you don't know. it needs to be a conversation with someone who is civil and about a topic that you care about. that needs to be conveyed. so those are short-term things. in the medium term, we need policy. we need regulation. i am so sick of hearing from people that this is a user issue and that self-regulation is going to solve this. we can't just let google, facebook, twitter, and the powers that be make decisions internally about what they are going to do. when that's the case, they'll say to researchers like me, your research is flawed, it's not scientific. and i say, why is it flawed? why is it not scientific? well, you didn't have access to all the data that we had. and you didn't have a representative sample in the quantitative analysis you did. and i say, oh, well, maybe you could share the whole data set with me, and then i'll do the analysis, how about that? and they're like, yeah, we'll figure that
3:04 pm
out, and they have this thing called social science one that is supposed to do that, but it's not happening. there's no real regulation that holds them accountable for what goes on on their platforms. there's no oversight. in the early 2000s the government made a decision that it wasn't going to look at any political communication online during elections. they look at tv, they look at radio, they look at magazines, and they make sure campaigns are not doing really illicit types of things in those areas. but online they don't look at all. that's hugely problematic. the government has a huge role in this, and the government needs to do something. currently, nothing will get done. there is policy that has been created that is waiting to go before a, let's say, less polarized congress. i'm very happy with it; i think it's well-informed. but right now there's not much
3:05 pm
appetite for it. however, we can look to other countries, other places throughout the world, to figure out the ways in which they are dealing with the problem -- some countries in south america, for instance. and then there's the long-term fix. in the book the tagline is you've got to design with democracy in mind, you've got to design with human rights in mind. my belief -- it's not so much a belief as a fact -- is that the platforms we use today are designed for engagement. they were designed to get people to stay on them, they were designed to scale, to grow to massive sizes. we don't have to accept that; it's not as though we asked for it to exist this way. it's infrastructural in society. facebook has over 2 billion users. they were also designed to make money at the end of the day. what happens when that's the case? i think we get what we have now. we've gotten what they built, and what they built was a system
3:06 pm
that did not prioritize high-quality information, or civic engagement, or civility. they built systems that prioritize the opposite of those things. i have a belief that it's possible to actually create technology in the interest of democracy and human rights. we see it to some extent, and we can see more of it. with jane mcgonigal, who really guided me on this book, i cocreated the ethical os with funding from the omidyar network. you can find it for free. it's a tool i had a hand in that gives technologists a bunch of provocations about how to think about the problems that could happen with the technology they are building, as they are building it. it's kind of a toolkit for thinking through things. the introduction to computer science course at stanford has used it, and we've talked with a number of other
3:07 pm
organizations about how they can leverage it. that's really exciting for me, because it means there are things we can know and questions we can ask. the long term is more of a challenge. at the simplest level, we have to reinvest in critical thinking in public schools. we have to reinvest in media literacy in public schools. it's not enough, and it's not fair, to expect that this is a user fix and a people-based problem, as we have been told, when the educational institutions we have in this country don't teach you critical thinking until you get to college, and a lot of us don't get there. it's also time for us to create a robust system of public interest technology. in the 1950s, a bunch of foundations came together in the united states to create public interest law, because if you were poor you could not get a lawyer. and you might say that to some extent today that is still true, and
3:08 pm
i'd say you're right, but we've come a long way. we need the same thing for technology. we cannot always have the best and brightest going to google and facebook to make a ton of money because they think they're going to do something good. they are sold that line. we also need the best and brightest going to nonprofits and universities. we need people who understand technology to be helping build technology and craft legislation. we need people who understand code to help build legislation on propaganda. the bill dianne feinstein, a local hero, put before the senate a year or two ago, the bot disclosure and accountability act, was laudable. it was also not feasible at all, because i don't think technologists had really looked at it. the bill was written in such a way that it did not understand the structure of the web. over half of all internet traffic comes from automated accounts. that probably would not happen
3:09 pm
if we had public interest technologists. when we studied germany around their last election, we thought we would find a lot of people sharing disinformation, but we did not. and we were wondering why the heck that was. we knew the far right in germany had gained a foothold and become very powerful. but we quickly realized germany has a really robust public media system. germany has a robust program for critical thinking in public schools. after world war ii, with what germany experienced with the nazis, they made it illegal to promote white supremacy and those sorts of things in the public domain. we are very, very afraid in this country to think about hate speech, because we are afraid for free speech, rightly so in some ways. but we cannot always treat the first amendment as if it's at odds with the right to safety and the right to security. we've got to figure out a better way. in the introduction to this book, i begin with a quote
3:10 pm
from betty reid soskin, who is from the bay area. she's in the east bay, in her late 90s. she is a park ranger at the rosie the riveter/world war ii home front national historical park. my wife took me; i was kind of reluctant to go, but i found myself in tears at the end of the talk, because she is one of the most inspirational speakers i have ever heard. she was so amazing. i would highly recommend looking her up on youtube. i'll end with this, and then we can do questions. she said, every generation, i know now, has to re-create democracy in its time, because democracy will never be fixed. it was not intended to be. it's a participatory form of governance, and we all have the responsibility to form that more perfect union. thank you. [applause] >> so now i think we can do some q&a, and i'm happy to
3:11 pm
manage it. don't be shy. any questions? anyone? >> i enjoyed the book and your talk, for sure. i have a content question. first off, i thought it was interesting how you brought in the emergence component. to me, when you brought up the kgb and that stuff in the '50s, we've seen this stuff before. but when i think about emergence, i think about how, as you scale some of these things up, you get more than just the sum of the parts. and we do see emergent effects of some of this stuff at scale that do render it kind of a new beast in a lot of ways. and i'm thinking about the deepfake detection stuff. for a few more years we will be able to do it for sure, five years maybe, ten years optimistically. but ultimately a lot of these techniques are asymmetric toward the attacker, and i think that is something really important that gets lost in the discussion about this.
3:12 pm
the other question i wanted to ask about was what you brought up in the book, about how a lot of the blame gets put on the people who are building the solutions. i totally agree there's a techno-libertarian thinking when it comes to who's building the platforms and algorithms. but ultimately, to me, those systems are just answering to a ceo that answers to a board that answers to shareholders. so to me all of this is symptomatic of capitalism driving things at the base. and so i am wondering how you see a solution existing in this kind of incentive model? >> on your first point, point well taken. on the point on emergence, to get into the heart of that for everyone else: we've seen disinformation and computational propaganda launched at the public, and often
3:13 pm
times it's disinformation when it begins: purposefully spread false information intended to manipulate. it becomes misinformation very quickly. when i worked at google jigsaw as a fellow for a year, there was an experience that really shook me. what they called it was seeding and fertilizing. you basically plant the seed, then you fertilize it, and then you let regular people do the work and spread the propaganda for you. it is difficult to track; there is this problem that you can't figure out where the snake's head begins and the tail ends, to use a metaphor. with the second question: so yes, this is a problem of capitalism and the free market, and there is no way of getting around it. when you prioritize profit, you prioritize systems that take advantage of people a lot of the time. not to out myself, but my master's is in critical theory and society,
3:14 pm
and i'm spending a lot of time thinking about this stuff. i believe that serious changes have to be made, but i don't think we have to throw the baby out with the bathwater. what we have right now may not be the best system, but the quote gives me hope, because it's the idea that democracy can look different in different generations. and i don't quite know what the answer is. i cannot answer your question; i wish i did have the answer. if i acted like i did, you'd know better and probably just leave. we can rebuild democracy in a different way, a way that interacts with capitalism in a different way. and maybe we need to speak the language of the market. recently, when i've been talking to people at facebook and google, i've been saying maybe it will be good business for you to actually do something beneficial for society. [laughter] think about that. how do we market this as good business? other questions?
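[editor's note: the "seeding and fertilizing" pattern described in the answer above, where a few coordinated accounts plant a false story, amplify it, and then let ordinary users carry it, can be sketched as a toy expected-value model. all parameters here are illustrative assumptions, not figures from the talk or the book.]

```python
def simulate_spread(seed_posts=5, amplifier_posts=50,
                    reach_per_share=20, organic_share_rate=0.06, rounds=4):
    """Toy expected-value sketch of 'seeding and fertilizing':
    a handful of seed accounts plant a false story, coordinated
    amplifiers boost it, and from then on ordinary users spread
    it on their own. Returns (coordinated_shares, organic_shares)."""
    coordinated = seed_posts + amplifier_posts    # planted + boosted shares
    organic = 0.0
    exposed = coordinated * reach_per_share       # people who saw the story so far
    for _ in range(rounds):
        new_shares = exposed * organic_share_rate     # organic reshares this round
        organic += new_shares
        exposed = new_shares * reach_per_share        # each reshare reaches new people
    return coordinated, round(organic)

coordinated, organic = simulate_spread()
print(coordinated, organic)   # prints "55 354"
```

with these illustrative numbers, after just a few rounds the organic shares (354) swamp the coordinated ones (55), which is why it gets so hard to tell where the snake's head begins and the tail ends, and why disinformation shades into misinformation once regular people are doing the spreading.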
3:15 pm
anyone? all right, that's great. short and sweet. that's by far the shortest and sweetest of all of them. thank you, thank you very much for having me. this was great. [applause] >> next on booktv, microsoft president brad smith talks about the ways technology can be both a tool and a weapon. later, caterpillar foundation president michele sullivan provides her thoughts on leadership and philanthropy. and espn's the undefeated editor-in-chief kevin merida talks about the fierce 44: black americans who shook up the world. check your program guide for more information. >> brad smith, microsoft