
Facebook Whistleblower Testifies Before UK Parliament Committee  CSPAN  November 30, 2021 4:48pm-7:15pm EST

4:48 pm
facebook whistle-blower frances haugen testified before a united kingdom parliament joint committee about extremism on the social media platform and the harmful effects the app instagram, owned by facebook, can have on children. the uk is considering legislation to impose government regulations on facebook and other social media companies. this is two and a half hours. >> good afternoon and welcome to this session of the joint committee on the draft online safety bill. today we are pleased to welcome frances haugen to give evidence to the committee. we are delighted you've been able to make the trip to london and give us evidence in person, and we also respect the personal decision you have taken to speak out on these matters, with all the risks incumbent in speaking out against a multi-billion-dollar corporation. i'd just like to ask you first about some of the statistics
4:49 pm
facebook uses itself to describe its performance. it says it removes the hate speech it finds on the platform, but the documents you've published suggest its ai only actually finds 2% to 3% of hate speech there. does that mean facebook's transparency reports, without any context around the numbers, are essentially meaningless? >> i think it is important to -- there is a pattern of behavior at facebook; they are very good at dancing with data. if you go and read that transparency report, the fraction they're presenting is not total hate speech caught divided by total hate speech that exists, which is what you might expect given what they said. the fraction they present is actually the stuff the robots got divided by the stuff the robots got plus what humans reported and we took down. and it is true that 97% or something of what they take down happens because of their robots,
4:50 pm
but that is not actually the question we want answered. the question we want answered is, did you take the hate speech down? and the number i've seen is like 3% to 5%, but i wouldn't be surprised if there were some variation within the documents. >> it is a really important point, the variation in the documents. >> essentially what we're looking at here is the creation of an independent regulator in the tech sector, so not only do we need the answers, we need to know what the right questions are as well. >> part of why i came forward is i know i have a specific set of expertise. i've worked at four social networks. i'm an algorithmic specialist. i ran ranking for pinterest. i have an understanding of how ai can unintentionally behave. facebook never set out to prioritize polarizing, divisive content. it happened to be a side effect of choices they did make. part of why i came forward is i
4:51 pm
am extremely, extremely worried about the condition of our society, and the interaction of the choices facebook has made with how it plays out more broadly. so the things i'm specifically worried about are engagement-based ranking, which, as they've said before, is dangerous unless the ai can take out the bad things. and as you saw, they're getting 3% to 5% of hate speech. engagement-based ranking prioritizes that kind of extreme content. i'm deeply concerned about the underinvestment in non-english languages and how they mislead the public. facebook says we support 50 languages, when in reality there's only a tiny fraction of the safety systems for most of them. also, i don't think this is widely known: u.k. english is sufficiently different that i would be unsurprised if the safety systems they
4:52 pm
developed primarily for american english were actually under-enforcing in the u.k., and facebook should have to disclose those differences. i'm deeply concerned about the false choices that facebook presents. they routinely try to reduce the discussion to things like, you can either have transparency or privacy, which do you want to have? or, if you want safety you have to have censorship. and facebook is unwilling to give up those slivers for our safety. and i came forward now because now is the most critical time to act. when we see something like an oil spill, that oil spill doesn't make it harder for society to regulate oil companies. but right now the failures of facebook are making it harder for us to regulate facebook. >> looking at the way the
4:53 pm
platform is moderated today, do you think it makes it more likely we'll see events like the 6th of january this year, more violent attacks -- that it's more likely we'll see more of those events? >> i have no doubt that the events we're seeing around the world, things like myanmar and ethiopia, those are the opening chapters. engagement-based ranking does two things: one, it prioritizes and amplifies divisive and polarizing content, and two, it concentrates it. if facebook comes back and says only a tiny sliver of content on our platform is hate, only a tiny sliver is violence -- even so, it's hyperconcentrated in 5% of the population. and you only need 3% of the population on the street to have a revolution.
4:54 pm
>> i remember being told a couple of years ago by a facebook executive that the only way you can drive content -- how much do you think groups are shaping the experience for people on facebook? >> i believe it was something like 60% of the content. i think the thing that's important for this group to know is facebook has been trying to extend the length of sessions, and the only way they can do that is by multiplying the content. when combined with engagement-based ranking, a group might
4:55 pm
produce 1,000 pieces of content a day but only three get delivered. if your algorithm is biased, it's like a viral variant: those giant groups are producing lots and lots of pieces of content, and only the ones most likely to spread are the ones that go out. >> what action is facebook taking about groups that share extremist content? >> i don't know the exact actions that have been taken in the last six months or year. this is part of why they should have to articulate, here's our five-point plan. facebook acting in a nontransparent, unaccountable
4:56 pm
way will just lead to more of these problems. >> they're a significant driver of engagement, and engagement is a problem in the way facebook has designed it. >> part of what's dangerous about groups is -- we talk about sometimes this idea of, is this an individual problem or a societal problem? one of the things that happens in aggregate is the algorithm takes people who have very mainstream interests and pushes them towards extreme interests. one of the things that happens
4:57 pm
with groups and with networks of groups is that people end up in echo chambers that create social norms. if someone speaks up for the covid vaccine, they get completely pounced upon. they get torn apart. and people learn certain ideas are acceptable and unacceptable. when that context is around hate, you see a normalization of hate, a normalization of dehumanizing others. some of these groups have hundreds of thousands of members in them. >> i strongly recommend that above a certain size, groups should be required to provide their own moderators and moderate every post. this would naturally, in a content-agnostic way, regulate the impact of those large groups, because if that group is
4:58 pm
actually valuable enough, they'll have no trouble recruiting volunteers. if that group is just an amplification point -- like we see in foreign influence operations using groups like this, and a practice of borrowing viral content from other places -- we see these things as a risk. if you were to launch an advertising campaign with misinformation in it, at least we have a credit card to trace you back. but if you want to start a group and invite 1,000 people every day -- the limit i think is 2,200 people you can invite every day -- you can build out that group, and your content will land in their news feed for over a month. things like that make them very, very dangerous, and they drive outsized impact on the platform. >> if an agency wanted to influence what people would
4:59 pm
see -- >> that's definitely a strategy currently used by information operations. there's no accountability. there's no trace. you can find a group to target any interest you want to. even if you remove microtargeting from ads, people could still microtarget through groups. >> again, what do you think the company's strategy is for dealing with this? again, there were changes made to facebook groups, i think in 2018, and it would seem some of those changes were to the way the news feed works in terms of the content it prefers. >> we need to move away from having binary choices. there's a huge continuum of options that exist. coming in and saying, hey, groups
5:00 pm
can't be over 100,000 people -- because they do create communities and solidarity -- you could instead say, once you get above a certain size, maybe 10,000 people, you need to start moderating that group, because that alone adds friction. and similarly, when you think about where do we add selective friction to these systems so they're safe in every language, you don't need the ai to find the bad content. >> in your experience, you spoke of testing of systems all the time. does facebook experiment with the way systems work and how you can increase engagement? obviously, with content on the news feed, we know they experimented around election time. how does facebook work in experimenting with these tools? >> facebook is continuously running many experiments in parallel on little slices of the data they have. i'm a strong proponent that facebook should have to publish a feed of all the experiments they're running, and even just seeing the results
5:01 pm
data allows you to establish patterns of behavior. the real thing we're seeing here is facebook accepting tiny additions of harm. right now we can't benchmark and say, you're running this experiment, but if we had that data, we could see patterns of behavior and see whether or not trends are occurring. >> who would you report to? >> that is a huge weak spot. there wasn't a phone number in my break room that i could call to say, i saw something that endangered public safety -- call this number, someone will take you seriously and listen to you, like at the department of transportation. when i worked on counterespionage, i saw things i was concerned about for national security, but i had no idea how to escalate those, because i didn't have faith in my chain of command.
5:02 pm
i didn't believe they would take it seriously. >> you would report to your line manager. would it be up to them whether they chose to escalate it? >> i was told at facebook, we accomplish unimaginable things with far fewer resources than anyone would think possible. there is a culture that lionizes an ethic that is, in my opinion, irresponsible: the person who can figure out how to move the metrics by cutting the most corners is good. and the reality is, it doesn't matter if facebook is spending $14 billion a year; the question is whether they should spend $25 billion or $35 billion. that's the real question. and people don't get rallied around to help, because everyone is under water.
5:03 pm
>> what do you think people like mark zuckerberg know about these things? >> i think it's important that all facts are viewed through a lens of interpretation, and there is a pattern across a lot of the people who run the company, the senior leaders: this may be the only job they've had. there are a lot of other people who are vps or directors for whom this is the only job they've ever had. so there is a lack of -- you know, the people who have been promoted are the people who focus on the goals they were given, and not necessarily the ones who want to ask questions around public safety. and i think there's a real thing where they see the data and say, look at all the good we're doing. we didn't invent hate, we didn't invent ethnic violence.
5:04 pm
and that's not the question. the question is, what is facebook doing to amplify or expand hate? >> you think it's making hate worse? >> unquestionably, i think it's making hate worse. >> thank you for coming and talking to us. first of all, just on some of that last fascinating discussion you were having: would the same be true if you were working in pr or legal at facebook? >> i've never worked in pr or communications at facebook, so i'm not sure. i was shocked to hear recently that facebook wants to double down on the metaverse and that they're going to hire 10,000 engineers in europe. and i was like, wow, do you know what we could have done if we had 10,000
5:05 pm
more engineers? that would be amazing. i think it's very short-term thinking. facebook's own research has shown that when people have worse integrity experiences on the site, they're less likely to retain. i think regulation could actually be good for facebook's long-term success, because it would force facebook back into a place where it was pleasant to be on facebook, and that could be good for the long-term growth of the company. >> thank you. and let me go back also to the discussion about facebook groups, by which we're essentially talking about targeted groups, clearly. if you were asked to be the regulator of a platform like facebook, how do you get transparency about what's going on in private groups, given that they're private? >> we need to have a conversation as a society around, after a certain number of people have seen something, is it truly private? is that number 10,000, 25,000?
5:06 pm
is it really private at that point? because i think there's an argument that facebook will make, which is that, you know, there might be a sensitive group someone might post into, and we wouldn't want to share it even if 25,000 people saw it. i think that's more dangerous. if people are lulled into a sense of safety that no one can see their hate speech, or their more sensitive posts -- like maybe they haven't come out yet -- that is dangerous, because those spaces are not safe. when 25,000 people have seen something, you don't know who saw it and what they might do. i'm a proponent of the fact that both twitter and google are more transparent than facebook. because google knows this happens, they staff engineers accordingly. twitter knows 10% of all public tweets end up going out to researchers, and people analyze those, and because
5:07 pm
twitter knows someone is watching, they behave better. i think in the case of facebook, even with private groups, there should be some bar at which we say enough people have seen it that it's not private. if we want to catch national security threats, we need to have not just the people at facebook looking; we need to have 10,000 researchers looking at it. i think in addition to that, we need accountability on things like understanding whether or not our children are safe. >> there was a report on friday suggesting there's an algorithmic bias politically. do you think that's unique to twitter, or has facebook had something similar? is it included in the way these algorithms -- these platforms with all their algorithms are designed to optimize clicks, and therefore
5:08 pm
there's something about certain types of political content that makes it more extreme, and that's endemic to all these social media companies? >> i am not aware of any research that demonstrates a political bias on facebook. i'm familiar with lots of research that says the way engagement-based ranking was designed -- so facebook calls it meaningful social interactions; meaningful could have been hate speech or bullying up until 2020, and that would still be considered meaningful, so let's call it social interaction ranking -- i've seen lots of research that says that kind of ranking prioritizes polarizing, extreme, divisive content. it doesn't matter if you're on the left or the right; it pushes you to the extremes, and it fans hate, right? anger and hate is the easiest way to grow on facebook. you figure out all the tricks on how to optimize facebook. and good actors, good publishers are already publishing all the content they can do.
5:09 pm
but bad actors have incentives to play the algorithms, and they figure out all the ways to optimize facebook. so the current system is biased toward bad actors. >> thank you. the bill focuses on individual harm rather than societal harm. given the work you've done around democracy as part of your work at facebook, do you think it is a mistake to omit societal harm? >> i think it is a grave danger to democracies and societies around the world to omit societal harm. i believe situations like ethiopia are just part of the opening chapters of a novel that is going to be horrific to read, right? we have to care about societal
5:10 pm
harm, not just for the global south but for our own societies. like i said before, when an oil spill happens, it doesn't make it harder for us to regulate oil companies. but right now facebook is closing the door on us being able to act. we have a slight window of time to regain people's control over ai. we have to take advantage of this moment. >> and my final question, and thank you. undoubtedly, as a digital company, you're analyzing a lot of the detail of the danger. is there any relationship between paid-for advertising and moving into some of these dangerous private groups, possibly then moving into messaging services and encrypted messaging? and should we also be concerned about that, particularly given paid-for advertising is currently excluded?
5:11 pm
>> i am extremely concerned about paid-for advertising being excluded, because engagement-based ranking impacts ads as much as it impacts organic content. an ad that gets more engagement is a cheaper ad. we have seen over and over again in facebook's research that it's easier to provoke people to anger than to empathy and compassion, so we are literally subsidizing hate on these platforms. it is substantially cheaper to run an angry, hateful, divisive ad than it is to run a compassionate, empathetic ad. so having full transparency on the ad stream and understanding, you know, what those biases are and how ads are targeted really matters. in terms of user journeys from
5:12 pm
ads to extreme groups, i don't have documents regarding that, but i can imagine it happening. >> thank you really very much for being here and taking a personal risk to be here. we are grateful. i really wanted to ask a number of questions that speak to the fact that this system is entirely engineered for a particular outcome. maybe you could start by telling us, what is facebook optimized for? >> i think a thing that's not necessarily obvious to consumers of facebook is that facebook is actually a two-sided marketplace in addition to being about consumers. like, you can't consume content on facebook without getting someone to produce it. facebook switched over to engagement-based ranking. they said the reason we're doing this is we believe it's important for people to interact with each other. we don't want people to
5:13 pm
mindlessly scroll. but a large part of what's in the documents is that a large factor that motivated this change is that people were producing less content. facebook has run things called producer-side experiments, where they artificially give people more distribution to see what the impact is on your future behavior of getting more likes, more reshares, because they know that if you get those little dopamine hits, you're more likely to produce more content. it is in facebook's interest to make sure the content production wheel keeps turning, because you won't look at ads if your feed doesn't keep you on the site. and facebook has accepted the cost, because it allows it to keep the wheel turning. >> i was really struck not so
5:14 pm
much by the harms, because in a funny way they just gave evidence to what a lot of people have been saying for a long time and a lot of people have been experiencing. what was super interesting was that again and again the documents show facebook employees saying, oh, you could do this, you could do that. and i think that a lot of people don't understand what you could do. so i'd really love you to unpack that a little bit for the committee. what were facebook employees saying we could do about the body image issues on instagram? what were they saying about ethnic violence? and what were they saying about the harms to democracy you were just referring to? >> i have been mischaracterized repeatedly in certain parts of the internet as being here as a plant to get more censorship. but one of the things i saw over and over again in the docs is that there are lots and lots of solutions that don't involve picking good
5:15 pm
and bad ideas. they're about designing the platform for safety, slowing the platform down, and the fact that when you focus on giving people more content from their family and friends, you get, for free, less divisive content. you get less misinformation. you get -- because the biggest part of what drives misinformation is these hyperdistribution nodes, these groups where something goes out to 500,000 people. let's imagine alice posts something, bob reshares it, carol reshares it, and it ends up in dan's news feed. if dan had to copy and paste it to continue to share it, that one change has the same impact as the entire third-party fact-checking system, only it's going to work in the global south. it just slows the platform down. moving to systems that are human-scaled, instead of having ai tell us where to focus, is the safest
5:16 pm
way to design social media. and i want to remind people, we liked social media before we had an algorithmic feed. facebook says if you move to a chronological feed you won't like it, and it's true that with groups of 100,000 people, you're not going to like it. but facebook has choices; it could do this in different ways. there's a thing called discord servers where it's all chronological, but people break out into different rooms as it gets too crowded. that's a human-scale solution, not an ai-driven solution. and so slowing the platform down, content-agnostic strategies -- that's the direction we need to go. >> why can't they do it? >> for each one of these interventions -- so look at the reshares -- there are some countries in the world where 35% of the content in the news feed is a reshare. and the reason why facebook doesn't crack down on reshares
5:17 pm
or add friction, in places like the middle east, is because they don't want to lose that growth. they don't want 1% shorter sessions, because that's also 1% less revenue. so facebook has been unwilling to accept even little slivers of profit being sacrificed for safety, and that's not acceptable. >> and i wanted to ask you in particular about what a break the glass measure is, if you would tell us. >> facebook's current security strategy, safety strategy, is that sometimes the heat in a country gets hotter and hotter and hotter. it might be a place like myanmar that doesn't have any misinformation classifiers, because their language isn't spoken by enough people. they allow the temperature in these countries to get hotter and hotter, and when the pot starts boiling
5:18 pm
over, they're like, oh no, we need to break the glass -- instead of watching as the temperature gets hotter and making the platform safer as it happens. so that's what break the glass measures are. >> i guess why i'm asking these questions is, if you could slow it down, make the groups smaller as a norm rather than in an emergency -- these are all really safety by design strategies. these are all just saying, you know, make your product safe. can you just say if you think those could be mandatory in the bill that we're looking at? >> facebook has characterized the reason they turned off the break the glass measures as being that they don't believe in those measures. there were questions around how much do you amplify live video. do you go to a 6x multiplier or
5:19 pm
a 60x multiplier? in all these little questions, facebook optimizes its settings for growth over safety. i think there's a real thing of needing to think about safety by design first, and facebook has demonstrated that you have to mandate assessing the risk -- and how good is that risk assessment, because facebook will give you a bad one if they can. and we need to mandate that they have to articulate solutions, because facebook is not articulating what the plan is. >> i want to raise an issue, because a lot of the bill actually talks about terms and conditions, and then upholding terms and conditions -- having a regulatory sort of relationship to upholding them. but what about whitelisting, where some people are exempt from terms and conditions? >> for those not familiar with the reporting by "the wall street journal," there's a program
5:20 pm
called cross-check. cross-check was a system where about 5 million people around the world, maybe 5.7 million, were given special privileges that allowed them to skip the line, if you will, for safety systems. the majority of safety systems inside of facebook didn't have enough staffing to actually manually review things. facebook claims this is about a second check, making sure the rules were applied correctly, but facebook was unwilling to invest in enough people to do that second check. so i think there's a real thing of, unless we have more avenues to understand what's going on inside the company -- like, for example, imagine if facebook were required to publish its research on a one-year lag; if they have tens of billions of dollars of profit, they can afford a one-year lag -- we should know systems like this exist, because no one knew how bad the system was.
5:21 pm
even facebook lacked its own oversight of it. >> the last thing i want to think about is, obviously all the documents you bring come from facebook, but we can't really regulate just for this company in this moment. we have to look at the sector as a whole, and we have to look into the future. i wonder whether you have any advice for that. we're trying to make the digital world better and safer for its users. >> engagement-based ranking is a problem across all sites. on all sites it's easier to provoke humans to anger. engagement-based ranking exploits those vulnerabilities and panders to those things. i think having mandatory risk assessments and mandatory remediation strategies, and a way to hold these companies accountable, is critical, because companies are going to evolve. they're going to figure out how to sidestep things, and we need to make sure we have a process that is flexible and can evolve.
5:22 pm
>> and finally, really, just, you know, do you think the scope of the bill, given your research -- do you think that's a wise move, or should we be looking for some systemic solution more broadly? >> that's a great question. i think for any platform that has reached more than a couple million people, the public has a right to understand how it's impacting society, because we're entering an age where technology is accelerating faster and faster, right? democratic processes take time if they're done well, and we need to be able to think about, how will we know when the next danger is looming? because, for example, in my case, because facebook is a public company, i had avenues to come forward. if i worked at tiktok, which is growing very,
5:23 pm
very fast, that's a private company, and i wouldn't have had any avenues as a whistle-blower. for any tech company that has a large societal impact, we need to think about how we get data out. for example, you can't take a college class today to understand the integrity systems inside of facebook; the only people who understand them are people inside of facebook. so thinking systematically, for large tech companies, about how we get the information we need to make decisions is vital. >> you mentioned the oversight board. they themselves don't have access to the sort of information you've been publishing, all the information you've been discussing. do you think the oversight board should insist on that transparency, or disband itself? >> i always reject binaries. i'm not an "a" or "b" person. i love "c" and "d." i think there's a great opportunity for the oversight
5:24 pm
board to experiment. this is a defining moment for the oversight board. what relationship does it want to have with facebook? and i think -- i hope the oversight board takes this moment to step up and demand a relationship that has more transparency, because they should ask the question, why was facebook able to do that -- what enabled them? because if facebook can come in there and just actively mislead the oversight board, which is what they did, i don't know -- i don't know what the purpose of the oversight board is. >> it's the hindsight board, not the oversight board. you've been very eloquent about the impact of the algorithm. you talked about pushing extreme content, the amplification of that sort of content -- an addiction driver, i think, is the phrase you've used. and this follows on, really,
5:25 pm
from talking about the oversight board, or a regulator over here, or indeed trying to construct a safety by design regime. what do we need to know about the algorithm? and how do we get that, basically? should it be about the outputs of an algorithm? or should we be actually inspecting the entrails of the code? when we talk about transparency, it's very easy just to say we need to be much more transparent about the operation of these algorithms, but what does that really mean? >> i think it's always important to think about facebook as a concert of algorithms. there are many different algorithmic systems, and they work in different ways, and understanding how all those parts work and how they work together is important. to give an example, facebook has said engagement-based ranking is
5:26 pm
dangerous unless the ai can pick out the extreme content. because of this, they are actively misleading the speakers of most large languages in the world by saying we support 50 languages, when most of those countries have a fraction of the safety systems that english has. when we say, how does the algorithm work, we need to be thinking about, what is the experience of the algorithm for lots of individual populations? because the experience of facebook's news feed algorithm in a place that doesn't have integrity systems turned on is very different from, say, the experience in menlo park. there are ways of doing privacy-sensitive disclosures of -- we call it segmentation. so imagine if you divided the united states up into 600
5:27 pm
communities based on what groups people interact with and their interests. you don't need to say this group is, you know, 35-to-45-year-old white women who live in the south. you don't need to say that. you can just have a number on that cluster. but understanding, are some groups disproportionately getting covid misinfo? right now 40% of those segments are getting 80% of all the misinfo. for hate speech it's the same way. for violence incitement it's the same way. we should really be asking, do we understand the experiences of the algorithm, because if facebook gives you aggregate data, it will likely hide how concentrated those harms are. i want
5:28 pm
to be really clear: the people who go and commit acts of violence are people who get hyperexposed to this dangerous content, so we need to be able to break out those experiences. >> it's really interesting. do you think that is practical for facebook to produce -- would they need to do further research, or have they got ready access to that kind of information? >> they could produce that information today, right? the segmentation systems exist. that was one of the projects i founded when i was at facebook, and it has been used since, and they've already produced many of these integrity statistics. so part of what's extremely important is that facebook should have to publish which integrity systems exist and in which languages. let's imagine we're looking at self-harm content for teenagers. let's imagine we came in and said, we want to understand how
5:29 pm
self-harm is concentrated across these segments. facebook's most recent position, according to a source we talked to, is that they said, we don't track self-harm content. if they were forced to publish which integrity systems exist, people would say, wait, why do you not have a self-harm classifier? if i were writing standards on risk assessment, a mandatory provision i'd put in there is that you need segmented analysis, because the median experience on facebook is a pretty good experience. the real danger is that 20% of the population has a horrible experience, or an experience that is dangerous. >> is that the core of what we would need by way of information from facebook or other
5:30 pm
platforms, or is there other information, other data? i mean, what else do we need to be really effective in risk assessment? >> i think there's an opportunity -- imagine if, for each of those integrity systems, facebook had to show you a sampling of content at different scores. one thing i'm really concerned about is that facebook has trouble differentiating, in many languages, between extreme terrorism content and counter-terrorism content. think about the role of counter-terrorism content in society, how it helps people make society safer. for the language in question, i believe it was arabic, 76% of counter-terrorism content was getting labeled as terrorism. if facebook had to publish samples of content at different scores, we could go in and check, which is interesting because each system performs
5:31 pm
differently. i think there is a real importance to this: if facebook had to disclose what the scoring parameters were, i guarantee you researchers would develop techniques for understanding the roles of the scores and which kinds of content they amplify. >> thank you very much. you might be interested to know you're trending on twitter, so people are listening. i thought the most chilling sentence that you'd come out with so far this afternoon, and i wrote it down, is that anger and hate is the easiest way to grow on facebook. that's shocking, isn't it? what a horrendous insight into contemporary society on social media, that that should be the
5:32 pm
case. >> one report from facebook demonstrates how there are different kinds of feedback cycles all playing in concert. so when you look at the hostility of a comment thread -- one publisher at a time, taking all the content on facebook and looking at the average hostility of that comment thread -- the more hostile the comment thread, the more likely clicks go to that publisher. we also see people who want to grow really fast use that technique and spread it into their own groups, and the easiest psychological reaction to provoke is anger. >> for children this is particularly challenging, isn't it? i'd like to follow up
5:33 pm
specifically on harm to children. what percentage of british teenagers with suicidal thoughts trace those feelings back to instagram? >> i don't remember the exact statistic. i think it was around 12% or 13%. >> yes, it's exactly that. and body image is also made much worse, isn't it? and why should that be, for people who don't understand it? why should it be that being on instagram makes you feel bad about the way your body looks? >> facebook's own reports say it is not just that instagram is dangerous for teenagers, it is actually more dangerous than other forms of social media. >> why? >> because tiktok is about doing fun activities.
5:34 pm
reddit is at least vaguely about ideas, but instagram is about social comparison and about bodies. it's about people's lifestyles. and a number of things are different from what high school used to be like. so when i was in high school, it didn't matter if your experience in high school was horrible; most kids had good homes to go home to, and they could, at the end of the day, disconnect. they could get a break for 16 hours. facebook's own research says that now the bullying follows children home. it goes into their bedrooms. the last thing they see at night is someone being cruel to them. the first thing they see in the
5:35 pm
morning. >> some of your answers were quite complicated, so perhaps you could tell us in a really simple way that anybody can get: what could facebook do to address those issues -- children who want to kill themselves, children who are being bullied, children obsessed with their body image and not in a healthy way? what is it facebook can do now, without a lot of difficulty, to solve those issues? >> there are a number of factors that interplay and drive those issues. on a basic level, children don't have the self-regulation that adults do. when kids describe their usage of instagram, facebook's own research describes it as an addict's narrative. the kids say, this makes me
5:36 pm
unhappy, i feel like i don't have the ability to control my usage of it, and i feel like if i left i'd be deeply ostracized. >> then they shouldn't be on it. >> i would love to see a proposal from an established independent agency that had a picture of what a safe version of instagram for a 14-year-old looks like. >> you don't think such a thing exists? >> i think we should speak with care about whether or not instagram is safe for a 10-year-old. >> what i find misleading about facebook's statements regarding children is they say things like, we need instagram kids because kids are going to lie about their age, so we might as well have a safe thing for them. facebook should have to publish what they do to protect
5:37 pm
13-year-olds on their platform. facebook's research does something where facebook can guess how old you are with a great deal of precision, because they take a look at who your friends are, who you interact with. >> because you're at school. if you're wearing a school uniform, chances are you're under 20. >> this is something they found -- they found facebook had estimated the ages of teenagers and worked backwards to figure out which kids had lied about their ages and were on the platform. they found 10% to 15% of 10-year-olds were on the platform. facebook should have to publish those stats every year. >> so facebook could resolve this, solve this, if it wanted to. >> facebook could make a huge dent in this if they wanted to. and they don't, because they know young users are the future of the platform, and the younger they get them, the more likely
5:38 pm
they'll get them hooked. >> they're also getting to see all the disinformation about covid and everything else the rest of us are getting to see. and just remind us, what percentage of disinformation is being taken down by facebook? >> i actually don't know that stat off the top of my head. >> from what i understand, it's 3% to 5%. >> oh, that's for hate speech. but i'm sure it's approximately the same. >> it's probably even less, because the only information marked false at facebook is information that has been verified by third-party fact-checking systems, and that can only catch viral misinformation -- that's misinformation that goes to half a million, a million people. and this is the most important thing for the u.k.: i don't believe there's anywhere near as much third-party fact-checking coverage for the u.k. compared to the united states. >> okay. actually the figure is 10% to 20%, so i stand corrected.
5:39 pm
10% to 20% for misinformation, and 3% to 5% for hate speech. so a vast amount of disinformation and hate speech is getting through to children, which must present children with a very peculiar sense of the world. and we have absolutely no idea, do we, how these children are going to grow up and change and develop and mature, having lived in this very poisonous society at this very delicate stage in their development. >> i'm extremely worried about the developmental impacts of instagram on children. for example, if teenage girls aren't active because they're spending their time on instagram, you may have osteoporosis for the rest of your life. there are going to be women walking around on the face of this earth with brittle bones because of the choices facebook makes now. kids are learning that people they care about treat them cruelly, because kids on instagram, when
5:40 pm
they aren't moderated by the feedback of watching someone cry, are much more hateful to people, even their friends. imagine what the domestic relationships will be like for those kids when they're 30, if they learn that people who they care about are mean to them. >> the other disturbing thought you told us about is the idea that language matters. so we think facebook is bad now, but what we don't tend to realize in our culture is that all the other languages around the world are getting no moderation of any kind at all. >> the u.k. is a diverse society. engagement-based ranking is dangerous without ai, and many languages don't have those safety systems. people who speak those languages are also living in
5:41 pm
the u.k. and being fed misinformation that is dangerous or that radicalizes people. so language-based coverage is not just good for individuals; it's a national security issue. >> that's interesting. on the social front, you point out that there might be differences between the u.k. and the united states. i have personal experience on twitter where i was called -- and i reported it to twitter, and twitter wrote back and said there was nothing wrong with what i had been called. and i wrote back with the exact chapter and verse from their community standards that showed it was unacceptable, and somebody wrote back to me, presumably from california, telling me it was absolutely acceptable. to be generous, it may just be they didn't know what i had been called, because the person
5:42 pm
was in the united states. in a nutshell, what do you want us to do? what's the most useful thing in addressing the concerns you've raised here? >> i think forcing facebook -- let me be clear, bad actors have already tested facebook. they've gone and tried to hit the rate limits. they've run experiments with content. they know facebook's limitations. the only ones who don't know facebook's limitations are good actors. facebook needs to disclose which languages they work in and the performance per language or per dialect. i guarantee you the safety systems designed for english probably don't work as well on u.k. english versus american
5:43 pm
english. >> what your evidence has shown us is that facebook is -- it's failing to prevent harm to children. it's failing to prevent the spread of disinformation. it's failing to prevent hate speech. it does have the power to deal with these issues; it's just choosing not to, which makes me wonder whether facebook is just fundamentally evil. is facebook evil? >> i cannot understand the hearts of men. and i think there's a real thing of good people -- and facebook is overwhelmingly full of conscientious, kind and empathetic people -- good people who are embedded in systems that lead them to bad actions. and there's a real pattern of people who are willing to look the other way being promoted more than people who raise alarms. >> we know that from history, don't we.
5:44 pm
can we compromise: it's not evil, maybe that's an overly moralistic word, but the way some of the outcomes of facebook's behavior play out is evil? >> i think it's negligent. >> malevolent? >> malevolence implies intent. i believe there's a pattern of facebook being unwilling to acknowledge its own power. it believes in a world of flatness, which hides the difference that children are not adults. they believe in flatness and won't accept the consequences of their actions. so i think it is negligence, and it is ignorance, but i can't see -- i can't see into their hearts, so i don't want to characterize it as malevolent. >> i respect your desire, obviously, to answer the question in your own way. given the evidence you've given us, i think a reasonable person running facebook and seeing the consequences of the company's
5:45 pm
behavior would, i imagine, have to conclude that what they were doing, the way their company was performing and the outcomes, were malevolent, and would want to do something about it. >> do you mind if i rest my voice for five minutes? can we take a break for a second? sorry, i don't know how long we're going to go. if we go for two hours -- >> go ahead and ask. >> i am a big proponent of the idea that we need to look at systems and how systems perform, and this idea that -- and actually this is a huge problem inside of facebook: they believe that if they establish the right metrics, they can allow people
5:46 pm
free rein. they're intoxicated by flatness. their office is a quarter of a mile long, all one room. they believe in flatness. they believe if you pick a metric, you can let people do whatever they want to move that metric, and that's all you have to do. but if you later learn that the metric is leading to harm, which is what meaningful social interactions did, the metric can get embedded, and people get scared to change the metric because it would make people not get their bonuses. so i think there's a real thing of, there's no will at the top. you know, mark zuckerberg has unilateral control over 3 billion people. there's no will at the top to make sure these systems are run in an adequately safe way. and i think until we bring in a counterweight, things will be operated for the shareholders' interest and not for the public
5:47 pm
interest. >> thank you, chair. and thank you, again, for joining us today. it's incredibly important. your testimony has been heard loud and clear. the point is that if facebook were optimizing algorithms in the same way, or dutifully doing it in the same way, as a drug company trying to improve the addictiveness of its product, it would probably be viewed very differently. i wondered if you could explore a bit further this role of addiction, and whether or not facebook is doing something we perhaps have never seen in history before, which is creating an addictive product that perhaps isn't consumed by taking a drug, as it were, but is consumed by a screen? >> inside of facebook there are many euphemisms that are meant to hide the emotional impact of things. for example, the ethnic violence team is called the social cohesion team. ethnic violence is what happens when
5:48 pm
social cohesion breaks down. for addiction, the euphemism is problematic use. people are not addicted; they have problematic use. the reality is that using large-scale studies -- so these are 100,000 people -- facebook has found problematic use is much worse in young people than in people who are older. the bar for problematic use is that you have the self-awareness and are honest enough with yourself to admit you don't have control over your usage and that it is harming your physical health, your schooling or your employment. and for 14-year-olds, it's only their first year, so they haven't developed problematic use yet. but between 5.8% and 8% of kids say they have problematic use, and that's a huge problem. the real number is probably 15% or 20%. i am deeply concerned about
5:49 pm
facebook's role in hurting the most vulnerable among us. facebook has studied who has been most exposed to misinformation, and it is people who have been recently widowed, people recently divorced, people who have moved to a new city, people who have been socially isolated. i'm deeply concerned they've made a product that can lead people away from their real communities and isolate them in these rabbit holes, in these filter bubbles. what you find is that when people are sent targeted misinformation to a community, it can make it hard to reintegrate into larger society, because now you don't have shared facts. i'd like to talk about the misinformation burden, instead of thinking about it as -- because it is a burden when we encounter this kind of information. facebook right now doesn't have any incentive for trying to do high-quality, shorter sessions. imagine if there were a tax that was a penny an hour.
5:50 pm
this is like dollars a year for facebook. a tax like that would push facebook towards shorter sessions with higher quality. nothing today incentivizes this. all the incentives say, if you can get them to stay on longer, you'll get more ad revenue, you'll make more money. >> in the sense of the way the discussion is around that, we're looking at the online safety bill around it being a publishing platform. should we be looking at this much more as almost a product approach, which in essence is the addiction, as you say, with young people trying to get almost a high from the dopamine in their brain? we have heard previous testimony from experts highlighting that children's brains have been changing because they're using facebook and other platforms to a large extent over many, many hours.
5:51 pm
hours. were being given the same h symptoms, the same acces wed would be very quick to clam down on it, but we call it facebook and we think everyone is using it nicely, that that doesn't happen. it's interesting your view on the impacts of children, whether facebook iss looking at that an whether w we should be doing th almost with regards to being a product rather than a platform. >> i find it really telling if you go to silicon valley and look at t the most elite privat schools, they often have zero social media policies. that they try to establish cultures wherere you don't use phones and don't connectct with each other on d social media. the fact thatt is a trend in these schools in silicon valley warning to us all. it is super scary to me that we're not taking a safety firstf perspective with regard to children. safety by design is so essential for kids because the burden that
5:52 pm
we have set up until now is the idea that the public has to prove to facebook that facebook is dangerous. facebook has never had to prove that their product is safe for children. and we need to change that: like with pharmaceuticals a long time ago, we said it's not the obligation of the public to say this medicine is dangerous; it's the obligation of the producer to prove the medicine is safe. we have done that over and over again. this is the right moment to act, to change that relationship with the public. >> and if i may, just on that point -- sorry, my screen seems to be switching off, my apologies. with regards to that point of addiction, have there been any studies, to your awareness, within facebook and within the documents you have seen, where they have actually looked at whether they can increase addiction by the algorithm? >> i have not seen any documents that are as explicit as saying
5:53 pm
facebook is trying to make addiction worse, but i have seen documents where, on one side, someone is saying the number of sessions per day that someone has -- like the number of times they visit facebook -- is indicative of their risk of exhibiting problematic use. but on the other side -- they're clearly not talking to each other -- someone says, interesting, an indicator that people will still be on the platform in three months is if they have more sessions every day; we should figure out how to drive more sessions. this is an example of how facebook is not -- because their management style is flat, there isn't enough cross-pollination, and knowledge is kept away from where it's needed, and in that kind of world where it's not integrated, it causes dangers and makes the problems worse. >> we have been here over an hour. we'll take a ten-minute break.
5:54 pm
thank you. >> thank you. the session will now resume. i will ask the member to continue with his questions. >> thank you, chair, and thank you again. the question i wanted to build on -- i have a few more questions, but hopefully it won't take too long. one just continues on facebook and the platform. you mentioned before that you haven't seen any research in that area, but i just wonder if there's any awareness within facebook of the actual effects of long use of facebook and similar platforms on children's brains as they're developing? >> i think there's an important question to be asked, which is, what is the incremental value added to a child after some number of hours of usage per day? i'm not a child psychologist. i'm not a neurologist. i can't advise on what that time
5:55 pm
limit should be, but i think we should ask -- like, we should weigh a trade-off, which is, i think it's possible to say there is value given from instagram, but there's a real question of how valuable is the second hour after the first hour, how valuable is the third hour after the second hour? the impacts are probably more than cumulative. they probably expand, potentially, over time. those are great questions to ask. i don't have a great answer. >> on that point, before i move on, a small extra point. do you think, from your experience, that the senior leadership, including mark zuckerberg, actually care if they're doing harm to the next generation of society, especially children? >> i cannot see into the hearts of men, so i don't know. i don't know what their position is. i know that there is a philosophy inside the company that i have seen repeated over and over again, which is that people focus on the good.
5:56 pm
there's a culture of positivity. and you know, that's not always bad, but the problem is that when it's so intense that it discourages people from looking at hard questions, then it becomes dangerous. and i think, really, they haven't adequately invested in security and safety, and when they see a conflict of interest between profits and people, they keep choosing profits. >> do you agree it's a sign that they perhaps don't care, because they haven't investigated or done research into this area? >> i think they need to do more research and take more action, and they need to accept that it's not free -- that safety is important and is a common good, and that they need to invest more. >> thank you. if i may, just on a slightly different point. you're obviously a globally known whistleblower, and one of the aspects we have looked at over the last few weeks is the issue of
5:57 pm
anonymity, and one of the points that is made is that if we pushed on anonymity, that could do harm in the future. i want to get your sense of whether you agree with that, and whether you have any particular view on anonymity. >> i worked on google plus in the early days. i was actually the person in charge of profiles on google plus when google internally had a small crisis over whether or not real names should be mandated. and there was a movement inside the company called real names considered harmful. it detailed at great length all the different populations that are harmed by requiring real names. that's groups like domestic abuse survivors, whose personal safety may be at risk if they're forced to engage with their real name. on anonymity, i think it's important to weigh what is the
5:58 pm
incremental value of requiring real names. real names are difficult to implement. most countries in the world do not have digital services where you can verify someone's id against their picture or a database. and in a world where someone can use a vpn and claim they're in one of those countries and register a profile, that means they can still do whatever action you're afraid of them doing today. the second thing is that facebook knows so much about you -- and if they're not giving you information to facilitate investigations, that's a different question. facebook knows a huge amount about you today. the idea that you're anonymous on facebook is, i think, not actually an accurate way to describe what's happening. but the third thing is that the real problem here is the systemic amplification. it's not a problem about individuals. it's having a system that prioritizes and mass-distributes polarized and extreme content. and in situations where you just
5:59 pm
limit content -- not limit, but when you just show more content from your family and friends -- you get, for free, safer, less dangerous content. >> so, very finally, just in terms of anonymity for this report, are you saying we should be focusing more on the proliferation of the content to large numbers than we should on anonymity and the sort of content? >> i think the much more scalable, effective solution is thinking about how the content is distributed on the platform: what are the biases of the algorithms, what do they distribute more of, and are people getting pounded with that content? for example, this happens on both sides; both sides get exposed to hypertoxicity. >> thank you. thank you very much. >> thank you. i want to ask you about the
6:00 pm
point you just made on anonymity. what you're saying sounds like anonymity currently exists to hide the identity of the abuser from the victims, but not the identity of the abuser from the platform. >> platforms have far more information about accounts than people are aware of, and platforms could be more helpful in identifying those connections in these cases. and so i think it's a question of facebook's willingness to act to protect people, more so than a question of whether those people are anonymous on facebook. >> one of the concerns is, if you say, well, they should always know who the account user is, some people say, if you do that, there's a danger of the system being hacked or the information getting out in another way. what you're saying is, practically, the company already
6:01 pm
has that data anyway. it knows so much about each one of its users. and on facebook, you have to have your name on the account anyway, in theory. anonymity doesn't really exist, because the company knows so much about you. >> you can imagine designing facebook in a way where, as you use the platform more, you have more reach, right? like the idea that reach is earned, not a right. in that world, as you use the platform more, the platform is learning more and more about you. the fact that they even allow a throw-away account, that opens up all sorts of doors. i want to be clear: in a world where you require people to provide ids, they're still going to have that problem, because facebook will never be able to mandate that for the whole world. lots of countries don't have those systems. as long as you can pretend to be in that country and register an account, you still have that. >> if i could join my colleagues
6:02 pm
in thanking you for being here today. there is no obligation on facebook and the other platforms to carry that journalism. instead, it is up to them to apply the codes which are laid down by the regulator, directed by the government in the form of the secretary of state, and ostensibly to make their own judgments about whether or not to carry it. now, it's going to be a.i. which is doing that. it's going to be the black box which is doing that. which leads, in effect, to the possibility of censorship by algorithm. and what i would like to know, in your experience, do you trust a.i. to make those sorts of
6:03 pm
judgments? or will we get to the sort of situation where all news, legitimate news about terrorism, is in fact censored out because the black box can't differentiate between news about terrorism and content which is promoting terrorism? >> i think there's a couple of different issues there to unpack. the first question is around something, you know, excluding journalism. right now, my understanding of how the bill is written is a blogger could be treated the same as an established outlet with editorial standards. people have shown over and over again that they want high-quality news. people are willing to pay for high-quality news. it's actually interesting, one of the highest rates of subscription to news is amongst 18-year-olds. young people understand the value of high-quality news. when we treat a random blogger and an established high-quality news source the same, we actually
6:04 pm
dilute the access of people to high-quality news. that's the first issue. the concern is, if you just exempt it across the board, you're going to make the regulations ineffective. the second question is around can a.i. identify safe versus dangerous content. part of why we need to be forcing facebook to publish which integrity systems exist in which languages, and their performance data, is that right now those systems don't work. like, facebook's own documents say they have trouble differentiating between content promoting terrorism and counterterrorism at a huge rate. the number i saw was 76% of counterterrorism speech in this country was being flagged and taken down. any system where the solution is a.i. is a system that's going to fail. instead, when you focus on slowing the platform down, making it human scale, and letting humans choose what we focus on, not letting a.i.,
6:05 pm
which is going to be misleading us, make that decision. >> what practically could we do? >> great question. i think mandatory risk standards, like how good does this risk assessment need to be? analysis around segmentation. all those things are critical, and the most important part is having a process where it's not just facebook articulating harm, it's also the regulator going out and turning back to facebook and saying you need to articulate how you're going to solve these problems. because right now, the center of mass for facebook is its shareholders, and we need to pull that center of mass back towards the public good. and right now, facebook doesn't have to solve these problems. it doesn't have to disclose they exist, and it doesn't have to come up with solutions. in a world where they were regulated and mandated -- you have to tell us what a five-point
6:06 pm
plan is on each of these things, and if it's not good enough, we're going to come back and ask you again -- that's a world where facebook now has an incentive where instead of getting 10,000 engineers to make that, you have 10,000 engineers to make us safer. >> i believe that if facebook doesn't have standards for those risk assessments, they will give you a bad risk assessment. facebook has established over and over again, when asked for information, they mislead the public. so i don't have any expectation they'll give you a good risk assessment unless you articulate what a good one looks like, and you have to be able to mandate they give solutions, because on a lot of these problems, facebook has not thought about how to solve them, or because there's no incentive forcing them away from shareholder interests, when they have to make a little sacrifice, like 1% growth here or 1% growth there, they choose growth over
6:07 pm
safety. >> just a general question. as things stand at the moment, do you think this bill is keeping mark zuckerberg awake at night? >> i am incredibly excited and proud of the uk for taking such a world-leading stance when thinking about regulating social media platforms. there are places that don't have the resources to stand up and save their own lives. they're excluded from these discussions, and the uk has a reputation of leading policies that are followed around the world. i can't imagine mark isn't paying attention to what you're doing, because this is a critical moment for the uk to stand up and make sure that these platforms are in the public good and are designed for safety. >> we probably need to do a little more in the bill to make that the case.
6:08 pm
>> i have faith in you guys. >> thank you very much. >> a very compelling argument on the way regulation should work. do you not think it's disingenuous of companies like facebook to say we welcome regulation, we actively want you to regulate, and yet the company does none of the things you said it should do and doesn't share any of that information? >> it's important to understand that companies work within incentives and the context they're given. i think today facebook is scared that if they freely disclose information that wasn't requested by a regulator, they might get shareholder losses. i think they're really scared about doing the right thing because, in the united states, because they're a private company, they have a fiduciary duty to maximize shareholder value. so when they're given these choices between 5% more misinformation or 10% more
6:09 pm
misinformation and a 1% loss of sessions, they choose sessions and growth over and over again. i think there's actually an opportunity, i think, to make the lives of facebook employees better, like rank-and-file employees, by giving appropriate goalposts for what is a safe platform, because right now, i think there's a lot of people inside the company that are uncomfortable about the decisions they're being forced to make within the incentives that exist, and creating different incentives through regulation gives more freedom to be able to do things that they might be more aligned with. >> they don't -- so much of your argument is about data driving engagement, driving up revenue, which is what the business is about. to me, that doesn't look like a company that wants to be regulated. >> again, i think, like i said before, i can't see into men's hearts, i can't see motivations,
6:10 pm
but knowing what i know -- i have an mba, and given what the laws are in the united states, they have to act in the shareholders' interest or justify something else. and i think a lot of the long-term benefits are harder to prove. i think if you make facebook safer and more pleasant, it will be a more profitable company ten years from now, because the toxic version of facebook is slowly losing users. at the same time, the actions in the short term are easy to prove. i just don't know. >> thank you. thank you so much for being here. it's truly appreciated. and everything you have done to get yourself here as well in the last year. look, what would it take for mark zuckerberg, the facebook executives, to actually be accountable? do you think they are aware of the human cost that has been
6:11 pm
exacted -- what i'd say is they're not accountable enough -- and the human price of this? >> i think it's very easy for humans to focus on the positive over the negative. i think it's important to remember that facebook is a product that was created by harvard students for other harvard students. when a facebook employee looks at their news feed, it's likely they see a, say, pleasant place. their immediate visceral perception of what the product is, and what's happening in a place like ethiopia, they're completely foreign worlds. i think there's a real challenge of incentives where i don't know if all of the information that's really necessary gets very high up in the company, where the good news trickles up but not necessarily the bad news. i think it's a thing where executives see all the good they're generating and they can write off the bad as a cost of that good. >> i'm guessing now, having
6:12 pm
probably watched what's going on here, in westminster, they probably are very much aware of what has been going on. i really, truly hope they are, bearing in mind all the other sessions we have had and people coming here, just stories that are quite unbelievable. so has it ever been the message, to your knowledge, internally or privately, that they have got it wrong? >> there are many employees internally -- i think the key thing that you will see over and over again in the reporting on these issues is that countless employees have lots of solutions. we have lots of solutions that don't involve picking out good and bad ideas. it's not about censorship. it's about the design of the platform, about how fast it is, how growth-optimized it is. we could have a safer platform, and it could work for everyone in the world, but it will cost little bits of growth. and i think there's a real problem that those voices don't get amplified internally, because
6:13 pm
they're making the company grow slower, and this is a company that lionizes growth. >> what's your view on criminal sanctions and content? do you believe there is a need for criminal sanctions? >> my philosophy on criminal sanctions for executives is they act like gasoline -- whatever the terms are, if you have a really strong law, then they will amplify those consequences. but the same could be true if there's flaws in the law. it's hard for me to articulate, given where the law stands today, whether or not i would support criminal sanctions, but i think it is a real thing that it makes executives take consequences more seriously. so, you know, it depends on where the law ends up. >> thank you. just a quick one now. i know you touched on this earlier and had that conversation.
6:14 pm
quick question. the promotion of hate and anger, is it by accident or is it by design? >> facebook has repeatedly said we have not set out to design a system that promotes angry, divisive, hateful content. they said we never did that, we never set out to do that. but there is a huge difference between what you set out to do, which was prioritize content based on its likelihood to elicit engagement, and the consequences of that. so i don't think they set out to accomplish these things, but they have been negligent in not responding to data as it is produced. and there is a large number who have been raising these issues internally for years, and the solutions facebook has implemented, which is in countries where it has specific classifiers, which is not very many countries or languages in the world, they're removing some of the most dangerous terms from engagement-based ranking. but that ignores the fact that the most vulnerable, fragile
6:15 pm
places in the world are linguistically diverse. ethiopia speaks six languages; facebook only supports two of them. there's a real thing of saying, if we believe in linguistic diversity, the current design of the platform is dangerous. >> online harm has been out there for some time. we're all aware. it's very much in the public domain. i touched on it briefly before. why aren't the tech companies doing anything about it? why have they had to wait for this bill to come through to make the most obvious changes to what is basically online harm and human loss? why aren't they doing something now about it? >> i think as we look at the harms of facebook, we need to think about these things as systemic problems.
6:16 pm
like, the idea that these systems are designed products -- they're intentional choices -- and it's often difficult to see the forest for the trees. facebook is a system of incentives, full of good, kind, conscientious people working within those incentives, and there's a lack of incentives for safety in the company, and lots of rewards for amplifying and making things grow more. there's a big challenge that facebook's management philosophy is they can pick good metrics and let people run free, and they have found themselves in a trap where, in a world like that, how do you propose changing the metrics? it's very hard, because 1,000 people might have directed their work at those metrics, and changing the metrics will disrupt all of that work. i don't think it was intentional, but they're trapped in this. that's why we need regulation, mandatory regulation, mandatory action to help pull them away from that spiral they're caught in.
6:17 pm
>> thank you. >> debbie abrahams. >> thank you. and i echo my colleagues' thanks to you for coming over and giving evidence to us. i just wanted to ask you in relation to an interview you gave recently. you said facebook consistently resolved conflicts in favor of its own profit. i wonder if -- in the testimony you have given so far, i wonder if you could pick two or three examples you think really highlight this point. >> i think overall, their strategy that engagement-based ranking is safe once you have a.i. -- i think that is like the flagship one, showing how facebook has tools they could use that aren't about content, and each one of those, for example, limiting resharing, that's going to carve off maybe 1% of growth.
6:18 pm
right? or requiring you to click on a link before you reshare it -- this is something twitter has done. facebook wasn't willing to. lots of things around language coverage. facebook could be building much, much more rigorous safety systems for the languages they support, and they could do a better job of saying, here are the languages we have already identified as at risk, but we're not giving them equal treatment, we're not even out of the risk zone with them. and i think that pattern of behavior, of being unwilling to invest in safety, is the problem. >> okay. so looking specifically at the events in washington on the 6th of january, there has been a lot of talk about facebook, facebook's involvement in that. i think at the moment, your evidence is being looked at in terms of depositions.
6:19 pm
so is that -- would that be harmful? would somebody have highlighted this as a particular concern, taking it to the executives? and i'm horrified about what you say about the lack of risk assessment and risk management in the organization, which i think is a gross dereliction of responsibility, but would that have been one example of where facebook was aware of the potential harm that this could create, that was created, and they chose not to do anything about it? >> what's particularly problematic to me is that facebook looked at its own product before the u.s. 2020 election and identified a large number of settings. these are things as subtle as, like, you know, should we amplify live videos 600 times or 60 times? because they want live video on
6:20 pm
top of your feed. they came in and said, that setting is great for promoting live video, for making that product grow, for having impact with that product, but it is dangerous, because, like it was used on january 6th, it was actually used for coordinating the rioters. facebook looked at those risks, along with others, and said we need to have these in place before the election. facebook has said the reason they turned them off is that censorship is a delicate issue. i find this so misleading. most of the interventions have nothing to do with content. they are questions like, if you promote live video 600 times versus 60 times, have you censored someone? i don't think so. so facebook has characterized this as, they turned these off because they don't believe in censorship. on the day of january 6th, most of the interventions were still not turned back on at 5:00 p.m. eastern time. right? and that shocked me.
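a minimal sketch, outside the testimony itself, of the kind of non-content knob she is describing: a single boost multiplier applied to live video can decide what tops a feed without any judgment about what a post says. all names and numbers below are made up for illustration; this is not facebook's actual ranking code.

def score(post, live_video_boost):
    # predicted_engagement stands in for a model's estimate in [0, 1]
    boost = live_video_boost if post["is_live"] else 1.0
    return post["predicted_engagement"] * boost

posts = [
    {"id": "friend_photo", "predicted_engagement": 0.9, "is_live": False},
    {"id": "live_stream", "predicted_engagement": 0.01, "is_live": True},
]

for boost in (600, 60):
    ranked = sorted(posts, key=lambda p: score(p, boost), reverse=True)
    print(boost, [p["id"] for p in ranked])

# at 600x the weak live stream (0.01 * 600 = 6.0) outranks a strong post from
# a friend (0.9); at 60x (0.01 * 60 = 0.6) it does not. tuning that knob is a
# distribution decision, not a censorship decision.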
6:21 pm
they could have turned them on seven days before. either they're not paying enough attention for the amount of power they have, or they're not responsive when they see those things. i don't know what the root cause is, but all i know is that's an unacceptable way to treat something as powerful and as delicate as this. >> your colleague, your former colleague, was giving evidence last week, and she made the point that we have freedom of expression and freedom of information; we don't have freedom of amplification. is that something you would agree with, in terms of censorship? >> the current philosophy inside the company is almost like they refuse to acknowledge the power they have. the ideas, the choices they're making, they justify them based on growth. and if they came in and said we need to use safety first and safety by design, i think they
6:22 pm
would choose different parameters in terms of optimizing how this works, because i want to remind people, we liked the version of facebook that didn't have algorithmic amplification. we saw our friends, we saw our families. it was human scale, and i think there's a lot of value and joy that can come from returning to a facebook like that. >> very important points. as a private company, they have a fiduciary responsibility to their shareholders and so on. do you think, though, there are breaches of their own terms and conditions? again, that conflict there. >> i think there's two issues. the one is, i think, terms and conditions -- they define them. that's like them grading their own homework. like, they're defining what's bad, and then we know now they don't even find what they say is bad.
6:23 pm
the second thing is around duties outside of shareholders, and we have had a principle for a long, long time that companies cannot subsidize their profits at public expense. they cannot pay for their profits using the public's money. right? like, if you go and pollute the water and people get cancer, the public has to pay for those people. similarly, if facebook is sacrificing our safety because they don't want to invest enough -- and don't listen to them when they say we spend $14 billion on safety. that's not the question. the question is how much do you need to pay to make it safe? >> one of the things the bill, or the committee rather, has been looking at is in terms of the duty of care. is that something we should be considering very carefully to mandate? >> i think the duty of care is really important. we have let facebook act freely for too long, and they have demonstrated they cannot meet the multiple
6:24 pm
criteria necessary to be allowed to act independently. the first is, when they see conflicts of interest, they resolve them to align with the public good. the second is they can't lie to the public. facebook has demonstrated it needs oversight. >> thank you so much. my final question is, do you think the regulators are up to the job? >> i think -- i am not a lawmaker, so i don't know a ton about the design of regulatory bodies, but i think things like having mandatory risk assessments with certain levels of quality is a flexible technique. as long as facebook is required to articulate solutions, that might be a good enough dynamic, and as long as there's also community input into that risk assessment, that might be a system that could be sustainable over time. the reality is facebook is going to keep trying to run around the edges, so we need to have
6:25 pm
something that can continue over time, not play whack-a-mole on specific pieces of content. >> thank you so much. >> joining us remotely, darren jones. >> thank you. following on from some of the discussion about how some of the provisions in the bill might be operationalized in the day-to-day at facebook. firstly, there's something in the bill about illegal content, such as terrorism, and legal but harmful content, and the question about how you define what is harmful is based on the idea that a company like facebook would reasonably foresee that something was causing harm. and we have seen through some of the last few weeks that facebook undertakes this research internally but maybe doesn't publish it. by not researching the harm and claiming they have no reasonable foresight of new harm, would they be able to get around that, in your view?
6:26 pm
>> i'm extremely worried about facebook ceasing to do important research. it's a great illustration of how the only one who gets to ask questions of facebook is facebook. we probably need something like a program where public interest people are embedded in the company for a couple of years and they can ask questions. they can learn about the systems and go out into academia and train the next generation of integrity workers. i think there are big questions around legal but harmful content. it is dangerous. right? for example, covid misinformation -- that leads to people losing their lives. there are large social, societal consequences of this. i'm also concerned that if you don't cover legal but harmful content, you will have a much, much smaller impact from this bill, and especially on impacts to children, for example. a lot of the content we're
6:27 pm
talking about here would be legal but harmful content. >> thank you. i know you said earlier -- i agree with you on those points. my second question is, say the company has found a new type of harmful content, or content that was leading to physical or mental harm to individuals. we talked today about how facebook works, whether it's about content promotion or groups or messaging. how would you go about auditing and assessing how this harm is being shared within the facebook environment? it sounds like a very big job to me. how would you actually go about doing it? >> one of the reasons why i'm such a big advocate of having the fire hose is picking a standard where you say, if more than x number of people see the content, it's not really private. you can include metadata about
6:28 pm
each piece of content -- for example, the group. what group does this content show up in? which groups are most exposed to it? imagine if we could tell, oh, facebook is distributing a lot of harmful content to children. that's the kind of metadata we could be releasing, and i think there's a really interesting opportunity that once more metadata is accessible, a cottage industry will spring up amongst independent researchers. like, if i had access to this data, i would start a youtube channel and educate people about it. i think there's opportunities where we will develop the muscle of oversight if we have at least a peephole to look into facebook. >> there's a new type of harm
6:29 pm
being created, appearing -- that with the amount of content on the platform, it's really difficult to find it. you're saying that's not true and they have the capability to do that. >> so, as i said earlier, i think it's really important for facebook to publish which integrity systems exist. what content can they find? because we should be able to have the public surface things, like, hey, we believe there's a harm here, and they can say, oh, we don't find that harm. for example, i heard there was a question around self-inflicted harm content and whether kids are exposed, and facebook said we don't have that content. we don't have a mechanism today to force them to answer those questions, and we need to have the ability to make it mandatory, where we can say you need to be tracking this harm. sorry, i forgot your question.
6:30 pm
my apologies. >> i just wanted to ask about the capacity to consider that, and, in your experience, how the different teams in the business operate. you know, the programmers who are writing all these algorithms, the whole thing and how it works, and then receiving answers from the product side in order to produce what they present to our regulator. my concern is that the real truth about what's happening sits with the programmers, and it may not get through in that audit, that submission to the regulator. am i wrong in these assumptions? do they work well together in what they all do? >> i think it's really important to know there are conflicts of
6:31 pm
interest between those teams. one of the things that's been raised is the fact that at twitter, the team that is responsible for writing policy on what is harmful reports separately to the ceo from the team that is responsible for external relations with government officials. at facebook, those two teams report to the same person. the person who is responsible for keeping politicians happy is the same person that gets to define what is harmful or not harmful content. i think there's a real problem where the left hand doesn't speak to the right hand at facebook. that's the example i gave earlier: on one hand, someone on integrity is saying one of the signs of problematic use, of addiction, is that you come back a lot of times a day, and someone on the growth team is saying, did you notice, we get you to come back multiple times a day, you're still using the product in three months. the org is too flat, with no one who is really responsible. antigone davis, when she
6:32 pm
was pressed on instagram kids, she couldn't articulate who was responsible, and that is a real challenge at facebook, that there is not any system of responsibility or governance, so you end up in situations like that, where you have one team, you know, probably unknowingly pushing behaviors to cause more addiction. >> it may be that we need to look at, like, an audit where people are coming together. >> it might be good to include in the risk assessments saying, what are your organizational risk assessments, not just your product risk assessments, because the organizational choices of facebook are introducing systemic risk. >> my very last question is, this law here in the uk has some international reach, but it's a uk law. we have heard evidence before that employees of different technology companies here in london will watch how decisions
6:33 pm
are made. do you think there are risks that the way the power is structured in california means the uk teams, even with the law here, may not be able to do what they need to do? >> facebook today is full of kind, conscientious people who work within a system of incentives that unfortunately leads to bad results, results that are harmful to society. there is definitely a center of mass that exists in menlo park. there's definitely a greater priority on moving growth metrics than safety metrics, and you may have safety teams based around the world whose actions will be greatly hindered or even rolled back on behalf of growth. that's a real problem. again, we were talking about the idea of the right hand and left hand. there are reports in facebook that talk about how an integrity team might spend
6:34 pm
months pushing out a fix that lowers misinformation 10%, but because the a.i. is so poorly understood, people will add in little factors that will basically re-create whatever the harm was. and so, over and over, if the incentives are bad, you'll get bad behavior. >> thank you. thank you. >> thank you. wilf stevenson. >> thank you very much. my thanks, echoing everybody else's, for the incredible picture you have painted. i don't think we have mentioned that we recognize the problem you face with what you have done, but it is fantastic to get a sense of what's happening inside the company. i want to pick up on what was being said just a few minutes ago. it's almost like 1984, some of your descriptions -- the way you have talked about the names given to
6:35 pm
parts of the organization that aren't what the names seem to imply. that raises an issue. you also said the promotion structure probably pulls in a different direction, and therefore these people didn't get what you might expect. organizations have a culture of their own, so in a sense, my question is about culture. do you think there is a possibility that, with a regulated structure of the type we're talking about being seen as a way forward in the way the world deals with these huge companies, there are sufficient people with good hearts and good sense in the company to rescue it? or do you feel that somehow the duality you're talking about, the left hand and right hand, has gotten so bad that it will never recover itself, it has to
6:36 pm
be done by an external agency? it comes back to culture. do you think there's a way? >> unless the incentives change -- until the incentives facebook operates under change, they will not change facebook. i think it's a question of, facebook is full of kind, conscientious, good people, but the systems reward growth. and the "wall street journal" has reported on how people who have advanced inside the company, and disproportionately the managers and leaders of the integrity teams, have come up through growth. that seems deeply problematic. i think there's a need to provide an external pull to move it away from just being optimized on short-termism and immediate shareholder profitability, and more towards
6:37 pm
the public good, which i think will lead to a more profitable, successful company ten years down the road. >> you may not be able to answer it; it's not an easy question to answer. inside the company, the things you have been saying -- i think you said people do talk about these things. what is it that stops it getting picked up in official policy? is there a gatekeeper on behalf of the group which says we don't talk about that, move on? >> i don't think there is an explicit gatekeeper, not like there are things we cannot say. but i think there is a real bias in how experiments are taken into review, and how the costs and the benefits are assessed. and facebook has characterized some of the things i have talked about in terms of, we're against censorship. and the things i'm
6:38 pm
talking about are not content based. i am not here to advocate for more censorship. i'm about saying, how do we make the platform more human scale, how do we move back to things like chronological ranking, finding ways to move towards solutions that work for all languages. but in order to do that, we have to accept the cost of little bits of growth being lost. i love the radical idea of, like, what if facebook wasn't profitable for one year? they have this giant pile of cash -- what if they focused on making it safe? what would happen? what kind of infrastructure would be built? i think there's a real thing: until incentives change, facebook will not change. >> i want to talk to you about that, because say i have this organization. i have the profiles, the instagram and facebook profiles, and what i want to do is reach out
6:39 pm
from facebook and instagram and see if we can help people before they do too much harm. i could go to facebook and say i want to reach those people, and they would happily sell that to me to do it. and i have the data -- the data matches to young people who are self-abusing. it couldn't be simpler, the way the platform is designed. but if you ask them the same question -- why don't we reach out and help people that are actually in danger? why don't you stop it, why don't you reach out? -- they won't do it themselves, but worse than that, they're continuing to feed those people with content that is going to make them more vulnerable. i don't see how that is a company that is good, kind, and conscientious. >> there's a difference between the people and the system. i always come back to what are
6:40 pm
the incentives, and what system do those incentives create? i can only tell you what i saw at facebook, which is i saw kind, conscientious people, but they were limited by the actions of the system they worked under. that's part of why regulation is so important. and the example you give -- one is amplification of interests. facebook has run an experiment where they expose an account to healthy recipes, and just by following the recommendations on instagram, it is led to anorexia content very fast, within a week -- just by following the recommendations, because extreme, polarizing content is the content that gets rewarded by engagement-based ranking. i have never heard described what you just described -- using a look-alike tool, which exists today. if you want to target ads today, you can take an audience and find a look-alike audience. very profitable. advertisers love this tool. i have never thought about using
6:41 pm
it to reach critical content to people who might be in danger. right now, the number of people who see the current self-harm tools -- facebook loves to brag about how they built tools to protect kids or protect people who might have eating disorders -- those tools trigger on the order of hundreds of times a day. hundreds, globally. and so i think, unquestionably, facebook should have things like that and have partnerships with people who can help those populations. you're right, they have the tools and they haven't done it. >> when you were working in civic integrity, could someone have said, this is problematic, and we would like to use the ad tools to identify more people like that? would that have been a conversation you could have had? >> we could have had that conversation. there is a concept known as defensibility inside the company, where they are very careful
6:42 pm
about any action which they believe is not defensible. and things that are statistically likely but are not proven, they're very hesitant to act on. let's imagine you found some terrorists and you were looking for other people who were at risk for terrorism, or cartels -- that happened in mexico. the platforms are used to recruit young people to cartels. you could imagine using a technique like that to help people at risk of being radicalized. facebook came back and said, there's no guarantee those people are at risk. we shouldn't label them in a negative way. that wouldn't be defensible. i think there are things where, coming in and changing the incentives, making them articulate those risks, pretty rapidly they would shift their philosophies and be more willing to act. >> so it wouldn't be defensible to help a terrorist who has known interests, but it would be
6:43 pm
defensible to sell it for advertising? >> i'm not sure if facebook allows gun sales. >> but you can't use the same technology to reach out and help them. >> it's actually -- like in my senate testimony, one of the senators showed this example of an ad that was targeted at children, where in the background the ad was an image of a bunch of pills and tablets, clearly a pile of drugs, that said something like, want to have your best party this weekend? reach out -- or something, and that party is like a drug party. that ad got approved by facebook. there's a real thing where facebook says they have policies for things. they might have a policy saying we don't sell guns, but i bet there are tons of ads. >> so -- it can target people
6:44 pm
with addictions through the advertising, and yet it's largely feeling around in the dark. >> there is a great asymmetry in the resources that are invested to grow the company versus keep it safe. >> when you say there's asymmetry -- they can't even use all the data and information they have today that could be used to help do that job, because it's not defensible to do that. >> i never saw the usage of a strategy like that, but it seems like a logical thing to do. facebook should be using all of the tools in its power to solve these problems. >> if they really wanted to do it, it would have been done. but for some reason, it appears they don't. >> i think there are cultural issues, and there's definitely issues at the highest level of leadership, where they're not
6:45 pm
prioritizing safety enough. we need to say, no, you might have to spend twice that to have a safer platform. the important part is it should be designed to be safe, not we should -- >> thank you. >> you said they know more about abuse, but there's a lack of willingness to act. it brought to my mind a particular case of a couple whose daughter committed suicide, and who struggled for the last year to get anything beyond an automated response to their request to have access to her account. i made a small intervention. they did eventually get an answer. it really basically said no, you can't. and i have to be clear, they
6:46 pm
didn't say no, you can't. it was a very complicated answer, a complicated legal answer. but what it was saying is they had to protect the privacy of third parties on the platform. so i really wanted to just -- sorry, i really wanted you to say whether you think that privacy defense is okay in this setting, or whether actually, you know, those sorts of things are another thing we need to look at, because that seems pretty horrendous for grieving parents seeking some sort of closure. >> from the way you described it, i think their argument is that -- i think it's a really interesting distinction around private versus public content.
6:47 pm
they could have said, at least for the public content she viewed, we can show you that content. i think they probably should have come in and done that. i wouldn't be surprised if they no longer had the data. they delete the data after, like, 90 days. and so unless she was, like, a terrorist and they were tracking her, which i assume she wasn't, they would have lost all that history within 90 days of her passing. and that is a recurrent thing that facebook does. they know it will recede into the fog of time within 90 days. >> the idea of it being a reason for not giving a parent of a deceased child access to what they were seeing -- i'm interested in that. >> i think there's an unwillingness at facebook to acknowledge they're responsible to anyone. they don't disclose data. there are lots of ways to disclose that in a privacy-conscious way.
6:48 pm
you just have to want to do it. and facebook has shown over and over again, not only that they don't want to release that data -- even when they do release the data, they often mislead people. they did this with researchers a couple of months ago. they literally released misinformation, using assumptions they did not reveal to researchers. in the case of the grieving parents, i'm sure facebook has not thought holistically about the experience of parents who have had traumas on the platforms, because i'm sure her parents aren't the only ones who suffered that way. and i think it's cruel of facebook to not think about, you know, how do they even take minor responsibility after an event like that. >> a lot of colleagues have talked to you about children, and that is a particular interest of mine. the one thing that hasn't come up this afternoon is age assurance, and specifically privacy-preserving age assurance. and this is something that i am worried about, that age
6:49 pm
assurance sadly could drive more surveillance, could drive more resistance to regulation, and we need rules of the road. i'm just interested to know your perspective on that. >> i think it's kind of a two-fold situation. on one side, there are many algorithmic techniques facebook could be using to keep children off the platform that do not involve asking for ids or other forms of information disclosure. facebook currently does not disclose what they do, so we can't as a society step in and say, you have a much larger tool chest to pull from. it also means we don't understand what privacy violations are happening. we have no idea what they're doing. the second thing is -- facebook has systems for estimating the age of any user, and within a year or two of them
6:50 pm
turning 13, enough of their actual mates have joined that they can estimate realistically what their age is, and they should publish the results and say, one, two, three, four years ago, how many ten-year-olds were on the platform? because they know the data today, and they're not disclosing it to the public. that would be a forcing function to make them do better protection of young people on the platform. >> i have been ticking off a list while you have been speaking this afternoon. and, you know, mandatory risk assessment and mitigation measures. mandatory routes of transparency. mandatory safety by design, with humans in the loop, i think you said. algorithmic designs. application of their own policies. right? will that set of things, in the
6:51 pm
regulator's hand, in our bill that we're looking at, would all of that keep children safe? would it save lives? would it stop abuse? would it be enough? >> i think it would unquestionably be a much, much safer platform if facebook had to take the time to articulate its risks. and here's the second part: they can't just articulate their risks. they have to articulate their path to solve those risks. they have to be mandated. they can't do a half solution. they need to give you a high-quality answer. we would never accept a car company that has five times the accidents coming out and saying, you know, we're really sorry, but brakes are so hard? we're going to get better. we would never accept that answer, but we hear that from facebook over and over again. i think between having more transparency, privacy, and having a process of that
6:52 pm
conversation around both the problems and the solutions, that is a path that should be resilient in moving us to a safer facebook, or any safer social media. >> i have a question i wanted to bring back. >> thank you. i'm interested in whether it's something we should be considering in this committee, in terms of whether there's a regulatory risk -- and certainly there are security risks that some parts of the government are concerned about -- like encryption. but i would like to give you an opportunity to clarify what your position is, and if there's any comment you need to make on whether we should be concerned, i would be grateful to you. >> i want to be very, very
6:53 pm
clear. i was mischaracterized in the telegraph yesterday on my opinion on end-to-end encryption. that's where you encrypt information on a device, send it over the internet, and it's decrypted on another device. i'm a strong supporter of access to open source end-to-end encryption software. part of why i'm such an advocate for open source software in this case is, if you're an activist, if you have a sensitive need, a journalist or a whistleblower, you can use an open source product. and part of why the open source part is so important is you can see the code. anyone can go and look at it. for the top open source platforms, those are some of the only ways you're allowed to chat with, say, the defense department of the united states. facebook's plan for end-to-end encryption i think is concerning, because we have no idea what they're going to do.
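a minimal sketch, outside the testimony, of the pattern she is describing -- encrypt on the sender's device, decrypt only on the recipient's device -- using the open source pynacl library; the keys and message here are purely illustrative, and this is not any particular messenger's implementation.

from nacl.public import PrivateKey, Box

# each device generates its own keypair; only public keys ever leave a device
alice_sk = PrivateKey.generate()
bob_sk = PrivateKey.generate()

# alice encrypts for bob: only bob's private key can open the box
sending_box = Box(alice_sk, bob_sk.public_key)
ciphertext = sending_box.encrypt(b"meet at the committee room at 6")

# whatever carries the ciphertext (a server, the internet) sees only bytes
receiving_box = Box(bob_sk, alice_sk.public_key)
plaintext = receiving_box.decrypt(ciphertext)
print(plaintext)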
6:54 pm
we don't know what it means, we don't know if people's privacy is actually protected. and it's also a different context. on the open source end-to-end encryption product i like to use, there is no directory where you can find 14-year-olds. there is no directory where you can go and find the uighur community in bangkok. on facebook, it's easy to access vulnerable populations, and there are nation-state actors there. i want to be clear, i am not against end-to-end encryption, but i believe the public has a right to know, what does that even mean? are they really going to produce end-to-end encryption? because if they say they're doing it and they don't really do it, people's lives are in danger. i personally don't trust facebook currently to tell the truth. and i am scared that they are waving their hands at a situation they're concerned about, and they don't want to see the
6:55 pm
dangers anymore. i'm concerned about them misrepresenting the product they build, and they need regulatory oversight. that's my position on end-to-end encryption. >> just to be really clear, there's a really important use case for end-to-end encryption in messaging. but if you ended up with an integration of some of the other things you can do on facebook with end-to-end encryption, you can create a dangerous place for some groups. >> i think there's two sides. like, i want to be super clear: i support access to end-to-end encryption, and i use end-to-end encryption every day. my social network is on an end-to-end encrypted service. i'm concerned, on one side, that the constellation of factors related to facebook makes it even more necessary for public oversight of how they do end-to-end encryption -- things like access to the directory. but the second one is
6:56 pm
security. if people think they're using an end-to-end encryption product, and facebook's interpretation of that is different than what, let's say, an open source product would do -- because with an open source product, we can all look at it and make sure what it says on the label is in the can -- if facebook has vulnerabilities, people's lives are on the line. we need public oversight of anything facebook does around end-to-end encryption, because they're making people feel safe when they might be in danger. >> thank you very much. >> thank you. just a quick follow-up. are you aware of any facebook research in relation to the human cost of misinformation? say, for example, covid is a hoax, or anti-vax misinformation -- have they tried to quantify that, both in terms of illness,
6:57 pm
death, human cost? >> facebook has done many studies looking at how the misinformation burden is not shared evenly. you have things like the most exposed people to misinformation are recently widowed, recently divorced, or have moved to a new city. when you put people into these rabbit holes, when you pull people away from mainstream beliefs, it cuts them off from their community. if i believe in flat earth, it makes it hard for me to reintegrate into my family. is thanksgiving dinner ruined? did your relatives listen to too much misinformation on facebook? when we look at the social costs and the health costs, i'll give you an example. facebook underenforces on comments, because comments are shorter,
6:58 pm
they're really hard for a.i. to figure out. right now, a group like unicef has really struggled, even with the free ad credits that facebook has given them, because they will promote positive information about the vaccine, or about ways to take care of yourself in covid, and they'll get piled on in the comments. so the documents talk about how much more impact those ads would have had if they hadn't been buried in toxic content. >> thank you. >> thank you, chair. i just wanted to build on the comment you made earlier. you mentioned this idea of skilled parties, if i think i heard it right, and it occurred to me, thinking about the platform in terms of language, thinking of english. within english, there's huge differences between american english and british english, and also the slang that's used. when you were mentioning that,
6:59 pm
it occurred to me -- a few weeks ago, i had someone on facebook, they had met me in a pub previously, and they wished they had given me a glasgow hug, which it turns out wasn't a hug -- it meant to stab me. when that was reported -- i only found out about it afterwards -- it was reported initially to facebook, which said it didn't breach its rules. i reported it, and a few others did, and it got taken down, but by the page it was on, not by facebook. in that time, someone else said they had met me during an election campaign and they wished they had given me a glasgow hug as well. so, in other words, two people wanting to stab me, publicly, on the platform. now, when that was reported and facebook was notified that a glasgow hug meant to stab me, i wonder, do you know, would facebook learn from that? would it know, the next time
7:00 pm
someone said they wanted to give someone a glasgow hug, that they wanted to stab them? >> i think it's likely it would be lost in the ether. facebook is very cautious about how they evolve terms like hate speech. i did not see a great deal of regionalization, which is what is necessary to do content-based interventions. i think there are interesting design questions where, if we as a community -- as government and academics and independent researchers -- came together and said, let's think about how facebook can gather enough structured data to get that
7:01 pm
piece right, or, in the case of the insult for the other member, like, how do you do that? i think if they took a strategy that was closer to what google has done, they would have a substantially safer product. which is, google is committed to being available in 5,000 languages. how do you make google's interfaces and help content available in all the major languages in the world? they invested in a community program in order to do that, that said, we need the help of the community to do this. and if facebook invested in collaboration with academia or other researchers to figure out collaborative strategies to structure this data, i think we would have a better facebook. they could be so much better than they are today. so let's make the platform safer -- something that
7:02 pm
protects you in scottish english, not just american english. >> thank you. and a third question on that. that, for me, builds on these platforms, the learning they have, and also future-proofing this bill. at the moment, we talk a lot about facebook and google and twitter and those things. increasingly, they are -- we've seen the oculus rift as well, things that will increase user-generated content. do you know if any of the principles we are talking about here around safety and reducing harm are being discussed for those future innovations? because my concern, to be honest, is that we will get this right, and then the world will shift into a different type of use of platforms, and actually we won't have covered the bases properly. >> i'm actually a little
7:03 pm
excited about augmented reality, because often it attempts to recreate interactions that exist in physical reality. in this room, we have maybe 40 people. the interactions we have socially are at human scale. most augmented reality experiences have been about recreating the dynamics of an individual, or dynamics where there are one or a handful of people. those systems have a very different consequence than the hyper-amplification systems that facebook has built. the danger is not in people saying bad things, but rather people saying extreme, polarizing things with the largest megaphone in the room. so we need to be careful about future-proofing. but i think the mechanisms you talked about earlier, like having risk assessments, ones not just produced by the company, but also the
7:04 pm
regulator gathering input from the community, saying, are there other things we should be concerned about? a tandem approach like that, that requires companies to articulate their solutions -- i think that's a flexible approach. i think that may work for quite a long time. but it has to be mandatory and with some quality bars. otherwise, i guarantee you that facebook would phone it in. >> thanks very much. >> thank you. just a couple of final questions. we heard from the evidence last week -- based on his experience at youtube, the recommendation system is not there to give you more of what you want, it's -- do you think that's a fair characterization? >> there's a difference between the intended goals of a system -- so, facebook has said we never intended to make a system that amplifies extreme content -- and the consequences of a system.
7:05 pm
the intent is to give you content you will enjoy. as facebook has said, that will keep you on the platform longer. but the reality is that algorithmic systems are complicated and we are bad at assessing the consequences and foreseeing what they will be. they are very attractive to use; they keep you on the site longer. if we went back to chronological ranking, i bet you would view 20% less content. you may enjoy it more; it may be more friends and family. the goals -- i don't think they are intended to have you go down rabbit holes. i don't think they are intended to force you into bubbles. but they have made choices that have unintended side effects. i will use an example: autoplay. autoplay on youtube can be super dangerous. instead of having you choose what you engage with, it chooses for you. and it keeps you in a stream of flow, where it just keeps you going. there is no conscious action of
7:06 pm
continually picking things, or of whether or not to stop. right? and that's where those rabbit holes come from. >> from what you said earlier on, someone could be joined to a group without their consent, one that is focused on anti-vax conspiracy or covid -- that's quite an interesting example from before -- and probably the whole system will give you more of that content. that's what i meant about the difference between the system recognizing an interesting view or line of inquiry as a user, and then piling in. >> i think that's what's so scary. there's been some reporting on a story about a test account -- facebook has said it takes two to tango. facebook wrote a post back in march saying, don't blame us for the extreme content you see on facebook. you chose your friends, your interests. it takes two to tango.
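a toy contrast, outside the testimony, of the two orderings being discussed -- chronological versus ranking by a predicted-engagement score that happens to favour the most provocative item. the data and field names are made up; this is not facebook's algorithm.

posts = [
    {"id": "family_update", "age_hours": 1, "predicted_engagement": 0.2},
    {"id": "local_news", "age_hours": 3, "predicted_engagement": 0.3},
    {"id": "outrage_bait", "age_hours": 9, "predicted_engagement": 0.9},
]

chronological = sorted(posts, key=lambda p: p["age_hours"])
engagement_ranked = sorted(posts, key=lambda p: p["predicted_engagement"], reverse=True)

print([p["id"] for p in chronological])      # friends-and-family post first
print([p["id"] for p in engagement_ranked])  # the most provocative post first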
7:07 pm
when you make a brand-new account and you follow some news, for example fox news, or interests like trump or melania, it will lead you very rapidly to white genocide content. but this isn't just true on the right; it's true on the right and the left. these systems lead to amplification. the system wants to find the content that will make you engage more, and that is extreme, polarizing content. >> and i would think, to claim in that situation -- to say that it is your fault -- again, that's a misrepresentation. >> facebook is very good at dancing with data, and good at computers. the reality is, the business model is taking them to dangerous ends. >> the other part is --
7:08 pm
we heard from sophie -- about her work on inauthentic activity. how big of a problem do you think that is on facebook? there is evidence -- we talk about hundreds of thousands of accounts being taken down. how much of an issue is that, in your work as well? >> i'm extremely worried about fake accounts, and i want to give you guys a taxonomy of fake accounts. so there are bots, which are automated. it's fairly easy to detect them. then there are things like manually driven fake accounts. there are cottage industries in certain parts of the world, pakistan, parts of africa, where people have realized you can pay a child a dollar to play with an account and to be a fake 35-year-old for a month, and after that month you have passed the window of scrutiny,
7:09 pm
and you will look like a 35-year-old human because you are a real human, and that account will be sold to someone else. there are at least 100,000 of these -- back when i left, there were approximately 800,000, i believe, of these accounts on facebook connectivity, where facebook was subsidizing the internet. this was discovered by a colleague of mine. they were being used for some of the worst offenses on the platform. i think there is a huge problem around the ability to detect these accounts and their spread on the platform. >> how confident are you that the numbers of active users on facebook are accurate? that those people are real people? >> i think there are
7:10 pm
interesting things around the general numbers. as we talked about before, there is the distribution of things. on social networks, things are not necessarily evenly allocated. facebook has published numbers saying that it believes 11% of its user numbers are not genuine. there is a question of, if investors are interpreting value based on the number of accounts, and 60% of new accounts are not new people, they are over-inflating the value of the company. >> and if those audiences have been sold to advertisers, that's fraudulent, if people have reason to believe that they are fake? >> there is a problem known as
7:11 pm
sumas -- same user, multiple accounts -- and i found documentation that said, for reach and frequency advertising -- so let's say you are targeting a very specific population, such as affluent and quirky individuals; facebook will target the advertising, and maybe there are only -- facebook has controls called reach and frequency advertising, and you say, i don't want to reach someone more than seven times or ten times, because that 30th impression is not very effective. facebook's internal research says that those systems were not accurate, because they did not take into consideration those same-user, multiple-account effects. and so facebook is overcharging people. >> i presume it works the same way on instagram as well. >> yes.
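a hypothetical illustration, not drawn from the documents, of the "same user, multiple accounts" effect she describes: a frequency cap enforced per account can still overshoot the cap per person when one person controls several accounts. the data below is invented.

from collections import defaultdict

frequency_cap = 7  # the advertiser asked not to reach anyone more than 7 times

# account -> the person who actually controls it (the part a per-account cap ignores)
owner = {"acct_a": "person_1", "acct_b": "person_1", "acct_c": "person_2"}

# impressions delivered per account, each one individually respecting the cap
impressions_per_account = {"acct_a": 7, "acct_b": 7, "acct_c": 5}

impressions_per_person = defaultdict(int)
for account, count in impressions_per_account.items():
    impressions_per_person[owner[account]] += count

print(dict(impressions_per_person))
# {'person_1': 14, 'person_2': 5} -- person_1 saw the ad at twice the cap,
# and the advertiser paid for impressions the cap was meant to prevent.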
7:12 pm
>> and people have multiple duplicate accounts on instagram as well. is that a concern as well, from a safety point of view? >> i think that -- i was present for multiple conversations during my time, and we discussed the idea of the real names policy and the authenticity policy. those are security features. on instagram, because they did not have the same policy, there were many accounts that would have been taken down due to inauthentic behavior -- i think, in the case of teenagers, encouraging them to make private accounts so their parents can't understand what is happening in their lives, that is really dangerous. there should be a more family-centered integrity intervention,
7:13 pm
to think about the family as an ecosystem. >> a young person engaging with harmful content could do it with a different account than the one that their parents know about. do you think that policy should be changed? could it be made to work on instagram? >> i don't think i know enough about instagram's behavior in that way to give a good account. >> but as a concerned citizen? >> i strongly believe that facebook is not transparent enough today, and that it's difficult for us to actually figure out the right thing to do, because we are not told accurate information about how the system itself works. and that's unacceptable. >> i think i would agree with that. i think that's a good summation of what we've been talking about. thank you for your evidence and for visiting us here in westminster. >> thank you. >> thank you very much.
7:14 pm
faa administrator steve dickson spoke about aviation safety before a senate committee. he testified on several topics, including workforce demands, unruly passengers and the impact of the pandemic.
