
Facebook Whistleblower Testifies Before UK Parliament Committee | CSPAN | December 1, 2021, 6:06am-7:27am EST

6:06 am
growing the company is kept away from the side that highlights harms, and in a world where it's not integrated, that causes dangers and makes the problem worse. >> thank you for that. what is the effect of spending hours on similar platforms on
6:07 am
children's brains as they are developing? >> i think there's an important question to be asked, which is: what is the incremental value added to a child after some number of hours of usage per day? i'm not a child psychologist. i'm not a neurologist. i can't advise what those time limits should be. i think it is possible to say there is value that's given from instagram, but there's a real question of: the first hour, how valuable is the second hour after the first hour? the third hour after the second hour? the impacts are probably more than cumulative. they probably compound over time. i think those are great questions. i don't have a good answer for you. >> thank you. do you think, from your experience, that senior leadership
6:08 am
including mark zuckerberg, at facebook actually cares that they're doing harm to the next generation of society, especially children? >> i cannot see into the hearts of men, so i don't know what their position is. i know there's a philosophy inside the company, one i've seen repeated over and over again, which is that people focus on the good. there's a culture of positivity, and that's not always bad. but the problem is that when it's so intense that it discourages people from looking at hard questions, it becomes dangerous. i think the reality is they haven't adequately invested in security and safety, and consistently, when they see a conflict of interest between profits and people, they keep choosing profits. >> so you would agree it's a sign that perhaps they don't care, that they haven't investigated or done research into this area? >> i think they need to do more research, and i think they need to take more action. they need to
6:09 am
accept that it's not free, that safety is important and is a common good, and that they need to invest more. >> thank you. if i may, just on a slightly different point: you're obviously now a globally known whistleblower, and one of the aspects we've looked at over the past few weeks is anonymity. one of the regular points made is that if we push on anonymity within this bill, that would do harm to people who want to be whistleblowers in the future. i just want to get your sense of whether you agree with that, and whether you have any particular view on anonymity. >> i worked on google+ in the early days. i was actually the person in charge of profiles on google+, when google internally had a small crisis over whether or not real names should be mandated. there was a movement inside the company called 'real names considered harmful.' we debated at great length
6:10 am
all the different populations that are harmed by excluding anonymity, and that's groups like domestic abuse survivors, whose personal safety may be at risk if they are forced to engage under their real names. on anonymity, i think it's important to weigh what is the incremental value of requiring real names. real names are difficult to implement: most countries in the world do not have digital services where you can verify someone's id against their picture in a database. in a world where someone can use a vpn and claim they are in one of those countries and register a profile, they can still do whatever action you are afraid of them doing today. the second thing is that facebook knows so much about you, and if they're not giving you information to facilitate investigations, that's a different question. facebook knows a huge amount about you today, so the idea that you are anonymous on
6:11 am
facebook is not accurate for what's happening, and we still see these harms. but the third thing is that the real problem is systems of amplification. this is not a problem about individuals. it's about having a system that prioritizes and mass-distributes divisive, polarizing, extreme content. in situations where you just show more content from your family and friends, you get, for free, safer, less dangerous content. i think that's the greater solution. >> so very finally, just in terms of anonymity for this report: you think we should be focusing more on the proliferation of content to larger numbers than on anonymity, the source of the content? >> yes. the much more scalable, effective solution is thinking about how content is distributed on these platforms: what are the biases of the algorithms, what are they distributing more of, in
6:12 am
concentration, like people getting pounded with that content. for example, this happens on both sides: people being hyper-exposed to toxicity and hyper-exposed to abuse. >> thank you. i will hand back to the chair. thank you very much. >> i just want to ask you about the point you just made on anonymity. what you're saying, it sounds like, is that anonymity currently hides the identity of the abuser from the victim, but not the identity of the abuser from the platform. >> platforms have far more information about accounts than i think people are aware of. platforms could be more helpful in identifying those connections in cases; i think it's a question of facebook's willingness to act to protect people, more so than a question of whether those people are anonymous to facebook. >> one of the concerns is, if you
6:13 am
say, well, the platform should verify the identity of the account user so they could comply with it, some people say that if we do that, there's a danger of the system being hacked or the information getting out. whereas what you're saying is that, practically, the company has that data and information anyway. they already know so much, and you have to have your real name on the account anyway, in theory. i think you're saying anonymity doesn't really exist, because the company knows so much about you. >> you can imagine facebook in a way where, as you use the platform more, you have got more reach, right? the idea that reach is earned, not a right. in that world, as you interact with the platform more, the platform will learn more and more about you. the fact that today you can make a throwaway account, that opens all sorts of doors. i want to be clear: in a world where you require people to provide
6:14 am
ids, you're still going to have that problem, because facebook will never be able to mandate that for the whole world. lots of countries don't have the systems, and as long as you can pretend to be in such a country and register an account, you'll still see all those harms. >> if i could join my colleagues in thanking you so much for being here today. this is important. the bill as it stands exempts recognized news publishers, and the content of recognized news publishers, from its scope. but there is no obligation on facebook and the other platforms to carry that journalism. instead it's up to them to apply the codes which are laid down by the regulator, directed by government in the form of the secretary of state, and ostensibly to make their own judgment about whether or not to carry it. it's going to be ai which is doing that.
6:15 am
it's going to be a black box which is doing that, which leads to the possibility, in effect, of censorship by algorithm. what i would like to know is: in your work experience, do you trust ai to make those sorts of judgments, or will we get to a situation where legitimate news about terrorism is, in fact, censored out because the black box can't differentiate between news about terrorism and content which is promoting terrorism? >> i think there are a couple of different issues there to unpack. the first question is around exempting, or excluding, journalism. right now, my understanding of how the bill is written is that a blogger could be treated the same as an established outlet that has editorial standards.
6:16 am
people have shown over and over again that they want high-quality news. people are willing to pay for high-quality news; some of the highest-rated subscriptions are news, and people understand the value of high-quality news. when we treat a random blogger and an established high-quality newsroom the same, we actually dilute people's access to high-quality news. that's the first issue: i'm very concerned that if you just exempt it across the board, you're going to make the regulations ineffective. the second question is around whether ai can identify safe content. part of why we need to force facebook to publish which integrity systems exist, in which languages, and their performance data is that right now those systems don't work. facebook's own documents say it has trouble differentiating between content promoting terrorism and counterterrorism at a huge rate. the number i saw was that 76% of
6:17 am
counterterrorism speech in one at-risk country was getting flagged as terrorism and taken down. so any system where the solution is ai is a system that's going to fail. instead we need to focus on slowing the platform down, making it human-scale, and letting humans choose what we focus on, not letting an ai, which is going to mislead us, make that decision. >> and what practically could we do in this bill to deal with the problem? >> great question. [inaudible] >> and this is for facebook and
6:18 am
the shareholders: tying them back towards the public good. right now facebook doesn't have to disclose and does not have to come up with solutions, and regulation could mandate that. [inaudible]. and we will come back to you and ask you again. facebook would then have incentives to, instead of giving those 10,000 engineers elsewhere, give 10,000 engineers to making us safer. that is the world we need. >> we need to regulate that. >> i believe that if you do not set standards, facebook will give you a bad risk assessment. they have established that over and over again when asked to give information to the public. i don't have any expectations
6:19 am
unless you articulate what a good one looks like, and you have a mandate to get to a solution. because for a lot of these problems, facebook has not thought hard about how to solve them; there is no incentive. from the shareholders' interests, they would need to make little sacrifices, like 1 percent growth here, 1 percent growth there, and they choose growth. >> some very general questions: do you think this bill is keeping mark zuckerberg awake at night? >> i'm incredibly excited and proud of the uk for taking such a world-leading stance with regard to thinking about regulating social platforms. much of the global south does not currently have the resources to stand up for itself, and the uk has a tradition of leading policy in ways that are followed
6:20 am
around the world. i cannot imagine mark is not paying attention to what you are doing, because this is a critical moment for the uk to stand up and make sure these platforms are in the public good and are designed for safety. [inaudible]. make sure that is the case. >> that's a very compelling argument about the way they work. do you think it's disingenuous of them to say that they welcome regulation and actively want this to be regulated, and yet share none of the things that you have described? >> i think it's so important to understand how companies work with the incentives they are given, and i think that today facebook is scared that if they disclose
6:21 am
information, even to a regulator, they might be open to a lawsuit. i think they're really scared about doing the right thing, because as a private company they have a fiduciary duty to maximize shareholder value, and they choose growth over concessions over and over again. i think there is actually an opportunity here to make the lives of rank-and-file facebook employees better, by giving them a forum for what a safe platform looks like, because right now i think there are a lot of people in the company uncomfortable about the decisions they are being forced to make, given the incentives that exist. creating different incentives through regulation gives them more freedom to be able to do things. there might be alignment there.
6:22 am
>> the engagement and the revenue, this is what this is all about, and maybe this will become a habit. [inaudible]. >> again, as i said before, i can't speak to motivations, but what i do know, as a nerd with an mba, is that given what the laws are, they have to act in the shareholders' interest or justify something else. i think a lot of the long-term benefits here are harder to prove. i think if you make facebook safer and more pleasant, it will be more profitable ten years from now, because the platform is slowly losing people's interest. but at the same time, the costs of acting show up in the short term, and i think they worry that if they do the right thing, they will have shareholder losses.
6:23 am
>> thank you so much for being here; we truly appreciate you and everything you've done to get yourself here over the last years. what would it take for mark zuckerberg and the executives to actually be held accountable? do you think they actually appreciate the cost, the human price, of this? [inaudible]. >> i think it is very easy to focus on the positive over the negative, and it's important to remember that facebook is a product that was made by harvard students for other harvard students. a facebook employee will see a safe, pleasant place for pleasant people to come together. their immediate impression of the product, versus what is happening in the facebook of other areas, is completely different, and i think there is a real challenge of incentives there. i do not
6:24 am
know if all the information that is really necessary makes it very high up in the company. the good news trickles up, but not necessarily the bad news, so i think it's similar with the executives: they see all of the good they create, and they can write off the bad as the cost of that good. >> i am guessing that they could all be very much aware of what is going on; we truly hope they are. in the other sessions, we have had people come here and say the story has been caught only partially and that they have got it wrong. >> many employees internally, over and over again in the reporting on these issues, countless employees, said we have lots of solutions, lots of solutions that don't involve
6:25 am
picking good and bad ideas; it's not about content, it is about the design of the platform and how it is growth-optimized. we could have a platform that can work for everyone in the world. i think there is a real problem that those voices do not get amplified internally, because they are suggesting something that makes the company slower, and the company relies on growth. >> moving on to sanctions: do you believe that there should be more sanctions, criminal sanctions? >> my fear with criminal sanctions is that they act like gasoline: if we ever have really strong consequences, they will amplify the existing incentives, but at the same time, the same could be true for positive change, so it is hard for me to articulate whether or not i would support them. but i think
6:26 am
there is a real thing where consequences make things be taken more seriously. it depends on where the line is drawn. >> just quickly, and i know that you touched on this earlier in the conversation, a quick question: if there's anger, is it by accident? facebook has repeatedly stated, we did not set out to design a system that promotes anger or division; we never did that, we never set out to do that. >> there's a huge difference between what you set out to do, which was to prioritize content based on eliciting engagement, and the consequences of that. i don't think they set out to accomplish these things, but they have been negligent in not responding to the data as it's produced, and there is a large number of people internally who have been raising these issues for years,
6:27 am
and the solutions facebook has implemented, the classifiers, work in not very many countries in the world. they are removing some of the most dangerous terms from engagement, but that does not address the fact that the most vulnerable places in the world are linguistically diverse. ethiopia has a hundred million people and six languages, and facebook supports only some of them. so there is this real thing of, if we believe in diversity, we have to look at the design of the platform. >> these harms have been out there for some time, we're all aware of them, and they are very much in the public domain. why aren't the tech companies doing anything about it? why are they having to wait for this bill to make the most obvious changes to what is
6:28 am
basically causing human loss, and why aren't they doing something now about this? >> i think if you look at the harms of facebook, we need to think about them as systems. the idea that the systems are designed products, with intentional choices, is often difficult to see; it's hard to see the forest for the trees. facebook is a system of incentives, and good, kind, conscientious people are working within that incentive system. there is a lack of incentive inside the company to raise issues about flaws in the system, and lots of rewards for amplifying the system to grow more. so i think there's a big challenge with facebook's management philosophy: it's just the metrics, the metrics the people run on, and they have found themselves in a trap. if you ask, how do you propose changing the metrics, it is very hard, because a thousand
6:29 am
people are trying to move that metric, and changing the metric will disrupt all of that work. i do not think it was intentional; i don't think they set out for this trap, but they are trapped in it, and that is why we need regulation, external action, to help. >> thank you. again, my colleagues and i want to thank you for coming here. i just wanted to ask you: i saw an interview you did recently in which you said facebook has consistently resolved conflicts in favor of its own profit. from the testimony that you have given, could you pick two or three examples that you would re-highlight? >> i think, overall, their strategy of engagement-based
6:30 am
ranking is unsafe, and i think the flagship example of how facebook has tools that could make the platform safer, each of which really needs research, is this: requiring someone to click on a link before they re-share it would carve off some growth. twitter has done this, but facebook was not willing to. and there are lots of things around language coverage: facebook could be doing much more for the languages they support, and doing a better job with the countries identified as most at risk in the world, but they're not giving them equal treatment. i think that's a behavior of being unwilling. >> okay, looking specifically at
6:31 am
washington on the sixth of january, there has been a lot of talk about facebook and its involvement in that, and it's a moment where there is evidence to look at. would you have highlighted this as a particular concern? i am absolutely horrified by the risk management in the organization, and i think it shows a gross lack of responsibility. can you give us one example where facebook was aware of the potential harm it could create, and of the harm being created, and chose not to do anything about it?
6:32 am
>> what is particularly problematic to me is that facebook looked at its own product before the 2020 elections and identified a large number of settings, things as subtle as, you know, should we amplify live videos 600 times or 60 times. a setting like that is great for promoting the product, making it grow, and driving interest in the product, but it is dangerous, because on january 6 live video was actually used for coordinating. facebook looked at that, along with other interventions, and said we need to have these in place for the election. the reason they turned them off afterwards, i think, is that they thought it was a delicate issue, but most of these interventions have nothing to do with content. for example, does amplifying live video
6:33 am
600 times versus 60 times touch content? i don't think so. they turned those interventions off because they weigh on growth, and on the day of january 6, most of the interventions were still off at 5:00 p.m. eastern time. that is shocking, because they could have turned them on seven days before. so either they're not paying enough attention, or they are not responsive when they see these things. i don't know what the root cause is, but i do know that that's an unacceptable way to treat something that powerful. >> your former colleague made the point that we have freedom of expression but we don't have freedom of amplification. is that something that you would agree with, in terms of censorship? >> the current parts of the
6:34 am
company are almost like they refuse to acknowledge the power that they have in the rankings; they justify them based on growth. if they came in and said, we need safety first and safety by design, i think they would choose different parameters. i want to remind people that we liked the version of facebook that didn't have that, where we saw our friends and our families, and it was more human-scale. i think there's a lot of value in that. >> a very important point: this is a private company, and they have a responsibility to their shareholders. do you think there should be conditions, or again, is there a
6:35 am
conflict there? >> there are two issues. one is that, in terms of the private company defining harm, that is like grading their own homework: they define what is bad, and we know now they don't design around what they say is bad, and there is no accountability. the second question is around duties outside of the shareholders. we have had a principle for a long, long time that companies cannot subsidize their profits at public expense; the public should not have to pay, as with toxic water. facebook is sacrificing our safety because they don't want to invest enough. whether they fully spent $14 billion on safety is not the question; the question is how much do you need to spend. >> one of the things that the
6:36 am
bill is constantly looking at is the duty of care. is that something that we should be considering carefully? >> the duty of care is very important. we have let facebook act freely for too long, and they have demonstrated the criteria necessary for it: first, when they see a conflict of interest between themselves and the public good, they do not choose the public good; and second, they have a duty of candor to the public. in both cases they have violated those duties. >> thank you very much. my final question is: do you think the regulators give you cause for optimism? >> i'm not a lawmaker, and i don't know the right design for the regulatory bodies, but i do think that things like mandatory risk assessments and certain levels of
6:37 am
quality are needed, and as long as facebook is required to articulate solutions, that might be a good enough dynamic to resolve this. the input for the risk assessment can actually step up over time, because the reality is that facebook will skirt around the edges, and that can continue over time, rather than focusing on specific instances or pieces of content. >> thank you, and following up on the discussion: the distinction between illegal content, such as terrorism, and legal content raises the question of how they define what is harmful, and the idea that a company like facebook would need to have foreseen that something was causing harm. over the
6:38 am
last few weeks, maybe they could have published more or done more research on new harms. [inaudible]. >> i'm extremely worried about facebook being the only one doing this important research, and a great illustration of how dangerous this is, and how powerful facebook is, is that we all want to ask questions of facebook. we probably need something like a program for public-interest technologists, people who are embedded in the company for a couple of years, who can then ask questions, learn about the systems and the problems, and go out and train the next generation of integrity workers. i think there are big questions around legal content; it's dangerous. for example, covid-19
6:39 am
misinformation has actually led to people losing their lives; there are large social consequences to this. i am also concerned that if you do not cover legal but harmful content, the bill will have a much, much smaller impact, especially on the harms that affect children, because a lot of the content here is legal but harmful. >> and harms data like that should be shared externally with the academics and regulators, to your point. my second question: say the company has found a new type of harmful content, a new trend of content that is causing physical or mental harm, and we talked about how complex the facebook world is, whether it's about content, information, or messaging. how would you go about auditing
6:40 am
and assessing how the harm is being shared within the facebook environment? how would you actually go about doing it? >> one reason i'm an advocate of having a big fire hose, where thousands of people can view this content, is that you can include with each piece of content, for example, which group it was shared in. imagine if we could tell that facebook is actually serving a lot of this content to children. i think there is a really interesting opportunity if more data is accessible outside of the company; i think a private industry will spring up alongside these academics and researchers, and i would start to teach people about
6:41 am
it. i think there are opportunities where we can develop the muscle of oversight, but we only develop the muscle of oversight if we have at least some people willing and able to look into facebook. >> say there's a new type of harm that is created, a new trend that has been appearing in private spaces, and the amount of content makes it really difficult for us to find and assess. what you're saying is that it is untrue that they cannot do this; they do have the capability? >> as i said earlier, we should be able to have the public ask, hey, do you actually look for, for example, self-harm content, and the kids are
6:42 am
exposed, and so we don't track that. we don't have a mechanism today to force facebook to answer those questions; we need that to be mandatory, we need to be tracking this. i'm sorry, i forgot your question. >> do they have the capacity? i'm not suggesting otherwise, and i'm interested, from your experience, in how the different teams work: the teams working on these algorithms and disinformation, which we all know bits about, what the program is about and how it works; then the compliance and pr teams, who present this to the regulators. and my
6:43 am
concerns are that the real truth about what is happening may not necessarily surface in that order and reach our regulators. am i wrong to think that they would work together well and establish what to present together? >> i think it's so important to know there's a conflict of interest between the teams. one thing that has been raised is that at twitter, the team that is responsible for policy, for example, reports separately to the ceo from the team responsible for external relations with governments, whereas at facebook the same person leads both. so the person responsible for keeping governments happy is the same person who defines what is harmful or not harmful content. i think there's a real problem where the left hand does not see the right hand at facebook. for the example i gave earlier: on one hand the integrity team is saying the problem is with
6:44 am
addiction, and here are the signs of it, and then someone else is saying, you know this; we've been looking at it for three months. nobody is really responsible. in her testimony, when she was pressed on the decisions, she could not articulate who is responsible, and that is a real challenge at facebook: there is not any system of responsibility for governance, so you end up in situations like that, where you have one team knowingly, possibly, causing more and more addiction. >> there should be more, therefore, risk assessments and... [inaudible]. >> it might be important to include, what are your organizational risks, not just the product risk assessment, because
6:45 am
the organizational choices are introducing risk into the system. >> on this piece of law in the uk: we have heard evidence before from employees that the company is here in london, but the decisions are ready-made, say, in california. do you think there is a risk, in the way the power is structured in california, that the uk and its laws may not be able to do what they need to do? >> facebook today is kind, conscientious people working within a system of incentives that unfortunately has bad results, results that are harmful to society. there is definitely a center, and definitely a greater priority on growth metrics than safety
6:46 am
metrics, and even where they have safety teams, those teams' actions will be greatly hindered and even rolled back on behalf of growth. it's a real problem, again, of communication between the right hand and the left hand at facebook: an integrity team might push and spend months on a fix, but because it is so poorly understood, people will gladly add back factors that basically re-create what the harm was. and so, over and over, you will get that behavior. >> thank you. thank you for the incredible picture you have painted, and we recognize that there are people who are
6:47 am
fantastic and sensible inside. just a few minutes ago, almost like you said before, with some of your descriptions, you talked about the incentives, and possibly other organizations seem to imply this issue. all the talk about the culture: you said there are lots of people in facebook who agree with what you are saying, but the promotion structure points them in a different direction. organizations have a culture of their own, so my question is really about culture: do you think that there is a possibility, with a regulator structure being seen as a way
6:48 am
forward in the way the world deals with these huge companies, that while there are sufficient people inside who see the harms and want to address them, given what you're describing about the left hand and the right hand, it will never ever reform itself as an agency? do you think there's a way in which this could happen? >> i think unless facebook changes the incentives it operates under, nothing changes on facebook. facebook is kind, conscientious, good people, but the systems reward growth. the wall street journal has reported on how people have advanced inside the company, and the managers and the leaders of the integrity and safety teams mostly came up through management of
6:49 am
growth, and that seems deeply problematic. i think there is a need to provide an external way to move it away from being optimized only on short-term, immediate shareholder profitability, and more on the public good. and i think it will be more profitable ten years from now. >> inside the company, with the things that you have been saying, i think you said people agree with you. what is it that stops this from becoming official policy? is there actually a gatekeeper in the growth group that just says, don't do that, just move on?
6:50 am
>> i don't think there's an exclusive gatekeeper. i think there is a real bias in what gets taken in for review, and in the cost incentives, and facebook has characterized some of the things that i have talked about as 'we are against censorship.' what i'm talking about is making the platform slower and human-scale, and how you move away from growth. [inaudible]. and ways to move toward solutions that work in all languages. but in order to do that, you have to accept the cost of little bits of growth being lost. it's like asking, what if facebook was not profitable for one year, one year focused on maintenance? what would happen? what would the results be? i think there's a real thing of, until the incentives change, nothing changes.
6:51 am
>> if i were a terrorist organization with a profile that lots of people are attracted to, and i wanted to reach out to like-minded people before doing harm, i could use facebook's advertising tools in order to reach them, and facebook would happily sell that targeting to me. i could show my content to young people through viewing and advertising in a simple way. and yet i ask the same question the other way around: what about reaching the people who are actually in danger from this, to help them? how do you stop it, how do you reach out?
6:52 am
instead they continue feeding these vulnerable people content that makes them even more vulnerable, and i don't see how that can happen in a company that is trying to be conscientious. >> there is a difference between systems, the incentives they were created under, and what a system does given those incentives. i can only tell you what i saw at facebook: there were conscientious people, but they were limited by the systems they worked under. this is part of why the regulations are so important. take amplification of interest: facebook's algorithms can take someone who likes healthy recipes and, just by following the recommendations on instagram, lead them to anorexia content very fast, because extreme, polarizing content is what gets rewarded.
6:53 am
>> i've never heard anyone describe it the way you just described. if you want to target people today, you can take an audience of people who bought your product previously, and it's very profitable; advertising is a great tool. but no one has thought about using it to reach out with critical content when people's lives are in danger right now. >> facebook has tools to protect kids and to protect people who might be vulnerable, and those tools trigger several times a day, hundreds of times. so i think unquestionably facebook should have to do things like that, in partnership with people who can help connect vulnerable populations with support. you're right, they do have the tools.
6:54 am
[inaudible] >> and without this accommodation? >> i have had that conversation. there is a sensibility within the company where they are very careful about taking any action on things that are statistically likely but not certain; they weigh that very heavily. imagine they found a terrorist network and other people were being recruited into it. it happened in mexico, they were recruiting people. you can imagine reaching out to help the people who are at risk, and facebook would come back and say, there's no guarantee those people are at risk, and we shouldn't label them in a negative way. so i think there are things where coming in and changing the
6:55 am
incentives, making them articulate the risks and fix those risks, would rapidly solve the problems. >> it would seem sensible, and yet there seems to be no interest in doing the sensible thing and reaching out. >> i'm not sure facebook allows that. >> you could use the advertising tools to find people with addictions and reach out to help them. >> there was an ad that was targeted at children, with an image of a bunch of pills and tablets and drugs, and text something like, having your party this weekend? reach out. and it apparently got through, an ad for party drugs, without being caught. there is a
6:56 am
real pattern where facebook says they have policies against things like that, but there are tons of ads that violate them. [inaudible] >> so one part of the company monetizes the human condition, including people with addictions, and yet the other part, the part meant to keep people safe, is largely sitting in the dark. >> there is a real asymmetry in what the company invests in keeping people safe. >> and the people with the information today who could help would say it is not sensible to do that. >> i never saw that being treated as a logical thing to do, the way
6:57 am
facebook should. [inaudible] >> if they were going to do this, it would have been done by now; for some reason it appears that it has not been. >> i think there are cultural issues and other issues where it's not enough to say they've invested $14 billion and therefore have a safe platform; the important part is that the platform is designed to be safe, not that they spend money to make it safe. >> i was really struck that you said they have the data to act, i think that was the phrase. there was a particular case in my mind, a suicide, where it took the family years to get anything beyond an
6:58 am
automated response. they asked for access, and were offered only a small concession, maybe anonymized data, but basically facebook said no, you cannot have it. and what they were saying was that even that was a courtesy. [inaudible] so i just really wanted you to say whether you think that is okay, or whether actually this reporting and complaints piece is another thing to look
6:59 am
at, because actually giving these grieving parents access would give them some sort of closure. >> in the case you describe, i think there's a distinction between private and public content. facebook could come in and say, these were the public posts, the content anyone could see; we can show you that content. i think they could have done that. and i wouldn't be surprised if they no longer have the data: they delete it after 90 days, so unless she was, say, a terrorist they were tracking, they would have lost all of her history within 90 days. they know that whatever their sins are, it just takes 90 days for them to disappear. >> the arrangement for not
7:00 am
giving a parent of a deceased child access to what their child was seeing, i'm just caught up on that. >> i think there is an unwillingness for them to acknowledge that they are responsible to anyone. there are lots of ways this could have been done in a privacy-conscious way, but they just don't want to, and facebook has shown over and over again that they don't want to release data; even when they do, they have released misleading information. [inaudible] i am sure facebook has not thought holistically about the experiences of parents who have had that trauma on the platform, because i'm sure they're not the only ones who have suffered that way. i think it is cruel that facebook cannot think about even taking
7:01 am
minor responsibility after something like that. >> children are a particular interest of mine, and one thing i'm very worried about specifically is age assurance, which sadly could drive surveillance and could drive more resistance to regulation. i'm just interested to know your perspective on that. >> i think it's a twofold situation. on one side, there are many algorithmic things they could do to keep children off the platforms, and facebook currently does not disclose what they do, so we can't evaluate it as a society. we
7:02 am
actually have a much larger toolset than people realize, but we also don't understand how it interacts with privacy law; we have no idea what they are doing. the second thing is that we could be checking their homework: facebook has a system for estimating the age of any user, and within a year or two of a teenager joining, they can estimate quite accurately what the real age of that person is. facebook should have to publish how they do that, and publish the results, saying one and two and three and four years ago, this is how many ten-year-olds were on the platform, this is how many 12-year-olds were on the platform, because they know, and they're not disclosing it to the public. that would be a good enforcement mechanism to better protect people on the platform. >> on risk assessments and
7:03 am
transparency, mandatory safety by design, and moderation with humans in the loop, i think you said, and how moderators can apply the platform's designs and its own policies: will that, and the regulator, and what we are looking at, keep children safe? would it save lives, would it stop abuse, would it be enough? >> i think it would be a much, much safer platform, because facebook would have to take the time to articulate its risks, and a regulator could say, you can't just acknowledge those risks, you have to resolve them, and your solutions need to be high quality. because remember, this is a company that, when it has accidents, comes out and says, you know, we're really sorry,
7:04 am
we're going to get better and better, and we hear that from facebook over and over again. i think between having more transparency in a privacy-conscious way, and having a process for conversation around what the problems are and what the solutions are, it would be safer, even for facebook. [inaudible] >> yes, thank you. the descriptions you have given this committee raise questions in terms of regulatory risk and certainly security risks,
7:05 am
and also encryption. can you clarify what your position is, and whether there's anything you can tell us on whether we should be concerned? >> to be clear, my opinions on encryption have been mischaracterized. end-to-end encryption protects data sent over the internet between devices, and i am a strong supporter of access to open source end-to-end encryption platforms. as someone who has worked with journalists, my primary form of social software is an open source encrypted chat platform, and part of why that openness is so important is that you
7:06 am
can see the code; anyone can go and look at it. with facebook's plan, which would affect the only way many people are allowed to chat, we don't know what they're going to do, and we don't know what it means for people's privacy. it's also a different context: with the open source product i like to use, there is no directory where you can find people; there is no directory where you can go and find a community of teenagers in, say, bangkok. on facebook, there are directories and recommendations. so to be clear, i am not against end-to-end encryption, but i do believe the public has a right to know
7:07 am
what this means. are they really doing end-to-end encryption? [inaudible] because if they say they're doing encryption and they don't really do it, people can't tell the difference. i personally don't trust facebook to tell the truth, and i fear they are waving their hands at these concerns rather than addressing the dangers. i'm not concerned about end-to-end encryption as an idea; i'm concerned about facebook misconstruing the product they build, and they need regulatory oversight. that is my position. >> and if you end up with the integration of encryption with some of the other things facebook does, it could be a dangerous place. >> i think there are two sides. i will make it super clear: i support access to end-to-end encryption, and i use it every day. my work was
7:08 am
currently done on an open source encrypted service. i am concerned, on one side, that the constellation of factors related to facebook, like access to the directory, makes oversight of how they do this even more important. the second issue is security. if people think they're using an end-to-end encryption product, but facebook's interpretation of that is different from what an open source product would do, we can't look at it and be sure that what it says on the label is what's in the can. if facebook has a different encryption and there's a vulnerability, people are in danger. that's why i am concerned: there needs to be public oversight of anything facebook calls encryption, because they are making people feel safe when they might be in danger. >> thank you very much. >> a quick follow-up. is facebook
7:09 am
doing any analysis in relation to the human cost of misinformation, say for example claims that covid-19 is a hoax, or anti-vaccination information, and trying to quantify that in terms of illness, death, human cost? >> there have been many studies looking at who the misinformation burden falls on, and it is not shared evenly; some people are far more exposed to misinformation than others. pulling people into these rabbit holes drives them into extreme beliefs and cuts them off from their communities, because they lose friends and family over it. [inaudible] and in the united states, people ask, is
7:10 am
thanksgiving ruined? did someone consume too much misinformation on facebook before thanksgiving dinner? so when we look at the social cost on top of the health cost, it's hard to fully quantify right now. public health groups have really struggled: even with the free ad credits that facebook has given them, when they promote accurate information about the vaccine or about ways to treat covid-19, they get piled on in the comments. so how much more impact would those efforts have without the algorithms rewarding toxic content? >> thank you. i want to talk about a comment that you made
7:11 am
earlier. it occurred to me that the platform's problems with language apply even within english: there are huge distances between english english and other englishes. when you mentioned this language issue, i thought of a case: someone was on facebook, a word had been used, and it turns out that it actually means something else locally. someone had reported this to facebook and they said it didn't break any rules. eventually the page was taken down by facebook, but this was during an election campaign, and there were two
7:12 am
people being attacked publicly on the platform, and when it was reported, facebook didn't understand that the word meant something different there. [inaudible] i wonder, do you know whether within facebook they would learn from that? would anyone note that word and that they need to act on it, or would it just be lost? >> i think it would likely get lost. facebook is very cautious about how it involves itself with speech, and i did not see a great deal of regional organization, which is what is necessary to do content-based intervention well. i did not see them investing in the high-level regional organization that would be necessary to do that effectively. i think there are interesting design questions, like, if we as a community,
7:13 am
governments, academics, and researchers, came together and thought about how facebook could gather enough structured data to get that right, like in the case you described, how would you do that? if they looked at what google has done, they would likely have a substantially safer product. google supports something like 5,000 languages in making its help content available, and the way they did that was by investing in a community program: they enlisted the help of the community to make it successful. if facebook had collaborations like that, with academia or with governments, to figure out collaborative strategies for this structured data, we would have a safer facebook. i think these
7:14 am
current strategies with content solutions are not right, but they could be so much better, and that would make it safer in a way that protects everyone, not just speakers of american english. >> i want to ask about future-proofing the bill against change on these platforms. facebook and google today are what you see on a screen, but increasingly they are working in the realms of virtual reality and augmented reality as well, which will increasingly enable future engagement. do you know whether these principles around safety and reducing harm are being discussed for those future platforms? because my concern is that we will get this bill right and actually
7:15 am
the world will shift onto a different type of platform; i want to cover the bases. >> i am actually excited about augmented reality, because it attempts to re-create human interactions. in this room we have maybe 40 people as a whole, and the interactions that we have socially are at a human scale. most virtual reality experiences re-create the dynamics of an individual communicating with a handful of people, and those systems have very different consequences than the hyper-amplification systems that facebook has built today. the danger of facebook today is not the individual; it is about systems of amplification that disproportionately give reach to people saying extreme, polarizing things. so i agree with you that we have to be careful.
7:16 am
but i think the mechanisms we discussed earlier, the idea of having risk assessments that are not just written by the company but also gathered by the regulator from the community, saying we should be concerned about this, a tandem approach like that, where these companies have to articulate solutions, is an approach that might stand the test of time. make it mandatory, and i guarantee facebook will pay attention to these fundamentals. >> thank you, and thank you so much. >> just a couple of final questions from me. we heard evidence last week about the way the recommendation systems work and how they could be made safer.
7:17 am
>> there is a difference between the intended goal and the outcome. they never intended a system that amplifies the extreme and polarizing; that is a consequence of it. they intended to give you content you enjoy, because that will keep you on the platform. the reality is that these algorithmic ai systems are very complicated; they are chosen choices that have unintended side effects. i'll give you an example: autoplay.
7:18 am
i think autoplay on youtube is super dangerous. instead of having you choose what you want to engage with, it chooses for you, and it keeps you in a stream, a flow, where it just keeps you going. there is no conscious action of picking things, or of deciding whether or not to stop, right? that's where those rabbit holes come from. >> if someone signs you up to a group without your consent, say one fixated on anti-vax content, and you engage with the postings you see in your newsfeed, even though you're not a member of the group, presumably it's not just that you get more from that group: the system decides this is quite interesting, similar to stuff you've engaged with before, and perhaps the whole system will feed you that type of content. that's what i meant about the system recognizing a line of inquiry and then amplifying it. >> that's what's so scary. there's been some reporting on a story about a test user.
7:19 am
facebook says it takes two to tango. facebook in effect markets the idea: don't blame us for the extreme content you encounter on facebook. you chose your friends, you chose your interests. it takes two to tango. but when you make a brand-new account and you follow mainstream interests, for example fox news, trump, melania, it will lead you very rapidly to qanon, to white genocide content. and this is not just true on the right; it's true on the left too. these systems lead to amplification and division. i think you're right: the system wants to find the content that will make you engage more, and that is extreme, polarizing content. >> they say it's your fault. again, a massive misrepresentation of the way the company actually works. >> facebook is very good at messaging; they have very good communicators. and the reality is that the business model is
7:20 am
leading them to dangerous actions. [inaudible] >> the other party in the tango is facebook, not another user, actually. on fake accounts, we heard about inauthentic activity. based on your work at facebook, how big a problem do you think that is on things like civic integrity around elections? we're talking about networks of hundreds of thousands of accounts that have been taken down; how much of a problem is it? >> i'm extremely worried about fake accounts, and i want to give you some context on them. there are lots of bots, which are automated; facebook is reasonably good at detecting bots. then there's a second thing called manually driven fake accounts. a manually driven fake account is, for example, there are cottage industries in certain pockets of the world, certain parts of pakistan, parts of africa, where people realized you can pay a
7:21 am
child one dollar to play with an account, like a 12-year-old playing a fake 35-year-old, for a month, and during that window the account will pass facebook's scrutiny and look like a real human, because it is operated by a real human. that account can then be resold to someone else, because it now looks like a real human account. back when i left, there were approximately 800,000 facebook connectivity accounts, where facebook is subsidizing your internet, and among those there were at least 100,000 of these manually driven fake accounts, discovered by a colleague of mine. they were being used for some of the worst offenses on the platform, and i think there's a huge problem with how little facebook has done in preventing them from spreading harm on the platform.
7:22 am
>> how confident are you that the number of active users on facebook is accurate, that those people are real people? >> i think there are interesting things around the general number. on social networks, things are not necessarily evenly allocated. facebook has published a number saying roughly 11% of its accounts are not unique people; they are duplicates. amongst new accounts, they believe the number is closer to 60%, but to my awareness that has never been disclosed in a public statement. so there's a question: if investors are interpreting the value of the company based on a certain number of new accounts each month, and 60% of those are not actually new people, they are over-inflating the value of the company.
7:23 am
>> so they are selling audiences to advertisers as real people when many are not. >> there is a problem of the same user having multiple accounts, and we had documentation that said, let's say you're targeting a very specific population, maybe highly affluent and slightly quirky individuals, and you want to sell them a very specific product. facebook is amazing for these niches, because maybe there are only 100,000 people with the trait you want to reach, but you can get all of them. facebook has put in controls for reach and frequency advertising: you say, i don't want to reach someone more than seven times, or maybe ten times, because the 30th impression is not very effective. and facebook's internal research said the reach and frequency advertising systems were not accurate, because they didn't take into consideration the same-user-multiple-accounts effect. so that is definitely facebook
7:24 am
overcharging people for the product. >> it works the same with instagram, does it? >> i'm sure it does. >> on duplicate accounts on instagram, is that something you share a concern about from the safety point of view? >> i was present for multiple conversations during my time in civic integrity where they discussed the idea that on facebook, the real names policy and the policy against inauthenticity are security features, and on instagram, because they don't have the same contract with users, there were many accounts that would have been taken down on facebook for coordinated behavior and other things, but because being inauthentic is allowed there, it was hard to take them down. in the case of teenagers,
7:25 am
encouraging teenagers to make private accounts so that their parents can't understand what's happening in their lives, i think, is really dangerous, and there should be more family-centric integrity interventions that think about the family. [inaudible] >> as you say, a young person could be engaging with improper content using a different account while the parents see the one they think they should have. do you think the policy needs to change? do you think it can be made to work on instagram as it is today? >> i don't think i know enough about instagram behavior to give a good opinion. >> but as a concerned citizen? >> i strongly believe facebook is not transparent, and it is difficult for us to actually know the right thing to do, because we are not told accurate information about how the system itself works. >> i think that's a good summation of a lot of what we've been talking about this session.
7:26 am
i think that concludes the questions from the committee, so i would just like to thank you for taking the trouble to visit us.