Social Media Companies' Efforts to Counter Online Terror Content & Misinformation (EventID=109710)

The Committee on Homeland Security will come to order. The committee is meeting today to receive testimony on examining social media companies' efforts to counter online terror content and misinformation.

In March, a white supremacist terrorist killed 51 people and wounded 49 more at two mosques in Christchurch, New Zealand. Our thoughts and prayers continue to be with the victims and their families. The motive behind the attack is not in question: the terrorist had written an extensive manifesto outlining his white supremacist, white nationalist, anti-immigrant, anti-Muslim, and fascist beliefs. His act was horrifying beyond words, and it shook the conscience. Shockingly, the terrorist was able to livestream the attack on Facebook, where the video and its gruesome content initially went undetected; instead, law enforcement officials in New Zealand had to contact the company and ask that it be removed. When New Zealand authorities called on all social media companies to remove these videos immediately, they were unable to comply. Human moderators could not keep up with the volume of videos being reposted, and their automated systems were unable to recognize minor changes in the video, so the video spread online and spread around the world.

The fact that this happened nearly two years after Facebook, Twitter, Google, Microsoft, and other major tech companies established the Global Internet Forum to Counter Terrorism, pronounced "GIFCT," is troubling, to say the least. GIFCT was created for tech companies to share technology and best practices to combat the spread of online terrorist content. Back in July 2017, representatives of GIFCT briefed this committee on the new initiative. At the time, I was optimistic about its intentions and goals, and I acknowledged that its members demonstrated initiative and willingness to engage on this issue while others had not. But after white supremacist terrorists were able to exploit social media platforms in this way, we all have reason to
doubt the effectiveness of GIFCT and the companies' efforts more broadly. On March 27th of this year, representatives of GIFCT briefed this committee after the Christchurch massacre. Since then, I and other members of this committee have asked important questions about the organization and your companies, and we have yet to receive satisfactory answers. Today I hope to get the answers we have called for regarding your actual efforts to keep terrorist content off your platforms. I want to know how you prevent content like the New Zealand attack video from spreading on your platforms again. This committee will continue to engage social media companies about the challenges they face in addressing terror content on their platforms.

In addition to terror content, I want to hear from our panel about how they are working to keep hate speech and harmful misinformation off their platforms. I want to be very clear: Democrats respect the free speech rights enshrined in the First Amendment, but much of the content I'm referring to is either not protected speech or violates the social media companies' own terms of service. We've seen time and time again that social media platforms are vulnerable to being exploited by bad actors, including those working at the behest of foreign governments, who seek to sow discord by spreading misinformation. This problem will only become more acute as we approach the 2020 elections. We want to understand how companies can strengthen their efforts to deal with this persistent problem.

At a fundamental level, today's hearing is about transparency. We want to get an understanding of whether, and to what extent, social media companies are incorporating questions of national security, public safety, and the integrity of our domestic institutions into their business models. I look forward to having that conversation with the witnesses here today and to our ongoing dialogue on behalf of the American people. I thank the witnesses for joining us and the members for their participation. With
that, I now recognize the ranking member of the full committee, the gentleman from Alabama, Mr. Rogers, for five minutes for the purpose of an opening statement.

Thank you, Mr. Chairman. Concerns about violent and terror-related online content have existed since the creation of the internet. This issue has peaked over the past decade with the growing sophistication with which foreign terrorists and their global supporters have exploited the openness of online platforms to radicalize, mobilize, and promote their violent messages. These tactics proved successful, so much so that we are seeing domestic extremists mimic many of the same techniques to gather followers and spread hateful, violent propaganda. Public pressure has grown steadily on the social media companies to modify their terms of service to limit posts linked to terrorism, violence, criminal activity, and, most recently, hateful rhetoric and misinformation. The large and mainstream companies have responded to this pressure in a number of ways, including the creation of the Global Internet Forum to Counter Terrorism, or GIFCT. They are also updating their terms of service and hiring more human content moderators.

Today's hearing is also an important opportunity to examine the constitutional limits placed on the government to regulate or restrict free speech. Advocating violent acts and recruiting terrorists online is illegal, but expressing one's political views, however repugnant they may be, is protected under the First Amendment. I was deeply concerned to hear the recent news reports about Google's policy regarding President Trump and conservative news media. Google's head of responsible innovation, Jen Gennai, said in a recent video, quote, we all got screwed over in 2016. The people got screwed over, the news media got screwed over, everybody got screwed over. So we've rapidly been like, what happened there? How do we prevent this from happening again? Close quote. In the same video, Ms. Gennai remarked, quote, Elizabeth Warren is saying that
we should break up Google. That will not make it better, it will make it worse, because all these smaller companies that don't have the same resources that we do will be charged with preventing the next Trump situation. Close quote. Ms. Gennai is entitled to her opinion, but we're in trouble if her opinions are Google's policy. That same report details alarming claims about Google's deliberate attempts to alter search results to reflect the reality Google wants to promote rather than objective facts. This report, and others like it, are a stark reminder of why the Founders created the First Amendment. In fact, the video I just quoted from has been removed from YouTube; that platform is owned by Google, who is joining us here today. I have serious questions about Google's ability to be fair and balanced when it appears they've colluded with YouTube to silence negative press coverage. Regulating speech quickly becomes a subjective exercise for government or the private sector; noble intentions often give way to bias and political agendas. The solution to this problem is complex. It will involve enhanced cooperation between the government, industry, and individuals, while protecting the constitutional rights of all Americans. I appreciate our witnesses' participation here today. I hope that today's hearing can be helpful in providing greater transparency and understanding of this complex challenge, and with that I yield back.

I thank the gentleman. Other members of the committee are reminded that, under the committee rules, opening statements may be submitted for the record. I welcome our panel of witnesses. Our first witness, Ms. Monica Bickert, is the vice president for global policy management at Facebook. Next, we're joined by Mr. Nick Pickles, who currently serves as a global senior strategist for public policy at Twitter. Our third witness is Mr.
Derek Slater, the global director of information policy at Google. Finally, we welcome Ms. Nadine Strossen, who serves as the John Marshall Harlan II Professor of Law at New York Law School. Without objection, the witnesses' full statements will be inserted in the record. I now ask each witness to summarize his or her statement for five minutes, beginning with Ms. Bickert.

Thank you, Chairman Thompson, Ranking Member Rogers, and members of the committee, and thank you for the opportunity to appear before you today. I'm Monica Bickert, Facebook's vice president for global policy management, and I am in charge of our product policy and counterterrorism efforts. Before I joined Facebook, I prosecuted federal crimes for 11 years at the Department of Justice. On behalf of our company, I want to thank you for your leadership combating extremism, terrorism, and other threats to our homeland and national security. I'd also like to start by saying that all of us at Facebook stand with the victims, their families, and everyone affected by the recent terror attacks, including the horrific violence in Sri Lanka and New Zealand. In the aftermath of these acts, it's even more important to stand together against hate and violence, and we make this a priority in everything that we do at Facebook.

On terrorist content, our view is simple: there is absolutely no place on Facebook for terrorists. They are not allowed to use our services under any circumstances. We remove their accounts as soon as we find them. We also remove any content that praises or supports terrorists or their actions, and if we find evidence of imminent harm, we promptly inform authorities. There are three primary ways that we are implementing this approach: first, with our products, which help stop terrorists and their propaganda at the gates; second, through our people, who help us review terrorist content and implement our policies; and third, through our partnerships outside the company, which help us stay ahead of the threat. So first, our products.
Facebook has invested significantly in technology to help identify terrorist content, including through the use of artificial intelligence, but also using other automation and technology. For instance, we can now identify violating textual posts in 19 different languages. With the help of these improvements, we have taken action on more than 25 million pieces of terrorist content since the beginning of 2018. Of the content that we have removed from Facebook for violating our terrorism policies, more than 99 percent is content that we found ourselves, using our own technical tools, before anybody reported it to us.

Second, our people. We now have more than 30,000 people working on safety and security across Facebook, across the world, and that is three times as many people as we had dedicated to those efforts in 2017. We also have more than 300 highly trained professionals exclusively or primarily focused on combating terrorist use of our services. Our team includes counterterrorism experts, former prosecutors like myself, former law enforcement officials, and former intelligence officials. Together they speak more than 50 languages, and they are able to provide 24-hour coverage.

Finally, our partnerships. In addition to working with third-party intelligence providers to more quickly identify terrorist material on the internet, we also regularly work with academics who are studying terrorism and the latest trends, and with government officials. Following the tragic attacks in New Zealand, Facebook was proud to be a signatory to the Christchurch Call to Action, which is a nine-point plan for the industry to better combat terrorists' attempts to use our services. We also partner across industry. As the
chairman and the ranking member mentioned, in 2017 we launched the Global Internet Forum to Counter Terrorism, or GIFCT, with YouTube, Microsoft, and Twitter. The point of GIFCT is to bring companies together from across industry to share information, technology, and research to better combat these threats. Through GIFCT, we've expanded an industry database for companies to share what we call hashes, which are basically digital fingerprints of terrorist content, so we can all remove it more quickly and help smaller companies do so too. We've also trained over 110 companies from around the globe in best practices for countering terrorist use of the internet. Facebook took over as the chair of GIFCT in 2019, and along with our fellow members we have this year worked to expand our capabilities, including making new audio and text hashing techniques available to other member companies, especially smaller companies, and we've also improved our crisis protocols. In the wake of the horrific Christchurch attacks, we communicated in real time across our companies and were able to stop hundreds of versions of the video of the attack, despite the fact that bad actors were actively trying to edit the video and upload it to circumvent our systems. We know our adversaries are always evolving their tactics, and we have to improve if we want to stay ahead. Though we'll never be perfect, we've made real progress, and we're committed to tirelessly combating extremism on our platform. I appreciate the opportunity to be here today, and I look forward to answering your questions.

Chairman Thompson, Ranking Member Rogers, and members of the committee, thank you for the opportunity to appear here today to discuss these important issues of combating terrorist content online and manipulation of the public conversation. We keep the victims, their families, and the affected communities of the attack in Christchurch and
around the world in our minds as we undertake this important work. We have made the health of Twitter our top priority, and we measure our efforts by how successfully we encourage healthy debates, conversations, and critical thinking on the platform. Conversely, hateful conduct, terrorist content, and deceptive practices detract from the health of the platform.

I'd like to begin by outlining three key policies. Firstly, Twitter takes a zero-tolerance approach to terrorist content on our platform. Individuals may not promote terrorism, engage in terrorist recruitment, or engage in terrorist acts. Since 2015, we've suspended more than 1.5 million accounts for violations of our rules related to the promotion of terrorism, and we continue to see more than 90 percent of these accounts suspended through proactive measures. In the majority of cases, we take action at the account creation stage, before an account has even tweeted; the remaining 10 percent is identified through a combination of user reports and partnerships. Secondly, we prohibit the use of Twitter by violent extremist groups. These are defined in our rules as groups who, whether by their statements on or off the platform, promote violence against civilians or use violence against civilians to further their cause, whatever their ideology. Since the introduction of this policy in 2017, we've taken action on 184 groups globally and permanently suspended more than 2,000 unique accounts. Thirdly, Twitter does not allow hateful conduct on its service. An individual on Twitter is not permitted to promote violence against, or directly attack or threaten, people based on protected characteristics. Where any of these rules are broken, we will take action to remove the content, and we will permanently remove those who promote terrorism or violent extremist groups from Twitter.

As you've heard, Twitter is a member of the Global Internet Forum to Counter Terrorism, a partnership between YouTube, Twitter, Facebook, and Microsoft. It facilitates information sharing and technical cooperation across
industry, as well as providing essential support for smaller companies.

We learned a number of lessons from the Christchurch attacks. The distribution of media was manifestly different from how ISIS or other terror organizations have worked. This reflects a change in the wider threat environment that requires a renewed approach and a focus on crisis response. After Christchurch, an array of individuals online sought to continuously re-upload the content created by the attacker, both the video and the manifesto. The broader internet ecosystem presented then, and still presents, a challenge we cannot avoid: a range of third-party services were used to share content, including some forums and websites that have long hosted some of the most egregious content available online. Our analysis found that 70 percent of the views of the video posted by the Christchurch attacker came from verified accounts on Twitter, including news organizations and individuals posting the video to condemn the attack. We're committed to learning and improving, but every entity has a part to play. We should also take some heart from the examples we've seen on Twitter around the world, as users come together to challenge hate and challenge division: hashtags like #PrayForOrlando, #JeSuisCharlie, or, after the Christchurch attack, #HelloBrother rejected terrorist narratives and offered a better future. In the months since the attack, governments, industry, and civil society have united behind our mutual commitments to a safe, secure, open, and global internet. In fulfilling our core commitment to the Christchurch Call, we will take a wide range of actions, including continuing to invest in technology so we can respond as quickly as possible to a future incident.

Let me now turn to our approach to dealing with attempts to manipulate the public conversation. As a uniquely open service, Twitter enables the clarification of falsehoods in real time. We proactively enforce policies and use technology to halt the spread of
content propagated through manipulative tactics. Our rules clearly prohibit coordinated account manipulation, malicious automation, and fake accounts. We continue to explore how we may take further action through both policy and product on these types of issues in the future, and we continue to critically examine additional safeguards we can implement to protect the health of the conversation occurring on Twitter. We look forward to working with the committee on these important issues. Thank you.

Thank you for your testimony. I now recognize Mr. Slater to summarize his testimony for five minutes.

Chairman Thompson, Ranking Member Rogers, and distinguished members of the committee, thank you for the opportunity to appear before you today. I appreciate your leadership on the important issues of radicalization and misinformation online, and I welcome the opportunity to discuss Google's work in these areas. My name is Derek Slater, and I'm the global director of information policy at Google. In my role, I lead a team that advises the company on public policy frameworks for online content. At Google, we believe that the internet has been a force for creativity, learning, and access to information. Supporting the free flow of ideas is core to our mission: to organize the world's information and make it universally accessible and useful. Yet there have always been legitimate limits, even where laws strongly protect free expression. This is true both online and off, especially when it comes to issues of terrorism, hate speech, and misinformation. We take these issues seriously and want to be a part of the solution.

In my testimony today, I will focus on two areas where we're making progress to protect our users: first, on the enforcement of our policies around terrorism and hate speech, and second, on combating misinformation more broadly.

On YouTube, we have rigorous policies and programs to defend against the use of our platform to spread hate or incite violence. Over the past two years, we've invested heavily in
machines and people to quickly identify and remove content that violates our policies. First, YouTube's enforcement system starts from the point at which a user uploads a video. If it is at all similar to videos that already violate our policies, it is sent to human reviewers; if they determine that it violates our policies, they remove it, and the system makes a digital fingerprint so it can't be uploaded again. In the first quarter of 2019, over 75 percent of the more than 8 million videos removed were first flagged by a machine, the majority of which were removed before a single view was received. Second, we also rely on experts to find videos that the algorithms might be missing. Some of these experts sit at our intel desk, which proactively looks for new trends in content that might violate our policies. We also allow expert NGOs and governments to notify us of bad content in bulk through our Trusted Flagger program. Finally, we go beyond enforcing our policies by creating programs to promote counter-speech. Examples of this work include our Creators for Change program, which supports YouTube creators who are acting as positive role models. In addition, Alphabet's Jigsaw group has deployed the Redirect Method, which uses targeted ads and videos to disrupt online radicalization.

This broad work has led to tangible results. In the first quarter of 2019, YouTube manually reviewed over 1 million suspected terrorist videos and found that fewer than 10 percent, about 90,000, violated our terrorism policy. As a comparison point, we typically remove between 7 and 9 million videos per quarter, which is a tiny fraction of a percent of YouTube's total views during this time period.

Our efforts do not stop there. We are constantly taking input and reacting to new situations. For example, YouTube recently further updated its hate speech policy; the updated policy specifically prohibits videos alleging that a group is superior in order to justify discrimination, segregation, or
exclusion based on qualities like age, gender, race, caste, religion, sexual orientation, or veteran status. Similarly, the recent tragic events in Christchurch presented some unprecedented challenges. In response, we took more drastic measures, such as automatically rejecting new uploads of versions of the video without waiting for human review to check whether it was news content. We are now re-examining our crisis protocols, and we've also signed the Christchurch Call to Action. Finally, we are deeply committed to working with government, the tech industry, and experts from civil society and academia to protect our services from being exploited by bad actors, including during Google's chairmanship of the GIFCT over the last year and a half.

On the topic of combating misinformation: we have a long-term, natural incentive to prevent anyone from interfering with the integrity of our products. We also recognize that it is critically important to combat misinformation in the context of democratic elections, when our users seek accurate, trusted information that will help them make critical decisions. We have worked hard to curb misinformation in our products. Our efforts include designing better ranking algorithms, implementing tougher policies against monetization of misrepresentative content, and deploying multiple teams that identify and take action against malicious actors. At the same time, we have to be mindful that our platforms reflect a broad array of sources and information, and there are important free speech considerations. There is no silver bullet, but we will continue to work to get it right.

In conclusion, we want to do everything we can to ensure users are not exposed to harmful content. We understand these are difficult issues of serious interest to the committee; we take them seriously and want to be responsible actors who do our part. Thank you for your time, and I look forward to taking your questions.

Thank you for your testimony. I now recognize Ms. Strossen to summarize her
statement for five minutes.

Thank you so much, Chairman Thompson, Ranking Member Rogers, and other members of the committee. My name is Nadine Strossen. I am a professor of law at New York Law School and the immediate past president of the American Civil Liberties Union. Last year I wrote a book directly pertinent to the topic of this hearing, called HATE: Why We Should Resist It with Free Speech, Not Censorship. I note, Mr. Chairman, that you referred to hate speech as problematic content, along with terror content and misinformation. All of these kinds of speech, while potentially harmful, present enormous dangers when we empower either government or private companies to censor and suppress them. This is because the concepts of hate speech, terrorist content, and misinformation are all irreducibly vague and broad, and therefore have to be enforced according to the subjective discretion of the enforcing authorities. That discretion has been exercised in ways that both under-suppress speech that does pose a serious danger, as the chairman and the ranking member pointed out, and also suppress very important speech, as has also been pointed out, including speech that actually counters terrorism and other dangers. What's worse is that in addition to violating free speech and democracy norms, these measures are ineffective in dealing with the underlying problems. I thought that was pointed out by my co-panelists' own comments; in particular, Nick Pickles' written testimony noted that if somebody is driven off one of these platforms, they will then take refuge in darker corners of the web, where it is much harder to engage with them or to use them as sources of information for law enforcement and counterterrorism investigations. So we should emphasize other approaches that are consistent with free speech and democracy but have been lauded as at least as
effective, and perhaps even more so, than suppression. I was very heartened that the written statements of my co-panelists all emphasized these other approaches: Monica Bickert's testimony talked about how essential it is to go to the root causes of terrorism, and the testimony of Nick Pickles and Derek Slater also emphasized the importance of counter-speech, counter-narratives, and redirection. Now, I recognize that every single one of us in this room is completely committed to free speech and democracy, just as every single one of us is committed to countering terrorism and disinformation. After all, the reason we oppose terrorism and disinformation is precisely because of the harm that they do to democracy and liberty.

Before I say anything further, I do have to stress something that I know everybody here knows, but many members of the public do not: these social media companies are not bound by the First Amendment free speech guarantee. None of us has a free speech right to air any content on their platforms at all; conversely, they have their own free speech rights to choose what will and will not be on their platforms. So it would be unconstitutional, of course, for Congress to purport to tell them what they must put up and what they must take down, to the extent that the takedowns would go beyond First Amendment-unprotected speech. And Chairman Thompson, you did note, completely accurately of course, that much of the content that is targeted as terrorist is unprotected, but much of it is protected under the Constitution, and much of it is very valuable, including human rights advocacy that has been suppressed under these necessarily overbroad and subjective standards. Although the social media companies do not have a constitutional obligation to honor freedom of speech, given their enormous power it is incredibly important that they be encouraged to do so. In closing, I'm going to quote a statement from the written testimony of Nick Pickles, which I could not agree
with more, when he said that "we will not solve the problems by removing content alone. We should not underestimate the power of open conversation to change minds, perspectives, and behaviors." Thank you very much.

I thank all the witnesses for their testimony, and I remind each member that he or she will have five minutes to question the panel. I'll now recognize myself for questions. Misinformation is one of this committee's challenges, as it relates to this hearing, as is terrorist content. Let's take, for instance, the recent doctored video of Speaker Nancy Pelosi that made her appear to be drunk, slurring her words. Facebook and Twitter left the video up, but YouTube took it down. Everybody agreed that something was wrong with it; Facebook again took a different approach. So I want Ms. Bickert and Mr. Pickles to explain how you decided the process for leaving this video up on Facebook and Twitter, and then, Mr. Slater, I want you to explain to me why YouTube decided to take it down. Ms. Bickert?

Thank you, Mr.
Chairman, and let me first say, misinformation is a top concern for us, especially as we're getting ready for the 2020 elections. We know this is something that we have to get right, and we're especially focused on what we should be doing with increasingly sophisticated manipulated media. Let me first speak to our general approach to misinformation, which is this: we remove content when it violates our community standards. Beyond that, if we see somebody sharing misinformation, we want to make sure that we are reducing its distribution and also providing accurate information from independent fact-checking organizations, so that people can put what they see in context. To do that, we work with 45 independent fact-checking organizations from around the world, each of which is certified by Poynter as being independent and meeting certain principles. As soon as one of those fact-checking organizations rates something on our platform as false, we dramatically reduce its distribution and we put related articles next to it, so that anybody who shares it gets a warning that it has been rated false, and anybody who shared it before we got the fact-checkers' rating gets a notification that the content has now been rated false by a fact-checker, with those related articles from the fact-checking organizations alongside.

I understand. How long did it take you to do that for the Pelosi video?

The Pelosi video was uploaded to Facebook on Wednesday, May 22nd, around late morning, and on Thursday around 6:30 p.m. a fact-checking organization rated it as false, and we immediately down-ranked it and put information next to it. That's something where we think we need to get faster; we need to make sure that we are getting this information to people as soon as we can.

So it took you about a day and a half?

Yes.

Thank you. Mr.
Pickles?

So, as Monica said, the process for us is that we review this against our rules, and any content that breaks our rules we will remove. We're also very aware that people use manipulative tactics to spread this content, such as fake accounts and automation, so we'll take action on the distribution as well as the content. This is a policy area we're looking at right now, not just in the case of videos that might be manipulated but also where the videos are fabricated and where the whole process of creating media may be artificial. We think that the best way to approach this is with a policy and a product approach that covers, in some cases, removing...

I understand, but just get to why you left it up.

So at present, the video doesn't break our rules, and the account posting it doesn't break our rules, but it's absolutely a policy area we're looking at right now: whether our rules and our products are the correct framework for dealing with this challenge.

So if it's false, or misinformation, that doesn't break your rules?

Not at present, no.

Thank you. Mr. Slater?

On YouTube, we have tough community guidelines that lay out the rules of the road: what's in bounds to be up on the platform and what's out. When violative content is identified to us, via machines or users, we will review it and remove it. In this case, the video in question violated our policies around deceptive practices, and we removed it.

So again, our committee is tasked with looking at misinformation, among other things. We're not trying to regulate companies, but terrorist content can also be a manipulated document, and so, Ms.
Strossen, talk to us about your position on that. The difficulty and the inherent subjectivity of these concepts, Chairman Thompson, is illustrated by the fact that we have three companies that have subscribed to essentially the same general commitments and yet are interpreting the details very differently with respect to specific content; we see that over and over again. Ultimately, the only protection that we are going to have in this society against disinformation is education, starting at the earliest levels of a child's education, in media literacy, because Congress could never protect against misinformation in traditional media. Unless it meets the very strict standards of defamation, which is punishable, or fraud, which is punishable, content, including the Pelosi video, is completely constitutionally protected in other media. Thank you. I yield to the ranking member for his questions. Thank you, Mr. Chairman. Mr. Slater, the video I referenced in my comments, with Ms. Gennai, your employee: would you like to take this opportunity? Have you seen it? Congressman, I have not seen the full video, but I am broadly aware of what you're talking about. Okay, would you like to take an opportunity to respond to the comments that I offered about what was said? Could you be specific, Congressman? What would you like me to respond to? She basically, for example, said that we can't let Google be broken up because these smaller companies won't have the same resources we had to stop Trump from getting reelected. Thank you for the clarification. Let me be clear: this employee was recorded without her consent, and I believe these statements were taken out of context. But stepping back to our policies and how we address the issue you're talking about: no employee, whether in the lower ranks or up to senior executives, has the ability to manipulate our search results or our products or services based on their political ideology. We design and develop our products for everyone, and we mean everyone, and we
do that to provide relevant results, authoritative results. We are in the trust business; we have a long-term incentive to get that right, and we do that in a transparent fashion. You can read more on our How Search Works site. We have search rater guidelines that are public on the web that describe how we look at rating, and we have robust systems and checks and balances in place to make sure those are rigorously adhered to as we set up our systems. Okay. I recognize that she was being videotaped without her knowledge, but the statements that I quoted from were full, complete statements that were not edited. So it is concerning when you see somebody who is an executive with Google, and there was more than one in that video, by the way, making statements that indicate that it's management's policy within Google to try to manipulate information to cause one or another candidate for President of the United States, or for that matter any other office, to be successful or not be successful. That is what gave rise to my concern. Do we have reason to be concerned that Google has a pervasive culture in the company of trying to push one political party over another in the way it conducts its business? Congressman, I appreciate the concern, but let me be clear again in terms of what our policy is, from the highest levels on down, and what our practices and structures and checks and balances are: we do not allow anyone, lower level or higher level, to manipulate our products in that way. Okay. I hope it's not the culture at any of your platforms, because you are very powerful in our country. Ms.
Strossen, you raised concerns in your testimony that while social media companies legally can decide what content to allow on their platforms, such censorship stifles free speech and results in biased coverage. What are your recommendations to these companies regarding content moderation without censorship? Thank you so much, Ranking Member Rogers. I would first of all endorse at least the transparency that both you and Chairman Thompson stressed in your opening remarks, and in addition other process-related guarantees, such as due process, the right to appeal, and a clear statement of standards. I would also recommend standards that respect the free speech guarantees not only of the United States Constitution but of international human rights law, which the United Nations Human Rights Council has recommended, in a non-binding way, that powerful companies adopt. That would mean that content could not be suppressed unless it posed an emergency, that it directly caused certain specific, serious, imminent harm that cannot be prevented other than through suppression. Short of that, as you indicated, for example, Ranking Member Rogers, politically controversial, even repugnant, speech should be protected. We may very much disagree with the message, but the most effective as well as principled way to oppose it is through more speech. And I would certainly recommend, as I did in my written testimony, that these companies adopt user-empowering technology that would allow us users to make truly informed, voluntary decisions about what we see and what we don't see, and not manipulate us, as has been reported many times, into descending further into rabbit holes and echo chambers, but give us the opportunity to make our own choices and to choose our own communities. Thank you. Okay, thank you. The Chair recognizes the gentlelady from Texas, Ms. Jackson Lee, for five minutes. I thank the Chair, and I thank the ranking member and committee members for this hearing. Let me indicate that there is what is known to the public as the Fourth Estate,
and I might say that we have a fifth estate, which is all of you and others that represent the social media empire. I believe it is important that we work together to find the right pathway for how America will be a leader in how we balance the responsibilities and rights of such a giant entity with the rights and privileges of the American people and the sanctity and security of the American people. Social media statistics from 2019 show that there are 3.2 billion social media users worldwide, and this number is only growing; that equates to about 42 percent of the current world population. That is enormous, and certainly I know the numbers are just as daunting in the United States. So let me ask a few questions, and I would appreciate brevity because of the necessity of trying to get in as much as possible. On March 15, 2019, worshipers were slaughtered in the midst of their prayers in Christchurch, New Zealand. The gunman livestreamed the first attack on Facebook Live. So my question to you, Ms. Bickert, is: can you today assure the committee that there will never be another attack of this nature that will be streamed as it is happening over Facebook Live? You mentioned 30,000 and 300, and so I hope those may contribute to your answer, but I yield to you for your answer. Congresswoman, thank you. The video was appalling; the attack, of course, is an unspeakable tragedy, and we want to make sure we're doing everything to make sure it doesn't happen again and it's not livestreamed again. One of the things we've done is we have changed access to Facebook Live so that people who have a serious content policy violation are restricted from using it, so the person who livestreamed the New Zealand attack... What is the likelihood, in terms of the new structures that you have put in place, that that would not happen again? Well, we are working to develop the technology, and the technology is not perfect. Artificial intelligence is a key component of us recognizing videos
before they are reported to us, and this video was not. Fewer than 200 people saw it while it was live on Facebook; nobody... I mean, my time is short. Do you have a percentage? 50 percent? 60 percent, with the technology? I can't give a percentage. I can say that we are working with governments and others to try to improve that technology so that we will be able to better recognize it. Mr. Pickles and Mr. Slater, if you would: Ms. Bickert did raise the question of artificial intelligence, so if you would respond as to the utilization of AI and individuals, as briefly as possible, please. So one of the challenges Twitter has is that there's not a lot of content: 280 characters, a maximum of two minutes twenty of video. One of the challenges in Christchurch was that we didn't see the same video uploaded; we saw different snippets of different lengths. So we're investing in technology to make sure that people can't re-upload content once it's been removed previously. We're also making changes to make sure that, for example, where people manipulate media, we can move quicker. Is it using human subjects and AI? It's machine learning plus humans, yes. All right, Mr.
Slater. Thank you, Congresswoman. We use a combination of machine learning and people to review. Overall, in the first quarter of 2019, of the roughly 8 million videos we removed, 75 percent were first flagged by a machine, and the majority of those were removed before a single view. When it comes to violent extremism it's even stronger: over 90 percent of the violent extremist videos that were uploaded and removed in the past six months were removed before a single human flag, and 88 percent with fewer than ten views. Thank you. Let me ask a question about deep fakes, because my time is going. For each of you: in the 2020 election, what will you do to address the fact that deep fakes can be a distortion of an election, which is really the premise of our democracy? Can you quickly answer that question? At the same time, I just want to make mention of the fact that free speech does not allow incitement, fighting words, true threats, and otherwise. Could you just answer that, please? Yes, Congresswoman, on the deep fakes, as briefly as you can. Absolutely. We are working with experts outside the company and others to make sure that we understand how deep fakes can be used and to come up with a comprehensive policy to address them. In the meantime we're focused on removing fake accounts, which are disproportionately responsible for this sort of content, and also making sure that we're improving the speed at which we counter misinformation with actual factual articles and reduce the distribution. Mr. Pickles? Similarly, we're working on a product and policy solution, but one of the things that we already have in place is that if anyone presents any misinformation about how to vote that lends itself to voter suppression, we will remove that now, and that policy has been in place for some time. Mr. Slater? So we're investing significantly in working with researchers and others to build capacity in this space. We have an intel desk that scans the horizon for new threats and constantly is
looking at this sort of issue. Thank you for your courtesy; I yield back. The Chair recognizes the gentleman from North Carolina, Mr. Walker, for five minutes. Thank you, Mr. Chairman. While we were sitting here today, I just looked up on the internet "Facebook apologizes," "Google apologizes," "Twitter apologizes," and there were more pages than I could count going through those apologies. I listened closely to the words, or how you framed it, both Mr. Pickles and Mr. Slater, when you talked: one of you used "hateful conduct," Mr. Pickles, and Mr. Slater, you used the expression "hate speech," and you listed several different groups of people that were protected. What I did not hear you say in that group of people that you listed were those that were wanting to express their faith. In April, one of the larger apologies I think you guys have made: in April, Kelly Harkness brought to our attention Abby Johnson's life story in a movie called Unplanned. That movie has gone on to make twenty million dollars at the box office, but Google listed it as propaganda. My question for you today: was that a machine that listed that, or was that an individual? Congressman, I'm not familiar with the specific video in question; I'd be happy to go back... It's a video, it's a movie; it was one of the larger stories in April this year, a major motion picture, and it didn't come across your radar? No, I'm not familiar with that specific video. All right. When we talk about the difference between hateful conduct and hate speech, I know, Mr.
Pickles, in June, and just earlier this year, Marco Rubio brought attention to Twitter banning certain language that was deemed offensive to China; Twitter later came back and apologized. The question for you is: how does Twitter use its discretion to block information without discriminating against different individuals or groups? Well, firstly, as you say, our rules identify hateful conduct, so we focus on behavior first: how do two accounts interact? We look at that before we look at the speech that they're sharing. So there are offensive views on Twitter, and there are views that people will disagree with strongly on Twitter. The difference between that and targeting somebody else is the difference between content and conduct. So our rules don't have ideology in them; they're enforced without ideology and impartially. And where we do make mistakes, I think it is important for us to recognize them. I know one of the challenges we have is that where we remove someone from Twitter and they come back for a different purpose, our technology will recognize that person trying to come back on Twitter, and we don't want people we've removed to come back to the platform. Sometimes that does catch people who are coming back for a different purpose. So there is a value to the technology, but we should recognize when we make a mistake. Mr.
Slater, how does Google audit its content moderation policies to ensure that they are being followed and that they are not being driven by bias? Thank you, Congressman, for that question. Broadly, we have a robust system for both the development and the enforcement of our policies. We are constantly reviewing and analyzing the policies themselves to understand whether they are fit for purpose, whether they're drawing the right lines. Our reviewers go through extensive training to make sure we have a consistent approach. We draw those reviewers from around the country and around the world, again, train them very deeply, and are constantly reviewing... I appreciate it; I need to keep moving. What type of training, if any, do you provide for your human content moderators regarding subjectivity and avoiding bias, Mr. Slater? Again, we provide robust training to make sure that we are applying a consistent rule... Robust training: what does that mean? What's robust training? So, when reviewers are brought on board, before they are allowed to review, we provide them with a set of educational materials and detailed steps. In addition, they are reviewed by management and others to make sure that they can correct mistakes and learn from those mistakes, and so on. Ms. Bickert, do you think that AI will ever get to the point where you can rely solely on it to moderate content, or do you think human moderation will always play a role? Thank you for the question, Congressman. At least for the near future, human moderation is very important. This technology is good at some things: it's good at, for instance, matching known images of terror propaganda or child sexual abuse. It is not as good at making the contextual calls around something like hate speech or bullying. A final couple of questions as I wind down my time. Mr.
Pickles, do you have any idea how many times Twitter apologizes per month for mishandling its own content? I know that we take action on appeals regularly, every decision we make... Do you have a number on that? I don't have a number; I can happily follow up. Mr. Slater, do you have any idea how many times Google apologizes per month for mismanaging content? Congressman, similarly, we have an appeals process, so there are times where we don't get it right... Do you have a number? I do not today, but I would be happy to come back to you. You know, I think you guys have apologized more than Kanye West has to Taylor Swift at some point. With that, I yield back. The Chair recognizes the gentlelady from Illinois, Ms. Underwood. In March, two weeks after the Christchurch terror attack, Facebook announced it would start directing users searching for white supremacist terms to Life After Hate, an organization that works to rehabilitate extremists. Life After Hate is based in Chicago, so I met with them last month when I was at home in Illinois. They told me that since Facebook's announcement they have seen, quote, "a large bump in activity" that hasn't slowed down. Facebook and Instagram have three billion users combined; Life After Hate is a tiny organization whose federal funding was pulled by this administration. They do great work, and they simply don't have the resources to handle every single neo-Nazi on the internet on their own. Ms. Bickert, has Facebook considered providing continuous funding to Life After Hate for the duration of this partnership? Congresswoman, thank you for that question. Life After Hate is doing great work with us, and for those who don't know, basically we are redirecting people who are searching for these terms to this content, and we do this in some other areas as well, like, for instance, with self-harm support groups. We do see that sometimes these groups are under-resourced, so this is something that we can come back to you on, but we're definitely committed to making sure this works. Okay, so right now there's no long-term
funding commitment, but you'll consider it? I'm not sure what the details are, but I will follow up with you on them. So Facebook has made Life After Hate a significant component of its strategy against online extremism, and we really would appreciate that follow-up with exact information. Mr. Slater, over the years YouTube has put forward various policy changes in an attempt to limit how easily dangerous conspiracy theory videos spread. For example, YouTube announced over a year ago that it would display, quote, "information cues" in the form of links to Wikipedia next to conspiracy videos. Mr. Slater, in the 15 months since this policy was announced, what percentage of users who view videos with information cues actually click on the link for more information? Thank you for the question; I think this is a very important issue. We do display these sorts of contextual cues to Wikipedia and Encyclopaedia Britannica, as well as take a number of other steps. I don't have a specific percentage on how many have clicked through, but I would be happy to come back to you. Okay, if you can follow up in writing, that would be appreciated. Most Wikipedia articles can be edited by anyone on the internet; we've all seen some with questionable content. Does YouTube vet the Wikipedia articles that it links to in information cues to ensure their accuracy, or do you all work with Wikipedia to ensure that the articles are locked against most edits? We work to raise up authoritative information, ensure that what we are displaying is trustworthy, and correct any mistakes that we may make. So you all have corrected the YouTube... I'm sorry, the Wikipedia pages, if they're incorrect? No, I'm sorry: before we display such things, we have a robust process to make sure that we're displaying accurate information. The question is about what you're linking to. Yes. Okay, so can you just follow up with us in writing on that one? Great. Ms. Bickert, Facebook has displayed links to additional reporting
next to content that contains disinformation. What percentage of users click through to read that additional reporting? I don't have that percentage for you, Congresswoman; I'm sorry about that, but I will follow up in writing. Quickly, thank you, Mr. Chairman: at this point I'd like to ask the clerk to display the two screenshots my staff provided earlier on the TV screens. Last month Instagram announced that it would hide search results for hashtags that display vaccine disinformation, so yesterday I did a simple search for "vaccine" on Instagram from two different accounts. These are the top results. As you can see, the majority of these responses display anti-vaxxer hashtags and popular accounts with titles like, quote, "corrupt vaccines," quote, "vaccines uncovered," and quote, "vaccine injury awareness." This content is not hard to find, and vaccine disinformation is not a new issue. Ms. Bickert, clearly Instagram's efforts here have some gaps, and anti-vax content is a deadly threat to public health. What additional steps can Instagram commit to taking to ensure that this content is not promoted? Congresswoman, thank you for that question. Vaccine hoaxes and misinformation are really top of mind for us, and we have launched some recent measures, but I want to tell you how we're working to get better on those. One thing we're doing is, when accounts are sharing misinformation, we are trying to down-rank them, and down-rank them in the search results as well. That's something that's ongoing; it requires some manual review for us to make sure that we're doing it right, but we are getting better at it. Another thing is actually surfacing educational content, and we're working with major health organizations to provide that, so that when people go searching for this, at the top of the search results they will see that informational content. We're working with those health organizations right now, and we should have that content up and running soon, and I can follow up with you
with the details on that. Please do. While this is a new initiative for your organization, it's critically important that that information is shared with users at the time that they search for it, which we know is ongoing. Look, everyone in this room appreciates that online extremism and disinformation are extremely difficult problems that require broad, coordinated solutions, but these aren't new challenges, and failing to respond seriously to them is dangerous. The research is clear: social media helps extremists find each other, helps make their opinions more extreme, and helps them hurt our communities. My constituents and I want strong policies from your companies that keep us safe, and while I truly believe that your current policies are well intentioned, there's a lot more that needs to be done, and frankly, some of it should have been done already. I'm looking forward to working with your companies and my colleagues in Congress on broad, real solutions. Thank you, and I yield back. Thank you. The Chair recognizes the gentleman from New York, Mr. Katko, for five minutes. Thank you, Mr. Chairman, and thank you all for being here today. It's obvious from this conversation that this is a very difficult area to maneuver in. Ms. Strossen, I understand your concerns about First Amendment infringement. I also understand, and I applaud, the companies' desire to try to find that delicate balance, and quite frankly, since you're not a government entity, you have more flexibility in how you do that, and it's kind of up to you, as the stewards of that flexibility, to do the best job you possibly can. So I'm going to get back in a minute to a couple of questions. Just to follow up, Mr. Slater, to make sure I'm perfectly clear with what you're saying here: I'm well aware from your testimony previously what the policies and practices are at Google, but that video that Mr.
Rogers referenced did show people who looked like they were talking about a very serious political bias and their intent to implement that bias in their jobs. Whether or not that happened, I don't know. I'm not asking about the policies and practices; I'm asking you if you personally have ever been made aware of anyone that has done that, that has used political bias at Google to alter content. First of all, have you ever heard of that at Google? I know what your policies and practices are, so I don't want a long answer; I just want to know if you've heard of that. Congressman, I'm not aware of any such situation; our robust checks and balances and processes would prevent that. Okay, so you personally have not ever heard of that in your time at Google? Correct. Okay. And on the movie, the allegation that Congressman Walker referenced about the abortion movie: you haven't heard anything about people limiting content with respect to that as well? I'm not familiar with that video. Okay, all right, and you've never heard of anybody limiting content in that regard for any sort of issue-oriented things? We would remove content where it violates our policies, but not... I'm aware of your policies and practices; I'm just asking whether you have ever heard of that yourself. You understand the difference: it's not what your policies and practices are, it's what you're personally aware of. Yes, Congressman, I believe I understand, and I am not aware of any situation like that. Okay, thank you. Now I want to talk to all of you here today about this internet forum, GIFCT, which is the lamest acronym ever, by the way: the Global Internet Forum to Counter Terrorism. Can someone, Mr.
Pickles perhaps, give me a little detail about exactly what the goal of this forum is? Sure. Equally, as was spoken of, Google has also chaired the organization, and I'm happy for them to add to this. I think the critical thing is that GIFCT is about bringing together four companies who have expertise and investment in countering terrorism, while recognizing that the challenge is far bigger. So there are three strands. First, support small companies: as we remove content, it moves across the internet, and we need to help those small companies. Second, fund research so we can better understand the problem, so we have a research network. And finally, sharing technical tools: you've heard people reference these digital fingerprints, and whether it's a fingerprint or, in Twitter's case, the URLs we share, if we take down an account for spreading a terrorist manual and we see it's linked to another company's service, we will tell that other company: hey, a terrorist account linked to something on your service; you should check it out. It's similar to what you're doing in the malware arena, correct? Yes, so collaboration is really the heart of it. Okay, now, what companies are members of this? Is there a whole bunch, or just a limited number? So when we founded it, it was Google, so YouTube, Twitter, Microsoft, and Facebook; Dropbox have now joined. And one of the things we have is a partnership with Tech Against Terrorism, which allows small companies to go through a training process so they learn things like how to write terms of service and how to enforce their terms of service, and we mentor them; that's where we're at. We're hopeful that we'll have more companies joining and growing this, but the hash-sharing consortium has many members, fifteen members, and we share URLs with 13 companies. So it's broad, but we wanted to have a high standard; we want membership to be the companies who are doing the best, and that's why we want to keep a high bar and bring people in. Understood.
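The "digital fingerprint" sharing described here can be sketched in miniature: each consortium member contributes hashes of content it has removed to a shared database, and any member can screen new uploads against fingerprints contributed by others, without the content itself ever being exchanged. The `SharedHashDB` class below is purely illustrative, and it uses a plain SHA-256 over the file bytes; the real hash-sharing database uses perceptual hashes that tolerate re-encodes and minor edits, which a cryptographic hash, as the last line shows, does not.

```python
import hashlib

class SharedHashDB:
    """Minimal stand-in for a consortium hash-sharing database (illustrative only)."""

    def __init__(self):
        self._hashes = set()

    def contribute(self, content: bytes) -> None:
        # A member removed this content and shares only its fingerprint.
        self._hashes.add(hashlib.sha256(content).hexdigest())

    def is_known(self, content: bytes) -> bool:
        # Any member can check an upload against fingerprints shared by others.
        return hashlib.sha256(content).hexdigest() in self._hashes

db = SharedHashDB()
original = b"bytes of a removed propaganda video"
db.contribute(original)
exact_match = db.is_known(original)                                   # exact re-upload is caught
near_duplicate = db.is_known(original + b" re-encoded")               # an edited copy slips past SHA-256
```

The gap between `exact_match` and `near_duplicate` is exactly the weakness the chairman's opening remarks describe: after Christchurch, minor changes to the video defeated automated matching.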
Now, as far as the encrypted messaging platforms, I take it they're not all members of this; they're not all participants in this, are they? I'm probably not the best placed to... So who would know? Ms. Bickert? Sure, thank you for the question, Congressman. The main members are, as Mr. Pickles mentioned, those five companies. In terms of the smaller companies who've been trained, that does include some of the encrypted messaging services, because some of this is about understanding what the right lines are to draw and how to work with law enforcement authorities, which encrypted communication services can definitely do, some of them. My biggest concern is that while the big players in this field, all of you at the table, seem to be endeavoring to do the right thing, especially with respect to counterterrorism, the encrypted messaging platforms by and large have a much broader field to play in, and there doesn't seem to be much we can do to stop their content, their filth and their violence, from spreading. So I would love to hear any suggestions, I know my time is up, perhaps in writing, as to how we could try to entice some of them to be part of this effort. Encryption is obviously a breeding ground for white supremacist violence of all sorts, and hearing from you on how to get those companies to be more responsible, and not just worried about bottom-line profit-making, would be great. Thank you. The Chair recognizes the gentlelady from Michigan, Ms. Slotkin, for five minutes. Good morning; thanks for being here. I wanted to switch gears for just a second and talk about the influence and the spread of foreign-based information, foreign-based political ads in particular, in our political process. Many of us read the Mueller report page by page, and I was interested, Ms. Bickert, that the Facebook general counsel stated for the record that for the low, low price of a hundred thousand dollars, the Russian-associated
Internet Research Agency got to a hundred and twenty-six million American eyeballs. I'm interested in this because the political ads that they put forward were specifically targeted to swing states, and Michigan is one of those states, so we saw an overabundance of these ads. They were specifically paid for by foreign entities, and they were advocating for or against a candidate in our political process. I have a serious problem with that. So, separate from the issues of speech and what an American does or does not have a right to say, can you speak specifically to Facebook's reaction to the fact that it spread foreign-purchased information, and it doesn't matter to me that it was Russian; it could be Chinese or Iranian, and what steps you have taken since 2016 to prevent the spread of foreign information? Absolutely, Congresswoman, thank you for the question. From where we were in 2016, we are in a much, much better place, so let me share with you some of the steps we've taken. First of all, all of those ads came from fake accounts, and we have a policy against fake accounts. We had it then, but we've gotten much better at enforcing it, and now we are actually stopping more than a million fake accounts per day at the time of upload. We publish stats every quarter on how many fake accounts we're removing, and you can see how much better we've gotten in the past two years. Another thing that we're doing, with political ads specifically, is requiring unprecedented levels of transparency. Now, if you want to run a political or political-issue ad in the United States, you have to first verify your identity. You have to show you're an American, which means we actually send you something, because we have seen fake IDs uploaded by advertisers. We send you something through the mail, you then get a code, and you upload the government ID for us, so we verify that you are a real American. And then we also put
a paid-for disclaimer on the political ad, and we put it in an ads library we've created that's visible to everybody. So even if you don't have a Facebook account, you can go and see this ads library, and you can search what types of political ads are appearing, who's paying for them, and other information about how they're being targeted and so forth. That's good to hear; I'm glad to hear it. I would love to see if there are reports; I'd love to just be directed to them so I can see them. For the others at the table: can you talk, in brief please, about your specific policy on the spread of foreign political ads for or against a candidate running for office in the United States? So the first thing we did was to ban Russia Today and all of its associated entities from using any of our advertising products going forward. We took all of the revenue from Russia Today and their associated entities, and we are funding research and partnerships with organizations like the Atlantic Council and the DisinfoLab in Brussels to better research how we can prevent this. We then took the unprecedented step of publishing every tweet, not just the paid-for ones, every tweet that was produced by a foreign influence operation, in a public archive. So you can now access more than 30 million tweets, which runs to more than a terabyte of videos and photographs, in a public archive; those include operations from Russia, Iran, Venezuela, and other countries. Thank you for the question. Looking backwards at 2016, we found very limited improper activity on our platforms; that is a product of our Threat Analysis Group and our other tools to root out that sort of behavior. Looking forward, we continue to invest in that as well as in our election transparency efforts: requiring verification of advertisers for federal candidates, disclosure in the ads, and a transparency report. Great. What about the spread of information through bots? What kind of disclosure requirements are there, so that when someone is receiving
or viewing something, they have some way of knowing who produced it, who's spreading it, whether it's a human being or a machine? Why don't we start with Facebook. Thank you, Congresswoman. One of our policies is that you have to have your real name and be using an account authentically, so when we're removing bot accounts, we're removing them for being fake accounts; those are all numbers that we publish. Every week we challenge between 8 and 10 million accounts for breaking our rules on suspicious activity, including malicious automation, so we're removing those accounts; about 75% of those eight to ten million challenged accounts fail those challenges and are removed every week. Congresswoman, for our part, we have strict policies about misrepresentation in ads and impersonation; we are looking out, again through our Threat Analysis Group, for coordinated inauthentic behavior, and we'll take action where appropriate. Thank you. I know my time has expired; thank you. Thank you. The chair recognizes the gentleman from Louisiana for five minutes. Thank you, Mr. Chairman. Mr.
Slater, are you ready? Get your scripted answers ready, sir. Google and YouTube are developing quite a poor reputation in our nation: a clear history of repetitively silencing and banning voices. Conservative or liberal doesn't concern me right now; we're talking about freedom of speech and access to open communications. We're here today to discuss extremist content, violent threats, terrorist recruiting tactics, and instigation of violence, yet the same justification your platform uses to quell true extremism is often used to silence and restrict the voices that you disagree with, that you don't like. For example, Prager University, a series of five-minute videos which discuss political issues, religion, and economic topics from a conservative perspective, has had over 50 of their videos restricted. Some of their restricted videos include "Why America Must Lead." Ouch. That's a question that should be directed to the mirror: America leads because of our stand for freedom, for all voices to be heard. "The Ten Commandments: Do Not Murder," video pulled by your people. What's wrong with the Ten Commandments, might I ask? "Why Did America Fight the Korean War?", a legitimate reflection on a significant part of the history of our nation. Additionally, YouTube removed a video from Project Veritas which appears to show a senior Google executive acknowledging politically motivated search manipulation with intent to influence election outcomes. None of us here want that. On either side of this aisle, I don't know a man or woman present who is not a true patriot and loves our country. We have varying ideological perspectives, yes, but we love our country and will stand for freedom, including against Google. A frequent reason provided by YouTube is that the content in question "harmed the broader community." What could be more harmful to the broader community than the restriction of our free speech and open communications, regardless of our ideological stance? Please define for America what you mean by "harmed the
broader community" as it's used to justify restricting the content on Google and YouTube, and point out: is harm limited to physical threats and the incitement of violence, as it should be, or is it a convenient justification to restrict the content that you deem needs to be restricted? Please explain to America how you determine what has "harmed the broader community." What does that mean? Let's have your scripted answer. Congressman, thank you for the question. I appreciate the concern and the desire to foster robust debate. We want YouTube to be a place where everyone can share their voice and get a view of the world. But you don't allow everyone to share their voice; I've given examples in my brief time, and thank you, Mr. Chairman, for recognizing my time. The First Amendment protects Americans' right to express their viewpoints online. Is it something that offends an individual, or something an individual disagrees with: does that meet your company's definition of extreme? We have community guidelines that lay out the rules of the road about what is not permitted on the platform, including incitement to violence, hate speech, harassment, and so on. If you can clarify what you're asking about specifically, I'd be happy to try and answer. Mr. Slater, God bless you, sir. Google's in a bind today. America is watching. Today America is taking a step back; we're looking at the services, we're looking at the platforms that we use, and we'll find, to our horror, that they can't be trusted. Today America is looking carefully at Google, and a word reverberates through the minds of Americans: freedom. Shall it be protected, shall it be preserved, or shall it be persecuted and subject to the will and whim of massive tech companies? Mr. Chairman, thank you for recognizing my time, and I yield the balance. Thank you for holding this hearing today. Thank you. The chair recognizes the gentlelady from New York for five minutes, Ms. Clarke. Thank you very much, Mr.
Chairman, and I thank our panelists for appearing before us today. I want to go into the issue of deepfakes, because I've recently introduced legislation, the first-ever bill in the House to regulate the technology. If my bill passes, what it would do is make sure that deepfake videos include a prominent, unambiguous disclosure as well as a digital watermark that can't be removed. And the question I have is, when it comes to your attention that a video on your platforms has been substantially altered or entirely fabricated, how do your companies decide whether to do nothing, label it, or remove it? And that's for the panel. Thank you for the question, Congresswoman. When it comes to deepfakes, this is a real top priority, especially because of the coming elections. Right now our approach is, we try to use our third-party fact-checking organizations; there are 45 of them worldwide, and if they rate something as being false (they can also tell us that something has been manipulated), at that point we will put the information from the fact-checking organization next to it. So, much like the label approach, this is a way of actually letting people understand that this is something that is in fact false. We also reduce the distribution of it. We are also looking to see if there's something we should do specifically in the area of deepfakes. We don't want to do something in a one-off way; we want to have a comprehensive solution, and part of that means we have to get a comprehensive definition of what it means to actually have a deepfake, and those are conversations that we look forward to having with you. Yeah, my bill would require that there's a digital watermark, similar to how your companies do sort of a hash of terrorist content. If there was a central database of deceptive deepfake hashes, would you agree to utilize that? I'm happy to pick up on that and the previous question. I was at a conference in London a few weeks ago hosted by the
BBC and an NGO called Witness, and they actually work on issues around verifying media from war zones, of war crimes. So I think, as Monika says, this policy covers a whole spectrum of content, from synthetic to edited to manipulated, so certainly from our point of view every partnership is one we want to explore to make sure that we have all the information. And I think your framing, of how in some circumstances there may be situations to remove content and in other circumstances it's about providing context to the user and giving them more information, I think that is the best balance of making sure that we have all the tools available to us, and that's the approach that we're developing. Now, time is not your friend here, and what we're trying to find is something universal that creates transparency and respects the First Amendment, but also makes sure that, as Americans whose eyes are constantly on video, it's something you can identify right away. If you have to go through all of these sources to determine it, and each platform has a different way of indicating it, that almost nullifies it. So I wanted to put that on your radar, because I think that there needs to be some sort of a universal way in which Americans can detect immediately that what they're seeing has been altered in some form or fashion, and that's what my bill seeks to do. Imagine if Russia, just days before the 2020 election, released a fake video of a presidential candidate accepting a bribe or committing a crime. If your companies learn of a deepfake video being promoted by a foreign government to influence our election, will you commit to removing it? How would you handle such a scenario? Have you thought about it? Give us your thoughts. Congresswoman, we do have a real-name requirement on Facebook, and we also have various transparency requirements that we enforce, so if
it's shared by somebody not under a real name, or it otherwise violates our transparency requirements, we would simply remove it. We have a clear policy on affiliated behavior, so activity affiliated with an entity we've already removed: as I said, we've removed millions of tweets connected with the Internet Research Agency, and we'd remove any activity affiliated with that organization. Thanks for the question; this is a critical issue, and I think we would evaluate such a video under our policies, including our deceptive practices policies, and look at it as we would at any sort of foreign interference. Thank you very much, Mr. Chairman. I look forward to talking to you further about this, because we've got to get to that sweet spot, and we're not there; it's very clear from here. Thank you. The chair recognizes the gentlelady from Arizona, Ms. Lesko, for five minutes. Thank you, Mr. Chairman. Years ago, required reading I had was the book 1984, and this committee hearing is scaring the heck out of me. I have to tell you, it really is, because here we are talking about, you know, if somebody Googles vaccines, the answer was, oh, we're going to put above what the person is actually looking for what we think is best. Who are the people judging what's best, what's accurate? This is really scary stuff, and it really goes to the heart of our First Amendment rights. And so, I don't always agree with the ACLU, and you're the past president of the ACLU, Ms. Strossen, but I agree with you wholly on this. We have to be very careful, my colleagues, on this, because what you deem is inaccurate, I do not deem as inaccurate, or other people may not deem. We had, in a previous briefing on this issue, one of the members said, well, I think President Trump's tweets incite terrorism. Well, are we now going to ban what President Trump says because somebody thinks that it incites terrorism? This is some really scary stuff, and I'm very concerned, and I'm glad I'm part of this, because, boy, we need more of us standing up for our rights, whether it's
what you believe or what I believe. I have a specific question, and this is to Mr. Slater. In this Project Veritas video, which I did watch last night, they allege that there are internal Google documents, which they put on the video, and this is what it said, for example: "Imagine that a Google image query for CEOs shows predominantly men. Even if it were a factually accurate representation of the world, it would be algorithmic unfairness. In some cases it may be appropriate to take no action if the system accurately reflects current reality, while in other cases it may be desirable to consider how we might help society reach a more equitable state via product intervention." What does that mean, Mr. Slater? Thank you, Congresswoman, for the question. I'm not familiar with the specific slide, but I think what we are getting at there is, when we're designing our products, again, we're designing for everyone. We have a robust set of guidelines to ensure we're providing relevant, trustworthy information; we work with a set of raters around the world, around the country, to make sure that those rater guidelines are followed, and those are transparent and available for you to read on the web. All right, well, I personally don't think that answered the question at all, but let me go to the next one. Mr. Clay Higgins asked you about a specific example, and so, Mr.
Slater, he was talking about Prager University. I just used Google on Prager University, and it came up, and on Prager University's website it says: conservative ideas are under attack; YouTube does not want young people to hear conservative ideas; over 10% of our entire library is under restricted mode. Why are you putting Prager University videos about liberty and those types of things in restricted mode? Thank you, Congresswoman. I appreciate the question. To my knowledge, Prager University is a huge success story on YouTube, with millions of views, millions of subscribers, and so on, and remains so to this day. There is a mode that users can choose to use called restricted mode, where they might restrict the sorts of videos that they see. That is something that has been applied to many different types of videos across the board, consistently, not with respect to political viewpoints; it has been applied to The Daily Show and other sorts of channels as well, and to my knowledge it's been applied to a very small percentage of the videos on Prager University. Again, that channel has been a huge success story, I think, with a huge audience on YouTube. And Mr.
Pickles, regarding Twitter: President Trump has said, I think on multiple occasions, he's accused Twitter of, you know, people having a hard time, being deleted from followers. This actually happened to my husband: he followed Donald Trump, and then all of a sudden it was gone. So can you explain that? What is happening there? Why does that happen? Because, I tell you, a lot of conservatives really think there's some conspiracy going on here. Well, I can certainly look into the case of your husband to make sure there wasn't an issue there, and what I can say is that President Trump is the most followed head of state anywhere in the world, and he is the most talked-about politician anywhere in the world on Twitter. And although he did lose some followers when we recently undertook an exercise to clean up compromised accounts, President Obama lost far more followers in the same exercise. So I think people can look at the way that people are seeing President Trump's tweets widely and be reassured that the issues that you're outlining there are not representative of Twitter's approach. And, Mr. Chairman, I ran out of time, but if we have another round, I really want to hear from Ms. Strossen; I want to hear her views, because she hasn't had a lot of time to speak, so I hope some of my fellow colleagues ask her. Thank you. The chair recognizes the gentlelady from California, Ms. Barragán, for five minutes. Thank you very much, Mr. Chairman. This is for Ms. Bickert and Mr. Slater. I want to talk a little bit about your relationship with civil society groups that represent communities targeted by terrorist content, including white supremacist content. I'm specifically referring to content that targets religious minorities, ethnic minorities, immigrants, LGBTQ people, and others. Can you help by describing your engagement with civil society groups in the U.S.
to understand the issues of such content and develop standards for combatting this content? Thank you for the question, Congresswoman. Yes, any time that we are evolving our policies, which we're doing constantly, we are reaching out to civil society groups, not just in the U.S. but around the world. I have a specific team under me, actually, called stakeholder engagement, and that is what they do. When they're doing this, one of their jobs (let's say we're looking at our hate speech policies) is to make sure that we are talking to people across the spectrum, so different groups that might be affected by the change, people who will have different opinions; all of those people are brought into the conversation. Well, similarly, we have teams around the world who are speaking to civil society groups every day. Something we're also doing is training them. I think what's really important is that, because Twitter is a uniquely public platform and a public conversation, when people actually challenge hatred and offer a counter-narrative, offer a positive narrative, their views can be seen all over the world. So, you know, "Je suis Charlie" was seen all over the world after an attack in Paris, and similarly, after Christchurch, "hello brother," or even "hello Salaam," which was a gentleman in Kenya who challenged terrorists who were trying to separate Christians and Muslims. So we talk to civil society groups both about our policies and also about how they can use our platform more, to reach more people with their messages. Okay, and Mr. Slater, before you answer, I want to make sure you incorporate this. One of my concerns is that the onus to report the hateful content is placed on the very communities that are targeted by the hateful content; that can make social media platforms hostile places for people in targeted communities. So can you also tell us what your companies are doing to alleviate this burden? So, Mr.
Slater, and then I'd like to hear from the two of you on that. Sure. Speaking of how we enforce our community guidelines, including against hate speech (again, as we said, we've updated our hate speech policies to deal with people expressing superiority to justify discrimination and so on), we use a combination of machines and people. Machines can scan across for broad patterns and so on, compared to previous violative content, so we do take our responsibility here very seriously: our ability to detect that for a first review before it's been flagged. And, you know, we're making great strides in that. We also do rely on flags from users, as well as flags from trusted flaggers, that is, civil society groups and other experts we work with very closely, both in the development of our policies and, again, in flagging those sorts of videos. Yeah, and so, just to the two of you, about the burden. This is something that we've said previously: there was too much burden on victims. A year ago, twenty percent of the abuse we removed was surfaced proactively; that's now forty percent, so in a year we've been able to double the amount of content that we find proactively, without waiting for a victim to report it, and we're continuing to invest to raise that number further. Can the three of you provide an example where you had community engagement, and because of that feedback there was a policy change that you made? Actually, let me share a slightly different example, which is how we write a better policy to prevent that. When we were crafting our policy on non-consensual intimate imagery, that covers not just media shared by an ex-partner but also what are called "creepshots." Various countries started asking, do you have a policy on creepshots, and because we'd spoken to those very civil society groups, our policy from the beginning was written broadly enough to capture not just the original problem but all those different issues. Yes, and let me address the second question
that you asked, about putting the burden on the victims. We have invested a lot in artificial intelligence, so there are certain times when artificial intelligence has really helped us, and other areas where it's very much in its infancy. With hate speech, over the past few years we've gone from zero proactive detection to now, in the first quarter of this year, the majority of content that we're removing for violating our hate speech policies we are finding using artificial intelligence and other technologies. So huge, huge gains there. There's still a long way to go, because all of those posts, after they're flagged by technology, have to be reviewed by real people who can understand the context. In terms of where our engagement has led to concrete changes, one thing I would point to is the use of hate speech in imagery. The way that we originally had our policies on hate speech, it was really focused on what people were saying in text; it was only through working with civil society partners that we were able to see how we needed to refine those policies to cover images too. And another thing I would point to is, a lot of groups told us it was hard to know exactly how we define hate speech and where we drew the line. That was a contributing factor, among many others, in why, a couple of years ago, we published a very detailed version of our community standards, where now people can see exactly how we define hate speech and how we implement it. Right, thank you. I yield back. Thank you. The chair recognizes the gentleman from Texas for five minutes, Mr. Crenshaw. Thank you, Mr.
Chairman, and thank you for some of the thoughtful discussion on how you combat terrorism online. There are worthy debates to be had there, and there are good questions on whether some of this content provides education, so that we know of the bad things out there, or whether it's radicalizing people. Those are hard discussions to have, and I don't know that we're going to solve them today. But the problem is that the testimony doesn't stop there; the policies at your social media companies do not stop there. It doesn't stop at the clear-cut lines of terrorism and terrorist videos and terrorist propaganda. Unfortunately, that's not exactly what we're talking about. It goes much further than that; it goes down the slippery slope of what speech is appropriate for your platform, and the vague standards that you employ in order to decide what is appropriate. And this is especially concerning given the recent news and the recently leaked emails from Google. They show that labeling mainstream conservative media as Nazis is a premise upon which you operate. It begs a question. According to those emails, the emails say: given that Ben Shapiro, Jordan Peterson, and Dennis Prager are Nazis, given that that's a premise, what do we do about it? Two of three of these people are Jewish, very religious Jews, and yet you think they're Nazis. It begs the question what kind of education people at Google have, so that they think that religious Jews are Nazis. Three of three of these people had family members killed in the Holocaust. Ben Shapiro is the number one target of the alt-right, and yet you people operate off the premise that he's a Nazi. It's pretty disturbing, and it gets to the question: do you believe in hate speech? How do you define that? Can you give me a quick definition right now? Is it written down somewhere? Google, can you give me a definition of hate speech? Congressman, yes. Hate speech, again, as updated in our guidelines, now extends to superiority
over protected groups to justify discrimination, violence, and so on, based on a number of defining characteristics, whether that's race, sexual orientation, veteran status. Do you have an example of Ben Shapiro or Jordan Peterson or Dennis Prager engaging in hate speech? Give one example, off the top of your head. Congressman, we evaluate individual pieces of content based on that content, rather than based on the speaker. Okay, let's go to the next question: do you believe speech can be violence? Not "can you incite violence"; that is very clearly not protected. But can speech just be violence? Do you believe that speech that isn't specifically calling for violence can be labeled violence and therefore harmful to people? Is that possible? Congressman, I'm not sure I fully understand the distinction you're drawing. Certainly, again, incitement to violence, or urging dangerous behavior, those are things that would be against our policies. Here's the thing: when you call somebody a Nazi, you can make the argument that you're inciting violence, and here's how. As a country, we all agree that Nazis are bad; we actually invaded an entire continent to defeat the Nazis. It's normal to say hashtag-punch-a-Nazi, because there's this common thread in this country that they're bad, that they're evil, and that they should be destroyed. So when you're operating off of that premise (and frankly, it's a good premise to operate on), what you're implying, then, is that it's okay to use violence against them. When one of the most powerful social media companies in the world labels people as Nazis, you can make the argument that's inciting violence. What you're doing is wholly irresponsible. It doesn't stop there. A year ago it was also made clear that your fact-check system has blatantly targeted conservative newspapers. Do you have any comments on that? Are you aware of the story I'm talking about? I'm not familiar with necessarily the specific
story, Congressman. I am aware that from all political viewpoints we sometimes get questions of this sort. I can say that our fact-check labels generally are done algorithmically, based on markup, and follow our policies. For the record, they specifically targeted conservative news media, and oftentimes they have a fact check on there that doesn't even reference the actual article, but Google makes sure that it's right next to it, so as to make people understand that that one is questionable, even though, when you actually read through it, it has nothing to do with it. You know, a few days ago (this goes to you, Ms. Bickert) one of my constituents posted photos on Facebook of Republican women, daring to say that there are women for Trump. Facebook took down that post right away, with no explanation. Is there any explanation for that? Without seeing it, it's hard for me to reply; that doesn't sound like it violates our policies, but I'm happy to follow up on the specific example with you. Thank you. Listen, here's what it comes down to: if we don't share the values of free speech, I'm not sure where we go from here. You know, this practice of silencing millions and millions of people will create wounds and divisions in this country that we cannot heal from. This is extremely worrisome. You've created amazing platforms; we can do amazing things with what these companies have created. But if we continue down this path, it'll tear us apart. You do not have a constitutional obligation to enforce the First Amendment, but I would say that you absolutely have an obligation to enforce American values, and the First Amendment is an underpinning of American values that we should be protecting until the day we die. Thank you, and thank you for indulging me, Mr. Chairman. Thank you. Ms. Strossen, the chair is going to take priority and allow you to make a comment. Okay, thank you so much for protecting my free speech rights, Mr.
Chairman. The main point that I wanted to make is that even if we have content moderation that is enforced with the noblest principles, and people are striving to be fair and impartial, it is impossible: these so-called standards are irreducibly subjective. What is one person's hate speech (an example was given by Congressman Higgins) is somebody else's cherished, loving speech. For example, in European countries, Canada, Australia, and New Zealand, which generally share our values, people who are preaching religious texts that they deeply believe in, and are preaching out of motivations of love, are prosecuted and convicted for engaging in hate speech against LGBTQ people. Now, I obviously happen to disagree with those viewpoints, but I absolutely defend their freedom to express those viewpoints. At best, these so-called standards (and I did read every single word of Facebook's standards; the more you read them, the more complicated it is) mean that no two Facebook enforcers agree with each other, and none of us would either. So that means that we are entrusting to some other authority the power to make decisions that should reside in each of us as individuals, as to what we choose to see, what we choose not to see, and what we choose to use our own free speech rights to respond to. And I cannot agree more about the positive potential these platforms have, but we have to maximize that positive potential through user empowerment tools and through radically increased transparency. One of the problems... I'm not going to limit your speech, but I am going to limit your time, for five minutes. Thank you, Mr.
Chairman Thompson, and the ranking member, for holding this very, very critical hearing on very interesting, very important issues. I want to turn a little bit to the Russian interference in 2016. The Mueller report outlines the indictment of 13 Russians and three companies for conspiring to subvert our elections. In 2018, we saw indications that the Russians were at it again, and former Secretary of Homeland Security Nielsen, before she resigned, brought up the fact that the Russians were out for 2020 again, with those other countries also trying to affect our election system. And so I'm hearing your testimony, and my question, of course, Ms. Strossen, addressing the issue of the First Amendment: does the First Amendment cover fake videos online? We talked a little bit about the Pelosi fake video, and maybe you say yes; I probably say probably not, and I'll tell you why. Because that's a damaging video with false content, and although you may be private companies, when I hear my children tell me, I saw it on this platform, the assumption is that it is factual. And, Ms. Bickert, it took you 24 hours to take that video down; the others didn't take it down. You are essentially a messenger, and when your information shows up online, this population believes that you're credible and that the information on there is probably credible too. And that's what's damaging to our country, to our democracy moving forward. We've got another election happening now, and as this information continues to be promulgated through your social media, through your companies, we have a First Amendment issue, but we have an issue also of democracy and keeping it whole. Any thoughts, Ms.
Bickert? Thank you for the question, Congressman. We share the focus on making sure that we are ready; 24 hours is not fast enough. So are we playing here defense or offense? Are we reacting, or are you being proactive? Is the next Nancy Pelosi video something you can take down essentially faster than 24 hours? Congressman, we are being proactive. I do agree that there's a lot that we can do to get faster. Our approach when there's misinformation is making sure that people have the context to understand it; we don't want people seeing it in the abstract, we want to make sure we're informing people, and we have to do so quickly, so that's something that we are focused on getting better at. So let me ask you something on the Pelosi video: who put it up? It was uploaded by a regular person with a regular account. And so somebody at home, with some very smart software and a good platform, was able to put together a fake video and put it up? Congressman, the technique that was used was to slow down the audio, which is the same thing we see a lot of comedy shows, frankly, do with a lot of politicians. What were the consequences to this individual for putting up, essentially, a video of somebody, defaming, you know, hurting her reputation? Congressman, for that video, our approach to misinformation is we reduce the distribution and then we put content from fact-checkers next to it, so that people can understand that the content is false or has been manipulated. Mr. Pickles? Well, one of the things we talked about earlier was how to provide context to users, so our focus now is developing... What are you, or your policies, changing, so that you'll be able to take it down next time, or are you going to let it ride? Well, we're looking at all of our policies in this area. Are you going to look at taking it down, or are you going to let it ride? A yes or no. Well, we're looking at both. Mr. Slater, what are you going to do? I didn't get an answer there, sir. Mr.
Slater, what are you going to do next time you see a video like this? With respect to that video, to be clear, we took it down under our deceptive practices policy. And Ms. Strossen, not to, you know, violate your freedom of speech here: do you think these false videos online, are they constitutionally protected? There is a very strict definition of false speech that is constitutionally unprotected; the Supreme Court has repeatedly said that blatant, outright lies are constitutionally protected unless... So let me switch, in the seconds I have left: will you write policy so outright lies do not have the devastating effect on our voters that they had in the 2016 election? As I said, we're looking at the whole issue. Thank you. Ms. Bickert? Mr. Slater? Any thoughts? We, too, Congressman, are making sure that we have the right approach for the election. Thank you. Absolutely; we want to raise up authoritative content and reward it, then demote borderline content and harmful misinformation, and remove violative content. If I may say, this is exactly the reason why President Trump wants to change the libel laws, because it is now legal to lie about politicians and government officials. Maybe there's an area we'll work together on some issues, huh? Mr. Chairman, I yield. Thank you. The chair now recognizes the gentlelady from New Jersey, Mrs.
Watson Coleman, for five minutes. Let me ask you a really quick question, yes or no: the GIFCT, is that right, GIFCT, your collaboration. Does keeping your trade secrets secret interfere with your sharing standards and, you know, working together? Yes, no, I don't know? Okay. I know you use this platform for terrorism; do you use that platform at all for, sort of, hate groups? Not at present, but certainly after New Zealand, that highlighted that we do need to broaden our approach to different issues. So in my briefing, dog whistling has been mentioned as a certain kind of political messaging strategy that employs coded language to send a message to certain groups. It flies under the radar, it is used often by some white supremacist groups, it is rapidly evolving on social media platforms, and it is used in targeting racism and other sorts of isms that we find abhorrent in this country. How do you solve the challenge of moderating dog-whistle content on your platform, especially when it's being used to encourage these isms that we abhor so much? I'm happy to start and let others finish. I would, yeah, take one, two, three; sorry, I'll take you any way you want. Well, firstly, we enforce our rules, and one of the things our rules analyze is behavior: if you're targeting somebody because of their membership in a protected characteristic, that's the important factor; the words come secondary. GIFCT has an entire stream of research, and one of the reasons for having that research stream is so that we can investigate what the latest trends are, what the things are we need to be learning about those kinds of terms. And then finally, when we see different kinds of extremist groups, speaking for Twitter, we've banned more than a hundred and eighty groups from our platform for violent extremism across the spectrum, both in the US and globally. So we have a policy framework and also the industry sharing. Thank you. Thank you, Congresswoman.
I would echo that. A lot of this is about getting to the groups. We do have a hate speech policy, but beyond that we know that sometimes there are groups that are just engaging in bad behavior, and so we ban not only violent groups but also white supremacist groups and other hate groups, and we've removed more than 200 of them from our platform to date. Thank you for the question. We do, as I said, remove hate speech on our platform, and the sort of concerns you're talking about are what motivated the more recent changes. We also recognize that things may brush up against those policies, be borderline but not quite cross them, and for those we do work to reduce and demote them in the frequency of recommendations and so on. If I could have just a little bit more than that, thank you. This is a very quick question: Ms. Bickert, did you bring any staff here with you today, any employees of yours? We did. Could you please have them stand up, those that have accompanied Ms. Bickert? Could you please stand up? Thanks, thank you very much. Mr. Pickles, you? Thank you. Mr. Slater? Thank you very much. A couple of things that you mentioned: you talked about making sure that people are real and that they're American when they're gonna do advertisement. Then you said, we're gonna send information to you, you have to send it back, and that simply proves that you can maybe pretend to be an American really living here, or having a domicile here, an address here; it still doesn't necessarily guarantee that they're legitimate. And so that's a challenge I think that we might have. Is that understandable, Mr.
Slater, or am I confusing you? If you could clarify the question, I would appreciate it. It's not a question, it's a statement. We were talking earlier about making sure that people who are doing political advertising, et cetera, are not foreign nationals, that they are Americans. Did we not have this discussion about this advertisement? And it was stated by somebody there, thank you, frankly, that you do verification to make sure the person is an American, does live in America, and isn't this false whatever coming from another nation. I said that really doesn't necessarily prove that, as far as I'm concerned. Congresswoman, just to clarify, that is Facebook's approach: we do verify, and we also look at the government ID. My question to you is: are there trigger words that come out of some of this speech, that you think should be protected, that need to be taken down? Because it is, on all of your sites, a problem. And I wanted to give an example from a story in Bloomberg News today that talked about Twitter's, I'm sorry, YouTube's recent new policy of broadening the definition of unprotected hate speech. On the very first day that it went into effect, one of the people that was suppressed was an online activist in the UK against anti-semitism, but in condemning anti-semitism he was of course referring to Nazi expression and Nazi insignia, and hence he was kicked off. So there are no trigger words. And it seems to me, I think it was Mr. Pickles, did you give the definition of hate speech for us earlier, the hateful conduct under Twitter? Yeah, I think that probably covers the President of the United States of America, unfortunately. Thank you, Mr. Chairman, I yield back. The chair recognizes the gentleman from New York, Mr.
Rose. Chairman, thank you, and thank you all for being here. Two months ago, in the immediate aftermath of the Christchurch incident, we sent out a letter to you all asking how much money you're spending on counterterror screening and how many people you have allocated to it, and we've had interesting conversations over those ensuing months. The three basic problems that you have brought to me are, one, you can't do that, it oversimplifies it, because there's also an AI component to this. Well, yesterday we did a hearing that showed AI alone cannot solve this, not now and not into the future; you all agree with that. The second thing, though, that you have all said to me is that this is a collective action problem, we're all in this together, and we have the GIFCT. So I've got some very basic questions about the GIFCT. I'd appreciate if you could just immediately answer yes or no, and then we can get into the details. First question: does the GIFCT have any full-time employees? Ms. Bickert, does the GIFCT have a full-time employee dedicated to running it? No; we have people at Facebook full-time dedicated to GIFCT. Okay. The same: we have people at Twitter working with GIFCT, but the GIFCT itself doesn't have them. Okay. Yes, our answer is the same. Does the GIFCT have a brick-and-mortar structure? If I want to go visit the GIFCT, could I do so? Ms.
Bickert? No, Congressman; we do host the database physically at Facebook. Okay. Pickles? No. Our collaboration is companies working together; we meet in person, we have virtual meetings. It's about collaboration, not about a physical building. Mr. Slater? That's right, nothing further. No, you know, no brick-and-mortar structure, but I presume you have a Google Hangout or maybe a Facebook Hangout, I don't know how you would decide that. But the Adhesive and Sealant Council, an association located in Bethesda, Maryland, the Adhesive and Sealant Council, it has five full-time staff, it has a brick-and-mortar structure, and you all cannot get your act together enough to dedicate enough resources to put full-time staff in a building dealing with this problem. I think it speaks to the ways in which you're addressing this, with this technocratic libertarian elitism, all the while people are being killed, all the while there are things happening that are highly preventable. AI: are there any AI systems that any of you all have that are not available to the GIFCT? Congressman, yes, depending on how our products work; they all work differently, so artificial intelligence works differently. And we actually worked for some time on doing this: we had to come up with one common technical solution that everybody could use. We now have that for videos, and we do give it for free to smaller companies, but that's but one technique we have. Okay, and please just keep it brief; I just want to know if you have any AI that the GIFCT doesn't have available. Well, I'd also say that this isn't just AI; that's why we share URLs, very low-tech, low-fi, but if you're a small company and someone gives you a URL to content, you don't need AI to look at that. So I think that's why it's a combination solution. Nothing further to add to those comments. Okay, but my understanding is that there were no officially declared POCs for the GIFCT that were made public from each company until after the Christchurch shooting. I
know that they were there, but they were not declared, established POCs at each of your companies until after the Christchurch shooting two months ago. Is this the case? Congressman, we have a channel that people can use that gets routed to whoever is on call. But is that the case, that there were no established POCs? This is the information you all have given me already; I'm just asking you to put it on the record. No established POCs at the GIFCT until after the Christchurch shooting, is that correct? Perhaps not publicly listed, but certainly people know who to... No established public POCs until after the Christchurch shooting. Well, I draw a distinction between the POCs and the companies we work with every week, every day. I think the point you were getting at is crisis response. I'm getting to the fact that you're not taking it seriously, because there is no public building, there is no full-time staff, there were no public POCs until after the Christchurch shooting. That's what I'm speaking to. How is anyone supposed to think that you all take this collective action problem seriously if you have no one working on it full-time? This is not something that technology alone can solve. This is a problem that we are blaming the entire industry for, rightfully so, and there are the smallest of associations in this town and throughout the country that do so much more than you do. And it is insulting, it is insulting that you would not at least apologize for the fact that there were no established POCs prior to the Christchurch shooting. It was a joke of an association, it remains a joke of an association, and we have got to see this thing dramatically improved. Lastly: if there were terrorist content shown to be on your platforms by a public entity, would you take it down? So, Ms.
Bickert, why, when the Whistleblower Association reveals that Facebook is establishing, through its AI platform, al-Qaeda community groups, such as this one, a "local business," al-Qaeda in the Arabian Peninsula, with 217 followers, I have it right here on my phone, flagged by the Whistleblower Association: it is considered the most active of al-Qaeda's branches, or franchises, that emerged due to weakening central leadership, and is a militant Islamist organization primarily active in Yemen and Saudi Arabia. Why is this still up? We have every right, right now, to feel as if you are not taking this seriously, and by "we" I do not mean Congress, I mean the American people. Thank you. Thank you. The chair recognizes the gentlelady from Florida, Ms. Demings, for five minutes. Thank you so much, Mr. Chairman. We have already talked about the massacre at Christchurch, and we also know that it was law enforcement who notified Facebook about what was going on. Ms. Bickert, I'd like to know if you could talk a little bit about your work and relationship with law enforcement, and share some of the specific things that you are doing to further enhance your ability to work with law enforcement, to continue to work to prevent incidents like this from happening again. Thank you, Congresswoman. We have special points of contact from our law enforcement engagement team, so people from within our company, usually former law enforcement, who are assigned to each agency, and those relationships are well-functioning and are the reason that New Zealand law enforcement were able to reach out to us; once they did, within minutes... You surely believe that they would have been able to reach out to you even if you didn't have a law enforcement team, right? Wouldn't that have been part of the responsibility of any law enforcement agency that saw what was happening live on your platform, to notify you? Congresswoman, we want to make it very easy so that if they see something, they know exactly where to go. It's also the reason, so
here with New Zealand, when they reached out to us, we responded within minutes. We also have an online portal through which they can reach us, and that is manned 24 hours a day, so if there's any kind of emergency, we're on it. Finally, if we see that there's an imminent risk of harm, we proactively reach out to them. I will also tell you, any time that there is a terror attack or some sort of mass violence in the world, we proactively reach out to law enforcement to make sure that if there are accounts we should know about, or names of victims, any sort of action that we should be taking, we are on it immediately. Okay, moving right along. Mr. Pickles, you said that we will not solve the problems by removing content alone; is that correct, what you said? Okay. And I know that most companies do a pretty good job in terms of combating or fighting child exploitation or pornography, and I'd just like to hear you talk a little bit about your efforts to combat terrorism and share some of the similarities, because we can't solve the problems by just taking down content alone. So if you could just share some of the similarities in terms of your efforts combating terrorism along with your efforts to combat child pornography. I know you put a lot of resources into combating child pornography, rightly so, but could you talk about the similarities in the two goals? Absolutely. There are similarities and differences. In this limited space, we are able to use similar technology to look for an image we've seen before, so if that appears again we can proactively detect the image and stop it being distributed, and then, critically, work with law enforcement to bring that person to justice. We work with the National Center for Missing and Exploited Children, who work with law enforcement around the world, so that process of discovering content and working with law enforcement is seamless, because, I think, that matters particularly for child sexual exploitation but also for violent threats. What about for
combating terrorism? In either case, if someone is posting that content, removing the content is our response, but there's a law enforcement response there as well, which holds people to account and potentially prosecutes them for criminal offenses, and that working in tandem between the two is very important. We have a similar industry body that shares information, and we also work with governments to share threat intelligence and analysis of trends, so that we can make sure we're staying ahead of bad actors. But the biggest area of similarity is that the bad actors never stay the same; they're constantly evolving, so we have to constantly be looking for the next opportunity to improve. Okay, all right, thank you. At the beginning of this conversation, the Chairman asked a question about, or referenced, the video of the Speaker and why some of you removed it and some did not. And Mr. Slater, I was so pleased to hear your answer, which was: you look for deceptive practices; it was deceptive, you removed it, correct? And you said that the social media platforms' free speech right is their ability to decide what is posted and what is not posted; it's just that simple, right? They can decide what is posted and what is not posted. So Mr. Slater, if you could just talk a little bit more about your process: it was deceptive, you took it down. Happy to, Congresswoman, an important question. We have Community Guidelines, and one of those guidelines is about deceptive practices. We review each bit of content thoroughly to determine whether it is violative or whether it may fit into an exception, education, documentary, and so on and so forth, and we do that on an individualized basis to see if the context has been met, and we present those guidelines publicly on our website for anyone to read. Thank you very much. Mr.
Chair, I yield back. The chair recognizes the gentleman from Texas, Mr. Taylor, for five minutes. Thank you, Mr. Chairman. Just a quick question: is Google an American company? Congressman, we are headquartered in California, yes. Are you loyal to the American republic? Is that something you think about, or do you think of yourselves as an international company? We build products for everyone. We have offices all across this country, have invested heavily in this country, and are proud to be founded and headquartered in this country. So if you found out that a terrorist organization was using Google products, would you stop that? Do you have a policy on that? We have a policy, Congressman, of addressing content from designated terrorist organizations, to prohibit it and make sure it's taken down. I'm not asking about content. I'm saying, if you found that the al-Nusra terrorist organization was using Gmail to communicate inside that terrorist organization, would you stop that? Do you have a policy on that? If you don't have a policy, that's fine, I'm just trying to understand. Certainly, where appropriate, we will work with law enforcement to provide information about relevant threats, illegal behavior, and so on, and similarly we will respond to valid requests for information from law enforcement. I'm not asking if you respond to subpoenas, and I appreciate that; it's good to hear you aim to be legal. What I'm asking is: if a terrorist organization uses a Google product and you know about that, do you allow that to continue, or do you have a policy? In appropriate circumstances, and where we have knowledge, we would terminate a user and provide information to law enforcement. Okay, so, you'll forgive me, your answer is a little opaque, and I'm still trying to figure this out. So if a terrorist organization is using a Google product, do you have a policy about what to do about that? Thank you, Congressman; I'm attempting to articulate that policy. I'd be happy to come back to you with further
information if it's unclear. Okay. Would the gentleman yield? Sure. Listening to the answer about referring it to law enforcement, I think that's an appropriate response, because where there is a suspicion that criminal activity is afoot, you would want to refer it to law enforcement and let law enforcement make the call on that. So just to, okay, maybe help you a little bit with that particular portion of it, but back to the policy. Thank you, appreciate it. So, to kind of follow up with that: the Islamic Republic of Iran is the largest state sponsor of terrorism in the world, right? They are, you know, pieces of the Islamic Republic are terrorist organizations. Do you have a specific ban on that terrorist organization and their ability to use your Google products? Congressman, we have prohibitions on designated terrorist organizations using products, uploading content, and so on. Okay, so you seek to ban terrorist organizations from using Google products. I'm not trying to put words in your mouth, I'm just trying to understand your position: designated terrorist organizations, you have prohibitions on that sort of organization. And I'm not asking about content; I'm asking about the services you provide, right? You provide Gmail, you provide a calendar, you have a whole host of different services that people can use. I'm trying to ask about the service, not the content. I realize that the focus of this hearing is about content, which is why you're here, but I'm asking about the actual services. To the best of my knowledge, if we were to have knowledge, and again, as my colleagues have said, these bad actors are constantly changing their approaches, trying to game the system and so on, but we do everything we can to prohibit that sort of illegal behavior from those sorts of organizations. Do you have screens set up to try to figure out who the users are, to try to, you know, pierce the veil, so to speak, into an anonymous account, figure out where that is or who that might be, where
it's sourcing from? Are you looking at that? Is that part of how you operate as an organization, that Google does? Absolutely, Congressman; we use a combination of automated systems and threat analysis to try and ferret out behaviors that may be indicative in that way. All right, thank you. I appreciate your answers, and with that, Mr. Chairman, I appreciate the panel for being here; it is an important topic, and thank you. Thank you very much. The chair now recognizes the gentlelady from Nevada, Ms. Titus, for five minutes. Thank you, Mr. Chairman. We've heard a lot about incidents, but we haven't mentioned much about one that occurred in my district of Las Vegas. This was the deadliest shooting in the United States in modern history: October 1st, 2017, a gunman opened fire on a music festival, and after that attack there was a large volume of hoaxes, conspiracy theories, and misinformation that popped up all across your platforms, including about a mistaken identity of the gunman, his religious affiliation, and some fake missing victims. Some individuals even called it a false flag. In addition, when you put up a safety check site on Facebook, where loved ones could check in to see who was safe and who wasn't, there were all kinds of things that popped up, like links to spam websites that solicited Bitcoin donations; they peddled false information claiming that the shooter was associated with some anti-Trump army, just a lot of myths there where people were trying to make contact. I wonder if you have any specific policies or protocols or algorithms to deal with the immediate aftermath of a mass shooting like this, all three of you. Thank you, Congresswoman, and let me say that the Las Vegas attack was a horrible tragedy. We have improved since then, but I want to explain what our policies were even then and how we've gotten better. So with the Las Vegas attack, we removed any information praising that attack or the shooter,
and we also took steps to protect the accounts of the victims. Sometimes in the aftermath of these things we'll see people try to hack into accounts or do other things like that, so we take steps to protect the victims, and we also worked very closely with law enforcement. Since then, one area where we've gotten better is crisis response in the wake of a violent tragedy. So for instance, with Christchurch, you had these companies at the table, and others, communicating in real time, sharing with one another URLs, new versions of the video of the attack, and so forth, and it was literally a real-time, 24-hour operation where we were sharing. In that first 24 hours, on Facebook alone, we were able to stop 1.2 million versions of the video from hitting our site. So we've gotten a lot better technically, but this is an area where we'll continue to invest. Thank you. And as you've just heard, I think one of the challenges we have in this space is that different actors will change their behaviors to try and get around our rules. One of the things that we saw after Christchurch, which was concerning, was people uploading content to prove the event had happened. The suggestion was that because companies like ours were removing content at scale, people were calling that censorship, so there were people uploading content to prove the attack had happened. That's a challenge that we haven't had to deal with before, and it is something we're very mindful of, and we need to figure out the best way to combat that challenge. We also have policies against the abuse and harassment of the survivors and victims and their families, so if someone is targeting someone who's been a victim or a survivor and is denying the event took place, or is harassing them because of another factor like political ideology, we would take action for the harassment in that space. And then finally, there's the question of how we work with organizations to spread the positive message going forward. That's where, you
know, if there are groups in your communities who are affected by this, working with the victims to show the positivity of your community, we begin to work with those organizations, wherever they are in the U.S., to spread that message of positivity. Yes, thank you, Congresswoman. This is of the utmost seriousness. That was a tragic event, I think, for our country, for society; personally, as someone who has lived in both Las Vegas and New Zealand, both of these events I hold deeply in my heart. We take a three-fold approach to the sort of misinformation and other conduct that you were talking about. We try, on YouTube, to raise up authoritative sources of information, particularly during those breaking news events, to make sure that authoritative sources outpace those who might wish to misinform and so on. We will remove denials of well-documented violent events, or people who are spreading hate speech towards the survivors of that event, and we will also seek to reduce exposure to content that is harmful misinformation, including conspiracies and the like. Well, these people have already been victimized in the worst sort of way; you hate to see them then become victims of something that occurs over the Internet. One thing we heard from law enforcement was that you might think about, and I think this relates to what you were saying, Mr.
Slater, using your algorithms to elevate posts that come from law enforcement, so people seeking help go to those first, as opposed to some of this other information that just comes in randomly. In your work with law enforcement, will you consider that? I know you were addressing the Chief's questions earlier. Thank you, Congresswoman. That's something that we can explore with law enforcement. We certainly try to make sure that people have accurate information after attacks. Our systems didn't work the way we wanted them to after Las Vegas; we learned from that, and I think we're in a better place today. I'd appreciate it if you'd look into that; I think law enforcement would too. Thank you, Mr. Chairman. Thank you. The chair recognizes the gentleman from Mississippi for five minutes. Thank you, Mr. Chairman. First of all, to our representatives from Facebook, Google, and Twitter: I want to thank you for being here today, and I want to thank you for previously appearing for a closed briefing that we had earlier this year. And so we seek to continue to examine this complex issue of balancing First Amendment rights against making sure that content on social media does not promote terroristic activity. Professor Strossen, you were not here during that closed briefing, so I want to ask a couple of questions of you. In your written testimony, you highlight the potential dangers associated with content moderation, even when done by private companies with their First Amendment rights. You make a case for social media companies to provide free speech protections to users. You even state, in the conclusion of your written testimony: how to effectively counter the serious potential adverse impact of terror content and misinformation is certainly a complex problem; while restricting such expression might appear to be a clear, simple solution, it is in fact neither, and moreover it is wrong. Now, I know that was the conclusion of an 11-page report that you
provided, but could you just briefly summarize that for the purpose of this hearing? Thank you so much, Congressman. Yes, the problem is the inherent subjectivity of the standards. No matter how much you articulate them, and I think it's wonderful that Facebook and the other companies have now, fairly recently, shared their standards with us, you can see that it is impossible to apply them consistently to any particular content; reasonable people will disagree. The concept of hate, the concept of terror, the concept of misinformation are strongly debated; one person's fake news is somebody else's cherished truth. Now, a lot of attention has been given to the reports about discrimination against conservative viewpoints in how these policies are implemented. I want to point out that there also have been a lot of complaints from progressives and civil rights activists and social justice activists complaining that their speech is being suppressed. And what I'm saying is that no matter how good the intentions are, no matter who is enforcing it, whether it be a government authority or whether it be a private company, there is going to be, at best, unpredictable and arbitrary enforcement, and at worst, discriminatory enforcement. And let me ask you, as an expert on the First Amendment: do you feel that content moderation by social media companies has gone too far? I think that, first of all, they have a First Amendment right, and I think that's really important to stress. But given the enormous power of these platforms, the Supreme Court said in a unanimous decision two years ago that this is now the most important forum for the exchange of information and ideas, including with elected officials, those who should be accountable to We the People. So if we do not have free and unfettered exchange of ideas on these platforms, for all practical purposes we don't have it, and that is a threat to our democratic republic, as well as to individual liberty. And there's a lot
that these platforms can do in terms of user empowerment, so that we can make our own choices about what to see and what not to see, and also information that will help us evaluate the credibility of the information being put out there. And finally, Professor Strossen, do you have any recommendations that you feel would help balance individuals' First Amendment rights against trying to protect social media from terrorists being able to use it as a platform? First, recommendations to the social media companies, and then, are there any recommendations that you would have for this body, things that Congress should consider that would help us as we navigate this very difficult situation? I think that Congress's oversight, as you're exercising very vigorously, is extremely important. I think encouraging, but not requiring, the companies to be respectful of all of the concerns: human rights concerns of fairness and transparency and due process, as well as free speech, but also concerns about potential terrorism and dangerous speech. I actually think that the United States Supreme Court and international human rights norms, which largely overlap, have gotten it right. They restrict discretion to enforce standards by insisting that before speech can be punished or suppressed, there has to be a specific, direct, tight causal connection between the speech in that particular context and an imminent danger. And we never look at words alone, in isolation, to get back to the question that I was asked by the congresswoman, because you have to look at context. If in a particular context there is a true threat, there is intentional incitement of imminent violence, there is material support of terrorism, there are defamatory statements, there are fraudulent statements, all of that can be punished by the government, and therefore those standards should be enforced by social media as well. That would give us, in my view, exactly the right way to strike
the balance here. Thank you, Mr. Chairman, I yield back. Thank you very much. The chair recognizes the gentleman from Missouri, Reverend Cleaver, for five minutes. Thank you, Mr. Chairman. I have a little different approach than my colleagues. In 1989 I was a member of the city council in Kansas City, and the Klan had planned a big march in Swope Park; it's all still online, you can look at it. I fought against them, and the ACLU supported their right to march, and if I had passed an ordinance, I was also vice mayor at the time, if I had passed an ordinance, it would have been challenged in court. I'm not mad, I'm not upset; I was a former board member of the ACLU, and so I think that free speech has to be practiced even when it's painful. Now, for everybody else, in some ways I kind of feel sorry for you, but not enough to let you off the hook. I'm afraid for our country. I mean, we have entered an age where respected people, where people respect an alternative truth, and it's just so painful for me to watch, and I don't think I'm watching it in isolation: alternative truths, where people will just say something that's not true and continue to say it, and it doesn't matter. I saw it last night, where the President said Barack Obama started this border policy, and I tried, I'm correcting it, and what they did, and this is what I want you to consider, what one of the TV networks did is put up people making statements about what was happening, and Jeff Sessions when he had first announced the separation policy and so forth. And you know, the problem is, as Churchill said, a lie can travel halfway around the world before the truth puts on its shoes, and that is true. If we start a 21st-century new Bible, that should be one of the scriptures, because it's a fact, and the truth cannot always be uncontaminated by sprinkles of deceit. So you guys have a tough, a tough job; I don't want to make it seem like it's all on you. But you can damage, easily, our system of
government, and I think even beyond that our moral connections, depend on it a lot more than I realized. I spent five years in seminary, but I didn't realize this until recently: we depend significantly on shame. I mean, there are some things that laws can't touch, and so our society functions on shame, and when shame is dismembered, I'm not sure what else we have left. But what I would like for you to react to, and maybe even consider, is this: instead of taking something down, in some instances why not just put up the truth next to it? I mean the truth; I'm not talking about somebody else's response, I'm talking about the truth. I wish I could have brought you the video, where they said here's the lie and here's the truth. Anybody, help me.

Okay, so this is a very important issue, Congressman. Yes, absolutely. One of the things we've been trying to do is twofold with respect to harmful misinformation. One is, where there is a video that says, say, the moon landing didn't happen, or the earth is flat, the video may be up, but you will see a box underneath it that says here's a link to the Wikipedia page about the moon landing, or the Encyclopedia Britannica page, where you can go and learn more. I think that speaks to the sort of feature you're talking about.

You do that now?

We do that today, yes sir. And the other thing we try to do is reduce the exposure, the frequency of the recommendations, of information that might be harmful misinformation, such as those sorts of conspiracies.

Thank you.

I think you rightly highlighted that the interplay between what's on social media, what's in the news media, what's on TV, and how that cycle of information works together is a critical part of solving this. The one thing for Twitter, because we're a public platform, is that very quickly people are able to challenge, to expose, to say that's not true, here's the evidence, here's
the data. There is something incredibly important about these conversations taking place in public that I think, as we move into the information century, we need to bear in mind. Thank you, Congressman.

Thank you. Similar to what my colleague referenced: if there's something like misinformation that a third-party fact-checking organization has debunked, and we work with 45 of these organizations worldwide, they all meet objective criteria, they're all Poynter-certified, what we do is we actually take the articles from those fact-checkers and put them right next to the false content so that people have that context. And if you go to share some of that content, we say this content has been rated false by a fact-checker, and we link them to it. Similarly, when it comes to things like misinformation about vaccines, we are working with organizations like the CDC and the World Health Organization to get content from them that we can actually put next to vaccine-related misinformation on our site. We do think this is a really important approach, though it takes a lot of resources. Another thing we're trying to do, and this is similar to what Mr. Pickles mentioned, is empower those who have the best voices to reach the right audience on this, so we invest heavily in promoting counter-speech and truthful speech. Thank you.

Thank you, Mr.
Chairman. Thank you very much. Before we close, I'd like to insert into the record a number of documents. The first is several letters from stakeholders addressed to Facebook as well as Twitter and YouTube about hateful content on their platforms. The second is a joint report from the Centre for European Policy Studies and the Counter Extremism Project. The third is a statement for the record from the Anti-Defamation League. The fourth is copies of the community standards, as of this day, for Facebook, Twitter, and Google. Without objection, so ordered. I thank the witnesses for their valuable testimony and the members for their questions. The members of the committee may have additional questions for the witnesses, and we ask that you respond expeditiously in writing to those questions. The other point I'd like to make: Facebook, you were 30 hours late with your testimony, and staff took note of it; for a company your size, that was just not acceptable to the committee, so I want the record to reflect that. Without objection, the committee record shall be kept open for 10 days. Hearing no further business, the committee stands adjourned. We had grace under pressure; hopefully we'll see each other again.

6 thoughts on “Social Media Companies' Efforts to Counter Online Terror Content & Misinformation (EventID=109710)”

  1. I am completely in awe. I have no idea how to contact all of these men and women who stood up against these companies, but I want to give the sincerest thanks I can to each and every one of them. This was incredibly powerful, and I HATE politics. It's not a circle I want to be a part of. But seeing these people stand for not only their beliefs but also our Constitution has changed how I view the entire political sphere. All I've known about politics has been through social media, and it radically puts those men and women I saw working together on complete opposite extremes.

  2. Either Google can be sued for what they publish, or Google can be sued for who they censor. This "not a platform, not a publisher" double immunity has never been tolerated.

  3. Terrorism was defined as "government by intimidation" in the late 18th century, and it's still true today. Also defined as: "someone who uses violent action, or threats of violent action, for *POLITICAL PURPOSES*"

    Who else has a (false) "legitimate" monopoly on the initiation of violence other than Government? Government is, and of necessity must be, a coercive monopoly, for in order to exist it must deprive entrepreneurs of the right to go into business in competition with it, and it must compel all its citizens to deal with it exclusively in the areas it has pre-empted. Any attempt to devise a government which did not initiate force is an exercise in futility, because it is an attempt to make a contradiction work. Government is, by its very nature, an agency of initiated force. If it ceased to initiate force, it would cease to be a government and become, in simple fact, another business firm in a competitive market.

  4. Oh my…. this is EPIC! How much of America's present division and rage was intentionally caused by BIG TECH's politically biased agenda? How many wounds can we heal as a nation by understanding how much was forcefully distorted by google's social engineering? Is this not election interference…. the very thing the left claim to be "so afraid" of? 🤔
