Elon Musk has reached his limit with Rep. Adam Schiff (D-CA) and publicly scolded the Democrat for making “false” claims that “hate speech” has increased on Twitter.
Schiff took to Twitter with a list of figures purporting to show that “slurs” on the social media platform had risen dramatically.
It’s unclear where Schiff obtained the data, however, as his tweets provided no evidence to support the claims.
“On Elon Musk’s Twitter: Slurs against black people have tripled,” Schiff said.
“Slurs against women are up 33%.
“Slurs against Jewish people are up 61%.
“And slurs against gay men are up 58%.
“These numbers are abysmal – and unacceptable.
“Today, @RepMarkTakano and I are demanding action.”
Schiff posted the thread on the morning of December 8.
Musk ignored him for most of the day before setting the congressman straight in the late afternoon.
Musk said: “False, hate speech impressions are actually down by 1/3 for Twitter now vs prior to acquisition @CommunityNotes.”
— Elon Musk (@elonmusk) December 8, 2022
Schiff also sent Musk a letter that said:
As Members of Congress, we are deeply concerned about the recent rise in hate speech on Twitter. Analysis by independent researchers indicates Twitter has become an increasingly toxic place for our constituents, and we are reaching out to you to understand the actions Twitter is taking to combat this increase in harmful content.
Although you tweeted that one of your goals as Chief Executive Officer of Twitter was strong content moderation on the platform, the results of your leadership have been the opposite. Multiple reports have shown that since you became CEO in late October, hate speech has dramatically increased on Twitter. Under your leadership, there has been an extreme spike in the number of tweets that include slurs, the level of engagement with these tweets, and the popularity of spreading this harmful rhetoric.
According to the Center for Countering Digital Hate (CCDH), the number of tweets containing slurs has grown since you have become CEO compared to the 2022 average. Slurs against Black people have tripled in daily mentions.
Slurs against women have increased 33 percent from the 2022 average mentions, and slurs against gay men have increased by 58 percent. Before you assumed the role of CEO, engagement with these tweets averaged 13.3 replies, retweets, or likes. Now, engagement with slurs has increased 273 percent, with the average number of replies, retweets, or likes averaging 49.5 on tweets containing hate speech.
Of particular concern to us is the rise in anti-LGBTQ+ rhetoric on Twitter under your supervision. Based on data analysis, anti-LGBTQ+ extremists are picking up followers at quadruple the pace since the change in leadership.
With increased followers, these actors are seeing wider circulation of their hateful tweets on the platform, which we fear might spark even more real-world violence against the LGBTQ+ community.
After the Colorado Springs Shooting, in which the LGBTQ+ community was specifically targeted, we saw anti-LGBTQ+ hate become viral on Twitter. Research found that tweets from prominent extremists have been “viewed tens of millions of times in the wake of the Colorado Springs Shooting” and that just 20 of the most prominent hateful tweets “can be estimated to have picked up a total of 35 million views.”
You tweeted that the “New Twitter policy is freedom of speech, but not freedom of reach. Negative/hate tweets will be max deboosted & demonetized” but we have yet to see any evidence of follow-through on Twitter.
We have also seen a significant increase in antisemitism on the platform. The Anti-Defamation League recently found that there was a “61.3% increase in the volume of tweets (excluding retweets) referencing ‘Jews’ or ‘Judaism’ with an antisemitic sentiment” since you became CEO.
Simultaneously, Twitter has decreased its content moderation, as researchers found that Twitter went from “taking action on 60% of antisemitic tweets to taking action on only 30%.” We are glad to see you have suspended Ye’s account following his antisemitic posts, but this step must be paired with further decisive and preventative action from your platform.
We find the rise of extremist actors and hate speech on Twitter demonstrably at odds with your company’s statement that human safety is a “top priority.” And despite your assertion that there has been a decline in “hate speech impressions” from the “pre-spike levels,” you have not provided data showing how you are measuring hate speech that would allow outside researchers to validate your assessment. In direct contrast, CCDH’s social media analytic tools found that the number of tweets containing slurs and engagements are still above the average 2022 levels. It appears that a byproduct of your company’s “embracing public testing” approach is harm to your users.
With rapidly changing and unclear policies on content moderation on Twitter, amid documented negative trends and public evidence, we are concerned about the individual and community harm arising from Twitter, including how that could spill from online into real life. We are seeking further information about your plans for content moderation and the capability of your workforce to implement and enforce your policies.
As part of our ongoing oversight efforts, we request answers to the following questions, as well as a briefing to discuss other areas of oversight:
1. What steps is your company taking in response to the recent rise in hate speech on your platform, and how do you plan to make these decisions available to the public? Additionally, what is your timeline for rolling out any of these changes?
2. Your company has stated that human safety is a priority, but anti-LGBTQ rhetoric has increased since the Colorado Springs Shooting. We have also seen a distinct rise in antisemitism on the platform. What is Twitter’s plan to increase safety for its users, and more specifically the LGBTQ+ community and the Jewish community?
3. What is the current process for enforcing content moderation on your platform? How do you plan to make these processes transparent and available to the public and researchers?
4. With the recent drastic reduction in the number of Twitter employees, including specialist content moderators, engineers, and safety team members, what is your company’s current capability and capacity to handle the risks arising from the extreme rise in hate speech, hate actors, and the growth of hate communities? What is the current risk-assessment process and response timeline for viral hate speech and disinformation?
Thank you for your attention to this matter.