Elon Musk has vowed to crack down more harshly than his predecessors on so-called “hate speech”, and will operate a “zero tolerance” policy on content deemed offensive by a select group of moderators.
On Tuesday, Musk revealed that a small group of far-left “civil society leaders,” including representatives of the ADL, will be given tools to ban users and delete content that they deem to be “hateful.”
Musk also back-pedalled on his promise to unban controversial users such as Alex Jones, Donald Trump and Milo Yiannopoulos, saying he no longer believes that individuals who were de-platformed for violating Twitter’s rules should be allowed back until the ADL has reviewed their cases.
One of Musk’s senior employees tasked with overseeing moderation on Twitter, Yoel Roth, has a history of calling Republicans “Nazis” and posting anti-Trump content on the platform:
Teslarati.com reports: Jason Calacanis, a host of the All-In podcast, is working with Twitter’s new leadership team to help Elon Musk make the necessary changes to the platform. Calacanis shared a tweet by Twitter’s Head of Safety & Integrity, Yoel Roth, saying that the coordinated surge in hateful conduct had been quickly thwarted.
In his thread, Roth gave a clear update on how Twitter is addressing the surge in hateful conduct. This marks a change for Twitter: many users, myself included, have experienced hateful conduct on the platform and seen Twitter respond slowly to it. Roth’s full thread reads as follows:
“Since Saturday, we’ve been focused on addressing the surge in hateful conduct on Twitter. We’ve made measurable progress, removing more than 1500 accounts and reducing impressions on this content to nearly zero. Here’s the latest on our work and what’s next.”
“Our primary success measure for content moderation is impressions: how many times harmful content is seen by our users. The changes we’ve made have almost entirely eliminated impressions on this content in search and elsewhere across Twitter.”
“Impressions on this content typically are extremely low, platform-wide. We’re primarily dealing with a focused, short-term trolling campaign. The 1500 accounts we removed don’t correspond with 1500 people; many are repeat bad actors.”
“Impressions don’t tell the whole story. These issues aren’t new, and the people targeted by hateful conduct aren’t numbers or data points. We’re going to continue investing in policy and technology to make things better.”
“Many of you have said you’ve reported hateful conduct and received notices saying it’s not a violation. Here’s why and what we’re doing to fix it:”
“To try to understand the context behind potentially harmful Tweets, we treat first-person, and bystander reports differently. First-person: This hateful interaction is happening to or targeting me. Bystander: This is happening to someone else.”
“Why? Because bystanders don’t always have full context, we have a higher bar for bystander reports in order to find a violation. As a result, many reports of Tweets that in fact, do violate our policies end up marked as non-violative on first review.”
“We’re changing how we enforce these policies, but not the policies themselves, to address the gaps here.”
“You’ll hear more from me and our teams in the days to come as we make progress. Talk is cheap; expect the data that proves we’re making meaningful improvements.”