
Twitter exec says he’s moving fast on moderation as harmful content rises


Elon Musk’s Twitter is leaning heavily on automation to moderate content, doing away with certain manual reviews and favoring restrictions on distribution rather than removing certain speech outright, its new head of trust and safety told Reuters.

Twitter is also more aggressively restricting abuse-prone hashtags and search results in areas including child exploitation, regardless of potential impacts on “benign uses” of those terms, said Twitter’s vice president of trust and safety product, Ella Irwin.

“The biggest thing that has changed is the team is fully empowered to move fast and be as aggressive as possible,” Irwin said Thursday, in the first interview a Twitter executive has given since Musk’s acquisition of the social media company in late October.

Her comments come as researchers report a surge in hate speech on the social media service, after Musk announced an amnesty for accounts suspended under the company’s previous leadership that had not broken the law or engaged in “egregious spam.”

The company has faced pointed questions about its ability and willingness to moderate harmful and illegal content since Musk cut half of Twitter’s staff and issued an ultimatum to work long hours that resulted in the loss of hundreds more employees.

And advertisers, Twitter’s main source of revenue, have abandoned the platform over concerns about brand safety.

On Friday, Musk promised “significant strengthening of content moderation and protection of freedom of expression” in a meeting with French President Emmanuel Macron.

Irwin said Musk encouraged the team to worry less about how its actions would affect user growth or revenue, saying safety was the company’s top priority. “He emphasizes that every day, multiple times a day,” she said.

The safety approach Irwin described reflects, at least in part, an acceleration of changes that had already been planned since last year around Twitter’s handling of hateful conduct and other policy violations, according to former employees familiar with that work.

One approach, captured in the industry mantra “free speech, not free reach,” involves leaving up certain tweets that violate company policies but barring them from appearing in places like the home timeline and search.

Twitter has long deployed such “visibility filtering” tools around misinformation and had already incorporated them into its official hateful conduct policy before the Musk acquisition. The approach allows for freer speech while reducing the potential harms associated with viral abusive content.

The number of hateful tweets on Twitter rose sharply in the week before Musk tweeted on Nov. 23 that impressions, or views, of hate speech were declining, according to the Center for Countering Digital Hate, one example of researchers pointing to the prevalence of such content even as Musk touts a reduction in its visibility.

Tweets containing anti-Black slurs that week were triple the number seen in the month before Musk took over, while tweets containing a slur against gay men rose 31%, the researchers said.

‘MORE RISKS, MOVE FAST’

Irwin, who joined the company in June and previously held security roles at other companies including Amazon.com and Google, rejected suggestions that Twitter did not have the resources or the will to protect the platform.

She said the layoffs did not significantly affect full-time employees or contractors working in what the company designates its “Health” divisions, including in “critical areas” like child safety and content moderation.

Two sources familiar with the cuts said more than 50% of the Health engineering unit had been laid off. Irwin did not immediately respond to a request for comment on that assertion, but previously denied that the Health team was seriously affected by the layoffs.

She added that the number of people working on child safety had not changed since the acquisition and that the team’s product manager was still in place. Irwin said Twitter had backfilled some positions for people who left the company, though she declined to provide specific figures on the extent of the turnover.

She said Musk focused on using more automation, arguing that the company had erred in the past by using time-consuming and labor-intensive human reviews of harmful content.

“He encouraged the team to take more risks, move fast, secure the platform,” she said.

On child safety, for example, Irwin said Twitter had moved toward automatically deleting tweets reported by trusted figures with a history of accurately flagging harmful posts.

Carolina Christofoletti, a threat intelligence researcher at TRM Labs who specializes in child sexual abuse material, said she recently noticed Twitter removing some content as quickly as 30 seconds after she reported it, without acknowledging receipt of her report or confirming its decision.

In Thursday’s interview, Irwin said Twitter removed about 44,000 accounts involved in child safety breaches, in collaboration with cybersecurity group Ghost Data.

Twitter is also restricting hashtags and search results frequently associated with abuse, such as those aimed at looking up “teen” pornography. Past concerns about the impact of such restrictions on permitted uses of the terms were gone, she said.

The use of “trusted reporters” was “something we’ve discussed in the past on Twitter, but there was some hesitancy and, frankly, just a little bit of a lag,” Irwin said.

“I think we now have the ability to actually move forward with things like that,” she said.
