Former head of Twitter's Trust & Safety department discusses Twitter's content moderation policy and the turmoil of the first two weeks under Elon Musk's leadership

Yoel Roth, former head of Trust & Safety, who led Twitter's efforts against harmful information until Elon Musk took over, spoke about Twitter before and after Musk became CEO.

Banning Donald Trump and meeting Elon Musk: Former Twitter safety chief gives inside account - Poynter

Mr. Roth led Twitter's content moderation and security efforts and, with a team of about 220 employees, developed strategies to combat harmful misinformation. Roth left the company a few weeks after Musk took over as CEO and has since worked as a technology policy researcher at the Goldman School of Public Policy at the University of California, Berkeley.

Mr. Roth took the stage at 'GlobalFact 10', a fact-checking summit held in Seoul, where Aaron Sharockman of the political fact-checking outlet PolitiFact interviewed him in front of hundreds of audience members about how Twitter is faring in the fight against misinformation and disinformation.

Aaron Sharockman (hereinafter Sharockman):
I heard the Trust & Safety department had about 200 people, and that you had 12 direct reports. What happened to them?

Yoel Roth (hereinafter Roth):
The people who create the rules and policies are only a tiny fraction of the total, which includes the contractors who moderate content on the front lines. Of the 200 people who reported to me directly or indirectly, only a few are still with the company. Only one person remains on the core team.

The downsizing was pretty severe.

What are Twitter's current Trust & Safety programs and activities?

I'm sorry to say this, but they don't exist. It was truly astonishing to see the teams wiped out and years of effort set back in such a short period of time.

Twitter used to be great at delivering breaking news, but that strength seems to be fading these days.

One of the things Jack Dorsey did as CEO was move Twitter from the 'social networking' category to the 'news' category. Exactly: Twitter is built on an ecosystem of people who post news and help spread it. But much of that ecosystem has crumbled so quickly that Twitter is barely recognizable. I think Twitter's role in the world is now fundamentally different from what it was over the last 15 years.

Mr. Musk completed the acquisition of Twitter on October 27, 2022, and the very next day key people such as the CEO and the policy chief were fired one after another. You were initially endorsed by Mr. Musk, but you left Twitter within two weeks. What were those 13 days with Musk as CEO like, up until you left?

Roth:
The word 'whirlwind' doesn't even begin to describe it. Everything was uncertain, from basic security procedures to figuring out who my boss even was. Is Elon Musk my boss? If so, what does that mean? What does he want me to do? Can I do what he wants? Thousands of employees were asking themselves these questions.

In an ambiguous situation where no one knew what would happen to Twitter next, employees had to tackle all kinds of challenges, from trolls posting racist content en masse to major elections in the United States and Brazil.

I remember an interesting moment from my first conversation with Mr. Musk. He completely caught me off guard when he made clear that he didn't want Twitter to be a source of potential violence around the Brazilian election, and it turned out we shared the same view. At that moment, I had no doubt that Mr. Musk was worthy of being my boss.

Let's talk about Twitter freezing Donald Trump's account. Did Twitter have something like a war room for the Trump case? Some say the suspension came too late, and I'm curious what the process was like in such unprecedented circumstances.

Roth:
Some have speculated that Trump wasn't banned sooner for financial reasons, but in my experience that is not the case. For Twitter, a platform where news and current events happen, there was a vision that the public has an interest in accessing the speech of influential figures, and a reality that such content could do a great deal of harm. Twitter was caught between the two, stuck trying to reconcile its desire to protect content in the name of the public interest with its desire to mitigate harm.

Solving this problem was a real challenge for Twitter. In the years after Trump took office, Twitter did little to regulate his content. In 2018 or 2019, Twitter introduced a public interest policy that responded to harmful content by displaying a warning message rather than removing it. But that amounts to leaving up content that is substantially harmful. Frankly, I think Twitter was scared.

In May 2020, when Trump posted a series of messages attacking California Governor Gavin Newsom, Twitter put a fact-check label on Trump's tweets for the first time. Trump responded by holding a press conference, holding up the cover of the New York Post featuring me, and signing an executive order condemning social media censorship. That first fact-check is what really put Twitter's neck on the line.

Since Election Day 2020, we moderated over 140 posts from Trump's account. We then decided to restrict Mr. Trump's account on January 6, 2021, and finally to ban it on January 8. I can tell you that settling on this moderation approach was a long and painful process.

I'd like to ask a few questions about Twitter's content moderation policy. What are your thoughts on the use of fact-checkers? Ultimately, you brought in the Associated Press as a fact-checker, and later AFP, until Musk bought the company.

I think there were several factors. The first was money. When we talk about social media we generally mean companies like Facebook and YouTube, but the financial realities of these companies vary enormously. Twitter was perhaps the largest of the small companies, especially relative to its influence over journalists and politicians. But Twitter didn't have resources like Meta or Google. So when we started developing our misinformation strategy in 2020, doing it the way Facebook did was impossible. It simply wasn't realistic for the company; we couldn't get the budget.

Aside from the financial considerations, we had to ask, 'Who is responsible for making these decisions?' What I've always found interesting about Facebook's decision-making structure is that many of the decisions about when to label a post as misinformation are made by third-party fact-checkers. That puts Meta in a very comfortable position: the platform can escape responsibility by saying, 'The labeling isn't our decision; we just apply it.'

There are probably Meta people in this room, so I'm not attacking Meta's decision. But I think it's worth considering which approach is desirable from a company's point of view and whether it is worth spending money on. Twitter took a different stance from Meta. We wanted content moderation decisions to be "ours." If we intervened and labeled or removed something, we were responsible for that decision. If people criticized the decision, they should criticize Twitter.

As a result, we received a lot of criticism for some of our decisions, but we still felt it wasn't appropriate to pass the responsibility onto others.

Over the past three days, one of the big topics at this summit has been online harassment. Fact-checkers, journalists, and you yourself all face increasing threats and harassment online. Musk aside, is there anything platforms could do that they aren't actually doing?

I wish there were. I think the original sin of social media was its failure to deal with harassment. This challenge has been recognized at least since Gamergate, which showed the power of mobs to harass, intimidate, and silence people. I believe one of the biggest failures of content moderation and social media policy is the failure to address harassment, and harassment is genuinely hard to deal with.

Suppose you're writing a policy for a social media company about insults: a policy covering people who say mean things and insult others. Say an account posts ten insults at the same person. That clearly crosses the line, and the account gets banned. This is roughly what social media policy looks like today.

But instead of one account making ten posts, imagine ten accounts making one post each. What should be done, and what would the criteria be? Should the company take action? Would that be censorship? I'm not trying to justify the failure, but the reason companies have struggled with harassment is that it is difficult to establish, with clear evidence, whether a problematic post is part of systematic harassment or just someone saying something mean.
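The asymmetry Roth describes can be made concrete with a toy sketch. This is purely illustrative, not any actual platform's system: the thresholds, function names, and data shapes are all hypothetical. It shows why a per-account rule catches the lone troll but is blind to a dogpile, which only becomes visible when posts are aggregated by target instead.

```python
from collections import Counter

# Hypothetical illustration only (not a real Twitter/Meta system).
# Each insult is recorded as a (author, target) pair.

PER_ACCOUNT_LIMIT = 10  # ban an account after this many insults
PER_TARGET_LIMIT = 10   # flag a target receiving this many insults in total

def flag_accounts(insults, limit=PER_ACCOUNT_LIMIT):
    """Per-account rule: return authors who posted >= limit insults."""
    counts = Counter(author for author, _ in insults)
    return {author for author, n in counts.items() if n >= limit}

def flag_targets(insults, limit=PER_TARGET_LIMIT):
    """Per-target rule: return targets receiving >= limit insults,
    regardless of how many distinct accounts sent them."""
    counts = Counter(target for _, target in insults)
    return {target for target, n in counts.items() if n >= limit}

# Case 1: one account posts ten insults at one person.
single_troll = [("troll", "victim")] * 10

# Case 2: ten accounts each post one insult at the same person.
dogpile = [(f"acct{i}", "victim") for i in range(10)]
```

Running the per-account rule flags `troll` in the first case but flags no one in the dogpile, while the per-target aggregation flags `victim` as under coordinated attack in both. The hard part in practice, as Roth notes, is proving the second pattern is coordination rather than coincidence.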

However, this is slowly starting to change. One of the policies I'm most excited about was recently introduced at Meta: basically, Meta will take action if users use Meta's tools to engage in systematic harassment. Addressing systemic harassment, rather than only individual harassment, is one of the areas where I hope social media will continue to invest.

in Web Service, Posted by log1p_kr