Twitter’s new pitch for harassment protection settings defers to trolls for some reason

September 27, 2021

Twitter isn’t exactly great at keeping jerks out of people’s replies, as anyone who has ever had a tweet “do numbers” can surely attest. But what if — what if — things were different?

That’s what Paula Barcante, a designer for the social media company, teased in a Friday thread soliciting user feedback on some new ideas. The newly conceived, not-yet-implemented “Filter” and “Limit” controls would add setting switches designed to regulate the flow of tweets directed at account holders.

The two switches are fairly self-explanatory. Filter would watch for harmful or spam replies and hide them from everyone other than each problem tweet’s author. Limit takes this a step further, preventing accounts “that tend to use harmful language or send repetitive, uninvited” tweets from replying to accounts that have the setting switched on.
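To make the distinction concrete, here’s a minimal sketch of that logic in Python. Every name in it is invented for illustration; `is_harmful` and `is_repeat_offender` stand in for whatever classifier or account history Twitter would actually consult.

```python
from dataclasses import dataclass

@dataclass
class Account:
    filter_enabled: bool  # hide harmful/spam replies from everyone but their author
    limit_enabled: bool   # block replies from accounts with a bad track record

def is_harmful(reply_text: str) -> bool:
    """Stand-in for whatever harmful/spam classifier would be used."""
    return "you suck" in reply_text.lower()  # toy heuristic

def is_repeat_offender(author_id: str) -> bool:
    """Stand-in for an account-history lookup."""
    return False  # stub

def handle_reply(recipient: Account, author_id: str, reply_text: str) -> str:
    # Limit: accounts with a bad track record can't reply at all.
    if recipient.limit_enabled and is_repeat_offender(author_id):
        return "blocked"
    # Filter: the reply is posted, but visible only to its own author.
    if recipient.filter_enabled and is_harmful(reply_text):
        return "hidden"
    return "visible"
```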

“Filter and Limit would be all about empowering you to proactively prevent potentially harmful interactions and letting you control the tone of your conversations,” Barcante wrote in the thread. “Disagreements, debates, and criticism are still allowed.”

Barcante doesn’t say it outright, but her thread suggests that the two settings would depend to some extent on a database of bad Twitter actors. Maybe “Filter” could be powered by an AI brain, but it’s hard to imagine how “Limit” would shut down replies from accounts “that tend to” behave in a certain way without having a list to draw from.
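Purely as speculation about what “a list to draw from” might look like, a rolling strike count per account would be one simple mechanism. Nothing below comes from Twitter; the threshold and every name are invented for illustration.

```python
from collections import defaultdict

# Speculative sketch: track how often each account's replies get flagged.
STRIKE_THRESHOLD = 3  # invented number; Twitter hasn't published one

strikes: defaultdict[str, int] = defaultdict(int)

def record_strike(author_id: str) -> None:
    """Call this whenever a reply from the account is flagged as harmful."""
    strikes[author_id] += 1

def tends_to_be_harmful(author_id: str) -> bool:
    """An account 'tends to' be harmful once it crosses the threshold."""
    return strikes[author_id] >= STRIKE_THRESHOLD
```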

Strangely, however, the decision to use either feature (as they’re currently designed) would be broadcast up front to all readers. That means a would-be troll would know beforehand that they’re about to tangle with someone who might never see their reply, because a setting is switched on.

Barcante’s thread suggests it works this way because the warning could encourage a would-be troll to rethink their response, and perhaps engage more respectfully. If that is the case, it’s reasoning that might have made sense in the first years after Twitter’s 2006 launch. But now? In 2021? Not so much.

Tipping someone off that their reply may not be OK is a strangely deferential move aimed at would-be bad actors. It’s Twitter telling someone who may have bad intentions that their efforts to offend are likely to be wasted, so they should move on and direct that energy somewhere else.

“The warning could convince a would-be troll to respond in a more respectful manner” isn’t just a naïve belief to hang onto in this day and age, it’s also fundamentally the wrong way to approach these kinds of features. Why show deference to potential bad actors at all? Shouldn’t the goal of “Filter” and “Limit” controls be focused squarely on protecting a tweet’s poster?

When I originally came across this story (h/t The Verge), I thought: “Wow, Twitter is actually daring to imagine a world where it protects the people who use its platform.” But after a closer look, that’s not really what’s happening here, is it? These tools might offer a measure of protection to those who use them, but the way they’re built now, they also stand to arm trolls with the information they need to troll more effectively.

It’s a bizarre choice. But it’s also not a finalized feature, so here’s some feedback for you, Twitter: the foundational philosophy for settings like these should exclusively prioritize protecting users. If you’re also handing their harassers the tools to act like jerks more efficiently, you’re doing it wrong.
