Concerns that TikTok may be providing data about Americans to the Chinese government have led to legislation forcing TikTok to sell or close its U.S. operations. But Tawhid Zaman of Yale University's School of Management (SOM) says a less abstract threat is posed by TikTok, one that, he says, applies to all social media platforms where topical information is shared and opinions are formed.
“TikTok and other platforms choose what content to show,” Zaman said. “They can promote anything or demote anything. That means they can change people's minds however they want.”
As far as most social media users know, a platform's most powerful tool for steering public opinion is to permanently remove offensive content or users. But Zaman argues that there is a more powerful means by which social media platforms can shape the opinions of groups over time: “shadowbanning.” Part of this tool's power comes from the fact that it is currently nearly impossible to detect, even for policymakers and software engineering experts.
The network may be steering people toward one point of view, but if someone like a regulatory body tries to scrutinize it, they will see the network as censoring both sides equally.
Shadow bans are more covert than outright bans from a platform: they limit the visibility of a user's content without the user's knowledge. A Facebook or Instagram post that is shadowbanned remains on the original poster's profile page, but appears less often, or not at all, in other users' feeds.
In a new paper co-authored with Yen-Hsiao Chen, a Yale SOM doctoral student, Zaman examines this phenomenon, not to determine whether it is happening now, but to explain exactly what could happen and how powerful it could be.
For this study, the researchers built a simulation of a real social network and used shadowbanning to shift the opinions of simulated users, successfully increasing or decreasing polarization. Even when the goal was to sway collective sentiment to the left or to the right, the content moderation policy appeared neutral to an outsider, Zaman said. That's because, they discovered, it is possible to change opinions by simultaneously lowering the volume on accounts on both sides of an argument.
“It's like a frog sitting in a pot of water. The frog is relaxed, and then all of a sudden it gets cooked,” Zaman said. “In reality, the network may be steering people toward one point of view, but if someone like a regulatory body tries to scrutinize it, it will look as though the network is censoring both positions equally. If you leave the network alone because nothing seems to be going wrong, all of a sudden everyone starts thinking the Earth is flat. Knowing that this is possible with today's technology, it's a little scary.”
If a large-scale shadowban strategy could prove so dangerous, why would Zaman set out to reverse engineer the most effective method? Like any powerful tool, Zaman says, shadowbanning can be used for better or for worse, and what is urgently needed is a detailed understanding of how it works in the first place.
For example, a better understanding of shadowbanning could help regulators recognize malicious actors who are shaping opinion on a network. It could also help social media platforms tune their content recommendation algorithms to avoid inadvertently pushing users toward polarization.
“With this paper, policymakers can specify what kinds of recommendation systems, and what kinds of shadowbans, they want to allow,” Zaman said. Suppose a regulator tells platforms that their systems must not increase polarization. “How do you define that? Well, I show you what that means in the paper.”
To understand how online content moderation affects users' opinions, the researchers' first step was to establish a model of opinion dynamics grounded in widely accepted research on persuasion. In their model, a user's opinion can be nudged by the opinions expressed by their online connections, but only when those opinions are relatively close to the user's existing position. Opinions that stray too far outside that narrow window cannot sway them.
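The paper's exact update rule is not reproduced in this article, but a minimal bounded-confidence style sketch captures the idea: an opinion only moves when a followed account's opinion falls inside a narrow tolerance window. The `epsilon` and `mu` parameters below are illustrative assumptions, not values from the study.

```python
import numpy as np

def update_opinions(opinions, follows, epsilon=0.3, mu=0.05):
    """One round of a bounded-confidence style opinion update.

    opinions : NumPy array of current opinions in [-1, 1], one entry per user
    follows  : dict mapping each user index to the user indices they follow
    epsilon  : tolerance window; only opinions this close can persuade (assumed)
    mu       : size of each nudge (assumed)
    """
    new = opinions.copy()
    for user, followees in follows.items():
        for other in followees:
            gap = opinions[other] - opinions[user]
            # A followed account's opinion only moves the user if it is
            # already close to the user's own position.
            if abs(gap) <= epsilon:
                new[user] += mu * gap
    return np.clip(new, -1.0, 1.0)

# Example: three users; user 0 follows users 1 and 2, and so on.
opinions = np.array([-0.5, -0.3, 0.6])
follows = {0: [1, 2], 1: [0, 2], 2: [1]}
opinions = update_opinions(opinions, follows)
```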
With this model of opinion change in place, the researchers' next challenge was to simulate large-scale social network conversations centered on specific hashtags. Zaman and Chen built two such simulations using tweets Zaman had collected for previous research. One mock conversation was drawn from real tweets about the 2016 U.S. presidential election (2.4 million tweets from 78,000 users, collected between January and November 2016); the other was drawn from tweets about the Yellow Vest protests in France (roughly 2.3 million tweets from 40,000 users, collected between January and April 2019). The researchers used neural networks to measure the sentiment of each tweet.
They then created a follower graph for each topic to map who was following whom in the larger conversation. Of particular importance within each map are the “edges,” the links between pairs of users.
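As a rough sketch of this setup (the data formats and names here are hypothetical, not the study's), one might build the follower graph with networkx and seed each user's opinion from the average sentiment of their tweets:

```python
import networkx as nx
import numpy as np

# Hypothetical inputs: (follower, followee) pairs and per-user lists of
# tweet sentiment scores in [-1, 1]; the study's actual formats may differ.
follow_pairs = [("alice", "bob"), ("bob", "carol"), ("carol", "alice")]
tweet_sentiment = {"alice": [0.4, 0.6], "bob": [-0.2], "carol": [0.1, -0.1, 0.3]}

# Directed edge u -> v means u follows v, so v's posts can reach u's feed.
G = nx.DiGraph()
G.add_edges_from(follow_pairs)

# Seed each node with an initial opinion: the mean sentiment of its tweets.
for user, scores in tweet_sentiment.items():
    G.nodes[user]["opinion"] = float(np.mean(scores))
```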
Next came the simulation stage. Could users be shifted to the right by carefully muting the connections sitting slightly to their left? Could overall polarization in a conversation be reduced by turning down the volume on the most extreme voices on both sides? And could a conversation be pushed toward greater polarization by silencing the more moderate voices? The answer to each of these questions was yes.
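One way to picture such an intervention (again a sketch under assumed names, not the authors' implementation): to push users rightward, down-weight every followed account sitting to the follower's left, so leftward pulls rarely reach the feed.

```python
def shadowban_to_shift_right(G, strength=1.0):
    """Illustrative shadowban policy: mute edges u -> v where the followed
    account v sits to the left of follower u, so content that would pull u
    leftward is shown less often. `strength` is the assumed fraction by
    which a muted edge's visibility is reduced.
    """
    muted = []
    for u, v in G.edges():
        if G.nodes[v]["opinion"] < G.nodes[u]["opinion"]:
            G.edges[u, v]["visibility"] = 1.0 - strength   # hide v from u
            muted.append((u, v))
        else:
            G.edges[u, v]["visibility"] = 1.0              # leave edge alone
    return muted
```

In the opinion-update step, each nudge would then be weighted by the edge's visibility; the same pattern, with different muting conditions, would cover the depolarizing and polarizing interventions described above.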
Shadow bans of this kind are difficult to identify because whether an opinion is muted depends on its stance relative to each individual follower. The result is a mix of shadowbanned and amplified users with no clear rhyme or reason. For example, if a network's goal is to shift collective sentiment to the left, it might show a moderate user's content to a relatively right-leaning follower (pulling that follower to the left) while blocking the very same content from a left-leaning follower's timeline (so that follower doesn't drift back toward the right). At first glance, the bans appear to affect users on all sides more or less equally.
However, Zaman and Chen say it is possible to detect this kind of shadowbanning. First, assign negative scores to edges that pull sentiment in one direction and positive scores to edges that pull it in the opposite direction. “Then we quantify the scores of the blocked edges,” Zaman explains. “To catch a biased shadowban policy, you need to think about things differently. It's not about the people you're blocking. It's the connections between people that you have to focus on.”
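A sketch of that idea (an illustrative statistic, not the paper's exact test): score each blocked edge by the direction it would have pulled the follower's opinion, then sum the scores. A neutral policy blocks pulls in both directions, so the total stays near zero; a biased policy leaves a lopsided sum.

```python
def blocked_edge_bias(G, blocked_edges):
    """Sum the directional pull of blocked edges. An edge u -> v gets a
    positive score if v would have pulled follower u rightward and a
    negative score if leftward; a total far from zero suggests the
    shadowban policy is systematically muting one direction.
    """
    return sum(
        G.nodes[v]["opinion"] - G.nodes[u]["opinion"] for u, v in blocked_edges
    )
```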
Zaman plans to share his research with policymakers. “I want to show them what can be done with these networks. This is the danger,” he says. “If you don't want to ban them, but you do want to regulate their content, you can do that by quantifying the algorithm. That's how you could regulate all the networks: X, Meta, Instagram, YouTube, and so on.”