Why does this matter?
Bad actors may easily and inexpensively influence specific groups
Inflating the engagement of tweets through retweets or replies can create the illusion of popularity (Wild and Godart, 2020). Endorsement by domestic figures can “launder” stories from the fringes of public conversation into the mainstream (First Draft, 2018; Media Manipulation Casebook, 2021; Romm and Molla, 2017). As noted earlier, research has shown that early engagement plays a significant role in whether content goes viral: “Bots amplify such content in the early spreading moments before an article goes viral” (Shao et al., 2018).
Bad actors can leverage the illusion to direct public discussion and influence opinion without disclosing financial backing, or they can misinform or discourage opponents through targeted harassment (Wild and Godart, 2020; Carmichael, 2021; Kinetz, 2021; Onyango, 2021). TrendMicro, a global cybersecurity firm, wrote of the threat in 2017 (Gu et al., 2017, p. 74):
Careful and extended use of propaganda can shift the Overton window. Prolonged opinion manipulation techniques can make the public receptive to ideas that would have previously been unwelcome and perhaps even offensive at worst. The concept of the slippery slope applies: once an opinion has been changed a bit, it becomes easier to change it even more.
Concerning a case where coordinated Twitter campaigns targeted civil advocates, the activists said, “They now self-censor on the platform” (Onyango, 2021).
An in-depth investigation by Mozilla included interviews with the influencers who had accepted payments to partake in the information operation (Madung and Obilo, 2021):
“They were told to promote tags – trending on Twitter was the primary target by which most of them were judged. The aim was to trick people into thinking that the opinions trending were popular – the equivalent to ‘paying crowds to show up at political rallies,’ the research says.”
Information disorder research leaves something to be desired
Platforms routinely remove accounts operated by bad actors, but this usually also removes their interactions and history (Twitter Safety, 2021b, 2021a; Kinetz, 2021; Twitter Safety, 2019). Researchers can piece together history using archives and mentions, but we are often deprived of the full dataset needed to perform a comprehensive analysis. Even when reconstruction remains possible, the cumbersome process of documenting deleted bad-actor accounts often prevents independent researchers from providing an invaluable check on social media platforms.
Platforms frequently opt to share data with only a handful of groups, sometimes the same groups with which other platforms share data. The process lacks transparency. Consequently, both the arrangement and the resulting research findings are more vulnerable to malign influence, and they lack the checks afforded to fields where researchers may more freely replicate results.
If platforms are failing to stop and remove far more accounts and operations than is currently recognized, they have no incentive to tell us.
The threats from information manipulation facing the public will only grow
In 2022, the Annual Threat Assessment of the US Intelligence Community stated that malign influence would continue to threaten the United States. The threats from Russia and China were most pronounced. The 2021 Annual Threat Assessment stated (ODNI, 2021):
“Cyber threats from nation-states and their surrogates will remain acute. Foreign states use cyber operations to steal information, influence populations, and damage industry, including physical and digital critical infrastructure. Although many countries and non-state actors have these capabilities, we remain most concerned about Russia, China, Iran, and North Korea. Many skilled foreign cybercriminals targeting the United States maintain mutually beneficial relationships with these and other countries that offer them safe haven or benefit from their activity.”
Adversaries may use acceptable messengers to spread divisive and misleading content by simply amplifying individuals in Western society who are already aligned with their interests. As this threat grows, our ability to understand and analyze it has been limited by the platforms themselves.