ANALYSIS: Twitter, Facebook, and the Fake Follower Problem

The New York Times reports that Twitter will “begin removing tens of millions of suspicious accounts from users’ followers on Thursday, signaling a major new effort to restore trust on the popular but embattled platform.”

This comes after a January investigation by the Times that is well worth your time. It explains how fake accounts work, the types of bots you may see in your feed, and how to spot a fake account. It’s an excellent piece of journalism, both educational and extremely newsworthy.

I’m really interested to hear more about what Twitter believes constitutes a “suspicious account.” The changes seem to be driven by pressure from businesses that want to make sure the influencers they hire to promote their products have real followers.

I would like to make sure they are not shutting down the legitimate accounts of journalists and activists who, especially in authoritarian countries, may be using pseudonyms to avoid persecution.

That said, I would also like to know how they plan to handle accounts that are spreading disinformation, that is, deliberately spreading untrue information. For now, it appears we’re all going to have to take this into our own hands, so I want to talk a little about what I believe is our personal responsibility when using social media.

Use of fake accounts to spread disinformation

The role of Russian bots and trolls in spreading disinformation over social media has certainly dominated the headlines. While I think it is VERY important for us to talk about foreign influence campaigns, it is also good that we’re finally having a discussion about domestic actors as well, some seeking influence for political reasons, others for economic gain. Most use similar tactics to influence public opinion. They make memes. They use fake accounts to pose as average Americans. They plant fake websites or “blogs” to spread false information anonymously or to make an opinion look more popular than it actually is.

While many of these influence campaigns seek to convince you to vote a certain way or take a certain action, I think we also need to understand that many disinformation campaigns simply seek to “pollute” the air so that nobody wants to participate. They aim to flood the conversation with conflicting ideas, polarizing the debate and making people so angry that they can’t have reasonable discussions or remember the issues on which they agree. They often focus on stirring debate on controversial issues like abortion, gun rights and LGBT rights, as well as stoking culture wars and nationalism. So, if you see an overly inflammatory meme or post, or someone stirring the pot on one of these issues, it’s a red flag to take a deep breath and consider the motives of the author, and perhaps whether the “author” is even a real person at all.

The end result of all of this “pollution” is that, to average voters, the whole process feels so dirty and rigged that reasonable people give up participating entirely. This, to me, is the most damaging part for democracy, because when people opt out of the process, decision-making usually ends up controlled by a small group of the most extreme members of society.

As we’re heading into the election cycle, I’ve already noticed bots and trolls (usually from fake accounts) trying to poison the well. I’ve even heard reports of it happening here in Kansas in local races. This is why I think it is important for us to start recognizing when we are being targeted and then slow down for a minute to consider who might be targeting us, what message they are trying to send, and exactly what kind of influence they are seeking to have on us. If we are able to recognize these efforts, we can prevent ourselves from amplifying their messages.

It is important to remember our role in this process. Dr. Claire Wardle and Hossein Derakhshan, in their report for the Council of Europe on this subject, explained that when we receive a message, whether online or in person, we as human beings use six methods to decide whether to trust it and share it. Those are:

  • Reputation: We consider the reputation of the agent (person or platform) based on recognition and familiarity of the source
  • Endorsement: We look to see whether others find this message credible
  • Consistency: We look to see if others are repeating this message
  • Expectation violation: We examine whether a website looks and behaves in the expected manner
  • Self-confirmation: We look to see whether the message confirms our beliefs
  • Persuasive intent: We consider the source’s intent in trying to persuade us

I think this is why social media is primed for the quick spread of information: you are checking off a lot of the things on this list in one spot. You have people who are “friends” spreading the message. You can quickly see whether your other friends endorse or share that message. There’s no need for some shady political actor to set up a site… you are already on a familiar site that performs as you expect, with friends you trust sharing messages. Because we trust the messenger, we sometimes skip the part where we do a quick internet search to find out if the information is correct.

So what can we do?

Along with the signs The New York Times outlines in its piece, like comparing an account’s follower count with the number of accounts it follows, beware of accounts with extreme opinions or those asking questions or making statements that seem designed to provoke heated discussion. Also beware of accounts that post the same repetitive comments or arguments, yet never respond to the previous post or tweet.
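To make these red flags concrete, here is a minimal sketch in Python of how such heuristics might be scored. The Account fields, the 0.1 and 0.5 thresholds, and the suspicion_score function are all hypothetical illustrations of the idea, not anything drawn from Twitter’s actual detection systems:

    from dataclasses import dataclass

    @dataclass
    class Account:
        followers: int       # how many accounts follow this one
        following: int       # how many accounts this one follows
        recent_posts: list   # text of the account's recent posts/tweets

    def suspicion_score(acct):
        """Rough, illustrative heuristic: more red flags, higher score."""
        score = 0
        # Red flag 1: follows far more accounts than follow it back,
        # a pattern common to mass-follow spam accounts.
        if acct.following > 0 and acct.followers / acct.following < 0.1:
            score += 1
        # Red flag 2: repetitive posting -- few unique messages relative
        # to total volume suggests copy-paste amplification.
        if acct.recent_posts:
            unique = len(set(acct.recent_posts)) / len(acct.recent_posts)
            if unique < 0.5:
                score += 1
        return score

    # Example: 200 followers, following 5,000, posting the same two
    # messages over and over -- both red flags fire.
    bot_like = Account(followers=200, following=5000,
                       recent_posts=["Buy now!"] * 8 + ["Wake up!"] * 2)
    print(suspicion_score(bot_like))  # prints 2

Real detection systems weigh many more signals (account age, posting cadence, network structure), but even these two crude checks capture the follower-ratio and repetitive-posting patterns described above.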

And of course, FACT CHECK before you share, so you don’t amplify inaccurate information.

We all need to take responsibility for our individual roles as amplifiers.

We all have to understand that when we spread inflammatory and/or false information, we are continuing this cycle. We need to examine our sources and understand the emotional responses we have when we are targeted by propaganda.

There is data to support the idea that we are our own worst enemy in the fight against false information. In a recently released study, researchers at MIT found that false news on Twitter spreads faster than true news, and that real, live humans, not bots, are primarily responsible for spreading it.

Personally, I think it would be great if social media companies would label bots and give users an opportunity to filter those out if they so choose. Not all bots are bad, but it would be nice to know which accounts are bots when looking at your feed.

But for now, we need to be vigilant about examining the motives behind the messages coming across our feeds, fact-checking those messages, and then considering whether sharing them is productive for our democratic debate.
