Bots Are Now More Prevalent and Manipulative
In the last few years, with advances in artificial intelligence, social bots have become increasingly prevalent. A social bot is a computer algorithm that automatically produces content and interacts with humans on social media.
Bots can mimic human behavior and even work to influence and change our behavior.
Some bots are programmed for good, like alerts for natural disasters. Others are programmed for customer service and marketing purposes. There are also many types of bots in healthcare, including, by one count, 155 chatbots.
But according to new research from the University of Southern California and Indiana University, there are a record number of malicious social bots. The report says these nefarious bots can emulate human behavior to manufacture fake grassroots political support and to promote terrorist propaganda and recruitment.
A Battle of Fact vs. Fiction
Recently, the news has been full of reports about Russian Twitter bot armies delivering propaganda to influence elections in the U.S. and around the world. The term for this is “weaponized AI propaganda.”
“We’re talking about a foreign government that, using technical intrusion, lots of other methods, tried to shape the way we think, we vote, we act. That is a big deal. And people need to recognize it.” – James Comey, Testimony to Senate Intelligence Committee
According to USC researcher Emilio Ferrara, “Much of the political content Americans see on social media every day is not produced by human users. Rather, about one in every five election-related tweets from Sept. 16 to Oct. 21 was generated by computer software programs called social bots.” The researchers conservatively estimate that 15% of all Twitter accounts are bots, not people.
In the last few months, the @realDonaldTrump Twitter account was inexplicably growing by millions of followers. Using a tool called Twitter Audit, researchers found that about half of its almost 31M followers at the end of May were not real. There was much speculation as to why the account was growing a Twitter bot army. A quick review found many retweets coming from these fake accounts, which carry Twitter’s generic gray avatar.

Image: RealDonaldTrump/Twitter via Mashable
“Anyone can amass an exorbitant number of Twitter followers. You don’t even have to be famous. All you have to do is pay for them.” – Ryan Bort, Newsweek
In the early days of Twitter, when ‘popularity’ on social media started to take on more meaning, some folks used bot tools to gain fake followers. I even remember being naively impressed by some doctors with huge followings, until I later found out the followers were not real.
Preserving Credibility
“Research shows that humans can view Twitter bots as a credible source of information.” – ScienceDirect
My purpose is not to scare us away from social tools that many of us have grown to find valuable; social media is, after all, 21st-century communication. But I hope that by sharing this information about malicious social bots, urologists will become aware of the methods used for possible manipulation.
We should also be ready for the possibility of Twitter bots designed to spread healthcare misinformation. Bots are already being used to try to shape perceptions of healthcare policy.
How to Spot a Twitter Bot
A Twitter bot controls a Twitter account via the Twitter API (Application Programming Interface). It may be programmed to autonomously tweet, retweet, like, follow, unfollow, or direct message other accounts.
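To make that concrete, here is a minimal sketch of how such a bot might be wired up using the widely used Tweepy library (version 3.x; newer versions renamed some of these calls). The credentials and the tracked hashtag are placeholders, and this is an illustration of the mechanism, not a recommendation — automated behavior like this runs up against Twitter’s rules.

```python
import tweepy

# Placeholder credentials -- a real bot would use keys issued by Twitter.
auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")
api = tweepy.API(auth)

# Autonomously amplify a tracked (hypothetical) hashtag:
# retweet matching tweets and follow their authors.
for tweet in api.search(q="#hypothetical_hashtag", count=10):
    api.retweet(tweet.id)
    api.create_friendship(screen_name=tweet.user.screen_name)
```

A script like this, left running on a schedule, is all it takes to produce the retweet-heavy, hashtag-driven behavior described below.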
When someone follows you on Twitter, look carefully at their profile and tweets. Bots display patterns; a rough scoring sketch that combines the signals below follows these two lists.
In the profile, be wary of:
- Long usernames (often nonsensical, a name and string of numbers).
- Young accounts (look at the date the account was established; bot accounts are often brand new).
- Avatars without photos or with stolen pictures.
- Follower/following patterns: Bots follow accounts that use the same hashtags the bot is programmed to track.
- Follower/following ratio: Bots follow a lot of people, but only a small percentage of real people follow them back.
Tweet behavior patterns:
- How often does the account tweet? Bots tend to retweet a lot.
- Are the accounts tweeting/retweeting specific political hashtags? Bots track keywords.
- Bots have very few or no @mentions and @replies.
- Bots can also hack real accounts and take over tweeting.
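Here is the rough scoring sketch promised above, combining these signals in Python. The field names, thresholds, and weights are arbitrary assumptions for illustration, not a validated detector; real values would come from the Twitter API’s user and timeline data.

```python
import re
from datetime import datetime, timezone

def bot_suspicion_score(profile):
    """Count how many of the red flags above a profile shows (0-5).

    `profile` is a plain dict with hypothetical fields standing in for
    data you would pull from the Twitter API.
    """
    score = 0
    # Long, nonsensical username: a name plus a string of digits.
    if re.search(r"\d{4,}$", profile["username"]):
        score += 1
    # Young account: created within the last ~90 days.
    age_days = (datetime.now(timezone.utc) - profile["created_at"]).days
    if age_days < 90:
        score += 1
    # Default gray avatar or no photo.
    if profile["default_avatar"]:
        score += 1
    # Follower/following ratio: follows many, few follow back.
    if profile["following"] > 500 and profile["followers"] < 0.1 * profile["following"]:
        score += 1
    # Tweet behavior: mostly retweets, few or no @replies.
    if profile["retweet_fraction"] > 0.9 and profile["reply_count"] == 0:
        score += 1
    return score

# Example loosely resembling the romance bot described below: username ending
# in digits, May account creation, following 629 people with only 40 following back.
suspect = {
    "username": "gen_smith_84321",
    "created_at": datetime(2017, 5, 1, tzinfo=timezone.utc),
    "default_avatar": False,
    "following": 629,
    "followers": 40,
    "retweet_fraction": 0.95,
    "reply_count": 0,
}
print(bot_suspicion_score(suspect))  # prints the number of red flags; the more, the more suspicious
```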
The following is an example of a scam romance bot impersonating a general, with photos believed to be stolen from a real person. Notice the long username with a string of numbers, the recent May date the account was established, and the lack of followers. The account was following only women, all 629 of them, including some female doctors; only 40 followed back.
If you find a Twitter bot that does not serve a good purpose, do not follow it back. If it appears nefarious, block the account: if you want to maintain credibility, you don’t want your Twitter account found to have a lot of fake followers.
By blocking accounts, you also alert Twitter to potential scams. The romance bot account above has since been suspended by Twitter.
Artificial Intelligence to Spot Fake Bots
Bot or Not (since renamed Botometer) is a research tool for detecting bots, but such tools can quickly become obsolete. Initially, simple heuristics were enough to spot bots by finding patterns that differed from real users. As bots become more realistic, more sophisticated methods are needed, including deep learning and human analysts looking for patterns in sentiment and network behavior.
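For readers who want to experiment, the Indiana University team publishes a Python client for Botometer. The sketch below assumes that package with placeholder RapidAPI and Twitter keys; the exact parameter names and the structure of the result have varied across versions, so treat it as a starting point rather than a definitive recipe.

```python
import botometer

# Placeholder credentials -- Botometer is served through RapidAPI and also
# needs a set of Twitter app keys.
twitter_app_auth = {
    "consumer_key": "CONSUMER_KEY",
    "consumer_secret": "CONSUMER_SECRET",
    "access_token": "ACCESS_TOKEN",
    "access_token_secret": "ACCESS_TOKEN_SECRET",
}

bom = botometer.Botometer(wait_on_ratelimit=True,
                          rapidapi_key="RAPIDAPI_KEY",
                          **twitter_app_auth)

# Score a single account; the result includes bot-likelihood scores broken
# down by signal category (content, network, temporal, and so on).
result = bom.check_account("@example_account")
print(result)
```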
As time goes on, it will become more and more difficult to differentiate bots from real users, and that could be a threat to the future of our democracy and to the transparency of research and policy in healthcare. Stay vigilant.
I would like to thank Angela Dunn, who provided research for this post.