John 385620: The Fight Against Internet Bots 

Photo by Christina Morillo

Have you ever checked the comments section on a popular X (formerly Twitter) post and seen numerous replies from blank profiles repeating the same phrases? Have you ever matched with a person on a dating app whose messages do not seem to make any sense? These are not genuine human users but bots.

According to Parag Agrawal, former CEO of X, 500,000 bot accounts are deleted from the social media platform every day, and the suspected number of bots exceeds 16 million, roughly 5% of the platform’s users. However, this number should be taken with a grain of salt, as the users behind these bot accounts can create new ones just as quickly as they can be deleted, if not faster. And remember, this number applies only to X. The true scale of the “bad bot” problem across the internet is impossible to quantify.

Corporations will often use automated accounts for customer service, helping customers answer product questions and report issues. The real problems come from a single user, or group of unidentified users, creating large numbers of automated accounts to serve their own purposes. Oftentimes, these bot accounts artificially inflate engagement on a single post. By having hundreds of bots “like” or “comment” on a post, they can make the poster seem more popular online than they really are.

Unfortunately, a person does not need the coding knowledge to create bots in order to use them, as there are ways to buy bot social media accounts from independent sellers. Some businesses will even buy bots to boost their social media posts.

Trey Comito is a Marketing Director for PAF Distribution, a sports nutrition company that uses bots.

“I could make 1,000 accounts right now that will follow the company I work for. Because the algorithm from, say, Instagram or Twitter or Facebook will pick up that you’re getting more and more followers. Even though they are bots, the algorithm doesn’t notice that… That helps you boost to real people that you’re trying to reach,” Comito said. 

Large amounts of engagement on a post, whether it comes from humans or bots, can lead to that post being “boosted” by the platform it’s on. That post will then be spread across the platform and seen by even more people. This is a major problem when it comes to spreading misinformation. With a big election coming up, awareness of automated accounts and their influence is more important than ever.

Just last year, a person or group of people created thousands of social media accounts with the intent of boosting support for Donald Trump while simultaneously trying to drag down support for his Republican rivals. In 2018, even the Department of Homeland Security got involved in the “bad bot” problem, posting a warning guide detailing what these bots can be used for, ranging from simply inflating a person’s “like” count to targeted harassment, hate speech, and the spreading of propaganda.

Luckily, automated accounts like these are easy to identify, as long as a user is aware of their existence. They are often newly created accounts with either a stolen profile picture or none at all. Their usernames will contain multiple numbers, such as “John385620” or “Andrea004327.” They will likely have generic, safe descriptors in their bio, such as “dog lover” or “sports fan.” On the other hand, they may also have intentionally bold, politically divisive words in their bio. Automated accounts will post dozens of times within a small time frame or post around the clock, because unlike humans they have no work, school, or sleep keeping them from being online constantly.
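The username pattern described above, a common first name followed by a long run of digits, is simple enough to check programmatically. The sketch below is a hypothetical heuristic, not any platform’s actual detection logic; the four-digit threshold is an assumption for illustration:

```python
import re

def looks_like_bot_username(username: str) -> bool:
    """Flag usernames shaped like 'John385620': letters followed by a
    long run of digits. This is a rough heuristic only -- real users
    can match it, and real bots can easily avoid it."""
    # Four or more trailing digits is an assumed threshold, not a rule.
    return re.fullmatch(r"[A-Za-z]+\d{4,}", username) is not None

print(looks_like_bot_username("John385620"))    # True
print(looks_like_bot_username("Andrea004327"))  # True
print(looks_like_bot_username("dog_lover_em"))  # False
```

A real detector would weigh this signal alongside others the article mentions, such as account age and posting frequency, rather than relying on the username alone.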

A user can even identify a bot without looking at its profile, simply by reading the contents of its posts. Spamming hashtags, links from unreliable or suspicious websites, or inflammatory or irrelevant memes are easy signs that the account posting them is inauthentic. Automated accounts like these also lack the diverse vocabulary we have as humans, often reusing the exact same words and phrases across multiple posts. They also don’t use punctuation the way humans typically do online. For example, a comment section full of replies such as “Wow, that’s cool!” or “This made me very upset!” likely means the engagement is being inflated by bots.
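The repeated-phrase tell described above can be sketched as a simple duplicate count over a comment thread. This is a toy illustration under an assumed threshold of three identical replies, not a real platform’s method:

```python
from collections import Counter

def flag_repeated_replies(replies, min_repeats=3):
    """Return phrases that appear verbatim at least `min_repeats` times,
    a crude signal that replies are being copy-pasted by bot accounts.
    Comparison is case-insensitive and ignores surrounding whitespace."""
    counts = Counter(reply.strip().lower() for reply in replies)
    return {phrase: n for phrase, n in counts.items() if n >= min_repeats}

thread = [
    "Wow, that's cool!",
    "Wow, that's cool!",
    "wow, that's cool!",
    "This made me very upset!",
    "Great point, I never thought of it that way.",
]
print(flag_repeated_replies(thread))  # {"wow, that's cool!": 3}
```

Exact matching like this is easy for bot operators to defeat with small wording tweaks, which is part of why, as the article notes, detection remains an arms race.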

“Most bots are extremely formal. There’s no slang whatsoever. It’s the most formal English language that doesn’t sound natural when you read it. And that’s the biggest tell to me,” Comito said.   

Unfortunately, there is no way to permanently erase these automated accounts from the internet. It is essentially an arms race between the people creating them and the platforms trying to detect and delete them. The only thing internet users can do for now is stay aware and on the lookout for accounts like these, taking a moment to think about what they are reading and the intent behind the person (or bot) posting it.

“We’re going to need to use critical thinking skills. We should be aware of the responses it gives back and question things,” said Jodi Jones, a Software Delivery Consultant at Lean TECHniques. “Use critical thinking to avoid falling into those traps.”
