Twitter Works Toward Improving Its Platform with a New Developer Policy Update on Bot Identification

As Twitter works to improve its action against the misuse of its platform by political operatives, one key area that it needs to focus on is bots, and the use of automated profiles to amplify specific messaging.

Bots have repeatedly been identified as a key issue in influencing tweet discussion – in the wake of the 2016 US Election, for example, researchers uncovered “huge, inter-connected Twitter bot networks” seeking to influence the political discussion, with the largest incorporating some 500,000 fake accounts. Last year, Wired reported that bot profiles were still dominating political news streams, with bot profiles contributing up to 60% of tweet activity around some events, while earlier this year, a network of Twitter bots was found to be spreading misinformation about the Australian bushfire crisis, amplifying anti-climate change conspiracy theories in opposition to established facts.

While Twitter hit the headlines for banning political ads entirely late last year, in contrast to Facebook’s stance, bots remain a concern.

Which is why this new update to Twitter’s developer policy could be particularly relevant.

This week, Twitter is updating its rules around the use of its Developer API, including new regulations around academic research and matching anonymized profiles to real-world identities.

And there’s also this:

“Bot Accounts – Not all bots are bad. In fact, high-quality bots can enhance everyone’s experience on Twitter. Our new policy asks that developers clearly indicate (in their account bio or profile) if they are operating a bot account, what the account is, and who the person behind it is, so it’s easier for everyone on Twitter to know what’s a bot – and what’s not.”
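In practical terms, the policy asks bot operators to state three things in the profile: that the account is automated, what it does, and who runs it. A minimal sketch of how an operator might compose such a disclosure is below – the purpose text and operator handle are purely illustrative assumptions, not anything specified by Twitter:

```python
def bot_disclosure_bio(purpose: str, operator_handle: str) -> str:
    """Build a profile bio that states the account is automated,
    what it does, and who operates it, per the policy's three asks."""
    return (
        "Automated account (bot). "
        f"{purpose} "
        f"Operated by @{operator_handle}."
    )

# Hypothetical example values for illustration only.
bio = bot_disclosure_bio(
    purpose="Posts hourly air-quality readings for Sydney.",
    operator_handle="example_operator",
)
print(bio)
```

The resulting string could then be applied to the account’s bio through the developer API’s profile-update endpoint, which is where Twitter’s enforcement leverage would come in.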

Of course, scammers will just ignore this, but the ruling may give Twitter more impetus to take action against bot accounts that violate this stipulation, enabling it to squeeze out bot scammers and lessen their impacts.

This also aligns with another option Twitter has reportedly been considering for bots – labeling bot accounts with an icon or similar marker to ensure users are aware of who, or what, is tweeting certain messages. With this requirement now built into its developer policy, Twitter could be a step closer to signifying bots in this way.

Automated bots need to be built on systems that integrate with Twitter’s developer API, so the onus is on bot developers to adhere to these rules – or risk losing access. If Twitter were to also look to up its penalties and enforcement actions in this respect, that could add a whole new layer of transparency for Twitter bots, while again, enabling Twitter to weed out those looking to misuse its systems.

This is a key area that Twitter needs to address, and while the action here is not aggressive, it’s another step towards potentially reducing bot impacts, and bringing greater transparency to the process.

It’ll be interesting to see if Twitter uses the update as the impetus for a new push to take action against manipulative bot accounts.