Robot car on the streets, robot car in the sheets.
Apologies, I shouldn't be here. My software is still quite buggy.
— Jez H (@jezRSH) March 10, 2017
How much do our social lives include our robot pals? I’m going to assume very little, except for a niche group of people (and, hey: no judgment). And while robots may feature minimally in our social spheres offline, this isn’t the case online. In fact, a great number of the accounts we’ll encounter in our online lives will be powered by automation to some extent: from fully automated bots, to code-assisted human operators, to code that simply lets us schedule or target our social media content.
This isn’t restricted to the novelty of Microsoft’s Tay (may she rest in peace, after predictably becoming racist; Perez 2016), functional bots designed to perform simple tasks like @DearAssistant, or the laser-focused simplicity of @FuckEveryWord, which does exactly what you’d expect. Research published earlier this year estimated the proportion of non-human Twitter accounts at between 9% and 15% (Varol et al 2017). That’s a huge volume of code inhabiting our online social spaces and public spheres, and it has implications for how we conduct ourselves individually and collectively online.
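To get a feel for how simple a fully automated account can be, here is a minimal sketch of a single-purpose bot in the spirit of @FuckEveryWord. Everything here is illustrative: the tiny word list stands in for a real dictionary file, and the `post` callback stands in for a real social media API call that an actual bot would make on a timer.

```python
# Illustrative sketch of a single-purpose bot: walk a word list and emit
# one formatted post per word, with no human in the loop. The word list
# and the posting callback are hypothetical stand-ins.
WORDS = ["aardvark", "abacus", "abandon"]  # a real bot reads a full dictionary


def compose(word):
    """Deterministically format one post for a given word."""
    return f"fuck {word}"


def run(words, post):
    """Feed every composed post to the posting callback.

    A real bot would call a social media API here, and a scheduler
    would space the posts out to respect rate limits.
    """
    for word in words:
        post(compose(word))


posts = []
run(WORDS, posts.append)
print(posts)  # ['fuck aardvark', 'fuck abacus', 'fuck abandon']
```

The point of the sketch is how little machinery is needed: a data source, a template, and a loop are enough to sustain an account indefinitely.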
While the banes of spam and email scams have plagued internet users for as long as we’ve been able to communicate with each other online, the rise of automated bot accounts presents a new landscape of opportunities and challenges.
My spam is getting real weird these days… pic.twitter.com/IsbdMrC5Db
— Jez H (@jezRSH) March 16, 2017
For the rest of us fleshy social media users, the beneficial implications are a host of useful and entertaining services and accounts, getting cleverer by the day. Being able to order a pizza by bot, or read poetry crafted by algorithms, can make for a more colourful online world. And the potential of automated services to assist us in our online social lives is great: could bots be our digital butlers, personal assistants, translators and community police? But there is a dark side to a community of bots cohabiting with us. Although much of the bot population on social media may cause us mere inconvenience, the activity of bots and automated accounts online can have real-world consequences. A recent news article cited a University of Oxford study positing that bots accounted for one third of pro-Trump Twitter activity on the night of the US election and the four days afterwards, or around 576,178 tweets (Clark 2017).
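The "poetry crafted by algorithms" mentioned above is often surprisingly low-tech. One classic approach, sketched below, is a Markov chain: record which words follow which in a source text, then wander through those word-to-word links to produce new lines. The tiny corpus here is invented for illustration; a real poetry bot would train on a much larger body of text.

```python
# A minimal Markov-chain text generator, one common recipe behind
# algorithmic "poetry" bots. The corpus below is a toy example.
import random
from collections import defaultdict


def build_chain(text):
    """Map each word to the list of words that follow it in the corpus."""
    chain = defaultdict(list)
    words = text.split()
    for current, following in zip(words, words[1:]):
        chain[current].append(following)
    return chain


def generate(chain, start, max_words, rng):
    """Walk the chain from a start word, choosing each next word at random."""
    word, line = start, [start]
    for _ in range(max_words - 1):
        followers = chain.get(word)
        if not followers:  # dead end: no recorded follower for this word
            break
        word = rng.choice(followers)
        line.append(word)
    return " ".join(line)


corpus = "the sea and the sky and the sea below the sky above the sea"
chain = build_chain(corpus)
print(generate(chain, "the", 8, random.Random(0)))
```

Because every step only depends on the previous word, the output stays locally plausible while drifting in ways the source author never wrote, which is much of the charm.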
In an age when raw numbers can influence visibility, the trending of topics and the distribution of information and news (and considering the US President’s obsession with popularity, ratings and crowd sizes), it becomes clear that automated accounts can make a substantial difference to the flow of information through our social networks. There is even evidence to suggest that governments have used floods of bot-generated content to drown out activist and protest activity. With national governments investing in cybersecurity, and with the threats to free speech and democracy posed by a compromised public sphere, politically conscious citizens have every reason to be concerned about the influence of algorithms with murky agendas. There is a clear need for social media platforms to rapidly and emphatically improve their systems for controlling the activity of automated accounts, and for mitigating the worst of it. Users alone can’t deal with this issue, as it can be difficult even to quantify properly: how do you accurately determine which accounts are bots and which aren’t (Shaffer 2017)? This can make dealing with bots a classic game of whack-a-mole; as soon as you’ve dealt with one, another pops up (Digital Forensic Research Lab 2017). That is, if you can even tell that you’re not dealing with a person (Shah & Warwick 2017).
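To make the detection problem concrete, here is a toy bot-likelihood score over a handful of account features. The feature names and thresholds are invented for illustration only; real detectors, such as the classifier described by Varol et al (2017), use supervised machine learning over hundreds of features rather than fixed cut-offs, precisely because simple rules like these are easy for bot operators to evade.

```python
# A toy heuristic bot score. All features and thresholds here are
# illustrative assumptions, not a real detection method.
def bot_score(account):
    """Return a score in [0, 1]: the fraction of suspicious traits present."""
    points = 0
    if account["tweets_per_day"] > 50:       # sustained superhuman volume
        points += 1
    if account["account_age_days"] < 30:     # very recently created
        points += 1
    if account["followers"] < 10:            # few followers despite high output
        points += 1
    if account["default_profile_image"]:     # no effort spent on a persona
        points += 1
    return points / 4


suspect = {"tweets_per_day": 120, "account_age_days": 5,
           "followers": 3, "default_profile_image": True}
print(bot_score(suspect))  # 1.0
```

Even this crude sketch shows why the problem is hard: each individual trait also describes plenty of genuine humans, so any fixed threshold either misses bots or flags real people.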
Million dollar idea: plugin that automatically runs messages from your family group message through Snopes and sends debunk back to fam.
— Jez H (@jezRSH) February 14, 2017
Bots and automated accounts are almost certainly going to remain a feature of our social media landscape into the future. Automation, natural language processing (NLP), machine learning and big data hold vast potential to improve our online lives, facilitate our interactions and assist us. But we must remain alert and active in addressing the challenges they pose as well; our safety, speech, rights and wellbeing are paramount, and must not be threatened by our automated brethren.
Clark, B 2017, Study: Bots accounted for a third of pro-Trump Twitter activity during last year’s debates, The Next Web, 29 July, retrieved 30 July 2017, <https://thenextweb.com/artificial-intelligence/2017/07/28/study-bots-accounted-for-a-third-of-all-pro-trump-twitter-activity-during-the-debate/#.tnw_aw9SZpEg>.
Digital Forensic Research Lab 2017, The Many Faces of a Botnet, Medium, 25 May, retrieved 31 July 2017, <https://medium.com/dfrlab/the-many-faces-of-a-botnet-c1a66658684>.
Dominos, ‘Facebook Messenger Bot’, retrieved 20 July 2017, <https://www.dominos.com.au/inside-dominos/technology/messenger-bot>.
Perez S 2016, Microsoft silences its new A.I. bot Tay, after Twitter users teach it racism [Updated], Tech Crunch, 24 March, retrieved 28 July 2017, <https://techcrunch.com/2016/03/24/microsoft-silences-its-new-a-i-bot-tay-after-twitter-users-teach-it-racism/>.
Shaffer, K 2017, Spot a Bot: Identifying Automation and Disinformation on Social Media, Medium, 6 June, retrieved 24 July 2017, <https://medium.com/data-for-democracy/spot-a-bot-identifying-automation-and-disinformation-on-social-media-2966ad93a203>.
Shah, H & Warwick, K, How the ‘Good Life’ is threatened in Cyberspace, University of Reading, retrieved 28 July 2017, <http://www.academia.edu/2380537/How_the_Good_Life_is_Threatened_in_Cyberspace>.
Varol, O, Ferrara, E, Davis, C, Menczer, F, Flammini, A 2017, Online Human-Bot Interactions: Detection, Estimation and Characterization, The 11th International AAAI Conference on Web and Social Media, Montreal, retrieved 30 July 2017, <https://arxiv.org/pdf/1703.03107.pdf>.