Some time ago, while absentmindedly tweeting about the woeful state of higher education, I received a notification that one of my tweets had been liked. This being somewhat rare, I excitedly went to check out who it was from, only to find that it was one of the institutions I was directly critiquing. If they had actually read the tweets, I’m sure they wouldn’t have ‘liked’ them, so what gives?
This isn’t the first time something like this has happened to me. Periodically, as I’m sure many of us do, I get likes, follows, and retweets that seem incongruous with the content of my posts. Some are a result of Twitter users actively seeking to aggregate info, gain followers, and increase their social media presence. Others are fully automated Twitter bots.
Twitter bots, for the uninitiated, are pieces of software that use automated scripts to crawl the Twitterverse in search of particular words or phrases and then follow, like, or retweet the accounts that use them. In 2014 Twitter revealed that as many as 8.5% of its active accounts were likely bots. Beyond mere annoyance at the lack of a human interlocutor behind a ‘like’ or ‘follow,’ however, why care about the presence of Twitter bots or the use of algorithms to harness the power of social media?
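The core of such a bot is almost embarrassingly simple. Below is a minimal, hypothetical sketch of the keyword-matching logic described above; the tweet stream and the notion of “liking” are stand-ins for illustration, not a real Twitter API:

```python
# Minimal sketch of a keyword-triggered "like" bot.
# The tweet stream and the liking action are hypothetical stand-ins;
# a real bot would connect to Twitter's API instead.

KEYWORDS = {"higher education", "edtech"}  # phrases the bot watches for

def matches(text: str, keywords=KEYWORDS) -> bool:
    """Return True if any watched phrase appears in the tweet text."""
    lowered = text.lower()
    return any(kw in lowered for kw in keywords)

def run_bot(tweets):
    """Collect the tweets the bot would 'like' -- pure string matching, no context."""
    return [t for t in tweets if matches(t)]

stream = [
    "The woeful state of higher education, thread incoming",
    "Lunch was great today",
    "EdTech will save us all, apparently",
]
liked = run_bot(stream)
```

Note that the bot “likes” the critical tweet along with the promotional one: it sees only keywords, never sentiment, which is exactly how an institution ends up liking a tweet that criticizes it.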
One answer can be found in the case of Tay, an AI chat bot created on March 23, 2016 by Microsoft for use on Twitter. Tay was to be an experiment in conversational understanding, a way to “engage and entertain people where they connect with each other online through casual and playful conversation.” Importantly, while masquerading as a simplistic digital chat companion “that can learn,” Tay was also meant to be a mechanism of data aggregation. Microsoft aimed to track the speech patterns of millennials by mining each 18-to-24-year-old user’s nickname, gender, favorite food, zip code, and relationship status. Unfortunately for Microsoft, however, within hours Tay was inundated with sexist and white supremacist data. By the end of the day, Tay was taken offline.
Tay may have proven a resounding public relations failure for Microsoft, one among many this year alone, but it nevertheless demonstrates a key component in the contemporary machinations of capital: the mobilization and aggregation of user desire, as well as that desire’s capacity to be short-circuited.
From a historical perspective, Tay does not represent anything overtly new (especially as a manifestation of gendered technology, as this article by Helen Hester deftly argues). Bots have been around since before the creation of the Internet, one early example being ELIZA, a psychotherapist program created by MIT computer scientist Joseph Weizenbaum in 1966 to mimic human conversation. ELIZA’s function as a ‘computer psychiatrist’ should not be underestimated, as it was Weizenbaum’s intention to ‘trick’ users into thinking they were talking to an actual human through open-ended questions that encouraged them to talk about themselves. More recently, in the early 2000s, SmarterChild was an AI bot that interacted via instant messaging services like AOL Instant Messenger, at one point occupying five percent of all IM traffic. SmarterChild, unlike ELIZA, had a range of utilities, providing information about weather and stocks in addition to being a simple chat buddy.
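Weizenbaum’s ‘trick’ rested on pattern matching and pronoun reflection rather than understanding. The following toy sketch, loosely in the spirit of ELIZA’s script (the specific rules here are invented for illustration), shows how little machinery is needed to turn a user’s statement back into an open-ended question:

```python
import re

# Toy ELIZA-style responder: match a pattern in the user's statement,
# swap first-person words for second-person ones, and return the
# fragment as an open-ended question that keeps the user talking.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
]

def reflect(fragment: str) -> str:
    """Swap pronouns so the user's words can be mirrored back."""
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(statement: str) -> str:
    for pattern, template in RULES:
        m = pattern.match(statement.strip())
        if m:
            return template.format(reflect(m.group(1)))
    return "Tell me more."  # default prompt when nothing matches

print(respond("I feel nobody reads my tweets"))
# → Why do you feel nobody reads your tweets?
```

No model of the world, no memory, no comprehension: just regular expressions and a dictionary, which was enough to convince some of Weizenbaum’s users they were confiding in a person.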
Today, the near ubiquity of immersion in digital architecture has driven a resurgence in bots. From social media bots on Twitter and Facebook to digital assistants like Siri and Cortana to bots from Slack and Taco Bell, bots act as digital aides and automated middlemen, deciding which information we receive and from where. Central to their proliferation, the affective capabilities of these digital beings (e.g., their utility and personality), far from being peripheral niceties, are integral in generating user interest. As Tiziana Terranova has argued, “If information is bountiful, attention is scarce because it indicates the limits inherent to the neurophysiology of perception and the social limitations to time available for consumption.” The best bots, therefore, make you want to talk to them. So while scarcity has characterized the economic landscape for much of human history (and still does for large segments of the population), we now often find ourselves in environments of excess. In such a milieu, search functions and data-parsing algorithms become necessities in the face of information overabundance. But, and this is crucial, these are not agenda-less tools.
If we are in an age of data overload, the ability to stand out amongst the mountains of info we are confronted with is a valuable commodity. In such a world, bot automation and mechanisms of augmented reality provide a means of access akin to highway signage, stairs, or glasses. If you don’t know the neighborhood, so to speak, these digital indicators clue you in. The problem, as seen in the case of Tay, however, is that bots are susceptible to all kinds of heinous discourse, be it racist, sexist, transphobic, ableist, or some multiplicitous horrifying amalgam. More than this, like any medium, bots are not just passive receptors for cultural ideals, but productive of them. Tay was not simply a blank slate onto which bigoted Internet users projected their own ideas of race and gender (ideas which, since they aren’t “natural,” those users obviously got from somewhere); Tay was created to learn millennial speech patterns and thereby better advertise to millennials. Microsoft wanted easy access to user data, but ignored the ability of users to adversely affect that outcome. The objects we interact with on a daily basis have biases built into them: highway signage directs you towards particular locations and away from others; stairs require a specific set of abilities to traverse while actively normalizing those abilities; Tay was built to cultivate ad revenue. But each is affectable.
And so, these digital assemblers of desire do not represent a radical departure from similar mechanisms of medial desire production in the past. Media across the techno-ontological spectrum, from radio and television, to the written word, to the most basic modes of corporeal communication, are not any more objective or asensory than their digital counterparts. Nor are they any less indicative of hierarchical modes of accessibility. Each inculcates and assembles desire, fraught with relations of power that can nevertheless be re-circuited. How that desire is mobilized, however, is not so much a problem of technology as of information. Tay was taught to be racist and sexist much as the people who taught her once were. We should, therefore, be critical of the technological circulation of violence while recognizing who or what it advantages.
Elizabeth Grosz argues that we must see desire as “what produces, what connects, what makes machinic alliances…[as] an actualization, a series of practices…making reality…it aims at nothing above its own proliferation or self-expansion.” To this end, large-scale capital is not the only site of desire production. Though I was eventually faced with a dissonant Twitter response, my initial reaction was one of excitement. Social media, therefore, cannot simply be seen as part of an ideological system, but as affective, multiple, and assembled: a series of regimes networking instinct and feeling in a circuitous economy of desire, in which likes beget likes beget likes.
If Twitter bots represent an automated iteration of desire production, they do not singularly signify the arrival of a dystopian humanlessness. We too have been embedded in the circuits of machinic desire-production since well before Facebook.
Just ask the Twitter bots. It feels good.
The original article can be found @Cyborgology