Sociotechnical Systems in Fairness & Accountability Research

Exploring the concept applied to social media platform security

In contemporary discussions of fairness and accountability surrounding the mass deployment of ML systems and participatory technologies, understanding a technology like a social media platform as a sociotechnical system is a necessary approach for articulating and addressing the threats emerging alongside these technologies. Sociotechnical studies emphasize how the social context in which a technology is used influences its adoption and use patterns.

I often see the term employed by research institutes like Data & Society, conferences like the ACM Conference on Fairness, Accountability, and Transparency (ACM FAccT), and university research groups like Princeton's CITP and Cornell's Citizens and Technology Lab.

Data & Society

“Our work acknowledges that the same innovative technologies and sociotechnical practices that are reconfiguring society – enabling novel modes of interaction, new opportunities for knowledge, and disruptive business practices and paradigms – can be abused to invade privacy, provide new tools of discrimination, and harm individuals and communities.”

ACM FAccT

“A computer science conference with a cross-disciplinary focus that brings together researchers and practitioners interested in fairness, accountability, and transparency in socio-technical systems.”

Sociotechnical systems studies aim to demonstrate how technical infrastructure and social constructs, traditionally imagined as largely separate, can together constitute a system in which powerful, self-reinforcing, and often unrecognized dynamics develop between previously disparate groups and entities. In the context of social media platforms, user targeting, recommender systems, and content moderation technologies are examples of the technical constructs that enable, reinforce, and govern (or fail to govern) relationships between previously unengaged communities. These novel social proximities may ostensibly be the end platform designers intended; however, empirical research on organized social abuse (such as harassment and disinformation campaigns) demonstrates that usage patterns, and the outcomes they produce, are determined as much by the communities engaging with these features as by the designers of the features themselves.

I am primarily interested in empirical research investigating the measurement, propagation, and effects of disinformation on social media platforms, as well as in the design and consequences of content moderation systems. Encountering discussions of sociotechnical systems so frequently, I'm taking time to understand exactly what the community means when describing an analysis as employing a sociotechnical lens.

I'm starting with a look into the paper titled ‘Entanglements and Exploits: Sociotechnical Security as an Analytic Framework’, authored by Matt Goerzen and Elizabeth Anne Watkins, researchers at the Data & Society Institute, and Gabrielle Lim, a researcher at the Technology and Social Change Project at Harvard's Shorenstein Center. The paper was presented at the 2019 USENIX Workshop on Free and Open Communications on the Internet (FOCI).

The paper focuses on the nature of threats to communities using social media platforms and on the security frameworks in place to address them. By articulating the nature of participatory technologies and employing concepts from sociotechnical systems studies, the authors demonstrate how existing security frameworks, drawn from national security, computer security, and network security contexts, fail to address the emerging threats and inherent vulnerabilities of sociotechnical systems such as social media platforms.

Among the elements of traditional security frameworks the authors address are the nature of the protected objects a security framework defends, the latent vulnerabilities of participatory technologies stemming from deliberate design choices and incentive structures, and the systems of accountability between the public, regulators, platform creators, and other responsible parties. Goerzen et al. investigate three incidents involving abuses of sociotechnical vulnerabilities on social media platforms in order to highlight the nature of these vulnerabilities and to propose an alternative security framework that might better address and anticipate similar abuses in the future. The incidents are the #EndFathersDay hoax, the Christchurch terrorist attack Facebook livestream, and the Endless Mayfly disinformation campaign.

As an aside, I found Paul Leonardi's 2012 paper, which is referenced by Goerzen et al., to be great background on the history of terms like “sociotechnical system”. The paper investigates the increasing use and evolving meaning of the terms scholars applied to interactions between social and technological systems between the mid-2000s and early 2010s.

The meanings and targets of these terms, as Leonardi writes, shadow various simultaneous debates among scholars over theoretical frameworks and the prioritization of practical versus theoretical research. These debates shape the aims of the studies employing terms like “sociotechnical”, and thus the meaning of the terms themselves. The evolution of new technologies over the 20th century, as they came to be defined and differentiated by software rather than hardware, also influenced this language.

All technical systems bear the capacity for abuse by users or external actors exploiting oversights in the system's design. These systems typically adapt in order to protect their intended users, and to encourage them to continue using the system.

For example, the DDoS (distributed denial-of-service) attack, one of the oldest yet most dynamically evolving methods of cybercrime, is a threat to a system's technical vulnerabilities posed by external actors. First seen in 1996, a DDoS attack seeks to make a network resource unavailable to its intended user base by coordinating a network of machines to flood a target with junk requests, overwhelming the target's capacity to communicate and effectively severing legitimate users from the target infrastructure.
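To make the mechanics concrete, here is a minimal toy simulation, entirely my own sketch rather than anything from the post's sources: a server with fixed capacity per time step serves a random subset of incoming requests, so a flood of junk requests crowds out legitimate traffic. All names and numbers are hypothetical.

```python
import random

def serve(requests, capacity):
    """Serve at most `capacity` requests; the rest are dropped.

    The server cannot distinguish junk from legitimate traffic,
    so it effectively serves a random subset of what arrives.
    """
    random.shuffle(requests)
    served = requests[:capacity]
    return sum(1 for r in served if r == "legit")

random.seed(0)

CAPACITY = 100                 # requests the server can handle per step
legit = ["legit"] * 50         # the intended user base's traffic

# Normal load: all 50 legitimate requests fit comfortably within capacity.
served_normal = serve(list(legit), CAPACITY)

# Under attack: 10,000 junk requests from a botnet swamp the same capacity,
# so almost none of the legitimate requests make it through.
served_attacked = serve(list(legit) + ["junk"] * 10_000, CAPACITY)

print(served_normal, served_attacked)
```

The point of the sketch is that nothing is "broken" in the server's logic; legitimate users are simply starved of a finite resource, which is why defenses focus on filtering and absorbing traffic rather than patching a bug.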

Anticipating and defending against technical vulnerabilities like DDoS attacks is largely achieved by focusing on the technical aspects of the attack. However, the social components that link the users of the target infrastructure, or that link the technical vulnerabilities to potentially hostile actors, offer a distinct set of factors that can help model and anticipate these attacks.

On June 12, 2019, the encrypted messaging service Telegram was briefly disrupted for users in Hong Kong by a massive DDoS attack later linked to a network of bots with IP addresses mostly originating in China. The attack came as protesters in Hong Kong were depending heavily on Telegram to coordinate demonstrations as part of the Anti-Extradition Law Amendment Bill Movement. This is to say, the attack on Telegram's technical vulnerabilities also contained critical social components that, along with the political context of the technology, could have formed a framework to help anticipate this kind of attack.

Still, this type of threat is not what Goerzen et al. address. “Sociotechnical exploits”, the authors argue, are vulnerabilities constituted by “both social and technical components” and exploitable in ways that harm specific communities engaged with a given participatory technology.

One example related in the paper is the #EndFathersDay hoax of June 2014, in which users on 4chan impersonated African-American feminist activists and started the Twitter hashtag #EndFathersDay in an attempt to co-opt, sensationalize, and obfuscate public conversations about feminism and gender discrimination. The campaign succeeded in garnering attention and amplification from conservative media outlets like Fox News; however, it was ultimately exposed and thwarted through the cooperation of independent users.

As Goerzen et al. state, “anonymity…long considered a pillar of free expression on the internet is just one condition that's been exploited”, in this case by organized hostile users who impersonated members of a group in order to discredit it and confuse its aims in the mind of the public. The authors stress that the attackers in this incident played to multiple vulnerabilities, both technical and social.

Rachelle Hampton, a writer at Slate, follows Shafiqah Hudson's investigation as she first encountered and then began tracing the trail of tweets associated with the hoax. Notably, Hampton writes: “To casual observers online, #EndFathersDay appeared to be the work of militant feminists, most of whom were seemingly women of color. To Hudson, the ruse was never anything but transparent…But the hashtag was already trending worldwide”.

According to Hampton, the hoax was ultimately uncovered by individuals like Hudson, who gathered evidence and coordinated a counter-campaign. Even as a successful resolution to the attack, the efforts of people like Hudson demonstrate a critical failure of a platform like Twitter: the virtual and physical work, and the dangers, of combating this kind of attack are distributed to the users rather than to the platform designers, who possess the access to the infrastructure needed to efficiently identify and address the technical factors facilitating such an attack.

Some of the technical vulnerabilities identified by Goerzen et al. include the ease and anonymity of account creation, and the manipulation of metadata, like mentions, that Twitter's technical features use to recommend content, effectively expanding the scope of the threat. The social vulnerabilities the authors identify include a public unfamiliarity that allowed trolls to temporarily establish group membership using the shibboleths of African-American women's online communities. The eagerness of right-wing media outlets to amplify a story like the one the 4chan trolls purported presents another social vulnerability.
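As a purely hypothetical illustration, not a method from Goerzen et al., one crude technical signal for this kind of exploit is a hashtag whose posts come disproportionately from freshly created accounts. The data, field names, and threshold below are all my own assumptions.

```python
from datetime import datetime, timedelta

def share_of_new_accounts(posts, now, max_age_days=7):
    """Fraction of posts whose author account is less than `max_age_days` old."""
    cutoff = now - timedelta(days=max_age_days)
    recent = sum(1 for p in posts if p["account_created"] >= cutoff)
    return recent / len(posts)

# Toy sample of posts on a hashtag (hypothetical data).
now = datetime(2014, 6, 13)
posts = [
    {"user": "a", "account_created": datetime(2014, 6, 12)},  # day-old account
    {"user": "b", "account_created": datetime(2014, 6, 11)},
    {"user": "c", "account_created": datetime(2012, 3, 5)},   # established account
    {"user": "d", "account_created": datetime(2014, 6, 12)},
]

new_share = share_of_new_accounts(posts, now)
suspicious = new_share > 0.5   # arbitrary illustrative threshold
print(new_share, suspicious)   # 0.75 True
```

A signal like this would produce false positives on its own, and, as the authors caution later in the paper, the underlying features (anonymous, easily created accounts) serve beneficial purposes and should not themselves be conflated with the threat.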

These vulnerability sets, constituted by both technical features and social elements, define what Goerzen et al. call sociotechnical exploits. These, as the authors state, “are not related to a software bug or the failure of a particular technical component”, but instead emerge as the social and technical aspects of a system interact. That is, “a tool that works as intended in a narrow set of scenarios…may demonstrate substantial capacity for abuse in another context”.

Unlike with purely technical vulnerabilities, one of the most difficult issues facing those addressing these kinds of attacks, according to the authors, is anticipating which technical features of a system might be exploited, and how. Even when the social context is known, that is, even when one can identify potentially hostile and potentially vulnerable communities in a system, recognizing attacks like the #EndFathersDay hoax is not nearly as straightforward as recognizing a more conventional attack on a system's technical infrastructure, such as a DDoS attack. As Goerzen et al. caution, however, “threats and the features that allow them to come into being, such as anonymous accounts and encrypted messaging, should not be conflated, as the latter serve a beneficial purpose”.

Furthermore, the authors refrain from defining what exactly constitutes an attack exploiting a sociotechnical vulnerability. Many instances, such as coordinated online harassment and the dissemination of conspiracy theories and disinformation, are cited; however, the authors recognize that what constitutes a threat “is typically subjective” and often contingent on understanding the social context of the communities engaged in a given system. Understandably, this adds a great deal of complexity for policymakers and platform designers engaged in designing security frameworks.

The aim of the paper, however, is not to provide a taxonomy of attacks exploiting sociotechnical vulnerabilities. Rather, Goerzen et al. aim to explain the urgent need for an updated security framework that can effectively analyze and shed light on the design choices, incentive structures, and other latent factors embedded in participatory technologies that give rise to the sociotechnical vulnerabilities hostile actors exploit. As the authors elaborate, these factors include the advertising-driven business models, sustained by economies of scale, of platforms like Facebook and YouTube; corporate incentive structures based on performance metrics; and stock-based compensation. Again, the argument is not to eliminate such factors, but to scrutinize them and the potential vulnerabilities they might produce.

What, then, does this security framework look like in practice? One significant function is producing lines of questioning that, as the authors state, systematically identify vulnerabilities and the ways they can be exploited, guiding discussions among policymakers, platform and technology designers, and the communities that are affected.

Goerzen et al. lay out a few of these perspectives and contrast them with similar, but insufficient, lines of questioning that might be posed from more traditional security frameworks. For example, there is the question of specifying the community requiring protection, or the referent object. Other security frameworks, like those applied to information technology, network, and computer systems, emphasize technical aspects, like infrastructure integrity, as the referent object. As the #EndFathersDay hoax episode showed, however, threats to communities in participatory technologies can manifest within the intended operation of the technology, and can even be exacerbated by it. Adequate security frameworks for these technologies must therefore take an active role in modeling the social and technical vulnerabilities in a system, as well as identifying the interactions that might take shape between social and technical elements and serve to accelerate or more widely propagate a threat.

While Goerzen et al. elaborate at many points on the need for more inclusive approaches to designing accountability mechanisms, I felt this aspect of the sociotechnical security framework was not sufficiently differentiated from existing frameworks. As the authors explain, accountability mechanisms are currently exercised almost exclusively by private companies, and among the groups designing and implementing the technical features of participatory technologies. As the #EndFathersDay hoax episode described earlier shows, this approach excludes crucial and unique perspectives from affected communities of users.

The authors allude to alternative approaches that would explicitly incorporate the input of users generally, and of potentially victimized communities specifically. I suspect the authors have in mind mechanisms with less intermediation than the existing user-driven content moderation flags on Facebook or Twitter, but they do not make this explicit. Open auditing of proprietary ML systems is one frequently discussed method for incorporating more perspectives into discussions of accountability.

Miguel Rivera-Lanas
Data Scientist / Engineer

Currently a Data Scientist/Engineer at a hedge fund. Primarily focused on empirical methods to study quantitative and social effects of disinformation propagation, content moderation systems, and computational social science generally.