Efficiency vs. Accountability?

Algorithms, Big Data and Public Administration

Illustration: luckey_sun. CC BY-SA 2.0

Does what you say on Facebook constitute terrorist content? Which jobs will an unemployed individual be required to apply for? Should an individual be investigated by the police? Many of these questions used to be answered by human beings and are increasingly being answered by automated decision-making systems, not only in the private sector but also in public administrations.

These automated decision-making systems raise considerable challenges, not only in each policy area in which they are used but also for public administrations, which must ensure efficiency while safeguarding accountability and the rule of law in the face of rapidly changing technology. It is therefore perhaps unsurprising that how to respond to the challenges associated with ‘algorithms’ or ‘big data’ is currently one of the most hotly debated public policy questions.

The promise of algorithms and big data is frequently seen as the same, even though they are separate technical concepts. Large computing systems are seen as more readily able to solve societal problems than comparable non-computational systems. Whether through the use of algorithms to partially or fully automate human decision-making or through the use of big data to produce unexpected correlations that can inform human decision-making, both approaches require large amounts of data while promising greater efficiency in the functioning of public administrations. As will be discussed below, however, these systems face considerable challenges in regard to their transparency, accountability and even legal and constitutional admissibility that cannot be easily dismissed.


As “software is eating the world” (Andreessen 2011), human beings are increasingly surrounded by technical systems which make decisions that “they do not understand and have no control over” (Article 29 Data Protection Working Party 2013). While this can be disconcerting, it is not necessarily a negative development but rather a product of this particular phase of modernity, in which globalized economic and technological development produces large numbers of software-driven technical artefacts. These “coded objects” (Kitchin and Dodge 2011) embed all manner of decision-making capacities that are relevant for public policy decision makers: Which split-second choices should a software-driven vehicle make if it knows it is going to crash? Do the algorithms of quasi-monopolistic Internet companies have the power to tip elections? Is racial, ethnic or gender bias more likely in an automated system, and how much bias, if any, should be considered acceptable?

None of these questions have easy public policy answers, and yet decision makers can and should spend time grappling with these challenges. Historically, many of the decisions on how to develop such software have been made by private companies, following whichever economic, legal and ethical frameworks those companies saw fit. Increasingly, however, these systems are also implemented by public administrations, raising considerable challenges for those organisations. As public administrations are held to a higher standard than private sector organisations (Bovens, Goodin, and Schillemans 2014), public sector organisations cannot simply implement big data or automated decision-making systems in the same way as the private sector.

Defining Algorithms

So what actually are algorithms? The definition used here starts from Tarleton Gillespie’s assumption that “algorithms need not be software: in the broadest sense, they are encoded procedures for transforming input data into a desired output, based on specified calculations. The procedures name both a problem and the steps by which it should be solved.” (Gillespie 2014:167) Thus it can be suggested that algorithms are “a series of steps undertaken in order to solve a particular problem or accomplish a defined outcome.” (Diakopoulos 2015:400)
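Gillespie’s definition can be made concrete with a deliberately simple sketch. The example below is purely illustrative (all names, rules and weights are invented, not drawn from any real system): it encodes a procedure that names both a problem, ranking benefit applications, and the steps by which that problem is solved.

```python
# Hypothetical illustration of Gillespie's definition: an encoded
# procedure transforming input data into a desired output. The
# "problem" is ranking benefit applications; the "steps" are the
# scoring rules below. All names and weights are invented.

def score_application(application: dict) -> float:
    """Transform input data (an application) into an output (a score)."""
    score = 0.0
    if application.get("documents_complete"):
        score += 2.0
    # Cap the unemployment duration so one feature cannot dominate.
    score += min(application.get("months_unemployed", 0), 24) * 0.1
    return score

def rank_applications(applications: list) -> list:
    """The desired output: applications ordered by the encoded procedure."""
    return sorted(applications, key=score_application, reverse=True)

apps = [
    {"id": "A", "documents_complete": True, "months_unemployed": 3},
    {"id": "B", "documents_complete": False, "months_unemployed": 30},
]
ranked = rank_applications(apps)
```

Even in this toy form, the consequential choices (which features count, how they are weighted, where the cap sits) are invisible to the applicant, which is precisely why the question of which algorithms matter arises next.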

However, saying what algorithms are is not the same as defining which algorithms matter. For the purposes of this report it seems reasonable to limit the scope to algorithms which are digital (Diakopoulos 2015) and of “public relevance” (Gillespie 2014:168). Moreover, in order to separate out the specific human rights dimensions of algorithms, this report will focus on algorithmic decision-making, i.e. when algorithms make decisions in an automated or semi-automated fashion. This type of decision-making is often subjective in that there is no obvious right or wrong answer; rather, the judgement of a human being was previously used to make a subjective determination that is now being made by an automated system (Pasquale 2015:8).

Finally, it should be noted that algorithms as discussed here do not exist meaningfully without interaction with human beings. They are deeply entangled with practice and with the “promise of algorithmic objectivity” (Gillespie 2014:168), both of which serve to create the social and institutional conditions in which algorithms have effects on real human beings. It is heavily misleading to claim that computing systems are, or even can be, neutral; rather, technologies are deeply social constructs (Winner 1980, 1986) with considerable political implications (DeNardis 2012).

New or old challenges for Human Rights?

Are the challenges related to algorithms something new? It is possible to find articles from more than 45 years ago which discuss infringements of the right to privacy associated with automated data processing (Sills 1970). Moreover, data protection regulation has produced some of the key regulatory instruments for algorithms, such as the “right to explanation” in the EU’s General Data Protection Regulation (Goodman and Flaxman 2016) or the right of access to “knowledge of the logic involved in any automatic processing of data concerning him” in Directive 95/46/EC. However, one of the main challenges in this area is that data protection is often understood in an individual rather than a collective sense (Mantelero 2016), which suggests a false sense of agency for individuals. It is also in this context that the European Data Protection Supervisor (EDPS) appointed an Ethics Advisory Group to go beyond the boundaries of existing data protection law in search of a new digital ethics.
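What access to “the logic involved” might mean in practice can be sketched in code. The example below is a hypothetical illustration (the eligibility rules and thresholds are invented): it pairs an automated decision with a human-readable trace of the rules that produced it, the kind of artefact a “right to explanation” presupposes.

```python
# Hypothetical sketch: an automated decision that records which rules
# fired, so that "the logic involved" can be disclosed to the data
# subject. The rules and the 30000 threshold are invented examples.

def decide_with_explanation(record: dict):
    """Return (decision, reasons): the outcome plus the rules that produced it."""
    reasons = []
    eligible = True
    if record.get("income", 0) > 30000:
        eligible = False
        reasons.append("income above 30000 threshold")
    if not record.get("resident", False):
        eligible = False
        reasons.append("residency requirement not met")
    if eligible:
        reasons.append("all eligibility rules satisfied")
    return eligible, reasons

decision, explanation = decide_with_explanation(
    {"income": 45000, "resident": True}
)
```

Note that such a trace only explains rule-based logic; for statistical or machine-learned models, producing a comparably faithful explanation is a much harder and still contested problem.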

Another human right that is evidently affected by the usage of algorithms is Freedom of Expression. The report of the UN Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, David Kaye, to the thirty-second session of the Human Rights Council (A/HRC/32/38) suggests that “search engine algorithms dictate what users see and in what priority, and they may be manipulated to restrict or prioritize content” (Kaye 2016:7) and that “platforms deploy algorithmic predictions of user preferences and consequently guide the advertisements individuals might see, how their social media feeds are arranged and the order in which search results appear” (Kaye 2016:16).

Another key fundamental right frequently cited in this context is the right to Protection against Discrimination. Various discriminatory patterns arising from the usage of algorithms are frequently suggested to violate this right (Caliskan-Islam, Bryson, and Narayanan 2016; Tufekci et al. 2015), and there are also suggestions that certain forms of algorithmic decision-making lead to “social sorting” (Lyon 2003). Beyond the three fundamental rights discussed above, there are numerous other areas in which human rights may be affected by algorithms, including the rule of law (Pasquale 2015; Joerden 2015), the right to free elections (Bond et al. 2012), workers’ rights (Irani 2015) and even the Right to Life (Asaro 2013). A similar elaboration could be made for almost any other human right; suffice it to say that there are evidently human rights aspects to the usage of algorithms, and that these are worthy of further study by policy makers.

Public Administration and providing Government Services

Yet despite these challenges and potential human rights implications, there are strong indications that the public sector is employing automated decision-making in areas as diverse as social security, health care, the monitoring of public officials and the justice system. For example, many courts in the United States use a computer program to assess the risk of reoffending, which has been shown to be “biased against blacks” (Kirchner 2016).

Another example relates to the profiling of the unemployed in Poland (Niklas, Sztandar-Sztanderska, and Szymielewicz 2015). In their analysis the authors identified several challenges which hold broadly for the usage of algorithms in other areas of public sector service delivery as well:

1. “Non-transparent rules of distributing public services.
2. Shortcomings of computer systems as a trigger for arbitrary decisions.
3. Gap between declared goals and practice.
4. System based on the ‘presumption of guilt’.
5. Categorization as a source of social stigma.
6. Risk of discrimination.” (Niklas et al. 2015:33–37)
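Several of these challenges can be illustrated in deliberately simplified form by a hypothetical profiling sketch. To be clear, the code below is not the actual Polish system (whose scoring rules were not public); the features, weights and categories are invented to show how opaque categorization and proxy variables interact.

```python
# Hypothetical profiling sketch -- NOT the actual Polish system.
# It illustrates two of the listed challenges: the scoring rules are
# invisible to the person being profiled (non-transparency), and an
# innocuous-looking input such as postcode can act as a proxy for a
# protected characteristic (risk of discrimination).

PROFILE_WEIGHTS = {  # invented weights, hidden from the person profiled
    "age_over_50": 2,
    "no_degree": 1,
    "high_unemployment_postcode": 3,  # potential proxy variable
}

def assign_profile(person: dict) -> str:
    """Map a person's attributes to a category that gates service access."""
    risk = sum(w for feature, w in PROFILE_WEIGHTS.items()
               if person.get(feature))
    if risk >= 4:
        return "III"  # hypothetical category gating access to services
    elif risk >= 2:
        return "II"
    return "I"

person = {"age_over_50": False, "no_degree": True,
          "high_unemployment_postcode": True}
category = assign_profile(person)
```

In this sketch a single proxy feature is enough to push someone into the most restrictive category, which also illustrates how categorization itself can become a source of stigma.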

Finally, there are risks associated with outsourcing key government functions, such as the provision of government benefits, to the private sector. It has been argued in South Africa that operating such privatized government services while simultaneously competing in banking and insurance markets provides an inappropriate competitive advantage to the companies that operate them (Van Lingen 2016). Aside from the competition concerns, there are evident concerns related to privacy and data protection that also arise from such arrangements.


The usage of big data, algorithms and automated decision-making poses considerable challenges to public administrations (van Haastert 2016). While their usage is far more widespread than is currently being discussed publicly, there is a long history of the shift from decision-making by individual bureaucrats to discretion embedded in technical systems (Bovens and Zouridis 2002). As such systems become ever more prevalent, there is a need for a wider public debate on how to ensure public accountability, good governance and effective democratic control in an era of automated decision-making.

Efficiency alone is an insufficient argument for the wholesale delegation of procedural administrative safeguards to a black box, regardless of whether algorithms, big data or automated decision-making are to be found inside it. As pressures on public finances and for greater efficiency in the public sector increase, the struggle over what kinds of automation can reasonably be employed without sacrificing accountability or transparency will only grow more pronounced in the years to come.

The Dutch translation of this article can be found in de Helling’s issue on big data, winter 2016.

Andreessen, Marc. 2011. ‘Why Software Is Eating The World’. Wall Street Journal, August 20. Retrieved 1 September 2016
Article 29 Data Protection Working Party. 2013. Opinion 03/2013 on Purpose Limitation. Brussels, Belgium: Article 29 Data Protection Working Party.
Asaro, Peter. 2013. ‘On Banning Autonomous Weapon Systems: Human Rights, Automation, and the Dehumanization of Lethal Decision-Making’. International Review of the Red Cross 94(886):687–709.
Bond, Robert M. et al. 2012. ‘A 61-Million-Person Experiment in Social Influence and Political Mobilization’. Nature 489(7415):295–98.
Bovens, Mark, Robert E. Goodin, and Thomas Schillemans. 2014. The Oxford Handbook of Public Accountability. OUP Oxford. Retrieved 17 October 2016.
Bovens, Mark and Stavros Zouridis. 2002. ‘From Street-Level to System-Level Bureaucracies: How Information and Communication Technology Is Transforming Administrative Discretion and Constitutional Control’. Public Administration Review 62(2):174–84.
Caliskan-Islam, Aylin, Joanna Bryson, and Arvind Narayanan. 2016. ‘A Story of Discrimination and Unfairness: Implicit Bias Embedded in Language Models’. Pp. 1–2 in Security & Privacy Week 2016. Darmstadt, Germany: TU Darmstadt.
DeNardis, Laura. 2012. ‘Hidden Levers of Internet Control’. Information, Communication & Society (September):37–41.
Diakopoulos, Nicholas. 2015. ‘Algorithmic Accountability’. Digital Journalism 3(3):398–415.
Van Lingen, Elza. 2016. ‘DA Refers Net1’s Abuse of Advantage to Competition Commission’. Democratic Alliance. Retrieved 31 August 2016.
Gillespie, Tarleton. 2014. ‘The Relevance of Algorithms’. Pp. 167–94 in Media technologies: Essays on communication, materiality, and society, edited by T. Gillespie, P. J. Boczkowski, and K. A. Foot. Cambridge Mass.: MIT Press.
Goodman, Bryce and Seth Flaxman. 2016. ‘European Union Regulations on Algorithmic Decision-Making and a Right to Explanation’. in 2016 ICML Workshop on Human Interpretability in Machine Learning. New York, NY: ArXiv e-prints.
van Haastert, Hugo. 2016. ‘Government as a Platform: Public Values in the Age of Big Data’. Oxford Internet Institute.
Irani, L. 2015. ‘Difference and Dependence among Digital Workers: The Case of Amazon Mechanical Turk’. South Atlantic Quarterly 114(1):225–34.
Niklas, Jędrzej, Karolina Sztandar-Sztanderska, and Katarzyna Szymielewicz. 2015. Profiling the Unemployed in Poland: Social and Political Implications of Algorithmic Decision Making. Warsaw, Poland: Panoptykon Foundation.
Joerden, Jan C. 2015. ‘Zum Einsatz von Algorithmen in Notstandslagen’. in 3. Würzburger Tagung zum Technikrecht. Würzburg: Universität Würzburg.
Kaye, David. 2016. Report of the Special Rapporteur on the Promotion and Protection of the Right to Freedom of Opinion and Expression to the Thirty-Second Session of the Human Rights Council. Geneva, Switzerland.
Kirchner, Lauren, Julia Angwin, Surya Mattu, and Jeff Larson. 2016. ‘Machine Bias: There’s Software Used Across the Country to Predict Future Criminals. And It’s Biased Against Blacks.’ ProPublica. Retrieved 31 August 2016.
Kitchin, Rob and Martin Dodge. 2011. Code/Space: Software and Everyday Life. Cambridge, MA: MIT Press.
Lyon, David. 2003. ‘Surveillance as Social Sorting: Computer Codes and Mobile Bodies’. Pp. 13–30 in Surveillance as Social Sorting: Privacy, Risk, and Digital Discrimination, edited by D. Lyon. New York: Routledge.
Mantelero, Alessandro. 2016. ‘Personal Data for Decisional Purposes in the Age of Analytics: From an Individual to a Collective Dimension of Data Protection’. Computer Law and Security Review 32(2):238–55.
Pasquale, Frank. 2015. The Black Box Society: The Secret Algorithms That Control Money and Information. Harvard University Press.
Sills, Arthur J. 1970. ‘Automated Data Processing and the Issue of Privacy’. Seton Hall Law Review 1.
Tufekci, Zeynep, Jillian C. York, Ben Wagner, and Frederike Kaltheuner. 2015. The Ethics of Algorithms: From Radical Content to Self-Driving Cars. Berlin, Germany: European University Viadrina. Retrieved (https://cihr.eu/publication-the-ethics-of-algorithms/).
Winner, Langdon. 1980. ‘Do Artifacts Have Politics?’ Daedalus 109(1):121–36.
Winner, Langdon. 1986. The Whale and the Reactor: A Search for Limits in an Age of High Technology. Chicago: University of Chicago Press.

Director of the Centre of Internet & Human Rights at the European University Viadrina in Frankfurt (Oder).