Ambiguities of algorithmic governance: social harm perspective

By Nea Lepinkäinen and Hanna Maria Malik

Nea Lepinkäinen. Photo: Hanna Oksanen, University of Turku.
Hanna Malik. Photo: Hanna Oksanen, University of Turku.

Political systems face increasing pressure to deliver decisions at ever greater speed, and a failure to keep up with these rising standards brands state bureaucracies as the epitome of slowness, inflexibility and inefficiency. In other words, in modern societies characterized by continuous acceleration, adherence to legalistic procedures collides with the need for faster decision-making. Artificial intelligence (AI) and other automated systems are often presented as a silver bullet. It is clear, however, that this technological transformation will change socioeconomic environments and societal values.

This is one of the themes of the research that Hanna Maria Malik and I conduct at the Faculty of Law, University of Turku (Finland), and which we were glad to introduce to the great audience at the NSfK conference.

Hanna and I first met a few years ago, when we started working in AALAW (project number 315007), a project funded by the Academy of Finland whose acronym stands for Algorithmic Agencies and Law. The project seeks to understand how algorithms and machine learning may considerably change the traditional understanding of (legal) agency. As the AALAW website states, modern technologies indeed have the capacity to “move many decisions to other than human actors, new, opaque logic formations, inaccessible processes, and, possibly, beyond the effective control of any single human or human collective”. When we joined forces, I had already started working on my PhD on AI and its legal obscurity, while Hanna had accumulated broad expertise in critical criminology, social harm studies and zemiology. Together we started to explore what the rise of AI actually means for the understanding of social harms, harm production, and possible solutions to these harms.

At the time, another Academy-funded project, ETAIROS (project number 327357), concentrating on the ethical governance of AI, started at our faculty. The foundation of ETAIROS lies in the understanding that AI can create “new challenges for regulation and control”, and that thus “rules need to be created in constantly changing environment where the direction and speed of the application development is difficult to predict.” This understanding of AI as a fast-paced transformative force in society can be seen in our first article, Dynamics of Social Harm in Algorithmic Context (2022), which we co-authored with Mika Viljanen and Anne Alvesalo-Kuusi. The article focuses on the aetiology of socially mediated harms, analyzing how algorithms challenge the traditional understanding of harm production. Via case examples we explore how algorithms can and do influence the dynamics of social harms. The analysis shows that algorithms indeed systemize and accelerate harm production through the centralization of decision-making and the interconnectedness of digital environments, and blur the perception of harms altogether, undermining the private and public ability to track and address them. Even though algorithmic harms may appear fairly similar to those caused by analogue systems, we show that the production patterns, the speed, and the possibilities for thwarting the chain of harmful events change considerably when algorithmic systems come into play.

The work on that article inspired us to continue with the innumerable questions left unanswered about the role of algorithms in socio-econo-technological relations. Finland, like the other Nordic countries among the favorite socio-economic systems of social harm scholars, seemed at the time to have avoided serious algorithmic harms. To understand how and why, we decided to explore how Finnish authorities understand AI and the effects it has on our social and legal organization.

This starting point can be seen in our newer articles. In Discourses on AI and Regulation of Automated Decision-Making (2022), we concentrate on the law-drafting project on the use of automated decision-making (ADM) in public administration. We use critical discourse analysis to scrutinize selected statements given during the drafting process of the so-called ADM law and find that five discourses surround automation, framing AI as an Improver, an Enabler, as Inevitable, as Just-as-a-human and, to a lesser extent, as a Risk. By combining these results with Hartmut Rosa's theory of social acceleration, we see that the discourses reflect the struggle to keep up with an accelerating society, and this struggle provides a plausible explanation as to why AI-driven technologies are predominantly seen as desirable solutions for enhancing administrative processes. At the same time, risks and possible social harms are left without adequate consideration.

As algorithmic solutions are strongly championed in the statements we analyzed, we became curious about the other side of the story. We explore it in our third article, Between Analogue and Algorithmic Harms – The Case of Automation in the Finnish Immigration Services. The article dives into the automation project of the Finnish Immigration Services, showing contradictions in the areas of immigration and the automation of public administration. Bringing together journalistic articles and an empirical analysis of selected political and legal documents from the areas of immigration and AI, we reveal not only a shattered vision of AI but also an inadequate understanding of overlapping policy problems and of the harms AI may generate in different societal areas. At the same time, the article highlights the urgent need for solutions, algorithmic or analogue, as the jammed immigration services leave one of the most vulnerable groups, asylum seekers, in a long-lasting bureaucratic limbo.

We continue our reflections on the shades of algorithmic transformation, in the context of social harms, their production and their possible alleviation, in the Editorial to the Special Issue of Justice, Power and Resistance, Social harms in an algorithmic context (forthcoming), guest edited by Hanna and me together with Mika Viljanen and Anne Alvesalo-Kuusi. The Issue emerges from our concern that “the inevitable transformation into what Powell et al. (2018) term digital societies, gives rise to a new set of questions concerning the conceptualisation, control, prevention and study of social harms and the crimes of the powerful,” as we write in the Editorial. We see the Special Issue as a starting point for a broader discussion on algorithmic transformation in the fields of critical criminology, corporate criminology, social harm studies and zemiology.

Nea Lepinkäinen is a doctoral researcher at the University of Turku, Faculty of Law (UTULAW). She studies AI and autonomous systems and the problems they may raise in the legal field, especially in criminal law and criminology. Her expertise lies in questions of social harms arising from the lack of regulation and the increasing speed of social and technological change. She has been working with the Turku AI Society to increase public knowledge of AI and law.

Hanna Maria Malik is a postdoctoral researcher at UTULAW. She has studied social harms generated at the state-corporate-technology nexus and regulatory responses to these harms, using comparative legal and qualitative empirical methodologies. Malik focuses particularly on the adverse effects of otherwise socially desirable processes, such as digitalization in the public and private domains, the flexibilization of labor and Europeanization, and on the ambiguity of their regulation.