Sub-theme 40: Organizing in the Age of the Algorithm ---> MERGED with sub-theme 11

Convenors:

Call for Papers


Artificial intelligence (AI) and algorithmic decision-making have become central concerns when contemplating the future of organizing, organizations and society at large (Kaplan & Haenlein, 2019; Moore, 2019). They are affecting domains as diverse as financial transactions, autonomous war machines, medical diagnosis, crime prevention, credit ratings and elderly care. Hence, some argue that organizations and organizing are becoming more algorithmic (Andrews, 2019). But what exactly does it mean that organizations and organizing are becoming more algorithmic? Thinking further still, what is it that algorithms actually do, and how does the discourse of AI potentially affect organizations? We envision this sub-theme as a forum that invites and develops debate around these questions.
 
What algorithms actually do is of interest to organization studies for several reasons. For instance, because “authority is increasingly expressed algorithmically” (Pasquale, 2015: 8), algorithms affect decision-making processes in organizations. The organization is a polis: a place where situations are debated, information is shared and decisions are made, albeit often not in a spirit of equality or democracy. The way algorithms make decisions, however, is at variance with this. In fact, it may usher in a new era of formal “hyper” rationality (Lindebaum et al., 2020) that sees humans as an obstacle to efficient human societies. In addition, through the promise of greater efficiency, algorithms purport to facilitate the achievement of goals and strategies in unprecedented ways. They are therefore set to profoundly affect processes and outcomes within organizations and society. Yet they pose fundamental questions about accountability, confidence, change, and control (Kaplan & Haenlein, 2019). A first point of inquiry thus relates to how AI and algorithms are being integrated into everyday organizational practices and processes as decision-making tools that allow for ever-greater decision-making speed in relation to an ever-growing amount of data.
 
In terms of discourse, the sub-theme recognizes that “talked about” phenomena (see, for example, Lindebaum et al., 2018) or technological artifacts can be mobilized and subverted in ways that potentially go beyond the actual function of those phenomena or the capacity of the technology. We suggest that the “talk about” algorithms can create self-fulfilling theories (i.e., theories that not only describe but shape reality; see Marti & Gond, 2018). But it does more: the talk about algorithms inherently disciplines power relations (when it amplifies their appeal) and resistance (when it disrupts their appeal) in organizations. Discourses of power – be they organizational, economic, political, or social in kind – can imply that the actual technological capabilities of AI are made subservient to the interests of those agents that promote them. We suspect that, in the future, AI has the potential to dethrone earlier discourses such as “globalization” or “deregulation” as the disciplining discourse in the vocabulary of the power elite. As AI is inherently difficult to unbox and render transparent, the power discourses of AI exploit this characteristic to render it an elusive object that is super-moldable to the needs of those who desire to control power relations in organizations and beyond. In short, there is little change that cannot be legitimated by referring to the demands that AI technology places on us.
 
It is against the backdrop of these reflections that we encourage submissions addressing the following indicative, rather than exhaustive, list of questions:

  • How do we need to conceive of “agency” in an era of AI and algorithms, especially when they become ever more interconnected and networked? How can we better understand their decision-making; who puts these structures into place; and where do they start and stop? How, if at all, can we control and intervene in AI decision-making?

  • How are products of AI, such as machine-learning based analytical reports, transformed into powerful assets for organizational decision-making, and with what consequences? How are products of AI being deployed to control, shape and contain access to information in domains ranging from customer profiling to state-implemented firewalls and state-sponsored oppression of ethnic minorities?

  • How is AI being mobilized discursively to discipline and shape society’s relationship with its own future; what can we not do because of AI? Who wields such discourses, and what agendas are being woven into such discourses? What is the relationship between the discursive world of AI and the actual technological capabilities of AI?

  • How does AI change organizational processes in relation to issues such as work design, decision-making, organizational learning, organizational control, or organizational accountability? How are such changes to be valued?

 
There is no restriction to any particular body of theory, school of thought, or methodological preference. Instead, we are looking for an eclectic and thought-provoking body of contributions that tackle the emerging phenomena in and around AI.
 
 

References

  • Andrews, L. (2019): “Public administration, public leadership and the construction of public value in the age of the algorithm and ‘big data’.” Public Administration, 97 (2), 296–310.
  • Kaplan, A., & Haenlein, M. (2019): “Siri, Siri, in my hand: Who’s the fairest in the land? On the interpretations, illustrations, and implications of artificial intelligence.” Business Horizons, 62 (1), 15–25.
  • Lindebaum, D., Geddes, D., & Jordan, P.J. (eds.) (2018): Social Functions of Emotion and Talking About Emotion at Work. Cheltenham, UK: Edward Elgar.
  • Lindebaum, D., Vesa, M., & den Hond, F. (2020): “Insights from The Machine Stops to Better Understand Rational Assumptions in Algorithmic Decision-Making and Its Implications for Organizations.” Academy of Management Review, 45 (1).
  • Marti, E., & Gond, J.-P. (2018): “When Do Theories Become Self-Fulfilling? Exploring the Boundary Conditions of Performativity.” Academy of Management Review, 43 (3), 487–508.
  • Moore, P.V. (2019): “Jerry Kaplan, Artificial Intelligence: What Everyone Needs to Know.” Organization Studies, 40 (3), 466–470.
  • Pasquale, F. (2015): The Black Box Society. Cambridge, MA: Harvard University Press.