Sub-theme 18: Algorithms We Live by and Resist in Organizational Life [hybrid]
Call for Papers
Artificial Intelligence (AI) and machine learning are portrayed as an emerging promise in our evolving world, heralding
a digital and as yet unknown organizational future. Consumer-oriented mobile technologies offer new ways of capturing a digital
understanding of how people in organizations behave and interact (e.g. Mayer-Schönberger & Cukier, 2014). AI is seen as
a promissory solution to urgent organizational problems like workforce shortage, unmet social needs, welfare inequality, personalized
medicine and precision welfare. However, AI is also viewed as a serious threat to human well-being. The ‘googlization’ of
healthcare and welfare would enlarge our dependency on corporate actors like Apple, Amazon and Facebook for the delivery of
essential public goods (Sharon, 2021). Also, automated eligibility systems and predictive risk models increasingly decide
on who receives public resources, who is short-listed (or not) for employment, and who is ‘red flagged’ and investigated for
fraud – and hence who is denied instant and necessary resources (Eubanks, 2018). Rather than enhancing well-being and good
organizational life, AI would reinforce traditional power relationships and discriminate against those who most need protection and
support (D’Ignazio & Klein, 2020).
In this sub-theme, we reconsider the big dreams, threats, and contested
imaginaries of AI in organizational life. By conceiving of AI both as an ‘object of concern’ and an ‘object of ignorance’ (O’Doherty
& Neyland, 2019), we explore how organizations organize for and work with AI, and the contingencies that are thereby produced (Alaimo &
Kallinikos, 2021; Stevens et al., 2020). We want to empirically scrutinize and theoretically conceptualize how AI emerges
from the micro-activities of organizational actors, data scientists and clients, and how these activities become part of AI-driven
service delivery – but also, conversely, which organizations or occupational practices may remain indifferent to the algorithmic
wave. We explore how algorithms, machine learning and AI shape and reshape organizational practices, eligibility criteria,
work practices and client experiences, and how organizations (both public and private) act upon these transitions (Hansen
et al., 2018; Willems & Hafermalz, 2021). We are interested in how AI emerges as, and is embedded in, a new way of thinking
about organizations and organizational control, how it disciplines both workers and recipients, but also how it may give rise to
practices of algorithmic indifference and resistance (e.g. in the form of ‘algoactivism’, Kellogg et al., 2020) – and with
what consequences.
We invite contributions that wish to explore the (contested) organizational life of AI
and related practices, envisioning its complexity, uncertainty, and performative practices, as well as the ‘caring solutions’ to
the problems encountered. Papers may address the following questions, but we are also interested in related contributions
exploring the (imaginative) practices of AI use and digital organizational futures:
How does AI shape work practices, decision-making processes, and client encounters in organizations?
How does AI impact on the relationships between organizations (e.g. in relation to its ongoing need to ‘scale up’ data)?
How does AI impact on organizational values and institutionalized organizational routines (e.g., in relation to professional values, hierarchical relationships, career opportunities)?
How do the disciplinary use of data and emerging forms of resistance play out in organizational life (e.g. acts of algoactivism, algorithmic indifference)?
What caring practices does AI evoke in order to protect vulnerable groups, or, conversely, what lack of care is produced, e.g. through the exclusion of marginalized groups or topics from data?
How do organizations respond to the transition from street-level to screen-level or system-level bureaucracies? What new goods are being produced in this transition?
References
- Alaimo, C., & Kallinikos, J. (2021): “Managing by Data: Algorithmic Categories and Organizing.” Organization Studies, 42 (9), 1385–1407.
- D’Ignazio, C., & Klein, L.F. (2020): Data Feminism. Cambridge: MIT Press.
- Eubanks, V. (2018): Automating Inequality. How High-Tech Tools Profile, Police, and Punish the Poor. New York: St. Martin’s Press.
- Kellogg, K.C., Valentine, M.A., & Christin, A. (2020): “Algorithms at work: The new contested terrain of control.” Academy of Management Annals, 14 (1), 366–410.
- Mayer-Schönberger, V., & Cukier, K. (2014): Big Data: A Revolution that Will Transform How We Live, Work, and Think. Boston: Mariner Books.
- O’Doherty, D., & Neyland, D. (2019): “The developments in ethnographic studies of organising: Towards objects of ignorance and objects of concern.” Organization, 26 (4), 449–469.
- Sharon, T. (2021): “Blind-sided by privacy? Digital contact tracing, the Apple/Google API and big tech’s newfound role as global health policy makers.” Ethics and Information Technology, 23 (Suppl. 1), 45–57.
- Stevens, M.J., Wehrens, R.L.E., & de Bont, A.A. (2020): “Epistemic virtues and data-driven dreams: On sameness and difference in the epistemic cultures of data science and psychiatry.” Social Science & Medicine, 258, https://www.sciencedirect.com/science/article/pii/S027795362030335X?via%3Dihub.
- Hansen, H.-T., Lundberg, K., & Syltevik, L.J. (2018): “Digitalization, street-level bureaucracy and welfare users’ experiences.” Social Policy & Administration, 52, 67–90.
- Willems, T., & Hafermalz, E. (2021): “Distributed seeing: Algorithms and the reconfiguration of the workplace, a case of ‘automated’ trading.” Information and Organization, 31 (4), https://www.sciencedirect.com/science/article/abs/pii/S1471772721000427?via%3Dihub.