Sub-theme 11: [SWG] AI at Work

Convenors:
Marleen Huysman
Vrije Universiteit Amsterdam, The Netherlands
Paul Leonardi
University of California, Santa Barbara, USA
Stella Pachidi
University of Cambridge, United Kingdom

Call for Papers



Overview & Summary

With the rapid developments in Artificial Intelligence (AI), work is fundamentally changing, and rampant predictions about job losses and job gains have captured the public’s attention. Current debates on AI and work are typically framed around its consequences for jobs.
Our theoretical insights into technologies in practice and their impact on work and organizing need to be updated with empirically grounded research into both AI’s development and its implications for organizations and professionals. AI systems that are currently being developed and implemented in organizations are crucially different from prior ‘rule-based’ expert systems, bringing about novel risks for organizations and knowledge work. Their autonomous, self-learning capabilities often black-box knowledge, thereby raising fundamental questions about what kind of expertise and skill will be needed and valued in the future.
This sub-theme calls for papers that provide a deeper understanding of how AI is developed and implemented in organizations, as well as how and why AI systems impact knowledge work. Such understanding is urgently needed to empirically interrogate the overly optimistic as well as overly pessimistic scenarios expressed by commentators on the sidelines of the actual development, implementation and use of AI at work.



Full Description of the Sub-theme

Artificial Intelligence (AI) refers to a “field of computer science dedicated to the creation of systems performing tasks that usually require human intelligence, branching off into different techniques”; machine learning, in particular, concerns a narrower domain of AI “that includes all those approaches that allow computers to learn from data without being explicitly programmed” (Pesapane et al., 2018, p. 2). AI is often discussed as a new set of technologies that rapidly change work (e.g. Brynjolfsson & McAfee, 2014; Susskind & Susskind, 2015), and predictions about AI’s role in bringing about job losses have reached newspaper headlines. While it cannot be denied that AI at work has clear benefits for many jobs – especially when these systems take over tedious, repetitive and dull tasks – current machine learning-based AI systems go further and take over knowledge-intensive tasks. In fact, current AI is entering the domain of experts who have developed their expertise over many years of higher education and on-the-job training, making even ‘tacit knowledge’ (expressed by Polanyi’s [1966] famous phrase: “We know more than we can tell”) no longer safe from automation. AI in health imaging, for example, might render radiologists’ expertise in spotting tumors in images obsolete (Lebovitz, 2019); people analytics algorithms provide insights into candidates’ profiles that were previously unknown to HR professionals; and legal AI applications give judges recommendations on bail terms by predicting the likelihood of re-offending (Fry, 2018). This raises fundamental questions, such as who will control complex knowledge-intensive tasks, who will be accountable for complex decisions, and whether the expertise of highly educated professionals will even be needed in the future (Faraj et al., 2018; von Krogh, 2018; Kellogg et al., 2020).
 
AI systems that are currently being developed and rolled out in organizations are crucially different from ‘rule-based’ expert systems, bringing about novel risks for organizations and knowledge work. First, in contrast to previous generations of AI systems, which still depended on the motivation and ability of domain experts to contribute their expertise to the system (Huysman & Wulf, 2004; Huysman & de Wit, 2004), designers of machine learning AI use “training data” in order to create more ‘objective’ AI systems, developed almost or completely independently from domain experts (Michalski, 1986). Second, in contrast to earlier generations, strong AI systems produce output that is inscrutable, even to their creators. As a result, knowledge gets black-boxed and it may become untraceable how an algorithm produces a decision. These two characteristics of machine learning systems, their autonomous, self-learning capability and their ability to black-box knowledge, may increase people’s dependence on machines and even render whole categories of knowledge work obsolete.
 
Most AI researchers approach knowledge as mainly cognitive, stored in the minds of individuals. To reveal the interaction between AI and knowledge at work in an organizational context, a practice perspective that sees knowledge as socially constructed, contextually embedded, and constantly in flux is more helpful. For example, previous research on developing expert systems revealed the difficulty of automating expertise (e.g. Forsythe, 1993; Suchman et al., 1999), not so much because of an inability to fully understand individual cognition (as most AI researchers assume), but because of the social nature of knowledge, which cannot be codified in systems without losing its meaning (Brown & Duguid, 2017; Dreyfus, 1992). Moreover, a practice perspective takes a broader view of knowledge, including the shared worldviews and values that underpin assumptions about what counts as knowledge. Such a broader view helps, for example, to reveal the epistemological differences between AI developers and experts that complicate the mission to ensure that AI actually augments knowledge work. A practice perspective on AI at work is also needed to empirically test the often-pessimistic scenarios expressed by commentators on the sidelines. For example: does AI indeed negatively influence how experts make decisions and generate knowledge? What are the consequences for the organization of knowledge? What is the role of organizational decision-makers in this, and how do they decide which knowledge can be outsourced to AI and which cannot?
 
Domain experts have gained legitimacy by generating, enacting, and protecting specialist knowledge (Abbott, 1988). As AI enters the workplace to either replace or assist these experts in their work, we expect that such established processes will change, but we do not yet know how. For example, are art historians, previously respected for their “expert eye” and its ability to recall, recognize and analyze artworks, now working alongside or in competition with the Google image algorithm (Sachs, 2019)? We need more insight into the long-term (unintended) consequences of using AI in knowledge work. What are the ‘ripple effects’ of AI technology implementation? For example, novices working alongside AI have fewer opportunities for ‘situated learning’, and new or hybrid roles are emerging (Waardenburg et al., 2018).
 
AI developers also typically emphasize the superiority of their tools by contrasting the objectivity of insights provided by AI with the “subjectivity” of domain experts and the limitations of their cognitive capacities (Elish & Boyd, 2018). While fierce critiques of the validity of such knowledge claims have emerged (e.g. Crawford & Calo, 2016; Selbst et al., 2019), we need empirical insight into how AI developers enroll organizations to buy into the superiority of AI decision making and why, despite the controversy, organizations are eager to accept such knowledge claims. Finally, organizational decision-makers play an important but often ignored role in the implementation of AI. It is important to understand what assumptions and rationalizations drive the decision to purchase and implement AI systems. For example, how do managers decide which aspects of expert work are suited for delegation to an AI system?
 
We welcome papers based on empirical studies (both qualitative and quantitative) as well as conceptual papers. A non-exhaustive list of relevant research topics includes:

  • How does AI at work help to achieve an inclusive society?

  • How do AI developers integrate domain knowledge when developing AI systems?

  • What strategies do AI development organizations use to convince external audiences of the knowledge potential of AI?

  • How do intelligent machines automate, augment and informate work and organizational processes?

  • How do AI technologies afford changes in the nature of work?

  • What questions does AI raise regarding responsibility, accountability, ethics and justice in the public and private sector?

  • How does AI affect professional expertise and the evolution of professions?

  • How does AI shape knowledge work and the ways in which knowledge is acquired and shared in organizations?

  • What are the consequences of AI for how workers are managed and how they respond and perform?

  • What is the relationship between AI and the organization of work and value chains, including the emergence of labor platforms?

  • How do AI implementation and use create new forms of work?

  • What are the unintended consequences of AI implementation? For example, what counter-performances do workers engage in to cope with AI’s impact on them?

  • What are the effects of the emergent work changes on organizational boundaries, business models and entrepreneurship?

 


References


  • Abbott, A. (1988): The System of Professions: An Essay on the Expert Division of Labor. Chicago: University of Chicago Press.
  • Brown, J.S., & Duguid, P. (2017): The Social Life of Information. Updated, with a new preface. Boston: Harvard Business Review Press.
  • Brynjolfsson, E., & McAfee, A. (2014): The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies. New York: W.W. Norton & Company.
  • Crawford, K., & Calo, R. (2016): “There is a blind spot in AI research.” Nature, 538 (7625), 311–313.
  • Dreyfus, H.L. (1992): What Computers Still Can’t Do: A Critique of Artificial Reason. Cambridge: MIT Press.
  • Elish, M.C., & Boyd, D. (2018): “Situating methods in the magic of Big Data and AI.” Communication Monographs, 85 (1), 57–80.
  • Faraj, S., Pachidi, S., & Sayegh, K. (2018): “Working and organizing in the age of the learning algorithm.” Information and Organization, 28 (1), 62–70.
  • Forsythe, D.E. (1993): “Engineering knowledge: The construction of knowledge in artificial intelligence.” Social Studies of Science, 23 (3), 445–477.
  • Fry, H. (2018): Hello World: Being Human in the Age of Algorithms. New York: W.W. Norton & Company.
  • Huysman, M., & de Wit, D. (2004): “Practices of managing knowledge sharing: Towards a second wave of knowledge management.” Knowledge and Process Management, 11 (2), 81–92.
  • Huysman, M., & Wulf, V. (eds.) (2004): Social Capital and Information Technology. Cambridge: MIT Press.
  • Kellogg, K.C., Valentine, M.A., & Christin, A. (2020): “Algorithms at work: The new contested terrain of control.” Academy of Management Annals, 14 (1), 366–410.
  • Lebovitz, S. (2019): Diagnostic Doubt and Artificial Intelligence: An Inductive Field Study of Radiology Work. Presented at International Conference on Information Systems (ICIS) 2019 Conference, Munich, December 15–18, 2019; https://aisel.aisnet.org/icis2019/future_of_work/future_work/11/
  • Michalski, R.S. (1986): Understanding the Nature of Learning: Issues and Research Directions. San Francisco: Morgan Kaufmann Publishers.
  • Pesapane, F., Codari, M., & Sardanelli, F. (2018): “Artificial intelligence in medical imaging: Threat or opportunity? Radiologists again at the forefront of innovation in medicine.” European Radiology Experimental, 2 (1), 35.
  • Polanyi, M. (1966): The Tacit Dimension. Garden City, NY: Doubleday.
  • Selbst, A.D., Boyd, D., Friedler, S.A., Venkatasubramanian, S., & Vertesi, J. (2019): “Fairness and Abstraction in Sociotechnical Systems.” Proceedings of the Conference on Fairness, Accountability, and Transparency, Jan. 2019, 59–68.
  • Suchman, L., Blomberg, J., Orr, J.E., & Trigg, R. (1999): “Reconstructing technologies as social practice.” American Behavioral Scientist, 43 (3), 392–408.
  • Susskind, R.E., & Susskind, D. (2015): The Future of the Professions: How Technology will Transform the Work of Human Experts. Oxford: Oxford University Press.
  • von Krogh, G. (2018): “Artificial intelligence in organizations: New opportunities for phenomenon-based theorizing.” Academy of Management Discoveries, 4 (4), 404–409.
  • Waardenburg, L., Sergeeva, A., & Huysman, M. (2018): “Hotspots and Blind Spots.” In: Working Conference on the Interaction of Information Systems and the Organization, Proceedings. San Francisco, December 11–12, 2018, 96–109.
Marleen Huysman is Director of the KIN Center for Digital Innovation at the Vrije Universiteit Amsterdam, The Netherlands, and Head of the Department of Knowledge, Information and Innovation. She conducts research in several overlapping fields, all related to the development and use of digital innovation: new ways of working, technology in practice, and knowledge sharing, coordination, development and integration. Her research has been published in various international journals and books, and she is a frequent speaker at academic and professional meetings in the field.
Paul Leonardi is the Duca Family Professor of Technology Management at the University of California, Santa Barbara, USA. His research examines how implementing new technologies and harnessing the power of informal social networks can help companies take advantage of their knowledge assets to create innovative products and services.
Stella Pachidi is a Lecturer in Information Systems at the University of Cambridge, Judge Business School, United Kingdom. Her research interests lie at the intersection of technology, work and organizing. Stella’s current research projects include the introduction of algorithmic technologies such as analytics and artificial intelligence in organizations; the management of workplace challenges arising from digitisation; and practices of knowledge collaboration across boundaries.