Publication Type

Transcript

Version

publishedVersion

Publication Date

9-2023

Abstract

As we write this editorial for this special issue, we are amidst significant technological changes that continue to shape society. Since the emergence of ChatGPT in November 2022, humanity has become aware of the potential of generative AI (i.e., AI that can generate content) and large language models (LLMs) (i.e., AI models trained on a massive corpus of unstructured data). There is growing debate and discussion about the promise and perils of generative AI for the future of work, and academia is not immune. Premier journals in the IS domain, such as Information Systems Research, have published editorials on what the emergence of generative AI means for IS research (see Susarla et al., 2023). Other journals have also published editorials on the role of generative AI – whether it is an assistant or a co-author/collaborator (e.g., Offiah and Khanna, 2023; Nah et al., 2023). These editorials discuss various AI capabilities and limitations. However, they also assert that human researchers must fact-check the output of LLMs because these models are prone to hallucinations and may be trained on irrelevant data, resulting in inaccurate inferences. In this editorial, we explore what the emergence of generative AI and LLMs means for literature reviews in general, and for literature reviews in the IS domain in particular.

Discipline

Artificial Intelligence and Robotics | Databases and Information Systems

Research Areas

Data Science and Engineering

Publication

Journal of Strategic Information Systems

Volume

32

Issue

3

First Page

1

Last Page

4

ISSN

0963-8687

Identifier

10.1016/j.jsis.2023.101788

Publisher

Elsevier

Copyright Owner and License

Publisher

Additional URL

https://doi.org/10.1016/j.jsis.2023.101788