European Science Editing 51: e142904, doi: 10.3897/ese.2025.e142904
Why artificial intelligence is not an author
Chris Zielinski
University of Winchester, Winchester, United Kingdom
Abstract
Generative AI/chatbots provide a valuable new writing tool, but they are just software products, and software does not have a legal persona. You cannot sue, arraign, fine, imprison or otherwise punish a chatbot. This is one reason why many journals, as well as COPE, ICMJE and WAME, among other practitioners’ organisations, advise against identifying AI as an author. Furthermore, chatbots produce a statistically generated language, or botfo, by applying probability to the materials they have scanned. It is a strangely dehumanised language, lacking intentionality and containing conscious and unconscious bias. Ultimately, this paper argues that we should not call chatbots authors, since they are unaccountable and cannot think, judge or be jailed.
Keywords
Authorship, botfo, chatbots, generative AI, large language models