As a proponent of what we might call extended cognition, I'm sympathetic to the general idea. The extended mind hypothesis is a metaphysical claim: on this hypothesis, mind can extend beyond the skull and into the artifacts that enable certain kinds of thinking (my smartphone might partially constitute my mind, when its reminders, navigational capacities, search functions, and so on, are sufficiently integrated into my cognitive activities). The extended cognition hypothesis is agnostic about metaphysics: it simply emphasises the degree to which our thought is offloaded onto the world, including artifacts. New technologies enable new kinds of thinking, and this has always been true. As Richard Feynman said, notes on paper aren't merely a record of thinking, "not really. It's working. You have to work on paper, and this is the paper."
Written by Neil Levy
More generally, the drudge work lays down the bedrock for creative activity. If I had never attempted to review and synthesise the work that appears in the review section of a paper, I wouldn’t know it well enough to be able to generate some of the hypotheses I go on to explore. That drudge work is an essential developmental stage. It’s also a developmental stage for a set of skills at navigating a terrain. This is a generalizable skill, one we can apply in future to different material and different debates. It may be that those who have already developed such skills – those who became academically mature before the advent of LLMs – can outsource drudge work at a smaller cost than those who have not yet developed this set of skills. Perhaps doing the task for oneself, boring though it may be, is necessary for a while, before we throw away the ladder we’ve climbed.
Inevitably, I ran this blogpost through an AI tool – the free version of Quillbot. It identified one or two typos, which of course I corrected. It also made a number of stylistic suggestions. I accepted almost none of them, but several led me to think I ought to rephrase the passage. Perhaps that's a model for how AI might be useful for writing right now.
The idea of a division of labor between the relatively routine and the creative imagined above, with the LLM taking on the first and the human (alone or in collaboration with the LLM) the second, is not unattractive. It can be tiresome to review a literature one already knows well. Sometimes, I find myself in the position of having to rewrite pretty much the same points I've made in a previous paper in an introductory section. It's only norms against self-plagiarism that prevent me from cutting and pasting from the older paper to the newer one. Allowing the LLM to do the work of rephrasing is a tempting option. We might think that whatever other costs and benefits LLMs have, getting them to do the drudge work is surely an unalloyed benefit.
Extending cognition through new technologies opens cognitive horizons that are otherwise inaccessible to us. Supercomputers that perform millions of operations per second allow us to analyse data and perform mathematical calculations that were utterly closed to previous generations. But in opening up new horizons, new ways of extending thought can make others less accessible and have unwanted impacts on our native cognition. In the Phaedrus, Plato expressed the fear that writing would undermine our capacity to remember things. He may have been right about its effects on our memory, but that's more than compensated for by our increased capacity to record things externally. There are no guarantees, however, that changes will always be for the better.
A number of academics, writing in academic journals and on Twitter, have suggested that LLMs could be used to streamline the writing process. As they envisage it, LLMs could take on the burden of writing literature reviews and overviews, leaving the human free to undertake the more creative work involving the generation and testing of hypotheses (here, too, though, the LLM might have a role: it could generate candidate hypotheses for the human to choose between and refine, for example).
Some of those who have worried about the singularity – the postulated moment when AI design takes off, with ever more intelligent AIs designing even more intelligent AIs, leaving us humans in their dust – have proposed we might prevent human obsolescence by merging with the machines, perhaps even uploading our minds to artificial neural networks. I don’t know whether the singularity or human obsolescence are real threats, and I’m very sceptical about mind uploading. Whatever the prospects might be for mind uploading, right now we can integrate AIs into our thinking. We may not stay relevant for ever, and we may never merge with the machines, but right now they’re powerful tools for extending our cognition. They might homogenize prose and lead to a loss of creativity, or they might lead to an explosion of new approaches and ideas. They’re certain to have unanticipated costs, but the benefits will probably be much greater.

I’ve got no doubt that LLMs can and will be incorporated into academic writing, in ways and with effects we’re only beginning to imagine. Externalizing thought is extremely productive: it’s always been productive to write down your thoughts, because externalizing them allows us to reconfigure them, and to see connections that we mightn’t otherwise have noticed. The more complex the material, the greater the need to externalize. LLMs allow for a near instantaneous kind of externalization: we might regenerate multiple versions of a thought we’ve written once, and the permutations might allow us to see new connections. LLMs can also be used to generate new candidate hypotheses, to identify gaps in the literature, to synthesise and visualise data, and who yet knows what else? Perhaps the day will come – perhaps it will even be soon – when AI replaces the human researcher altogether. For now, it’s a powerful tool, perhaps even a partner, in the research process.
Perhaps – perhaps – it’s a benefit overall, but it’s not an unalloyed benefit. While we may approach a paper with a hypothesis in mind, and think of the introductory sections as merely sketching out the terrain, the relationship between that sketch and the meat of the paper is not always so straightforward. Sometimes, in rephrasing and summarizing ideas that I thought I already knew well, I discover relations between them I hadn’t noticed, or a lack of clarity that hadn’t struck me before. These realisations may lead to the reframing of the initial hypothesis, or the generation of a new hypothesis, or simply greater clarity than I had previously. What I took to be mere drudge work can’t be easily isolated from the more creative side of thought and writing.
Large language models look set to transform every aspect of life over the coming decades. Some of these changes will be dramatic. I’m pretty unconcerned by the apocalyptic scenarios that preoccupy some people, but much more worried about the elimination of jobs (interestingly, the jobs that seem likeliest to be eliminated are those that require the most training: we may see a reversal of status and economic position between baristas and bureaucrats, bricklayers and barristers). Here, though, I’m going to look at much less dramatic, and very much near term, effects that LLMs might have on academic writing. I’m going to focus on the kind of writing I do in philosophy; LLMs will have different impacts on different disciplines.