I recently received an email from someone about a grant application in which I’m involved. In this email, the person coordinating the grant asked recipients to suggest revisions to the text, but noted that, as it stood, it had a score of 100% on Grammarly. He asked that any changes be made carefully, so that this score would be retained.
Grammarly uses AI to identify grammatical errors and stylistic infelicities and to suggest changes. I don’t use Grammarly, but other services I do use make suggestions of their own. Gmail, for example, makes reply suggestions, and Word underlines spelling mistakes.
These AI-driven tools alter the landscape of affordances for me as a writer. The affordances of an object or an environment are the suggestions for use embedded in it. The handle of a cup affords holding; a gap in a fence affords exiting there. Of course, we may ignore or override affordances. If you prefer to hold your cup by the base, ignoring the handle, you may do so, and you can climb the fence to leave at some other spot. But it takes effort (sometimes minimal) to override affordances. We usually go with the flow, and we develop habits of relying on them. There’s nothing wrong with that: we can spare our energy for other things, and in any case many affordances are well designed to facilitate action.
I may also ignore the affordances of predictive text and Grammarly nudges. I often do ignore the spelling suggestions Word makes: often, the word it marks as incorrect is a proper name or a technical term. When I’m unsure about a word or a formulation, however, I go with the flow: I accept the suggestion. Sometimes, especially when I’m using a mobile, I’ll use one of the Gmail reply suggestions, usually tweaking it for appropriateness.
What’s wrong with that? As I said, well-designed affordances are useful. They enable us to pursue our goals more efficiently. They can also allow us to coordinate better with one another: if paths funnel foot traffic moving in different directions onto different trajectories, we need to spend less time negotiating our way round one another. But the rollout of affordances also has a homogenizing effect. This may be especially the case when they’re AI-driven. I don’t know how the algorithms I’m using work, but they may well be based on machine learning, with text from the internet as their training data. If that’s what is going on, the suggestions will reflect what people already tend to do and reinforce it.
The result may be a loss of linguistic diversity and a sameness of expression. This is likely to be particularly acute for the millions of people who use English as a second language, because they are less likely to feel confident enough to override the suggestions of the AI.
The Sapir-Whorf hypothesis, according to which our language sets the limits of our thought, is surely false in any strong form. We don’t think exclusively in language, and even our linguistic concepts are imprecise enough to admit of extension and ambiguity. But language does have an effect on thought. At least one way in which this happens is through the affordances of language: if it becomes easier to refer to a person as a client than as a patient, this has downstream effects on what other concepts come to hand for thinking of them and on how we relate to them. The homogenizing of language won’t homogenize thought to anything like the same degree, but we may have reasons to worry that it will limit intellectual diversity. We should worry about who is designing linguistic affordances and to what ends, and we should worry about the effects of their broad rollout across the world.
Written by Neil Levy
