A couple are in their kitchen, emptying the boxes from their recent move. The man is sweat-sheened, nervous, fiddling with an email. “I’ll be joining team Monday”, he types. His laptop rearranges the words as he writes, suggesting “the team” for “team”; “my wife and me” for “my wife and I”.
“Does this sound like me?” he asks.
“Yeah, just be yourself”, the woman replies.
“I’m excited to meet everyone”, the man writes. His computer underscores “excited”, and throws up “eager, nervous, psyched, thrilled”. He clicks, “thrilled”.
The next day, his new colleagues greet him with enormous grins.
Grammarly, the AI-powered writing assistant, promises to make your writing “bold, clear and mistake-free”. Even if you’ve never used Grammarly, you’ll know of the tool from its ubiquitous advertising, which seems to be hooked into every YouTube video and Facebook feed, promising you flawless written communication in your essays, speeches and job applications.
Like other AI writing tools, Grammarly uses algorithms to alter your tone and word-choice. It counts 6.9 million daily users, thanks in part to an advertising campaign so intense that you can’t type its name into Twitter without seeing comments about the sheer volume of its messages.
If you see a lot of Grammarly content, it’s because it’s seen you first. Grammarly doesn’t need much data to target you, as long as you are (1) human and (2) English-speaking. Its selling point is twofold: good communication produces results, and honing our writing through algorithms can benefit our self-image, grades and performance in the workplace.
AI-powered writing assistants are undeniably useful. Written communication is key to every workplace. Gmail’s Smart Reply, which throws up a peppy “Love it!” to your colleague’s innocuous lunch suggestion, helps us craft responses more easily in a world where the human population sends 269 billion emails each day. AI writing assistants can support people with dyslexia and other learning difficulties, arguably promoting equality by allowing each person to express themselves with relative ease.
But there’s a line between a tool that helps us express ourselves, and something that tells us what to say. Word’s AI editor scolds you when you use “policeman” instead of “police officer”, cleaning up the possibility of gender bias. We’ll never say the “wrong thing” with AI, because it cleans our speech into hygienic perfection. Sure, it would be great never to encounter implicit sexism again, but I’d rather it come from real, human change than from an engineered appearance of progress.
How much can we hold onto our humanity in the constant presence of AI? Even if we ignore Gmail’s Smart Replies, they’re still there, nudging us into particular modes of self-presentation. Things like Apple autocorrect are supposed to anticipate our default states – what we would have said if our thumb hadn’t caught the wrong key, for example. But technology changes our instinctive communication. When Apple began automatically changing “omw” (short for “on my way”) to “On my way!”, it caused a sudden burst of memes. If you have to access your phone’s Settings to stop it from shortcutting “omw” (something that only 5 per cent of users do), then do you really control your own expression?
Every new technology comes with hand-wringing about what it will do to us. It’s inevitable that the environments you’re exposed to – your parenting, or education, for example – determine your writing style. But by their own admission, companies like Grammarly want to go beyond supporting the organic development of human language. They reach into what is fundamental to our humanity – our tone, style and word-choice – and change it into a “better” form.
For the last two years, Grammarly has been working on an AI-powered tone-checker, “training its algorithms to identify the flavor of a piece of text, whether it comes off as curious, optimistic, urgent, or concerned”. The alarming aspect of Grammarly producing a natural, human tone is that it may cultivate a humanness so life-like that we no longer have to be human ourselves. There’s an uncanniness in how Grammarly taps into human trends, with its posts about identity, self-care and Pride week, while subtly removing our humanity by offering us homogeneously “flawless” writing.
In a society where we cultivate the appearance of everything, even our mistakes, it’s comforting to see an accidentally printed spelling error. It reminds us that unconscious, instinctive humanity still exists. AI-powered writing tools reduce our tolerance for error, eliminating the mental misfires that make us individuals. Grammarly would never, for example, let me send the terrible email where I addressed my university tutor “Dead Freya” instead of “Dear Freya”. This sounds like a good thing, but the human emotions I felt – embarrassment and panic – are rare resources in a world where machines remove all evidence of human quirks.
Is there an ableism in saying we should make mistakes? The dyslexic or dyspraxic person, getting rejected from job application after job application because the system doesn’t support their cognitive processing, can hardly afford extra errors when every sentence is an added labour. We’re in a privileged position if we can make mistakes without an enduring social and financial backlash. David Lammy memorably told students in London to stop using slang words like “innit”, because they “won’t help you get a job”. The percentage of ethnic minorities living in poverty is double that of the white population. For some, mistakes are a luxury.
But leaving aside their effects on individuals, AI-powered tools are extremely convenient for companies that want to conceal individual suffering. Businesses are increasingly using technology to obscure their human workforces. Uber, for example, has introduced a “quiet mode” feature on its app so that you don’t have to speak to your Uber driver if you don’t feel like it. I’ve never had a conversation with a taxi driver that didn’t involve asking about their work, so an enforced silence is useful if Uber wants to hide its exploitative conditions. This obscuring of human interactions and feelings could extend to our speech; in the future, AI tools may nudge us towards censoring inconvenient emotional expressions, like pain and distress.
Grammarly’s slogan is: “everyone can be a great writer”. In Grammarly adverts, a tired college student gets an “A” on her essay for her “wonderful use of words”, but it’s Grammarly’s algorithms that have generated her word choices. And it’s true that everyone can be a great writer, if one corporation dictates what Great Writing is. Synonyms for “creativity” include “originality”, but if a machine decides the correct tone of your creative writing, you start to be original with a capital “O”: Original as a brand name, in a company t-shirt.
Harry McCracken from Fast Company writes of Grammarly: “I just find it helpful as a sort of linguistic Jiminy Cricket that watches me at work, steers me in the right direction, and points out when I slip up”. But even the friendliest Jiminy Cricket is still something that watches you, occupying the private space between your thoughts and words, checking if you’re being good, that you’re writing well. There’s something chilling about McCracken’s language, the way he talks about being steered in the right direction, the way he automatically agrees that he can “slip up” as a writer, when writing is about searching and probing and asking questions. Are we so used to technology dictating our lives that we see no issue with the meta-surveillance of our creativity?
Emily Beater is a freelance journalist who has written for the Guardian and BBC News