The pros and cons of using AI for content generation, from a copywriter’s perspective

Everyone’s NOT talking about Jamie at the moment (unless Jamie is the latest AI chatbot). So many conversations in the marketing and comms teams where I usually work are now about AI and how it’s going to change what we do. People seem surprised that, as a copywriter, I’m not more anxious about it.

Perhaps I’m being a Pollyanna, but I genuinely feel that my copywriting job isn’t under threat. AI has great power and capability: it can assimilate data from many sources very rapidly to produce informative content, but it has its limitations. I think AI presents opportunities to writers like me rather than a like-for-like alternative to the work I do for my clients. Here’s why.

  • Originality – AI can only create content based on existing resources. It cannot be truly creative and make the leaps of imagination and connection that characterise the most distinctive content. Original turns of phrase, metaphors and humour are difficult to synthesise effectively: good writers have an instinct for what works for a specific audience and topic and it’s hard to reduce this to rules and standard practice.
  • Provenance – because AI bases its output on existing content, there are potential legal and copyright issues about (even loosely) repurposing what’s been created and owned by others. “How much similarity constitutes plagiarism?” is one question. For example, if AI generates an image in the style of an artist’s or graphic designer’s work, is that a copyright issue? It’s a grey area at the moment. Most people are cautiously waiting to see the results of inevitable test cases… no one wants to be that first test case themselves, of course!
  • Veracity – ChatGPT articulates many things very confidently, but we still need to check their accuracy. I recently used ChatGPT to help me write programme notes for a choral concert. For well-known composers and works it produced accurate, useful facts and background, but for more modern and obscure composers and works it came up with some wrong information. I was able to recognise and correct this because I had my own direct knowledge as context. ChatGPT can only work with what it can find – sometimes it confuses people with similar names or conflates sources and produces incorrect information. If you know little about a topic, it’s risky to assume everything AI gives you is true. For my concert programme, the only consequence would have been my embarrassment – but in commercial copy, wrong information disseminated to clients could have a far more damaging impact. You need to check facts, both against your own knowledge and in a more formal way, and you have to be able to do this efficiently if you hope to use AI to improve your productivity and speed of output – otherwise you might as well research the whole thing yourself.
  • Banality – a lot of ChatGPT responses follow a very similar format, so content can easily look bland and repetitive if you use it often for similar types of post or article. Try asking ChatGPT about several people or subjects in the same category – famous composers, say. You’ll see that the presentation of the information is fairly predictable. That’s great for clarity, but not so good for the interest and variety that help to engage readers.
  • Confidentiality – questions you type into ChatGPT may be retained and used to improve the model. Let’s say you’re working on a confidential project. If you include details of it in a prompt, that snippet is no longer fully under your control and could, in principle, be teased out by competitors in a process of reverse engineering. This could put you in breach of NDAs. Again, it’s a grey area, but not many people want to risk having their approach tested by lawyers.
  • User competence – ChatGPT and other publicly available AI tools are only as good as the queries and questions you enter into them. If you ask a stupid, limited or biased question, you’ll get a stupid, limited or biased answer. In my experience you need to know something about a topic to be able to use AI to research and write about it effectively. You also need to be able to decide which information and angle will be of greatest relevance and interest to the audience you’re writing for, which influences both the query you input and the way you use the output provided.
  • Value – Anecdotally, many people are instinctively anti-AI. They react badly to the idea of a robot “pretending” to be a person. A banking IVR answered my call the other day in a very convincing-sounding voice. It turned out to be a bot that, frustratingly, couldn’t understand my non-standard enquiry. The frustration was compounded because I initially thought I was talking to a person, so I felt stupid as well! I think that human involvement will prove to be a differentiator in all kinds of content and engagement. Customers and audiences will feel more valued and genuinely respected if humans are engaging with them directly or through crafted, creative content. This will be associated with high-end experiences and customer service excellence – a good differentiator for organisations that continue to invest in human interaction.

Using AI to help with research and prep
In some situations, ChatGPT is at present a capable and time-saving helper for me in writing projects. But I have to use it thoughtfully, with an awareness of its limitations, just as I might work with an intern or junior researcher. I have some experience of this: I sometimes ask my student son to research client blog topics online for me, to free up more time for me to craft and write instead of gathering and preparing data. He is quick, clever and eager (motivated by the Wordsmithy research hourly rate!). He always finds a lot of useful content, but he lacks experience and context.

Sometimes he misses crucial information or includes research from less credible sources, because he is convinced by the professional-looking presentation of the information. He sometimes cites inaccurately, because the source material has been confusingly quoted and re-quoted several times. At times he digs up far too much content and doesn’t know how to sift it down, because he’s not fully attuned to the project requirements.

Of course, some of this is down to me. I must provide a very clear brief to make sure the content he serves up is useful. But that takes time – would it be quicker to do the research myself? The pragmatic approach is to offer a concise brief and be prepared for fact-checks and adjustments before I am confident I can make use of the research. I adopt a very similar approach to AI. Sorry, Freddie – I think ChatGPT is more likely to put you out of a vacation job than to put me out of my role at the moment!

AI-powered content apps
There are many AI-powered apps on the market that generate sales and marketing copy for volume campaigns, and I’m sure they are more cost-effective and productive than I could ever be for that level of demand and type of communication. But I’m equally sure that a skilled human marketing communications professional will be defining and overseeing these campaigns to ensure quality and impact. These kinds of context-specific, AI-powered writing apps are opening doors for start-ups and leaner businesses, helping them compete with established players by giving them the capability to generate high volumes of content and present it professionally.

There are also excellent AI-powered proofing tools that make everyone’s life easier, correcting grammar and punctuation intelligently in context. Even professional writers sometimes make slips – it’s good to have a safety net. I don’t always accept auto-generated suggestions, but they sometimes make me rethink and rephrase.

There’s room for us both for now
At the moment, I think AI will change the way I write and may narrow the kinds of project I tend to work on – I can see fewer lower-value, more repetitive copy briefs landing in my inbox. I don’t doubt that AI will become more sophisticated and capable of generating engaging, original and targeted content over time, although some of the risks I’ve listed will continue to apply. But I think there will always be demand for more sophisticated, original and complex content that’s individually planned, targeted and executed to engage, entertain and attract the right kind of attention and reaction. Thankfully, that’s the kind of content I’m mostly asked to write and edit.

In the spirit of our current partnership, I asked ChatGPT for its comment on whether it would put me out of business. I’m pleased to say that its response was measured and we find ourselves in writerly agreement and harmony… for now.

Photo credit: Bradley Hook via Pexels