Responsible-ish GenAI dos and don’ts: text edition
Over the last year, Careful Industries has worked closely with hundreds of people who are getting to grips with whether they should, or shouldn’t, use Generative AI in the workplace. This blog post contains seven things to bear in mind and three “responsible-ish” strategies for using Generative AI to generate text. If you find this interesting, you might enjoy our upcoming AI Impacts 101 courses, running in April and May.
Image credit: Yasmin Dwiputri & Data Hazards Project / https://betterimagesofai.org / https://creativecommons.org/licenses/by/4.0/
Using Generative AI is not a consequence-free activity. As I talked about at the 2024 Scottish AI Summit, there are many social, ethical, environmental, and political consequences of using and normalising everyday AI; however, recent research by the Ada Lovelace Institute shows that 40% of people are already using Generative AI, so it seems timely and useful to think about how to use those tools in a more responsible way.
Personally, I think about using Generative AI in the same way I think about my travel choices: when I can, I choose an active travel option or go by public transport, but sometimes I still need to use a car or get a cab and occasionally I might need to fly if there’s not another option. Although some uses of Generative AI are hidden or not labelled, when I have the choice I try to stop and think about whether it’s a worthwhile trade-off. This isn’t a perfect way of doing it, but I use some of the strategies below to help support that choice.
It’s also important to remember that not everyone is the same: different people have different skills, aptitudes, and access needs; what might seem like an unthinkable adaptation for you might be very useful and necessary for someone else. A lot of very literate and articulate people feel mortally offended by GenAI writing tools, but many people for whom those skills don’t come so easily find those same tools very useful. Likewise, if you enjoy writing, it would be a great shame to routinely outsource that pleasure to a tool. There is no hard and fast rule here; while organisational, institutional, and governmental decisions about AI bring moral and ethical consequences with them, at an individual level, making decisions about thriving, inclusion, and access comes with a different set of considerations.
I’ve already shared some organisational lessons about using AI (in this newsletter and in this blog post), but the following is more specifically about what you might do as an individual considering whether or not to use a GenAI tool to generate text, or if you’re working for an employer who has asked you to use a particular tool to complete a task.
GenAI and text: 7 things to bear in mind
GenAI is very bad at jokes
When a GenAI tool such as ChatGPT or Claude Sonnet puts words together, it’s not actually writing in the same way a person writes, but using statistical probability to generate the next word. For an example of what that actually means, see this video demo (which I found via this staggeringly useful Bluesky thread by the brilliant Miriam Posner) or, if you want to see for yourself, try and get Gemini or Copilot to tell you a joke. Jokes need twists and surprises; GenAI is no good at those.
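To make the "next word by probability" idea concrete, here is a toy sketch in Python. The word probabilities are made up for illustration; in a real LLM they come from a neural network conditioned on the whole preceding text, over a vocabulary of tens of thousands of tokens.

```python
import random

# Toy next-word distribution. These probabilities are invented for the
# example; a real model computes them from the full context.
next_word_probs = {
    "the cat sat on the": {"mat": 0.7, "sofa": 0.2, "moon": 0.1},
}

def pick_next_word(context: str) -> str:
    """Sample the next word in proportion to its probability."""
    probs = next_word_probs[context]
    words = list(probs)
    weights = [probs[w] for w in words]
    return random.choices(words, weights=weights, k=1)[0]

print(pick_next_word("the cat sat on the"))  # usually "mat", occasionally "sofa" or "moon"
```

This is also why jokes fall flat: the punchline that a model considers most probable is, almost by definition, the least surprising one.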
GenAI text generators aren’t made for writers
If you’re the kind of person who understands what they think by writing about it, then using a GenAI tool is unlikely to help you do that. In fact, you’ll probably find it to be quite annoying and a bit flat and want to walk around the house swearing about how terrible this technology is and how it’s clearly not going to replace anything.
GenAI text generators are really useful for people who don’t like writing
However, not everyone enjoys writing; for some people it’s actively torturous and unpleasant. Text generators are really useful for people who don’t like writing, who need assistance with it, or who find it difficult. They can also be useful for people who want to sound more confident or assertive in the workplace, or for people working in a second or additional language. In some workplaces, the use of these tools is increasingly agreed as a reasonable adjustment to accommodate an accessibility need.
GenAI text is not very stylish
It is very likely that any text generated by a GenAI tool will read like text generated by statistical probability. If you have used a tool to generate a first draft, be kind to your future readers and make the effort to go through and make it sound a bit more interesting and a little less samey and robotic.
Don’t upload anyone’s personal information to a GenAI tool
Especially not to a random one you found on the Web.
Don’t be a prompt bore
If the type of activity you’re doing doesn’t follow a strict set of processes and routines, it’s likely your friends and colleagues will work out their own preferences for using tools to get to the outcomes they need. Some of the appeal of using GenAI tools is working out what works for you, so it’s probably polite to only share your prompts if you’re asked for them.
GenAI tools often get used for the boring things people don’t want to do
If there’s a task that lots of people you work with are using a GenAI tool to do, it might be because they don’t find the task to be a useful or interesting one. Rather than judging them, it might be worth making the task less onerous and/or boring.
Some responsible-ish strategies
As I said above, using a GenAI text generator is not consequence free. Most text generation tools are built on top of Large Language Models (LLMs). LLMs have a big energy and water footprint; the text they generate is likely to exacerbate structural inequalities and reflect cultural biases; it will probably draw on content that has been scraped from the Web without the permission of its creators; and you are also likely to be lining the pocket of a billionaire. That said, life in modern capitalism is difficult and can require trade-offs, so if you’ve taken a deep breath and decided to accept the costs above, here are some deliberate choices you can make to slightly take the edge off that decision.
Check, check, check
Whether you call them mistakes, hallucinations, or confabulations, GenAI makes them. Just as it’s difficult to proofread your own work, it can be difficult to spot these mistakes in content you have prompted yourself because you’ll have some familiarity with the original material. Also, when your text generator of choice very confidently and quickly returns a result it’s easy to think, “Ah, that seems authoritative and likely to be completely correct plus I can’t actually be bothered to read it properly.” You are, after all, only human.
As well as forcing yourself to read it through, find a buddy who will take a second look at the output for you or, if you’re doing this across a team or organisation, create a culture of collaborative working and checking to make sure mistakes get caught before they have consequences.
Set limits
It’s really easy to start defaulting to shortcuts rather than doing tasks for yourself; this could lead to you losing (or, indeed, never gaining) important skills, knowledge, and capabilities. Generative AI tools are also extremely resource intensive: due to poor reporting standards, it is very difficult to assess the exact carbon emissions or the amounts of water and energy used by any given tool (see Sasha Luccioni, Bruna Trevelin and Margaret Mitchell’s very easy-to-read “The Environmental Impacts of AI — Policy Primer” for more on this), but all of the research points to the current environmental impact of GenAI being much higher than that of other kinds of digital activity. So, rather than drifting into habits you might come to regret, set yourself a number of tokens that you can use each month. This is not an exact science, but, for instance, this Times article estimates that generating one 100-word email in ChatGPT uses the equivalent of a 500ml bottle of water; you could treat each token as 500ml of water and set a personal monthly “AI water allowance” as a way of drawing some conscious boundaries around your use of GenAI.
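The allowance arithmetic is simple enough to sketch. The figures below are illustrative, not measured values: they just take the cited estimate of roughly 500ml of water per 100-word generation and a monthly budget you might choose for yourself.

```python
# Rough "AI water allowance" arithmetic, using the estimate cited above:
# one ~100-word generation ~= 500 ml of water. Both numbers are
# illustrative assumptions, not measurements.
ML_PER_GENERATION = 500          # ml of water per ~100-word output
MONTHLY_ALLOWANCE_LITRES = 10    # an example personal budget

generations_per_month = MONTHLY_ALLOWANCE_LITRES * 1000 // ML_PER_GENERATION
print(generations_per_month)  # 20 generations inside a 10-litre budget
```

In other words, a 10-litre monthly allowance works out at about 20 short generations, which is the kind of concrete ceiling that makes "should I really use the tool for this?" an easier question to ask.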
Be transparent and ask for transparency
I first encountered transparency statements via Kester Brewin, author and associate director at the Future of Work Institute, in this article, “Why I wrote an AI transparency statement for my book”. Normalising naming the tools you have used and how you have used them will help set the expectations of your readers and collaborators, and it will also bring some intention to how you use them. Likewise, if you’re asking other people to write text for you in the form of applications, articles, or reports it’s worth being clear about your expectations regarding their use of GenAI. For inspiration, this short article on the Paul Hamlyn Foundation website offers a good example of how to do this in a clear and direct way.
This is by no means a complete list of dos and don’ts, but more of a starting point for developing intentional strategies in your use of AI. Also, I’ve said it before, but remember: FOMO is not a strategy and AI isn’t inevitable. Technology can feel overwhelming, but it is possible to develop routines that help you make deliberative and considered choices.
Book an AI Impacts 101 course - April / May
Get a FOMO is not a strategy sticker