Five Lessons for Careful AI Adoption  

Careful Industries delivers training and strategic services to organisations getting to grips with AI in ways that work for 8 billion people, not just 8 billionaires.

This blog post outlines five things we’ve learnt about AI adoption. 


Your laptop definitely needs a “FOMO is not a strategy” sticker

AI is currently inescapable. It's not just making itself known in every app you open – if you're a digital leader or interested in technology, then everywhere you turn someone will be explaining that it's a strategic priority, a critical issue, the key to unlimited success, or a source of mind-bogglingly transformational efficiency. 

And even if you don't work in tech, you'll be hard pressed to escape those two little letters in the headlines: from the French AI Action Summit, to Elon Musk's offer to buy OpenAI for $97bn, to the Chinese open-weight LLM DeepSeek, to the UK copyright consultation (just some of this week's headlines), all kinds of people are talking about AI and what it might, or might not, mean for everything from global geopolitics to the future of entertainment. 

At Careful Industries, we're turning our deep research on the social and democratic impacts of AI into training courses and group sessions, so that more people can understand what AI means both for the organisations they work in and for society as a whole.

Whatever the enthusiastic sales email might tell you, AI won't transform everything equally or in the same way for everyone. From small-group training and discussion sessions to long-term partnerships that design new ways of working, we want to help as many people and organisations as possible resist the hype and make the decisions that work for them.

Some of the things we’ve learnt over the last year: 

1. FOMO is not a strategy

When tech companies come and talk to you about how great AI is, it's not because they're benevolently sharing the news: it's because they're selling you something, so they're going to go out of their way to make it sound like the best thing since sliced bread. If the person selling you some software licences can add another 0 to the deal by claiming to solve your greatest strategic problems, they're definitely going to give it a go.

One reason AI is so prevalent right now is that some people are making a lot of money and gaining power and influence as more people and companies adopt their technologies. So, go easy on the FOMO, take time to educate yourself, and feel comfortable questioning the hype.  (Buy your laptop sticker here)

2. There’s no such thing as a stupid question

The field of AI is almost intentionally confusing. 

Some of the terms in our AI Glossary

For a start, there's no single agreed definition of AI, meaning that every tech company offers a different explanation and every regulatory environment has a slightly different spin. The field is also full of technical, multi-lettered acronyms (LLMs, ADMS, GPTs, AGI, etc.), and every few months a new term reaches the top of the hype cycle and seems to pop up in every meeting and every LinkedIn post.

Over the next few months, many of the people who told you genAI would change everything will start saying "agentic AI" is the future of work/the end of work/the ultimate disruptor* (*delete as appropriate). You definitely don't need to nod your way through that: feel confident to stop, ask questions, and make sure you know what's being described. To make this easier, everyone who takes part in our training and workshops gets access to a beginners' AI glossary that puts some of those basics in context.


3. Find out what people are already using 

It's very easy to assume that the way you and your immediate circle of colleagues work is what everyone is doing, but if you don't work in a highly process-driven environment, it's likely that different people will have different habits and preferences. For instance, some co-workers might use Claude or ChatGPT on a personal device as a way of getting started with a difficult project, while people who enjoy writing as a way of working out a problem might go out of their way to avoid those same tools. Before you make any assumptions, it's worth doing a survey to find out what people are actually doing, working out the legal and confidentiality implications, and considering what that means for quality and accuracy in your workplace.

4. It’s possible to be both careful and curious about technology adoption

If your business isn't exploring bleeding-edge innovation, or if you're delivering to a clear social purpose, it's only natural that you'll want to proceed at the speed of trust. We can help you design an environment that puts light-touch safeguards in place so that people can learn through doing. Rather than starting with a massive transformation or implementation programme, make spaces to experiment and see what that sparks. 

5. Automate the easy things 

It's tempting to throw new technologies at your most wicked problems - the ones that seem to get harder over time, or have layers of complexity around them - but in reality you'll just make a difficult thing even harder, and potentially create new points of failure. You're better off automating the things that come easily to your team or organisation. That's particularly true if you're using generative AI, where inaccuracies and mistakes in outputs could undermine any efficiency gains, or implementing an automated decision-making system that might produce biased or incorrect outcomes. 

Take something where failure will be obvious - where there will either be clear external signals if it’s not working, or where the skills and experience of your staff will mean that everyone is alert to what the wrong kinds of outcomes look like.

For instance, if you work in social care, don’t automate complex human decisions about whether or not a young person is at risk; start by improving case workers’ diary management with better route planning between visits and automated appointment reminders. If you’re a funder, don’t automate funding decisions, but make it easier for potential grantees to assess their eligibility before they apply. Adopt tools in a way that matches your skills and confidence rather than feeling pressurised to disrupt for disruption’s sake. 

No matter what anyone tells you, AI isn’t inevitable, and you should feel empowered to make good decisions in the workplace that create better outcomes for everyone. 

Find out more about our individual and small-group training sessions and consultancy services at www.careful.industries/ai-training.
