Careful AI: Training and Consultancy

Our Services

We help our clients evaluate the technical, social and ethical implications of AI to establish how it can, and cannot, help deliver their business goals.

Prioritising positive impact on people and planet will build trust, assist with regulatory compliance, minimise biased outcomes, and enhance the experiences of customers and employees.

Let’s make AI work for 8 billion people, not 8 billionaires.

Learn more about:

AI Training

AI Impacts 101

Who: Freelancers and employees in industry, civil society and government

Aims: Move beyond the hype to understand the environmental and social impacts of AI.

Understanding the impacts of AI will help you to make informed decisions and intentional trade-offs in your work.

While there are some clear societal red lines, as set out in the Human Rights Act, the abundance of different sets of principles for responsible AI shows that, in operational terms at least, ethics are relational and should be based upon both the values and the operating context of your work.

Learning Outcomes:

  • An understanding of AI systems

  • An understanding of the environmental impacts of AI

  • An understanding of (some of) AI’s social impacts

  • An introduction to the AI policy landscape

Our next session is Thursday 13th February!

AI Ethics in Practice

Who: Freelancers and employees in industry, civil society and government

Aims: Learn how to use an updated AI-focussed version of Consequence Scanning with access to the toolkit.

Learning Outcomes:

  • An understanding of AI systems

  • An understanding of the environmental impacts of AI

  • An understanding of (some of) AI’s social impacts

  • An introduction to the AI policy landscape

AI Consultancy

Practical AI Strategies

Who: Organisations that use digital tools but don't develop their own software or AI applications, and are instead purchasing or using off-the-shelf AI tools and applications.

Aims: Develop AI policies that align with your organisational values and culture.

Outputs:

  • Values mapping

  • Acceptable use policies to guide procurement decisions

  • Step-by-step guides for staff

  • An outline approach to risk management

Enhanced package includes a full-staff survey, AI glossary, and a staff training session.

Prices start at £9000 + VAT

Responsible AI in Practice

Who: Organisations that are operationalising AI tools and want to understand what that means in practice.

Aims: We will introduce an AI-focussed version of the Consequence Scanning tool, and create a bespoke approach to risk management for your organisation.

Outputs:

  • A deep-dive into risk management

  • Three bespoke workshops, including Consequence Scanning training with access to the updated toolkit

  • A comprehensive risk-management approach

Prices start at £6000 + VAT

Delivery Team

Training sessions will be led by Rachel Coldicutt OBE.

Rachel is a technology strategist and researcher, specialising in the social impact of new and emerging technologies.

She spent almost 20 years working at the cutting edge of new technology for companies including the BBC, Microsoft, BT, and Channel 4, and was a pioneer in the digital art world. Rachel is an advisor, board member and trustee for a number of companies and charities, and a member of the Ofcom Content Board.

She was previously founding CEO of responsible technology think tank Doteveryone, where she led influential and ground-breaking research into how technology is changing society and developed practical tools for responsible innovation.

Watch Rachel Coldicutt’s keynote address at the Scottish AI Summit 2024 on shaping a trustworthy, ethical, and inclusive AI future.

Related Work

  • House of Lords Report into LLMs and Generative AI

    Our evidence submission, AI for Public and Planetary Benefit, was quoted in this recent report, and we were pleased to see that our calls for anticipatory governance and renewable power for data centres were reflected in the findings.

  • AI and Society Forum

    In October 2023, we convened the AI and Society Forum, an urgent gathering of civil society change makers, policy makers, practitioners and academics, to shape the future of AI.

  • AI in the Street

    A multi-disciplinary project with the University of Warwick, University of Edinburgh, Cambridge University, King’s College London and others to map the messy reality of AI on the street.

    Funded under the AHRC BRAID programme (Bridging Divides in Responsible AI)

  • Responsible AI UK

    Throughout 2024 and 2025, we’re running a series of workshops for the RAI UK Law and Governance Working Group, convening experts from across industry, government, academia and civil society on the new legal and governance challenges created by AI.

  • AI and the Creative Industries

    Our foresight study for MyWorld Bristol and Bath and the Creative Industries Policy and Evidence Centre, examining the impacts of AI and emerging technologies on the creative industries.

  • Buy a T-shirt

    Do you believe AI should work for 8 billion people, not 8 billionaires?

    Our range of hooded tops and t-shirts lets you wear your passion for socially responsible AI with pride.

    All proceeds to the Save the Children Gaza Emergency Relief Fund.