Case Study: People-Centred AI Adoption for Communications Teams

As organisations explore tools such as ChatGPT and Microsoft Copilot, many communications teams are experimenting with AI — but often without shared standards, confidence, or clarity around risk. True was invited to help this client move beyond isolated experimentation and develop a confident, responsible, and sustainable approach to using AI in communications.

The Challenge: AI Experimentation Without Confidence or Consistency

The client wanted to help their communications team use AI more effectively and with greater confidence.

Some team members were already using tools such as ChatGPT and Microsoft Copilot, achieving quick wins in drafting, ideation, and summarisation. However, this activity was happening in isolation. There was no shared approach to governance, limited understanding of risk, and uncertainty about what responsible AI use looked like in practice.

Confidence levels varied across the team. Some people were enthusiastic early adopters, while others were cautious or sceptical — concerned about quality, ethics, and potential reputational risk. The client did not want a technology rollout or generic AI training. They wanted a people-centred approach that would help the team build confidence, clarity, and consistency in how AI supported their work.

Our Approach: Starting With People, Not Tools

True’s recommendation was clear: focus on people first.

Rather than introducing new technology or imposing a framework, we designed an approach that built confidence before capability. By creating space for listening, shaping shared guardrails, and embedding AI into everyday routines, the team could develop a way of working with AI that felt practical, human, and sustainable.

Listening was central throughout. The programme was designed so people could ask questions, raise concerns, and contribute their own ideas to how AI would be introduced and used.

Discovery: Listening to Understand Confidence, Concerns and Barriers

We began with a short, confidential listening exercise to understand:

  • How the team was currently using AI

  • Confidence levels across the group

  • Perceived barriers, risks, and uncertainties

The insights revealed a mix of curiosity, caution, and uncertainty. These perspectives shaped everything that followed. Instead of imposing a one-size-fits-all framework, we co-designed an approach that started from first principles and moved forward at the team’s pace.

Workshop One: Foundations and Friction Points

The first workshop focused on demystifying AI and establishing strong foundations for responsible use.

Together, we explored:

  • Governance and risk — why responsible AI use matters, what good guardrails look like, and how to protect quality, trust, and reputation

  • Tool set-up — how to configure AI tools in ways that felt safe, practical, and aligned with the team’s work

  • Everyday application — using ChatGPT and Microsoft Copilot to support real, day-to-day communications tasks

The session was practical rather than theoretical, grounded in the team's own work. Each participant left with a small AI experiment to try in their role, designed to build confidence through application rather than instruction.

A Pause for Practice: Letting Learning Stick

A deliberate design choice was to pause.

Instead of moving straight into a second workshop, we built in a month-long gap to allow people to apply what they had learned in their own context. This gave space for learning to stick and ensured that the next phase would be shaped by real experience rather than assumptions.

During this period, participants tested AI in their daily work and reflected on what worked well, where they felt uncertain, and what they wanted to explore next.

Workshop Two: From Insight to Implementation

The second workshop, the next phase of the programme, is designed as a workflow lab, building directly on the experimentation and insights gathered during the practice period.

In this session, we work alongside the team to:

  • Map key communications tasks and identify where AI adds genuine value

  • Co-create AI-enabled workflows grounded in existing processes

  • Define clear points for human judgement, quality control, and ethical oversight

  • Test and refine prompts, structures, and review steps to ensure outputs meet agreed standards

The emphasis is on embedding AI into everyday practice in a way that feels realistic and sustainable: not automation for its own sake, but better thinking, stronger storytelling, and more consistent ways of working, all without losing the human voice.

Recommendations: Building Long-Term AI Confidence

True’s role is not to introduce tools, but to help teams become future-ready.

For this client, that means:

  • Turning everyday communications tasks into AI-supported workflows

  • Building skills beyond the technical, including critical thinking and storytelling

  • Developing internal AI champions to share learning and support peers

  • Embedding governance and ethics through co-created guardrails that feel owned, not imposed

  • Measuring and learning by tracking feedback and impact to refine practice over time

Impact So Far

After only the first phase of the programme, the team has already:

  • Built a stronger shared foundation in AI governance and risk

  • Gained confidence in setting up and using AI tools effectively

  • Begun experimenting with AI in ways that feel safe, purposeful, and aligned with their values

Perhaps most importantly, the conversation has shifted. It is no longer just “What does AI do?” but “How do we use AI responsibly to help us do our jobs better?”