Creating psychological safety for AI exploration
Designing AI education that prioritises values-aligned decision-making over rapid adoption, helping teams engage with technology from a position of agency rather than anxiety.

When Virgin Money Foundation asked us to introduce their team to AI in September 2025, we faced a challenge that reveals why most AI training fails mission-driven organisations: how do you build genuine understanding without tipping into either breathless evangelism or paralysing anxiety? Most AI education pushes organisations toward rapid adoption; we took a different approach.
The foundation's team were experiencing a tension common to many mission-driven organisations watching AI's rapid advancement: they recognised its potential relevance to their work, but felt apprehensive about the speed of change and unsure how to evaluate AI opportunities against their values and operational constraints.
Rather than delivering a standard AI overview focused on capabilities and use cases, we designed a session that prioritised creating space for honest conversation about concerns alongside balanced exploration of both opportunities and limitations.
Beyond hype and anxiety
The session deliberately avoided the breathless evangelism common in AI presentations whilst also steering clear of dystopian warnings that could paralyse decision-making. Instead, we provided a grounded introduction that covered AI's genuine potential alongside frank discussion of its current limitations, risks, and areas requiring careful human oversight.
We explored practical applications relevant to their work - from AI tools supporting social work case management to job centre automation - whilst also examining cases where AI had failed or caused harm. This balanced approach reflected our core belief that mission-driven organisations need AI education that strengthens their ability to evaluate technology against their values, not training that pressures them toward implementation regardless of mission alignment.
The approach recognised that for organisations committed to social impact, the most valuable AI education isn't technical understanding alone but frameworks for evaluating whether AI implementations align with their values and serve their beneficiaries appropriately.
Building informed confidence
The session's structure moved participants from apprehension to informed confidence by acknowledging their concerns whilst providing practical tools for evaluation. Rather than suggesting they needed to become AI experts immediately, we emphasised that good questions matter more than technical knowledge, and that their existing expertise in understanding beneficiary needs would be crucial for any successful AI implementation.
We focused on helping them distinguish between AI applications that might enhance their impact and those that might compromise their values. This included frameworks for assessing where human judgment must remain paramount, how to pilot AI tools safely, and what questions to ask when evaluating AI vendors or solutions.
The interactive format allowed team members to explore their specific concerns and contexts rather than consuming generic AI information. This created genuine engagement with the material and built confidence in their ability to make thoughtful decisions about AI adoption, rather than feeling swept along by technological hype or stalled by uncertainty.
From anxiety to agency
By the session's end, participants had moved from feeling like AI was something happening to them to understanding it as a set of tools they could thoughtfully evaluate and potentially adopt in service of their mission. They left with practical frameworks for assessment rather than pressure to implement immediately.
The session demonstrated that for mission-driven organisations experiencing AI uncertainty, the most valuable intervention isn't technical training or capability showcases, but creating psychological safety to explore both opportunities and concerns. When teams feel confident in their ability to evaluate AI against their values and operational needs, they can engage with technological change from a position of agency rather than anxiety.
The work together reinforced our core belief: for mission-driven organisations, the real value lies not in accelerating adoption but in building the capability to make thoughtful decisions about whether and how to adopt AI in service of their mission. That capability, evaluating AI against organisational values rather than technical possibilities alone, becomes ever more critical as AI spreads across the sector.
Technological change continues to accelerate, yet only a quarter of charities say they feel prepared to respond to the opportunities and challenges it brings. Let's close the opportunity gap together.

