
Moving from AI experiments to transformational value

Laura Ansloos, 26th Jan 2026

The evidence is mounting. AI adoption is now moving beyond the pilot phase. In financial services alone, UK institutions report AI productivity improvements of 59% — nearly double the 32% reported just a year ago.

The pilot phase is ending. The transformation phase has begun.

Organisations are moving past questioning whether to adopt AI; they’ve run pilots, tested use cases, and offered training to employees. Yet most organisations remain trapped in “productive limbo.” They see promising results from individual experiments but struggle to translate those gains into enterprise-wide transformation.

The real question now is different: How do we evolve from an organisation that experiments with AI to one that unlocks transformational value from it?

The cultural unlock

So how do organisations evolve from experimenting with AI to unlocking transformational value? We argue—based on AI research and our work with organisations—that the unlock isn’t technical; it’s cultural.

Culture is often described as “the way we do things around here,” but it goes much deeper than surface-level behaviour. At its core, culture consists of deeply held beliefs and assumptions—the unconscious “operating system” that dictates how an organisation functions. Over time these beliefs and assumptions become accepted as “truth” and fundamental to repeated success.

At the heart of these cultural beliefs and assumptions lies identity: how organisations and their people answer the fundamental question “Who are we, and how do we create value?” That sense of identity hardens into culture by informing shared assumptions of what “good” looks like: what counts as expertise, who holds power, what is legitimate work, what feels risky, what gets rewarded and what must be protected. Those cultural assumptions around identity then show up, very practically, in an organisation’s operating model, because they shape how decisions are made and how work gets done. Research has found that identity fundamentally influences the quality of human-AI collaboration.

This matters for scaling AI beyond experiments because AI represents a shift in the source of value creation in organisations. When a company’s identity is “we are the experts who do X, and our people are our greatest asset”, and AI can suddenly do much of X, it feels existential: if the work that signals our expertise is automated, what makes us valuable now?

Scaling AI beyond experiments requires more than implementing use cases or having the best technology. It requires identity-level cultural work: surfacing the beliefs that sit underneath “how we create value”, co-creating a new vision for the value and impact an organisation can have with AI, then deliberately reshaping culture and operating model to support a future of human-AI partnership.

How cultural beliefs shape AI value capture

Understanding the identity-to-value chain reveals why certain beliefs create operating patterns that either slow or accelerate the capture of AI’s transformational value. The contrast is stark:

“My professional identity is my expertise” vs. “My expertise evolves; it doesn’t define me”
When identity is tied to doing tasks manually, AI feels like an identity threat. This “we are experts” pattern drives high consultation, extensive assurance, and bespoke solutions—valuable for quality but a barrier to the speed and scalability AI enables. By contrast, viewing expertise as evolving allows the operating model to embrace standardisation for routine work while preserving judgement for higher-value strategic thinking.

“My expertise is my power” vs. “Shared knowledge multiplies power”
Knowledge-hoarding creates “shadow AI”—individuals using tools privately while resisting organisational adoption. This fragments AI usage and prevents enterprise-wide capability building. The alternative belief shifts the operating model from individual hoarding to collective capability-building, enabling faster learning cycles and compounding returns on AI investments.

“My power is threatened by AI” vs. “AI amplifies human capability”
Zero-sum thinking—“efficiency always means job cuts”—drives a “we are guardians” pattern: excessive controls, committee-based decisions, and risk avoidance. This creates massive opportunity cost through missed growth and delayed transformation. Reframing efficiency as capacity creation enables a “we are shapers and innovators” identity that embraces experimentation and manages risk intelligently rather than avoiding it entirely.

“Winning with AI comes at an inevitable human cost” vs. “Transformation is co-created”
Without credible investment in reskilling and a compelling vision of what people become, adoption feels like managed decline. When transformation is genuinely co-created, with clear development pathways, it unlocks the experimentation, knowledge-sharing, and risk-taking that drive transformational value.

When limiting beliefs go unaddressed, adoption stalls, talent leaves, and the remaining workforce becomes overly risk-averse. This isn’t just a human cost—it’s a direct threat to shareholder returns and competitive advantage. The business case and the human case are inseparable. But adopting empowering beliefs requires more than aspiration—it demands deliberately examining and reshaping the deeper assumptions that have long governed how the organisation creates value.

Examining identity beliefs through learning loops

So what is this deliberate process for examining and reshaping invisible beliefs and assumptions? This is where the learning-loop framework becomes useful: it describes three levels at which organisations can question and improve their AI adoption strategies:

Figure: “Moving from doing to being”, a workflow showing the stages of single-loop, double-loop and triple-loop learning.

Single-loop learning: “Are we doing things right?” This means corrections within existing rules, e.g. tightening processes, improving prompt quality and addressing output quality. Most AI efforts operate here: organisations refresh prompt libraries, add validation checklists, and refine their controls. These improvements matter, but they never question whether the organisation is doing the right things in the first place.

Double-loop learning: “Are we doing the right things?” This involves questioning the rules themselves, e.g. reflecting on how AI is being used and whether it is appropriate. It may mean asking whether certain processes or activities should be AI-enabled at all.

Triple-loop learning: “What is right? Why are we doing this? What purpose does it serve?” This examines the context of all other learning—the fundamental purpose, identity, and values of the individual and organisation. It examines the foundations of an organisation’s existence and its impact on the world.

Most organisations approach AI adoption through single- and double-loop learning. That work is important, but stopping there is a key reason for “productive limbo.” They improve execution and question tactics, but never examine the foundational beliefs about identity and expertise that can make AI feel threatening. Only by engaging in triple-loop learning—questioning what makes work valuable, how we define expertise, and who we’re becoming alongside AI—can an organisation unlock the beliefs and behaviours that power transformational value. For AI to reach transformational levels within an organisation, triple-loop learning means asking:

  • What organisation are we aiming to be, and how will AI enable us to deliver on our purpose and strategy?
  • What constitutes “valuable human work” in the AI era?
  • How do we balance efficiency with human development and dignity?
  • How do we define expertise when AI replicates technical skills?

This isn’t philosophy—it’s strategic cultural work that directly shapes whether an organisation has the cultural capacity to unlock transformational value. AI doesn’t just change what people do; it challenges fundamental assumptions about what makes work meaningful, what expertise means, and what growth (human and business) looks like.

From beliefs to action: What leaders need to do

If your AI strategy surfaces and addresses culturally reinforced beliefs about identity, you’ll be better able to become the future organisation you want to be—one built on principles of exploration, human enablement, and inclusive growth that unlock transformational value. Three shifts matter most:

1. Surface and address identity beliefs systematically

Until you understand what your people believe about their professional identity, expertise, value, and future in the AI era—and actively address identity-threat concerns—friction is inevitable. People whose identities feel threatened become disengaged. Teams fearing elimination stop sharing knowledge. Professionals believing mistakes will be punished stop experimenting.

The work isn’t to eliminate concerns (many are legitimate), but to name them explicitly, examine them together, and build new shared understanding about what identity, value, and growth look like when humans and AI work effectively together. Create space for teams and professions to surface concerns without being labelled “resistant,” and make the invisible visible through structured dialogue about how current identity beliefs shape work design, decision rights, governance, and resource allocation.

2. Articulate a compelling vision for people and organisation in the AI era

Leaders must articulate a credible vision of what people become in the AI-enabled organisation—with real investment in reskilling and role redefinition, not just reassuring words. This vision becomes the anchor for shifting limiting beliefs about what’s possible and where human value lies. It must connect to the organisation’s purpose and commercial ambitions, showing how AI enables the organisation to deliver on its mission while creating meaningful work for its people. Success metrics should reward intelligent risk-taking and experimentation, not just efficiency. Organisational narratives must shift from zero-sum framings (AI versus humans) to genuine partnership models (AI enabling higher-value work). When people can see themselves in the future organisation and believe the path to get there is credible, adoption accelerates.

3. Build context-specific risk intelligence

Different teams have different concerns about AI that make using it feel risky to varying degrees. Developers fear AI-generated code threatens their professional identity. Professional occupations worry about losing expert status. Risk officers fear accountability for probabilistic outputs. These aren’t problems to smooth over with generic policies—they’re legitimate differences requiring tailored approaches, because for many people AI feels like personal risk exposure. Effective capability-building means designing interventions that fit specific team contexts, addressing risk/reward concerns, having leadership visibly model the behaviours they want to see, creating accountability structures that clarify who decides what, and meeting teams where they are rather than imposing one-size-fits-all solutions.

Who are you becoming in an AI era?

Triple-loop learning forces a fundamental question for your organisation: Who are we becoming in our AI era?

This isn’t philosophical—it’s a business imperative. Organisations that successfully scale AI and unlock transformational value won’t just have the best tools. They’ll be the ones that examined their deepest cultural beliefs and built a compelling, co-created vision of who they’re becoming together—delivering on purpose, strategy, and commercial ambitions while creating meaningful work for their people. This creates engaged, aligned, and motivated teams working toward a shared vision of transformational AI success at pace.

The alternative is remaining in productive limbo: successful pilots that never scale, disengaged talent, knowledge walking out the door, risk-averse behaviour that stifles innovation, and transformations that destroy rather than create value. Organisations that fail to address foundational identity beliefs don’t just face ethical questions—they face competitive disadvantage, talent crises, and failed transformations. The business case and the human case are inseparable.

As BBVA’s Global Head of Data offered to the banking sector in OpenAI Frontiers: “Avoid clinging to your current beliefs, because they may change tomorrow with what technology may bring”. This is triple-loop learning in practice—the willingness to let foundational beliefs evolve.

Are you ready to evolve yours?

Is your culture a catalyst or a drag on AI transformation? Kin&Co helps companies build thriving cultures that enable transformational change. Our Cultural Intelligence approach surfaces the beliefs that shape behaviour and helps create the conditions for AI’s value at scale.
Contact us to explore a Cultural Intelligence briefing for your leadership team.

Read part 1 in our series: “Is your culture holding back your ability to scale AI for growth?”