Is your culture holding back your ability to scale AI for growth?

Laura Ansloos
26th Nov 2025

As AI continues to transform the world at pace, organisations aren’t just navigating a technological revolution; they are navigating a cultural revolution too.

Growth requires AI adoption at transformational levels. Transformational AI adoption requires intelligent risk-taking at scale. And intelligent risk-taking requires a culture that celebrates it, not just tolerates it, so that AI investment drives revenue growth, resilient margins and reduced downside risk.

The question isn’t just whether your organisation is ready for AI. It’s whether your culture makes intelligent risk-taking feel safe, supported, and rewarded – or whether it quietly signals the opposite. 

This is the first of two Kin&Co pieces exploring the link between AI adoption and risk intelligence. In this piece, we make the case that AI adoption requires people to take intelligent risks in their day-to-day work – yet many organisations have cultures that constrain exactly these behaviours. In our second piece, we’ll examine where these cultural signals originate and introduce an approach that enables organisations to evaluate and reset the conditions they need for AI adoption at scale.

AI is now a competitive imperative

AI-driven growth requires people to take intelligent risks in their day-to-day work, yet many organisations have spent years culturally conditioning their people to avoid risk altogether. This matters because capturing AI’s transformational potential is a key factor in unlocking growth and competitiveness for UK businesses. And there is no time to lose – a five-year delay to full AI roll-out would reduce its economic impact in 2035 by over £150 billion. Moody’s also warns that slow AI adoption will erode margins amidst low-growth conditions.

However, while three quarters of risk leaders believe rapid progress on AI is essential, only a small minority feel confident about where to focus. The issue compounds in traditionally risk-conservative settings, such as financial institutions, with parliamentary committees linking risk-averse culture among financial regulators to weaker competitiveness. In a November 2025 speech, the FCA also signalled a clear cultural-shift mandate:

“Our existing outcomes-focused rule set is the reason that we are not regulating separately for AI … this shift to outcomes-focused and less prescriptive rules may feel uncomfortable. It requires a culture change – both in the industry and, yes, for us a regulator. And it requires more active risk management within firms. But an outcomes-focused approach is essential if we are to be forward looking and supportive of innovation.”

Sarah Pritchard, Deputy Chief Executive, FCA

Technology, risk, finance and HR leaders all feel the tension. Legitimate concerns about bias, security and human judgment sit alongside intense board pressure to move faster on AI than the competition. This pressure to move quickly extends even to the most traditionally risk-wary organisations, with financial institutions ramping up investment in AI to drive growth. Kin&Co believes that organisations with cultures that support the right kind of risk-taking will more successfully capture value from AI. Indeed, digital transformations often fail to deliver their promised value because of cultural barriers, not only technical ones.

At Kin&Co, we have been partnering with COOs and risk professionals to explore a critical shift in how risk is framed. We define this as moving beyond risk management to building genuine risk intelligence: a much more active, confident and strategic approach to risk optimisation within firms and across the sector.

In simple terms, it is the organisational muscle that lets you pursue upside without losing sight of downside. We believe risk intelligence is an essential ingredient both for avoiding reputational damage and for AI-enabled growth.

And so the question we ask clients exploring AI adoption is not just “are you ready for AI?”. It is also “does your culture support people to take intelligent risks?”

Transformational AI adoption requires personal risk exposure

AI is different from every other technology we’ve asked employees to adopt. Think about the last time your company rolled out new software – maybe a CRM system, ERP or project management tool. If it was set up correctly, it did exactly what it was supposed to do, every single time. Click the same button, get the same result. AI doesn’t work that way. You can ask an AI the same question twice and get two different answers. One might be brilliant, the other might be completely off-base. This isn’t a bug – AI systems make probability-based predictions rather than following fixed rules, which means every output needs a human to check it makes sense.
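
To make this contrast concrete, here is a minimal, purely illustrative Python sketch (not based on any particular AI product; the customer records, expense labels and weights are invented). It contrasts a rule-based lookup, which returns the same answer every time, with a probability-based categoriser, which samples from weighted options and so can return a different answer each time you ask.

    import random

    # Rule-based software: the same input always produces the same output.
    def crm_lookup(customer_id):
        records = {"C-001": "Acme Ltd", "C-002": "Globex Plc"}  # invented example data
        return records.get(customer_id, "Not found")

    # Probability-based system (a stand-in for an AI model): the output is sampled
    # from a distribution, so repeated calls with the same input can disagree.
    def ai_categorise(expense_description):
        # The description is ignored here for simplicity; a real model would condition on it.
        labels = ["Travel", "Entertainment", "Office costs"]
        weights = [0.6, 0.3, 0.1]  # illustrative confidence scores, not real model output
        return random.choices(labels, weights=weights, k=1)[0]

    print(crm_lookup("C-001"), crm_lookup("C-001"))                          # identical every time
    print(ai_categorise("Taxi to client"), ai_categorise("Taxi to client"))  # may differ run to run

The shape of the behaviour is the point: because the second function samples rather than looks up, a human still has to judge whether any individual answer makes sense.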

Let’s think about what this difference demands of individuals.

  • A financial analyst feeds data into a model and relies on its categorisation, putting their judgment on the line
  • A developer deploys code suggested by an AI tool, accepting accountability for outputs they didn’t fully author
  • A talent lead uses an AI screening tool to shortlist candidates, relying on its pattern recognition while holding responsibility for fair and defensible hiring decisions
  • A risk officer approves a use case knowing the technology is probabilistic, choosing uncertainty over false comfort

Every one of these acts involves personal risk exposure with direct consequences for the organisation – choosing to rely on something imperfect, making yourself accountable for outputs you can’t fully control, accepting that you might be wrong in ways your organisation will scrutinise.

And here’s the issue: AI adoption research shows people are happy to use AI for low-stakes tasks, but their confidence drops sharply when the consequences matter. For a lawyer reviewing contracts or a finance professional analysing cost data, an AI mistake isn’t just annoying – it could be catastrophic. Choosing to use AI in these environments isn’t really a technology decision. It’s one of personal risk exposure.

This is why AI adoption is inseparable from intelligent risk-taking. “Intelligent” in the AI context means three things: knowing where using AI at work offers real value and where it creates unacceptable risk; clear accountability and guardrails around responsible use; and clear processes for learning and reporting when things don’t go as expected. Whether people feel able to take those intelligent risks is heavily shaped by the risk culture surrounding their day-to-day work.

Culture influences how people respond to risk

Risk research shows that people’s risk appetite shifts depending on how a choice is presented, what they think will happen if it goes wrong or right, and what they see and hear around them. Put simply, the day-to-day signals and cues in your culture strongly influence how people respond to risk.

The work of the late Nobel Prize-winning psychologist and behavioural economist, Daniel Kahneman, helps explain why. When situations are framed in terms of losses, people often take more desperate risks to try to avoid those losses. When framed in terms of gains, they often become overly cautious and demand a level of certainty that is unrealistic. Other human biases make this worse. We pay too much attention to recent events, seek information that confirms what we already believe and cling to first impressions even when conditions change.

Kahneman’s research also showed that people rely on mental shortcuts – what he called heuristics – when making decisions under uncertainty. AI adoption creates multiple layers of uncertainty: the outputs themselves are probabilistic, the consequences of using AI are unclear (will this be seen as innovative or disruptive?), accountability is ambiguous (who is responsible when AI contributes to a decision?), and organisational responses are unpredictable (will mistakes be learning opportunities or career-limiting events?). When we’re time-pressed or cognitively overloaded in these conditions, we default to shortcuts rather than careful analysis. This creates a fundamental tension: AI requires people to make risk judgments amid multiple uncertainties, but our cognitive wiring makes us uncomfortable with uncertainty and pushes us toward either avoidance or oversimplified heuristics. Under pressure to deliver quickly, people skip validation steps, forget compliance requirements, or fail to recognise when outputs might be biased – not because they’re careless, but because they’re human.

Your organisation’s cultural signals are doing this AI risk-framing work constantly through three key mechanisms:

What leaders model. If leaders signal their uncertainty around the value of AI, are generally risk-averse, or frame experiments as routes to failure rather than learning opportunities, people will interpret any personal piloting as risky to their reputation – or even their career – and treat it as a loss to avoid. In the face of ambiguity, people protect what they have rather than take a risk for an unclear gain. Addressing this requires real leadership energy and appetite for AI – not just sponsorship, but visible engagement. In our experience, it is critical that leaders articulate a clear link between AI adoption, risk and the organisation’s commercial and strategic goals. Leaders who actively use AI, share their experiments (including failures), and celebrate learning signal that intelligent risk-taking is not just permitted but expected. When combined with strategic clarity, this creates the conditions for productive rather than paralysing uncertainty.

What policies signal. The UK Cabinet Office’s guidance on scaling AI tools suggests that policies that generically mandate “humans must check outputs” often fail as a safety mechanism. Experienced professionals can’t always recognise bias or misleading information in outputs. In addition, time pressures and “saves you time” messaging create conflicting signals – if AI is meant to make me faster, why spend time checking? Without structured criteria specifying what judgment is actually needed, policies shift risk onto individuals while creating organisational vulnerability.

What incentives actually reward.  If any missteps made when using AI are met with blame, while the cost of not using it is never counted, it is rational for people to hold back and wait for impossible levels of certainty before they try anything new. This is loss framing in action – people focus on avoiding the visible risk (making a mistake) while ignoring the invisible cost (missing opportunities). If bonuses are still tied to hitting targets the “old way,” it is equally rational for people to stick with that and for AI adoption to stall.

These cultural mechanisms produce measurable impacts on AI adoption. Deloitte’s AI and trust research suggests that organisations that invest in building a transparent, trusting culture for AI adoption are more likely to land in the top tier for both realised AI benefits and effective risk management. However, Gallup’s Culture of AI Benchmark reports that only about one in five digital transformation initiatives achieve growth or efficiency goals, with cultural readiness the primary differentiator.

At Kin&Co, we help our clients see where they have become culturally trapped in this risk framing – where AI adoption stalls not because the technology fails, but because the cultural context makes the benefits of using AI invisible, not worth the reward or, worse, a source of personal cost.

Does your organisation really believe AI is a risk worth taking?

Culture shapes AI adoption by constantly sending signals about what is or isn’t considered acceptable personal risk exposure. Whether an individual feels able to take those risks isn’t primarily about technical capability or even personal courage – it’s about what the culture signals is acceptable risk-taking. Through what leaders model, what policies mandate, and what incentives reward, your organisation is doing continuous risk-framing work – telling people, in a hundred small ways each day, whether using AI will be seen as savvy or risky, and whether early missteps are learning opportunities or career-limiting events.

But these cultural signals around AI and risk-taking don’t emerge from nowhere. They originate from something deeper – foundational cultural beliefs that inform the visible patterns of behaviour that define an organisation. Beliefs around risk may sound like:

  • “Our reputation is built on our expertise”
  • “Prudence means survival”
  • “Perfect execution is the only acceptable standard”
  • “My knowledge is my power”

The question isn’t just whether your organisation is ready for AI. It’s whether your culture makes intelligent risk-taking feel safe, supported, and rewarded – or whether it quietly signals the opposite. 

Growth requires AI adoption at transformational levels. Transformational AI adoption requires intelligent risk-taking at scale. And intelligent risk-taking requires a culture that enables it.

The question is: Are you ready to go deep enough to capture the transformational value of AI?

In our next piece, we’ll go deeper. We’ll explore where these cultural signals derive from and introduce a Kin&Co approach that enables organisations to evaluate and reset the conditions they need for AI adoption at scale.

Kin&Co helps companies build thriving cultures that enable risk intelligent growth. Our Cultural Intelligence approach combines rigorous assessment with practical behaviour change tools and interventions to make intelligent risk-taking visible, safe, and scalable – surfacing the beliefs that shape behaviour, resetting the signals that drive decisions, and creating the conditions for AI adoption at transformational scale.