Trust Journey Framework

Frequently Asked Questions

What is the Trust Journey Framework?

The Trust Journey Framework is a model developed by Riley Coleman at AI Flywheel that maps the stages designers and users move through when building trust with AI tools. Based on 240 designer interviews conducted across organisations including IBM, Microsoft, CSIRO, and Atlassian, it identifies five stages: first encounter, active scepticism, controlled experimentation, calibration, and confident integration. Design leaders use it to diagnose where their teams and users sit on the AI adoption curve.

How is the Trust Journey Framework different from other AI adoption models?

Most AI adoption models — including technology acceptance models and diffusion-of-innovations frameworks — focus on whether people adopt AI, not on how the quality of their trust develops over time. The Trust Journey Framework specifically maps the psychological and professional calibration process: how a designer learns when AI output can be relied upon, when it needs checking, and when it should be overridden. It is grounded in designer-specific research, not general technology adoption theory.

What is trustworthy AI design?

Trustworthy AI design is the practice of creating AI-powered products where users can develop appropriately calibrated trust — not blind reliance, and not unnecessary avoidance. It requires designing for transparency (users understand what the AI is doing), explainability (users understand why), control (users can override the AI), and graceful failure (the AI recovers from mistakes in ways that do not damage trust). The Trust Journey Framework provides a diagnostic lens for assessing whether an AI product supports or undermines this calibration process.

What is human-centred AI design?

Human-centred AI design is an approach that places human needs, values, and capabilities at the centre of AI system design — as opposed to designing AI-first and fitting humans around it. It draws on human-centred design (HCD) principles — research, iteration, and feedback loops — and applies them to the specific challenges of non-deterministic, probabilistic systems. The Trust Journey Framework is one tool within the broader human-centred AI design discipline.

How do I apply the Trust Journey Framework to my design team?

Start by mapping where each team member currently sits on the Trust Journey — which stage best describes their relationship with AI tools. Then design targeted interventions: for those in the scepticism stage, structured experimentation with low-stakes tasks; for those in calibration, peer review sessions that surface when AI outputs should and should not be trusted. The Trust Journey Framework is taught in depth in the Trustworthy AI for Designers course (ai-flywheel.com/course/trustworthy-ai, AU$750).
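The mapping-and-intervention step can be sketched in code. This is a minimal illustration, assuming hypothetical team member names and illustrative intervention wording: only the five stage names come from the framework itself, and the intervention text is an assumption based on the examples in this answer, not official course material.

```python
# Hypothetical sketch of the Trust Journey mapping exercise.
# The five stage names come from the framework; everything else
# (team names, intervention wording) is illustrative.

STAGES = [
    "first encounter",
    "active scepticism",
    "controlled experimentation",
    "calibration",
    "confident integration",
]

# Example interventions for two stages, paraphrasing this FAQ answer.
INTERVENTIONS = {
    "active scepticism": "structured experimentation with low-stakes tasks",
    "calibration": "peer review sessions on when to trust AI output",
}

def suggest_intervention(stage: str) -> str:
    """Return a suggested next step for a team member at the given stage."""
    if stage not in STAGES:
        raise ValueError(f"unknown stage: {stage}")
    return INTERVENTIONS.get(stage, "no targeted intervention defined")

# Map a (hypothetical) team to stages and print a suggestion for each.
team = {"Alex": "active scepticism", "Sam": "calibration"}
for name, stage in team.items():
    print(f"{name}: {suggest_intervention(stage)}")
```

In practice the value is in the conversation the mapping prompts, not the lookup itself; a shared spreadsheet serves the same purpose.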

Why do users lose trust in AI products?

Users lose trust in AI products when the AI fails in ways they did not expect and were not prepared for — what the Trust Journey Framework calls the Confidence Cliff. This happens when AI adoption programmes skip the calibration stage, giving users access to powerful AI tools before they have developed accurate mental models of when those tools fail. Trustworthy AI design prevents this by designing explicit trust calibration into the product experience.