One in six workers admits to pretending to use AI. Not just struggling with it or using it badly, but actually pretending. These folks are “performing” to give the appearance of adoption while quietly doing the work the old way. A social scientist studying this phenomenon called it “real-life LARPing” (live-action role-playing). Employees are ticking the boxes on their objectives, perhaps to protect their bonuses, while circumventing the tools entirely. If that sounds more like organizational theater than true transformation, you’re starting to see the problem.
The Fear Behind Failed AI Adoption
People are afraid of the unknown. They are afraid of getting something wrong, of admitting they don’t understand how these tools work, and—perhaps most of all—of becoming obsolete themselves. In our research on how great leaders think differently, fear consistently emerged as one of the dominant emotions shaping workplace behavior today. AI has amplified it to the point of existential threat.
When AI use becomes an expected part of someone’s role (written into objectives, with expectations tied to productivity gains), the pressure is enormous. And when that pressure isn’t matched with genuine support for behavior change, you get exactly what we’re seeing now: people performing compliance instead of genuinely adopting new technology to change the way they work.
The data from Pluralsight’s 2025 AI Skills Report bears this out: 91% of C-suite executives admit they’ve pretended to know more about AI than they actually do, and 79% of workers are doing the same. One in three employees is actively pushing back, refusing to use AI tools or skipping training altogether. This is what happens when AI adoption is treated as a training problem instead of a change initiative. Corporate LARPing is the new productivity theater, and it is costing organizations millions in unrealized ROI.
The uncomfortable truth is that if people are pretending to use AI, that is not a skills gap. It is a leadership signal. People pretend when it is safer to perform compliance than to express uncertainty. AI fear isn’t a technology problem. It’s an identity problem.
The “It Won’t Happen to Me” Bias
There is another layer that makes behavior change even harder. Irrational Labs’ AI adoption research found that 8% of people believe AI will replace them, 14% believe it will replace their peers, and 29% believe it will replace workers in other industries. Notice the pattern: the further away from “me,” the higher the perceived risk.
This is classic optimism bias, the same cognitive mechanism that makes us think we are better drivers than average or funnier than most other people. The human brain is wired for homeostasis. It has self-protective mechanisms that help us avoid acknowledging threats unless absolutely necessary. Layer AI over all of this, and individuals will systematically underestimate their need to adapt. They are not being unreasonable. They are being human.
Which raises the real question: how do you create genuine behavior change when the brain is actively working against it?
Why Training Alone Won’t Get You There
The knee-jerk reaction to low AI adoption is always the same: run a training program. Teach people to design agents. Build a curriculum. But training is one small piece of a much bigger puzzle. Access to tools alone doesn’t transform organizations, change workloads, or shift behaviors.
Behavior change requires three elements working in concert: capability (the skills and knowledge), opportunity (the conditions to apply them), and motivation (the reason to care). Most organizations obsess over capability and ignore the other two, which is why so many AI rollouts stall.
Self-Determination Theory offers a sharp lens for understanding why AI adoption either embeds or fades. It tells us that sustainable behavior change depends on three psychological needs being met: autonomy, competence, and relatedness. In practice, this means people don’t adopt AI simply because they’ve been trained or told to—they adopt it when it makes them feel more in control of their work, more confident or capable in delivering it, and more connected to how their peers are working. When those conditions are present, AI shifts from being a tool people use to something they rely on.
The implication is that AI adoption is less a capability problem and more a motivation system. If AI is imposed, people resist or comply superficially (low autonomy). If it’s confusing or unreliable, they disengage (low competence). If it isn’t socially normalized, it never scales beyond early adopters (low relatedness). Organizations that succeed design for these needs deliberately: creating space for choice, accelerating early wins to build confidence, and making usage visible and shared. That’s what turns experimentation into habit, and habit into sustained performance.
A Tale of Two Companies: Klarna and IKEA
Very few enterprise AI initiatives currently deliver clear ROI. True enterprise-wide maturity in AI deployment is still extremely rare. The difference between the organizations that get there and the ones that don’t usually comes down to how they think about people.
When Klarna made headlines for replacing its customer service team with AI bots, it looked like a bold leap into the future. The reality was more sobering. Within just a year, they were rehiring. Having asked employees to train the AI, they discovered that what they’d actually done was strip out the very human judgment, context, and nuance the system still depended on. Efficiency gains proved fragile without expertise in the loop, and the “replacement” narrative quickly gave way to a more durable truth: AI works best when it augments humans, not when it attempts to sideline them.
IKEA took a completely different approach. They built BILLIE, an AI bot that now handles 47% of customer inquiries. But instead of laying off their call center workers, they asked a different question: what could these talented people do now that they were freed from repetitive tasks? The answer was remote interior design. IKEA trained 8,000 of them, creating a brand-new revenue channel that now accounts for 3.3% of total revenue, or 1.4 billion dollars.
IKEA didn’t try to do more with less. They did more with augmentation. While Klarna asked, “How do we replace people?” IKEA asked, “What else could our people do to have more impact?”
Psychologists call this the “IKEA effect”: when people build their own furniture, they value it more. Friction creates ownership. The same principle applies to AI adoption. When people are involved in shaping how AI transforms their work, rather than having it done to them, they own it.
That ownership doesn’t happen by accident. It requires getting close to the work itself, sitting with teams to understand what is painful, repetitive, or meaningless and reimagining it together. It requires mapping skills team by team rather than dictating from above. And it requires understanding what actually motivates people, not just what you assume will.
Introducing the BRAVE Framework
If fear is driving the dysfunction—the pretending, the pushback, the optimism bias—then the framework needs to counteract fear directly and create the conditions for genuine behavior change. That is BRAVE: Belonging, Relevance, Access, Visibility, and Empowerment. Each pillar addresses a specific barrier to adoption.
Belonging: Counteracting Fear with Psychological Safety
Before anyone will genuinely experiment with AI, they need to feel safe doing so. Belonging is built through purpose-driven messaging that addresses people’s fears head-on rather than pretending they don’t exist. It means positioning AI as augmentation rather than replacement, and meaning it. It also means being transparent about what AI can and cannot do and treating experimentation as something to be valued rather than punished.
Relevance: Connecting AI to Work That Actually Matters
This is where most implementations collapse. Tools get deployed without ruthless relevance to the workflows employees care about. Relevance means aligning AI use cases with the actual roles and tasks people are doing every day, not flashy demos disconnected from real work. The strongest signal of relevance is when teams set a goal together and explore how AI could move it, often supported by internal champions who understand the workflow well enough to spot where AI genuinely creates opportunity.
Access: Creating Safe Spaces to Learn
You can’t learn to swim by reading about it, and you can’t learn AI by sitting in a webinar. Access means hands-on practice with real tools, in sandbox environments where mistakes are learning opportunities rather than career risks. It also means having the underlying infrastructure, secure tools, and coaching loops in place so that people aren’t learning in a vacuum—and making the technology genuinely easy to reach when curiosity strikes.
Visibility: Leveraging “People Like Me”
The question quietly running in every employee’s head is: are people like me using this? Visibility answers it. When colleagues and leaders are seen genuinely using AI tools—not just talking about them in town halls—it normalizes the behavior in a way no policy document can. Social proof, peer mentoring, and leadership modeling create the stories people pass along to one another, and those stories are what shift the norm.
Empowerment: Normalizing Intelligent Failure
The final pillar is about ownership and momentum. When people feel empowered to experiment without fear of punishment, innovation follows. Empowerment means recognizing and rewarding experimentation, celebrating small wins, and treating intelligent failure as expected rather than embarrassing. The message employees need to hear (and believe) is simple: we’ve got your back; have a go.
What This Means for Learning & Development
AI adoption is not a training initiative. It is a change initiative, with training as one component. For L&D teams, that requires reimagining the role itself. The question is no longer “what training do we need to build?” It is whether you are creating psychological safety so people feel they belong in an AI-enabled future; whether every initiative is ruthlessly relevant to real work; whether you are providing genuine access through hands-on practice; whether adoption is visible through peer success and leadership modeling; and whether people are empowered to experiment, fail intelligently, and build momentum.
When AI initiatives are aligned with BRAVE, L&D shifts from being a training provider to a strategic partner in transformation.
Where to Start with BRAVE AI Adoption
The organizations that embrace frameworks like BRAVE—approaches that address the full spectrum of human factors from fear to motivation to social proof—will look dramatically different from those still treating AI as a technology rollout.
Getting there requires acknowledging some uncomfortable truths. You cannot train your way out of a behavior change problem. Training is necessary but not sufficient.
If people feel unsafe, they will pretend. If they feel incapable, they will disengage. If they feel alone, they will opt out. But when people feel in control, capable, and part of something shared, adoption stops being something you push, and becomes something that sustains itself. The organizations that understand that won’t just implement AI. They’ll actually change how work gets done.
Start by honestly assessing where your organization stands on each BRAVE pillar. Where are the gaps? Which elements are strongest? Use that assessment to build a roadmap that addresses human behavior alongside technical implementation.
And perhaps most importantly: stop measuring success by tool deployment alone and start measuring it by behavior change. Are people genuinely using AI, or are they pretending? Are they experimenting and learning, or are they quietly resistant? Are they being empowered to do more, or are they just terrified of being replaced?
The organizations that get this right won’t just adopt AI. They will be the ones creating real growth, real transformation, and real advantage in the market.
Ella Richardson