
How to train your team on AI without a single PowerPoint

Published March 23, 2026

This is part of our AI Implementation Training series.

If you’re wondering how to train staff on AI, I have bad news. The answer isn’t better training materials. It’s not a more engaging workshop facilitator. It’s not an internal AI champion program with Slack badges.

The answer is: stop training them on AI. Start building AI into how they already work. Then show them.

That sounds like semantics. It’s not. “Training people on AI” produces a team that attends a session and nods. Building AI into their workflow produces a team that actually uses the thing six months later. The gap between those outcomes is enormous.

The PowerPoint problem

Every company I’ve spoken to that’s tried AI training has a version of the same story. They hired someone (consultant, internal L&D, whoever) to run AI workshops. The workshops covered what AI can do, how large language models work, maybe some prompt engineering tips. People were engaged. People were excited. People went back to their desks and changed absolutely nothing.

Why? Because knowing what AI can do and knowing how AI fits into your Tuesday morning are two completely different things. The workshop gave them general knowledge. It didn’t give them a workflow.

I talked to an operations director at a 200-person professional services firm last month. They’d spent £15,000 on AI training across the company. When I asked what changed, she paused. “Honestly? A few people started using ChatGPT for emails. That’s about it.”

Fifteen grand. For email drafting. That’s the PowerPoint problem in a single anecdote.

Workflow-first education

Here’s how we think about it at Easton. You don’t train people on AI. You identify the specific parts of their job that AI can do better, faster, or entirely. Then you build AI into those specific parts. Then you show them the new way their job works.

The education happens through using the system, not through learning about the system.

A practical example. We worked with a coaching business where the support team was spending 3-4 hours a day answering student questions. Same questions, over and over. Different wording, same underlying answers.

We didn’t run an AI workshop for the support team. We built a knowledge assistant trained on all the course content, FAQs, and community discussions. Then we put it in front of the support team and said: “This handles first-pass answers now. Your job is to review what it drafts, edit if needed, and send.”

The first day, they were cautious. By the second week, they trusted it for about 70% of queries. By the end of the month, they were handling the same volume of support tickets in under an hour. Nobody needed a PowerPoint to get there.

Why the “AI champion” model fails

A lot of companies try the middle ground. They pick enthusiastic people, call them AI champions, and task them with spreading AI adoption across the org. In theory, this is peer-led learning. In practice, it’s putting the responsibility for organizational change on the shoulders of someone with no authority to change organizational processes.

The AI champion can show people cool tricks. They can’t restructure how the finance team processes invoices. They can’t decide that client intake should run through an AI triage before hitting a human. Those are operational decisions, not training decisions.

Real AI adoption requires changes to how work actually flows through a business. That’s a design and build problem, not an education problem. Which is why at Easton, the Design phase comes before everything. We map workflows, identify the highest-impact opportunities, and build systems that slot into existing processes. The education is embedded. (I wrote more about this in AI implementation training that actually works.)

If this sounds like your business, let's talk about building it.

What real adoption looks like

I’ll tell you what it doesn’t look like: everyone using ChatGPT at their desk for random tasks. That’s not adoption. That’s experimentation. And it usually creates more problems than it solves because nobody’s using AI consistently, nobody’s using the same prompts, and nothing compounds.

Real adoption looks like this:

Your sales team has an AI that enriches every inbound lead with company data, recent news, and a suggested approach. They don’t think about whether to use it. It’s just there, in their CRM, populated before they open the record.

Your ops team has an AI that reads every incoming client brief, extracts the requirements, and creates a draft project plan. They review and adjust. The AI did the first 80%.

Your support team has a knowledge assistant that drafts replies to common questions. They edit and send. Response time dropped from hours to minutes.

In none of these cases did anyone “learn AI.” They learned a new version of their existing job that happens to have AI running underneath it. The AI is infrastructure.

The three conditions for adoption without training

After building systems for multiple businesses, I’ve noticed three conditions that predict whether a team will actually use an AI system long-term.

First, the system has to be faster than the old way on day one. If there’s a learning curve that temporarily makes people slower, you’ve already lost. People will abandon it and go back to what they know. The system needs to be obviously, immediately faster.

Second, the output has to be good enough to be useful, even if it’s not perfect. If people have to rewrite everything the AI produces, they’ll (correctly) conclude it’s not saving them time. The bar is “good first draft,” not “perfect output.”

Third, there has to be a clear path for when the AI gets it wrong. I call these escape valves. If someone doesn’t know what to do when the AI gives a weird answer, they’ll stop trusting the whole system. “Flag it and a human takes over” is a simple escape valve that makes people comfortable.

When all three conditions are met, adoption happens naturally. No workshops required.
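To make the escape-valve idea concrete, here’s a minimal sketch of how that routing decision might look in code. Everything here is hypothetical (the function name, the confidence score, the 0.7 threshold are all illustrative assumptions, not a real system):

```python
# Hypothetical sketch of an "escape valve": when the AI's confidence in a
# draft is low, a human takes over instead of forcing the AI answer through.

def route_reply(draft: str, confidence: float, threshold: float = 0.7) -> dict:
    """Return the AI draft for human review-and-send, or flag for human takeover."""
    if confidence >= threshold:
        # Normal path: human reviews the draft, edits if needed, sends.
        return {"action": "review_and_send", "draft": draft}
    # Escape valve: below the threshold, the AI steps aside entirely.
    return {"action": "human_takeover", "draft": None}

print(route_reply("Here is how to reset your password...", 0.92))
print(route_reply("I'm not sure which course you mean.", 0.4))
```

The point isn’t the threshold value; it’s that the “what do I do when the AI is wrong?” question has one obvious, built-in answer.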

What about prompt engineering?

People ask me about prompt engineering training a lot. Should their team learn how to write better prompts?

Honestly, if your team needs to write prompts to use your AI system, the system was designed wrong. Good AI systems have the prompts built in. The user inputs information (a client name, a document, a question) and the system handles the prompt engineering internally.

Teaching everyone prompt engineering is like teaching everyone SQL because they need data from a database. Just build a dashboard. The same logic applies to AI. Build the interface. Hide the complexity. Let people focus on their actual job.
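As a sketch of what “prompts built in” means in practice: the system owns the prompt, and the user only ever supplies plain inputs. The names and prompt text below are illustrative assumptions, not a real product’s internals:

```python
# Hypothetical sketch: the prompt engineering lives inside the system.
# The user types a question; the system assembles the full request.

SYSTEM_PROMPT = (
    "You are a support assistant for our course platform. "
    "Answer only from the provided context. Keep replies under 150 words."
)

def build_request(question: str, context: str) -> list[dict]:
    """Assemble the full chat payload; the user never writes a prompt."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
    ]

messages = build_request("How do I reset my password?", "FAQ: Password resets...")
print(messages[0]["role"])  # the system prompt is fixed; the user never sees it
```

This is the dashboard-over-SQL move: the interface takes plain inputs, and the complexity stays behind it.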

The exception is senior leadership and strategy roles. Those people might genuinely benefit from understanding what AI can do at a conceptual level, because they need to spot opportunities. But even then, a 30-minute conversation about their specific business beats a generic workshop every time.

How to start

If you’re sitting on a failed AI training initiative, or thinking about starting one, here’s what I’d do instead. Pick one workflow. The most repetitive, time-consuming, clearly defined process in your business. Build an AI system for that one workflow. Get the team that does that work involved in the build from day one. Launch it. Measure usage at 30, 60, and 90 days.

If usage holds, you’ve just proven the model. Scale it to the next workflow. If usage drops, figure out which of the three conditions (speed, quality, escape valves) you missed, and fix it.
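The 30/60/90-day check can be as simple as comparing active-user counts against the day-30 baseline. This is a hypothetical sketch (the function, the 20% tolerance, and the example numbers are all assumptions for illustration):

```python
# Hypothetical sketch: did adoption hold? Compare the 90-day active-user
# count against the 30-day baseline, allowing a modest drop.

def usage_held(counts: dict[int, int], drop_tolerance: float = 0.2) -> bool:
    """counts maps checkpoint day (30, 60, 90) to active users.
    Usage "held" if day 90 is within drop_tolerance of the day-30 baseline."""
    baseline = counts[30]
    return counts[90] >= baseline * (1 - drop_tolerance)

print(usage_held({30: 40, 60: 38, 90: 37}))  # usage held: scale to the next workflow
print(usage_held({30: 40, 60: 25, 90: 12}))  # usage dropped: find the missed condition
```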

That’s how you actually build AI fluency in a team. One workflow at a time, built into the work, no slides necessary. According to McKinsey research, companies that focus on embedding AI into specific workflows see significantly higher adoption rates and measurable business impact compared to those that rely on general training programs.

Frequently asked questions

What’s the best way to train staff on AI?

Don’t train them on AI. Instead, identify specific parts of their job that AI can do better, faster, or entirely. Then build AI into those workflows and show them the new way their job works. The education happens through using the system, not through learning about the system.

How much does AI training for a team cost?

The cost can vary widely depending on the size of your team and the scope of the training. As an example, one company we spoke to spent £15,000 on AI training across a 200-person organization, but the impact was limited to a few people using ChatGPT for emails. A more effective, workflow-focused approach will likely cost more upfront, but will drive significantly more adoption and impact.

Why does the “AI champion” model fail?

The “AI champion” model, where enthusiastic people are tasked with spreading AI adoption across the organization, often fails in practice. While it’s a good idea in theory to have peer-led learning, in reality it puts the responsibility for organizational change on a small group of people, rather than integrating AI into the actual workflows of the broader team. Harvard Business Review research shows that successful AI transformation requires top-down commitment to restructuring work processes, not just grassroots enthusiasm.


Stop training. Start building.

We design AI systems your team actually uses. Training is built in, not bolted on.

Book a discovery call
Or explore our AI Education service →