
AI Chatbots Aren't Just for Classrooms — They're Changing How Employees Learn on the Job

  • Jason Jacobs
  • Feb 27
  • 5 min read

There's a moment every new employee knows well.

You're 60 days in. You're handling a live customer situation or running a process you've done a dozen times in training, and something comes up that you weren't quite prepared for. Your manager is on another call. Your peer is slammed. The answer is probably in a knowledge article somewhere, but you don't have 10 minutes to find it.


So you guess. Or you wait. Or you make a mistake that costs time, money, or a customer relationship.


That moment is exactly what I spent years trying to eliminate at Sunrun — and what eventually led me to build AI chatbots trained on our own internal knowledge to serve as real-time performance support for a workforce of 1,300+ employees.


It also turns out to be the same problem that forward-thinking educators are solving in their classrooms. And the parallels between the two are worth paying attention to.


What the Research Gets Right

A recent piece by educator Olla (published in Inside Higher Ed) outlines a five-step framework for designing chatbot role-play scenarios in higher education — everything from defining learning objectives and writing prompts, to testing, iterating, and implementing with students. His work in healthcare and business courses shows that when chatbots are designed with intentionality, they stop being novelty tools and start being genuine learning engines.

His framing stuck with me: "When implemented thoughtfully, chatbot role-plays are tools for deep, student-centered learning."

Replace "student-centered" with "employee-centered" and you're describing exactly what high-performing L&D organizations are building right now.


What We Built at Sunrun and Why

When I joined Sunrun's L&D team, we were supporting a distributed workforce across technology operations, customer service, and field roles. Training programs were solid. Onboarding was structured. But the gap between formal training and real-time performance was a constant challenge.

The real drain wasn't what people didn't learn in training. It was what they couldn't access fast enough when they needed it on the job.


Knowledge articles lived in a system that required knowing what to search for. SOPs were detailed but dense. Recorded training workshops — the ones with the most useful walkthroughs and real-case examples — sat largely unwatched after launch week.


So we built AI chatbots trained directly on that content. Knowledge articles, SOPs, process documentation, and recordings of actual training workshops were used to train models that employees could query in plain language, in the moment, without escalating to a peer or a manager.
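
For readers who want a concrete picture, here is a minimal sketch of the retrieval-then-answer pattern this describes. Everything in it is illustrative, not Sunrun's actual stack: the document store, the keyword scoring, and the ask_llm stub are stand-ins, and a production system would use embedding-based search rather than keyword overlap.

```python
# Minimal sketch: ground a chatbot's answers in internal documentation.
# Illustrative only; names and content are hypothetical.

from dataclasses import dataclass

@dataclass
class Doc:
    title: str
    text: str

# The internal knowledge the bot is grounded in: KB articles, SOPs,
# process docs, workshop transcripts.
DOCS = [
    Doc("KB: Refund processing steps", "Refunds over $500 require lead approval ..."),
    Doc("SOP: Permit escalation", "If the permit office is closed, escalate to ..."),
]

def score(query: str, doc: Doc) -> int:
    """Crude keyword-overlap relevance; real systems use embeddings."""
    q = set(query.lower().split())
    d = set((doc.title + " " + doc.text).lower().split())
    return len(q & d)

def retrieve(query: str, k: int = 2) -> list[Doc]:
    """Return the k most relevant documents for the question."""
    return sorted(DOCS, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query: str, docs: list[Doc]) -> str:
    """Assemble a prompt that forces the model to answer from the docs."""
    context = "\n\n".join(f"[{d.title}]\n{d.text}" for d in docs)
    return (
        "Answer using ONLY the documentation below. Cite the article title "
        "you relied on. If the answer is not there, say so.\n\n"
        f"{context}\n\nEmployee question: {query}"
    )

def ask_llm(prompt: str) -> str:
    """Stand-in for whatever model endpoint your organization uses."""
    raise NotImplementedError("call your model provider here")

question = "customer wants a $750 refund, what do I do?"
print(build_prompt(question, retrieve(question)))
```

The design choice worth noting is the constraint in build_prompt: the bot answers from the retrieved documentation and cites it, which is what lets an answer reference the actual source rather than someone's memory of it.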


The shift in how employees worked was immediate. Instead of "let me ask someone," it became "let me check." Instead of waiting for a manager to become available, employees could get a grounded, accurate answer in seconds — one that referenced the actual documentation, not someone's memory of it.

The result was faster resolution, more confident decision-making, and a meaningful reduction in the kind of peer-to-peer interruptions that slow down experienced employees just as much as new ones.


The Design Principles That Apply to Both

What struck me reading Olla's framework is how closely it maps to the design principles behind effective performance support chatbots in corporate environments. Here's where the overlap is sharpest:


Start with the performance gap, not the technology. Olla begins by defining learning objectives before building anything. In corporate L&D, the same logic applies — before you build a chatbot, ask what specific failure is happening and why. At Sunrun, the failure wasn't knowledge. It was access. That diagnosis shaped everything about how we built the solution.


The prompt is the product. Olla's framework puts significant emphasis on writing a strong, detailed chatbot prompt — defining role, tone, constraints, and expected behavior. This is just as true for performance support bots. A chatbot trained on 500 knowledge articles is only as useful as the instructions it's given about how to interpret, prioritize, and present that information. We spent as much time refining prompts and response logic as we did on content ingestion.
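
To make that concrete, here is the shape such an instruction set might take. The wording is hypothetical, not the prompt we shipped; the point is that role, tone, constraints, and expected behavior are all stated explicitly rather than left to the model's defaults.

```python
# A hypothetical system prompt for a performance-support bot.
# Role, tone, constraints, and expected behavior are all explicit.

SYSTEM_PROMPT = """
You are a performance-support assistant for frontline customer service staff.

Role: answer process questions using the retrieved documentation only.
Tone: direct, plain language; no jargon the docs themselves don't use.
Constraints:
- Never guess. If the docs don't cover it, say "escalate to your lead."
- Always name the source article so the employee can verify it.
- Keep answers under 150 words; the employee is mid-task.
Expected behavior: when two documents conflict, prefer the most recently
updated SOP and say explicitly that a conflict exists.
"""
```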


Test it like a skeptic, not a builder. Olla ran his scenarios repeatedly, playing both student and chatbot to find the gaps. We did the same — putting new chatbots in front of frontline employees who had no context on how they were built, watching where they got confused or where answers were incomplete, and iterating before full rollout. The people who build the tool are always the worst testers of it.
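
One lightweight way to operationalize that skepticism is a question bank written in the phrasing frontline employees actually use, checked against the bot before every rollout. The sketch below assumes an answer(question) function wrapping your pipeline; the test cases themselves are invented examples.

```python
# Hedged sketch of a pre-rollout "skeptic suite": real-world phrasings
# paired with something the answer must contain to count as safe.

SKEPTIC_TESTS = [
    # (question as an employee would actually type it, required phrase)
    ("customer wants refund, it's $750, what do i do", "approval"),
    ("permit office is closed today, can i skip the inspection", "escalate"),
]

def run_skeptic_suite(answer) -> None:
    """Run every test question through the bot and report the gaps."""
    failures = []
    for question, must_contain in SKEPTIC_TESTS:
        reply = answer(question)
        if must_contain.lower() not in reply.lower():
            failures.append((question, reply))
    for question, reply in failures:
        print(f"GAP: {question!r} -> {reply[:80]!r}")
    passed = len(SKEPTIC_TESTS) - len(failures)
    print(f"{passed}/{len(SKEPTIC_TESTS)} passed")
```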


Reflection closes the loop. In Olla's model, students submit written reflections after the role-play. In our environment, the equivalent was supervisory observation and quality metrics — were employees making better decisions after using the tool? Were escalations dropping? Measuring behavior change, not just engagement, is what tells you whether the chatbot is actually working.


The Bigger Shift Happening Right Now

Both use cases point to the same evolution: AI is moving from content delivery to performance enablement.


For too long, training has been something that happens to people — in a classroom, on an LMS, during onboarding — and then ends. The assumption is that if we design the training well enough, people will retain and apply it. We know that's not how adults learn. We know performance is contextual, situational, and time-sensitive.


AI chatbots — whether designed to simulate a difficult board conversation for an MBA student or to answer a frontline employee's process question mid-task — close that gap. They meet people at the moment of need, not in advance of it.

That's not a replacement for strong instructional design. It's an extension of it.

Where to Start

Whether you're an educator, an L&D professional, or an operations leader thinking about how AI can support your people, here's the short version of what I've learned:

Train the bot on your real content — the documentation your people already trust, not generic material. Ground it in your actual workflows, language, and context.


Define the use case tightly. The clearest wins come from a specific moment of failure — the question that always gets asked, the step that always gets skipped, the escalation that happens too often. Start there.


Build the feedback loop from day one. If you can't measure whether the chatbot is changing behavior or improving outcomes, you can't improve it — and you can't make the case for it internally.
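
As a sketch of what "from day one" can mean in practice: log every exchange with enough context to join it to outcomes later. The field names and the resolved flag below are assumptions, stand-ins for whatever your quality process actually tracks.

```python
# Hypothetical day-one feedback loop: append each exchange as a JSON line
# so usage can later be joined to quality and escalation metrics.

import json
import time

def log_exchange(path, question, answer, sources, resolved=None):
    """Record one interaction; `resolved` is backfilled from QA review."""
    record = {
        "ts": time.time(),
        "question": question,
        "answer": answer,
        "sources": sources,    # which documents the bot cited
        "resolved": resolved,  # resolved without escalating? (filled later)
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
```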


And most importantly — don't wait for perfect. The best chatbot you never launch is worse than a good one you iterate on in the field.


The technology is ready. The frameworks exist. The only remaining question is whether your organization is willing to treat learning as an ongoing performance system rather than a scheduled event.


I'd argue the answer to that question is worth a lot more than another compliance training module.

Jason "JJ" Jacobs is a Learning & Development leader, performance consultant, and founder of LXD Consultants. He specializes in building learning ecosystems that drive measurable business results.

 
 
 
