
We loved the opportunity to connect live with our fellow L&D leaders and innovators for our four-week Unpromptable webinar series. (If you missed it, no worries! You can still view the recordings and our 2026 L&D and Talent Trends Report.)
With the season finale behind us, we sat down with the experts behind the series to dig into the questions you shared with us live. In the following Q&A, our in-house thought leaders address everything you’ve been wondering about, from the ethics of AI use and implementation to the nuts and bolts of interactive simulations.

1. As L&D shifts toward AI-driven ecosystems, adoption remains a hurdle. Which tools do you recommend for 'onboarding' skeptical employees so they can see the tangible value of AI in their specific roles without feeling overwhelmed?
The tools will vary based on your role, company, or IT requirements. Embedded tools like Gemini and Copilot are easily adopted because they’re already present within an organization’s ecosystem. If your organization can use tools that automate routine work, like meeting assistants or research aids, those are also easy for most people to adopt. When you get into creative and role-play tools, it may take people longer to learn to use them effectively.
However, before you put a new tool on someone’s plate, spend some time framing it well. Don’t ask employees to innovate; ask them to use AI to draft and ideate. Adoption comes easier when employees understand that they’re working with the tool. Position the employee as the pilot, and frame the AI as an intern or co-pilot.
Also, have a respected skeptic-turned-user or other AI evangelist speak about their experience and the benefits to help ease the skeptic into the flow of using AI.
2. How do you address detractors who immediately recoil at the idea of interacting with an AI avatar, especially in relation to something like empathy?
We point out that AI avatars aren’t a replacement for human connection, but a high-fidelity "empathy simulator" that can provide a judgment-free space for people to sharpen their interpersonal skills.
We would invite detractors to consider a traditional role-play, where peer self-consciousness or inconsistent feedback can make the activity feel awkward and ineffective. When we introduce an AI avatar, it fills the role of a sophisticated partner that allows participants to practice empathy, active listening, and problem-solving in a psychologically safe environment.
We would also encourage them to engage with a customer-service simulation in which an avatar representing a customer shares a problem and the learner has to respond appropriately and attempt to resolve the issue. Once the AI has reviewed the response, the avatar reacts as a human would—with delight, acceptance, or anger.
By offloading the "acting" to a consistent digital interface, we can empower people to fail, iterate, and build their emotional muscle memory, ensuring that when they finally stand before a colleague or customer, they are fully present, confident, and deeply human.

1. Can you explore the difference between AI that is session-specific and AI that remembers a learner over time and across sessions, i.e., the difference between an AI agent and an AI learning companion?
AI in session-specific training is sharp in the moment but starts fresh every time, with no memory of what the learner did yesterday or how they've progressed over the past several weeks. An AI learning companion carries a persistent learner model that evolves across sessions, tracking patterns like recurring struggles, improving skills, and engagement shifts to inform not just what to practice next but how to coach it.
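To make that distinction concrete, here’s a minimal sketch in TypeScript of what a learning companion might persist between sessions versus what a session-specific agent holds. All of the names and shapes here (LearnerModel, SkillTrend, the store interface) are illustrative assumptions, not any product’s actual API:

```typescript
// Illustrative only: a session-specific agent starts from empty state,
// while a learning companion loads a persistent learner model first.

interface SkillTrend {
  skill: string;                // e.g., "active listening"
  attempts: number;             // total practice attempts across sessions
  recentScores: number[];       // rolling window used to detect improvement
  recurringStruggles: string[]; // patterns the companion coaches against
}

interface LearnerModel {
  learnerId: string;
  skills: SkillTrend[];
  engagement: { lastSessionAt: Date; sessionsThisMonth: number };
}

const avg = (xs: number[]) =>
  xs.reduce((sum, x) => sum + x, 0) / Math.max(xs.length, 1);

// Session-specific agent: sharp in the moment, but no memory of yesterday.
function startAgentSession(): { transcript: string[] } {
  return { transcript: [] };
}

// Learning companion: loads the persistent model, then uses it to decide
// both WHAT to practice next and HOW to coach it.
async function startCompanionSession(
  learnerId: string,
  store: { load: (id: string) => Promise<LearnerModel> }
) {
  const model = await store.load(learnerId);
  // Prioritize the skill with the weakest recent scores.
  const focus = [...model.skills].sort(
    (a, b) => avg(a.recentScores) - avg(b.recentScores)
  )[0];
  return { transcript: [] as string[], focusSkill: focus?.skill, history: model };
}
```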
2. What suggestions do you have for balancing the need to capture AI's capabilities with the protection of proprietary enterprise information? Or is this concern subjective because it mostly depends on the enterprise relationship and contract with the AI provider?
The contract matters, but the underlying architecture matters more. The safest approach is to design the system to be tier-agnostic, so the same orchestration logic works whether the client runs a basic API-only setup (data passes through but never trains the model), a heavier RAG deployment (deeper personalization, but more risk of data leakage), or a fully on-premise environment for regulated industries.
That way, the content scales across different security levels without being redesigned each time. On the contractual side, the non-negotiables are straightforward: client content never trains the model; learner data follows the client’s governance policies; and nothing persists beyond the session unless the client explicitly opts in.
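As a rough illustration of what tier-agnostic can mean in code, here’s a TypeScript sketch: the orchestration logic is written once against an interface, and each deployment tier supplies its own backend. Every name here is hypothetical, and the vendor/local inference calls are stubs, not a real API:

```typescript
// Hypothetical sketch of a tier-agnostic design: simulation content only
// ever talks to the interface, never to a specific deployment tier.

interface ModelBackend {
  complete(prompt: string): Promise<string>;
}

// Stubs standing in for vendor and on-premise inference calls.
declare function callVendorApi(prompt: string): Promise<string>;
declare function callLocalModel(prompt: string): Promise<string>;

// Tier 1: basic API-only setup — data passes through, never trains the model.
class ApiOnlyBackend implements ModelBackend {
  async complete(prompt: string) {
    return callVendorApi(prompt); // stateless round trip
  }
}

// Tier 2: RAG deployment — retrieves client documents for deeper
// personalization, which is also where data-leakage risk concentrates.
class RagBackend implements ModelBackend {
  constructor(private retrieve: (q: string) => Promise<string[]>) {}
  async complete(prompt: string) {
    const context = await this.retrieve(prompt);
    return callVendorApi(`${context.join("\n")}\n\n${prompt}`);
  }
}

// Tier 3: fully on-premise model for regulated industries.
class OnPremBackend implements ModelBackend {
  async complete(prompt: string) {
    return callLocalModel(prompt); // nothing leaves the client's environment
  }
}

// The same coaching logic scales across security tiers without redesign.
async function runCoachingTurn(backend: ModelBackend, learnerInput: string) {
  return backend.complete(`Respond as the customer: ${learnerInput}`);
}
```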

1. A common pain point for L&D professionals is spending more time editing AI output than they save by using it. What are some specific strategies to ensure AI actually boosts productivity rather than creating more 'cleanup' work?
I think there are two key strategies:

1. How do you create simulations for participants with genAI, push them out to learners, and measure the interactions?
There are three approaches:
The first: if your LMS provides a genAI authoring tool, instruct it to develop a scenario that focuses on the targeted skills and knowledge. Keep in mind that it doesn’t have to be a turn-taking role-play; it can be something like investigating a situation (e.g., in sales, “Here is a customer, these are their needs, this is the challenge they face”) and then making a recommendation or decision (e.g., “What solution would you recommend, and why?”). You can then edit and publish the module.
The second: if your LMS doesn’t have an authoring tool, go to an LLM and ask it to devise scenarios, then construct them manually using an authoring tool such as Articulate Rise or Evolve. You can then publish the scenarios as a SCORM package. If you use an “assessment” component for the learner to make a decision, most authoring tools will send SCORM cmi.interactions data to the LMS to capture the learner’s answers, and some LMSs provide a means to report on cmi.interactions so you can aggregate learner responses (see the first sketch after this list).
The third: this one takes some software engineering. It involves constructing an eLearning module as a custom SCORM-wrapped web app that captures learner inputs, sends them in real time to an LLM via API, then receives the LLM’s feedback as JSON and formats and displays it (see the second sketch below). This approach is very useful for creating simulated role-plays (e.g., conversing with a customer) or for having learners put together an artifact such as a presentation or business plan. We’ve built these kinds of systems for client-partners across a range of industries and would be happy to demo a few.
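For the second approach, here’s a sketch of what an authoring tool’s assessment component typically does under the hood when it reports a decision as a SCORM 1.2 cmi.interaction. (SCORM 2004 uses slightly different element names, e.g., learner_response instead of student_response.) The recordDecision helper is our own illustration:

```typescript
// Minimal typing for the SCORM 1.2 runtime API that the LMS exposes.
interface Scorm12Api {
  LMSSetValue(element: string, value: string): string;
  LMSCommit(arg: string): string;
}

// Record one learner decision so the LMS can aggregate responses later.
function recordDecision(
  api: Scorm12Api,
  index: number,       // interactions are an ordered collection: 0, 1, 2...
  id: string,          // your identifier for the question or decision point
  response: string,    // what the learner chose or entered
  correct: boolean
) {
  api.LMSSetValue(`cmi.interactions.${index}.id`, id);
  api.LMSSetValue(`cmi.interactions.${index}.type`, "choice");
  api.LMSSetValue(`cmi.interactions.${index}.student_response`, response);
  api.LMSSetValue(`cmi.interactions.${index}.result`, correct ? "correct" : "wrong");
  api.LMSCommit("");
}
```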
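And for the third approach, a minimal sketch of the web-app side of the loop, assuming a hypothetical backend endpoint (/api/feedback) that proxies the LLM call and returns structured JSON; the endpoint, payload shape, and element IDs are all illustrative:

```typescript
// Illustrative shape of the structured feedback the LLM proxy returns.
interface LlmFeedback {
  avatarReply: string; // what the simulated customer says next
  coaching: string;    // feedback on the learner's response
  score: number;       // 0-100, usable for a SCORM score if desired
}

async function handleLearnerTurn(learnerInput: string): Promise<void> {
  // 1. Send the learner's input to the LLM in real time via our API.
  const res = await fetch("/api/feedback", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ input: learnerInput }),
  });

  // 2. Receive the LLM's feedback as JSON.
  const feedback: LlmFeedback = await res.json();

  // 3. Format and display it inside the module.
  document.querySelector("#avatar-reply")!.textContent = feedback.avatarReply;
  document.querySelector("#coaching")!.textContent = feedback.coaching;
}
```

In practice, you’d pair this with the cmi.interactions recording shown above so the learner’s inputs and results also land in the LMS for reporting.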
2. How do you handle employees who are dubious about the ethics of AI? Many of our team members have questions about the morality of using it, and I want to make sure I respect that.
We think it’s a very positive thing for employees to be dubious—or at least hesitant. AI has the potential to radically disrupt business life, and not always for the better. A number of organizations have already announced layoffs in anticipation of AI-driven productivity gains, and some believe many CEOs are using AI as a subterfuge to justify downsizing.
It’s logical for someone to feel that, if they are using AI to do their job, they may be training their replacement. Another understandable fear is of a kind of “Terminator” future, where AI is given agency and uses it to do harm. These kinds of feelings are natural, and not entirely unjustified, so an organization should address them head on.
The first step is for an organization to adopt a set of values and standards around AI use, making clear that the goal is not to reduce headcount but to increase employees’ productivity and quality of work life by relieving them of the burdensome, routine elements of their jobs so they can focus on the interesting parts. Hosting dialogue around these values and standards, and involving senior leadership, can work wonders for lessening doubts and gaining employee buy-in.
Second, the use of AI in an organization should take place transparently and in the open. Regularly publicize gains made with AI (as well as mistakes and lessons learned). Seeing others use it productively is a great way to relieve skepticism and gain buy-in. You might consider hosting workshops where practitioners can share the work they’re doing in AI.
Third, teaching employees what AI can do and how it works will go a long way toward dispelling fear, because we fear what we do not understand. Investing effort in training employees in AI can make believers out of them.
Throughout all four sessions—and, we suspect, all year long—there’s one overarching takeaway. In the still-unfolding age of AI, L&D leaders and teams are more needed than ever. Our organizations are looking to us to enable behavior change and mindset shifts, align skilling initiatives with talent and organizational needs, and “reimagine work itself as inherently developmental.” These are fundamentally human pursuits that require our empathy, connectivity, creativity—in short, our full unpromptability—as we continue to practice the craft we love in 2026 and beyond.
Got a burning question we haven’t covered? Curious about fresh approaches to adaptive learning, AI literacy, content curation, gamification, and other bucket-list items? We'd love to continue the conversation! Reach out to explore the possibilities.