Using AI in Workforce Training
What recent studies reveal about AI’s impact on skill development, judgment, and expertise in federal government health regulatory agencies
By Gayle Griffin, Ed.D., Program Director, APV
(Part 1 in a 3-part series on AI and Workforce Training in Health Regulatory Agencies)

Artificial intelligence (AI) is increasingly being integrated into federal workforce training, particularly in technical and regulatory roles where accuracy, professional judgment, and defensibility are essential. Emerging research on AI and adult learning points to a critical distinction: AI can measurably improve learning outcomes and accelerate skill development, while simultaneously weakening depth of understanding and independent reasoning. Although AI has the potential to be a game-changer for adult learning, a poor approach can undermine core elements of professional expertise. The difference lies not in the technology itself, but in how AI is designed into learning systems and workflows.

For federal health-regulatory roles, where technical accuracy, evidentiary reasoning, and defensible decision-making are non-negotiable, this distinction matters. So, under what conditions does AI strengthen adult learning, and under what conditions does it weaken it? Answering that question requires moving beyond anecdotes and examining the emerging evidence.
Where AI Improves Learning
AI as Tutor and Coach
Several studies published in 2025 indicate that AI-based tutoring and coaching systems can outperform traditional instructional approaches for adult learners, particularly in well-structured technical domains.
For example, a randomized controlled study published in Scientific Reports found that learners using an AI tutor achieved higher learning gains and faster mastery than peers in active-learning classroom settings (Katz et al., 2025). Importantly, these gains were strongest when the AI tool went beyond simply providing answers and guided learners through problem-solving steps.
Similarly, a 2025 experimental study from the IZA Institute of Labor Economics (Guryan et al., 2025) showed that AI tutoring improved performance while allowing learners to reallocate effort more efficiently, suggesting AI can function as a coaching multiplier, not merely a shortcut.
These findings are reinforced by 2025 meta-analyses of generative AI in education. Positive learning effects were most consistent when AI was used to generate practice problems, provide targeted feedback, and prompt learners to explain or revise their reasoning (Zawacki-Richter et al., 2025; Chen et al., 2025).
For federal health-regulatory roles—such as scientific reviewers, inspectors, and compliance analysts—these use cases mirror how expertise is traditionally built: through repeated, guided application of rules, frameworks, and procedures.
Improved Access for the Workforce
Another consistent finding is that AI lowers barriers to engagement for regulatory professionals who must balance workload, time pressure, and complex subject matter. Studies report reduced intimidation when approaching unfamiliar material, greater willingness to attempt challenging tasks, and increased time on task.
For a workforce learning while performing mission-critical duties, this matters. AI-enabled learning supports just-in-time refreshers, structured walkthroughs of complex decision frameworks, and reinforcement of rarely used but critical skills. In short, AI expands access to practice, easing one of the tightest constraints in workforce training.
Where AI Introduces Risk
The same research documenting AI’s benefits also highlights risks, particularly when AI substitutes for human cognition rather than scaffolding it.
Shallower Knowledge and Reduced Integration
A 2025 study in PNAS Nexus found that while AI users completed tasks more quickly than those using traditional research tools, they demonstrated weaker conceptual understanding and reduced ability to transfer knowledge to new problems. For health-regulatory professionals, speed without depth is not an acceptable tradeoff. Decisions must be justified, defensible, and explainable under scrutiny.
Suppressed Critical Thinking
Survey-based research from Microsoft Research (2025) found that when users worked under time pressure and placed high confidence in AI outputs, they engaged in less independent evaluation. In health-regulatory environments, where errors may surface far downstream, this dynamic increases risk.
Emerging Dependency Patterns
Qualitative studies also show that, over time, some learners become uncomfortable performing tasks without AI assistance. This is a predictable cognitive outcome when tools consistently replace effort rather than structure it. For federal agencies, this raises concerns about resilience, depth of internal expertise, and vulnerability during system or policy changes.
The Paradox Explained
The emerging consensus is clear: AI amplifies the cognitive role it is assigned. When AI generates practice, prompts reasoning, and provides feedback, learning improves. When it produces final answers and collapses reasoning steps, learning degrades. AI does not determine outcomes—architecture does.
What Comes Next
If learning outcomes depend on how AI is designed into training and work environments, the critical question becomes what structures agencies are deploying today. In the next post, I examine two dominant patterns: one that builds durable capability, and one that unintentionally replaces it.
Part 2: The Hidden Design Choice That Determines Whether AI Builds Skill or Replaces It.
For further information, please contact us at emergingtech@apvit.com.
