Using AI in Workforce Training
What recent studies reveal about AI’s impact on skill development, judgment, and expertise in federal government health regulatory agencies
By Gayle Griffin, Ed.D., Program Director, APV (Part 2 in a 3-part series on AI and Workforce Training in Health Regulatory Agencies)
Part 2: The Design Choice That Determines Whether AI Builds Skill or Replaces It
AI is already reshaping workforce training—but the most important design decision isn’t the technology itself. It’s whether AI builds expertise or quietly replaces it.
In Part 1, I presented evidence from 2025 studies that highlight a critical tension: AI can accelerate learning, but it can also erode depth of understanding and independent reasoning. The good news is that this tension can be resolved by deliberately designing AI into our workforce training.
Many organizations, including health regulatory agencies, believe they are improving training simply by “adding AI.” What is often missing is a deeper understanding of the instructional design decisions that determine success. The technology may be the same, but the design architecture determines whether capability grows or quietly atrophies.
This distinction matters because many roles in health regulatory agencies are not knowledge recall positions. They require professionals to synthesize information, apply standards, and make defensible judgments. Scientific reviewers must interpret evidence and justify conclusions. Inspectors must apply procedures consistently while navigating real-world variability. Compliance and policy professionals must reason from guidance to sound decisions. These capabilities depend on disciplined thinking—not just fast answers.
Two Design Architecture Options
Instructional design architecture choices determine whether AI strengthens workforce capability or weakens critical thinking. When AI functions as a substitute for thinking, learning becomes passive. When AI functions as a scaffold, such as a mentor or coach, it supports active learning and critical reasoning, thereby accelerating the growth of workforce capability.
Architecture 1: Substitution
This is the most common design pattern today, particularly when AI is introduced as a convenience tool. The workflow becomes simple: ask → answer → submit. It feels efficient because it collapses steps. However, the missing steps—retrieval, interpretation, evaluation, and synthesis—are often where learning occurs.
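To make the pattern concrete, here is a minimal sketch of the substitution workflow. The llm() helper is a hypothetical stand-in for any chat or completion API, not a specific product; the point is what the workflow omits, not which tool is used.

```python
# A minimal sketch of the substitution pattern (ask -> answer -> submit).
# llm() is a hypothetical stand-in for an LLM client call, not a real API.

def llm(prompt: str) -> str:
    """Hypothetical stand-in for a call to an LLM service."""
    return f"model-generated answer to: {prompt}"

def substitution_workflow(question: str) -> str:
    """Ask -> answer -> submit.

    The steps where learning usually occurs (retrieval, interpretation,
    evaluation, synthesis) never happen on the learner's side: the model
    performs them all, and the learner submits the result.
    """
    answer = llm(question)
    return answer  # submitted as-is, with no learner evaluation step

print(substitution_workflow("Which standard applies to this finding?"))
```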
Research illustrates the risk. Experimental work comparing learning from LLM-generated syntheses versus traditional web search found that participants using LLM outputs completed tasks faster but developed shallower understanding and weaker knowledge transfer, particularly when the tool replaced their own integration of ideas (Melumad, 2025). In health regulatory environments, this creates speed without the underlying mental model needed to recognize edge cases, question assumptions, or defend decisions under scrutiny.
Workplace research shows a related pattern. Survey-based studies of knowledge workers indicate that when AI output is perceived as high quality and time pressure is present, individuals exert less cognitive effort and engage in less critical evaluation (Lee, 2025). Professionals do not stop thinking entirely, but workflow design can shift them toward acceptance mode rather than evaluation mode. In high-stakes regulatory environments, that shift introduces real risk.
Architecture 2: Scaffolding
Scaffolding design uses AI to structure thinking rather than replace it. The workflow becomes: attempt → critique → revise. AI prompts reasoning, generates practice opportunities, and provides feedback—while the learner remains responsible for the core cognitive work.
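By contrast, a scaffolding workflow keeps the learner's attempt at the center. The sketch below assumes the same hypothetical llm() helper plus a get_learner_attempt() callback representing the human's work; both names are illustrative, not any particular platform's API.

```python
# A minimal sketch of the scaffolding pattern (attempt -> critique -> revise).
# llm() and get_learner_attempt() are hypothetical stand-ins, not a real API.

def llm(prompt: str) -> str:
    """Hypothetical stand-in for a call to an LLM service."""
    return f"guiding critique of: {prompt[:60]}..."

def scaffolding_workflow(task: str, get_learner_attempt, rounds: int = 2) -> str:
    """Attempt -> critique -> revise.

    The learner produces every draft; the AI only critiques and prompts
    revision, so the core cognitive work stays with the human.
    """
    draft = get_learner_attempt(task, feedback=None)  # learner attempts first
    for _ in range(rounds):
        critique = llm(
            f"Critique this draft response to '{task}'. Ask guiding "
            f"questions; do not supply the answer or rewrite it:\n{draft}"
        )
        draft = get_learner_attempt(task, feedback=critique)  # learner revises
    return draft

def demo_attempt(task: str, feedback: str | None) -> str:
    """Placeholder for real learner input (e.g., a text box in a course)."""
    return f"learner draft for '{task}'" + (" (revised)" if feedback else "")

print(scaffolding_workflow("Justify the inspection finding", demo_attempt))
```

The design difference is visible in the loop: AI output feeds the learner's next attempt instead of replacing it.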
Strong evidence supporting AI’s benefits comes from scaffolding-based designs. Controlled studies of AI tutoring show meaningful gains in learning and efficiency when AI guides learners through material instead of simply providing answers (Kestin, 2025). Experimental work also demonstrates improved learning outcomes when AI supports engagement and comprehension without bypassing effort (Fischer, 2025).
In this model, AI acts as a mentor—strengthening reasoning, accelerating feedback, and expanding opportunities for deliberate practice.
Why This Distinction Matters in Health Regulatory Training
The difference between substitution and scaffolding determines whether AI strengthens or weakens essential professional capabilities, including:
- Reasoning from evidence to defensible conclusions
- Applying procedural standards under variability
- Documenting rationale that withstands scrutiny
- Recognizing uncertainty, limitations, and risk
When AI is used as a shortcut, these skills can weaken—even as productivity appears to improve. When AI acts as a scaffold, these skills strengthen because professionals remain actively engaged in the reasoning process, supported by coaching and structured practice.
In regulatory environments, where decisions affect public safety, compliance, and trust, preserving and strengthening expertise is essential.
A Practical Way to Spot the Architecture You’re Building
A simple diagnostic question helps identify the design architecture: Where does the thinking happen?
If the system produces polished outputs without requiring learners to externalize their reasoning, it is likely substitution. If the system requires learners to attempt the task first and uses AI to critique and guide improvement, it is scaffolding.
In Part 3, I will move from architecture to implementation, outlining what federal leaders and their consulting partners should change in program design, assessment, and platform requirements to ensure AI-enabled training produces durable, independent expertise—not just faster output.
Please contact us at emergingtech@apvit.com for further information.
