December 2, 2025

Episode 165: Human Enablement in the Age of AI with Greg Wilton

In this conversation, Dane welcomes learning and development expert Greg Wilton to explore the intersection of L&D, generative AI, and knowledge management. Greg begins by reflecting on his early days as an e-learning developer, where he quickly realized that one-size-fits-all training and information-heavy courses were not enough. To create real impact, he shifted from teaching people to remember content to helping them identify situations, analyze what is happening, and act with confidence in the flow of work.

That mindset led Greg deeper into knowledge management, where he focused on capturing an organization’s collective wisdom and making it accessible at the right moment. As generative AI tools emerged, he saw that traditional learning would have to evolve again. Instead of simply producing more content, L&D and knowledge teams now have a responsibility to manage the “ground truth” that AI systems rely on. When that ground truth is weak or outdated, AI can produce convincing but wrong answers, amplifying risk rather than reducing it.

Greg introduces the idea of knowledge stewardship as the set of rules, structures, and behaviors that protect the quality of an organization’s knowledge base. Drawing from concepts like the data–information–knowledge–wisdom pyramid, he explains how AI initiatives that simply ingest every document in SharePoint are vulnerable to “garbage in, garbage out.” Regulations change, offerings evolve, and once-accurate documents can quietly become misleading. Without intentional stewardship, those artifacts still shape AI outputs long after they should.

The discussion moves into practical strategies for working with generative AI. Greg outlines how strong prompt engineering mirrors strong leadership: define the role, clarify the task and outcome, set constraints, and specify the desired format and tone. Leaders who already excel at setting expectations and boundaries for people are often well positioned to guide AI agents effectively. He encourages organizations to start with very specific use cases and measurable outputs, rather than trying to build broad, catch-all bots that are impossible to test or trust.
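The prompt structure Greg outlines can be sketched as a simple template. The function, field names, and example values below are illustrative assumptions for this summary, not any particular tool's API or Greg's exact wording:

```python
def build_prompt(role, task, constraints, output_format):
    """Assemble a structured prompt from the four elements Greg names:
    role, task and outcome, constraints, and desired format."""
    constraint_lines = "\n".join(f"- {c}" for c in constraints)
    return (
        f"Role: {role}\n"
        f"Task: {task}\n"
        f"Constraints:\n{constraint_lines}\n"
        f"Output format: {output_format}"
    )

# Hypothetical example tied to a specific, measurable use case,
# as the episode recommends, rather than a catch-all bot.
prompt = build_prompt(
    role="You are an L&D content reviewer for a regulated industry.",
    task="Flag any outdated policy references in the attached onboarding guide.",
    constraints=[
        "Cite the section number for every flag.",
        "If unsure, say so rather than guessing.",
    ],
    output_format="A numbered list, one flag per line, plain text.",
)
print(prompt)
```

Narrow, testable outputs like this make it possible to check the AI's work, which is the point of the "measurable outputs" advice above.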

Greg and Dane also explore the idea of “human in the loop” more deeply. They argue that supervising AI outputs requires more than a quick review. It calls for well-honed critical thinking. Humans need to question whether results make sense, look for contradictory evidence, and notice what is missing. That same mindset applies to AI-driven process redesign. Rather than simply automating existing workflows, leaders should ask what becomes possible when humans and AI are paired thoughtfully, and how responsibilities should shift around that new reality.

One of the most important parts of the conversation centers on talent pipelines and entry-level work. Greg points to research showing a sharp decline in entry-level roles as organizations automate repetitive tasks with AI. Financially, that might look efficient, but he warns of long-term consequences. Future frontline leaders typically begin in those entry-level roles. If companies hollow out early career opportunities, they risk losing homegrown leadership, increasing dependency on external hires, and narrowing opportunities for an entire generation.

Dane and Greg broaden the lens to include social and innovation impacts. They note that young professionals bring fresh skills in digital communication, content creation, and new mediums like TikTok, as well as lived experience with emerging consumer behaviors. If organizations do not create meaningful roles that tap into these strengths, they lose both future leadership and present-day innovation. Greg suggests reimagining knowledge management itself as a powerful entry-level path, where new talent can observe knowledge in context, contribute to stewardship, and build critical thinking skills.

The episode closes with a reminder that generative AI does not eliminate the need for human judgment. It raises the bar. By investing in sound knowledge practices, clear leadership behaviors, and thoughtful early career roles, organizations can use AI to enhance human enablement rather than replace it. The future of teamwork will belong to leaders who design systems where people and machines elevate each other’s strengths.


Resources

DIKW Pyramid

🔗 Connect with Greg Wilton on LinkedIn

Guest Bio

Greg Wilton is a global learning and knowledge leader who helps organizations unlock human performance through the free flow of knowledge. With nearly two decades of Fortune 500 experience, he believes that simple, scalable learning cultures, combined with connected knowledge ecosystems, create the conditions where teams can embrace a true growth mindset.

Greg has spent his career designing modern learning environments for large, distributed workforces. He has led enterprise-wide initiatives to implement next-generation learning platforms, introduce AI-enabled development tools, streamline curriculum ecosystems, and expand access to personalized learning at scale, ensuring that knowledge supports people rather than overwhelms them.

A frequent speaker on the future of learning, AI, and knowledge strategy, Greg brings a strategic and visionary lens to how organizations build capability and create cultures where people can thrive.

Key Takeaways

00:00 Introduction to the Future of Teamwork Podcast

01:58 Meet Greg Wilton: Global Learning and Knowledge Leader

02:43 Elevating Learning Objectives with Technology

03:37 The Role of Generative AI in Corporate Learning

04:33 Knowledge Management and Human Enablement

13:46 Challenges and Solutions in Knowledge Stewardship

33:43 The Future of Entry-Level Roles and Talent Pipeline

42:03 Conclusion and Key Takeaways

Have A Question?

Stay Tuned for Our Latest Episodes

Join our subscribers list to get the latest episodes, blogs, and special announcements delivered directly to your inbox.

Start Listening Today!

Our mission is to advance people through business and business through people.