Episode 16: Exploring Agentic AI: A Fun and Eye-Opening First Look

Duration: 01:13:40
In this conversation, Richard and Vernon delve into the evolving landscape of AI, particularly focusing on the concept of agentic AI. They discuss personal updates, including their health and fitness journeys, before transitioning into a detailed exploration of AI technologies. Richard shares his recent experiences with AI training and projects, emphasizing the differences between traditional generative AI and agentic AI.

The discussion highlights the importance of goals, tasks, and tool awareness in AI, drawing parallels to software testing and the dynamics of generalists versus specialists in the tech industry.

They also dig into defining clear goals and expected outcomes for AI tasks, the quality characteristics needed in AI outputs, and the critical role of human oversight in AI decision-making. The conversation touches on iterative learning, exploratory testing, and the future of AI in the testing domain, emphasizing the necessity for testers to adapt and enhance their skills in this rapidly changing environment, before closing with reflections on the implications of these technologies for the future.

Links to stuff we mentioned during the pod:

Chapters:
00:00 - Intro
01:14 - Welcome
04:02 - Rich's adventures learning about AI
05:24 - Rich goes down the Agentic AI rabbit hole
07:00 - GenAI vs Agentic AI
12:45 - Understanding Agentic AI vs. Traditional AI
13:27 - What's the difference between the term "Agent" and "Agentic"?
15:20 - How would Rich describe or categorise a chatbot?
16:15 - What makes something agentic then?
17:52 - Jason helps Rich understand what to expect from his exploration
18:51 - What's the relationship between goals and tasks?
20:06 - Rich explains what makes this so interesting for him and got him excited
26:12 - Empowering Agents with the Right Tools
27:47 - Understanding Tasks vs. Goals
28:45 - Breaking Down Tasks for Efficiency
29:44 - How much agency do agents have?
31:38 - Task Descriptions and Expected Outcomes
33:03 - Teams of agents vs teams of people and specialists vs generalists
35:48 - How does an agent decide what to do next and how does it know it has completed the task?
36:40 - Defining Quality in Agent Outputs
38:15 - MOAR testing concepts that have parallels with Rich's exploration
40:28 - The consequences of not being accurate enough with your backstory, expected output, tasks, etc.
43:34 - What happens when agentic AI is asked to achieve the same goal without changing anything about the backstory, expected output, tasks, etc.?
45:40 - Challenges of Iteration and Learning
46:47 - What are max iterations and what does that remind Rich of?
47:40 - Vern wonders how important semantics is going to be and how Testers can contribute to this work
49:42 - Rich riffs on exploratory testing
51:02 - Exploratory Testing and Agentic Learning. What does the Tester's story look like in the context of an agentic system from the agent's perspective?
54:15 - Exploring Autonomy in AI Systems
56:57 - Evaluating AI Outputs and Task Design
58:21 - What happens if/when the context is left blank in these agentic systems?
01:00:49 - Soooo where do the humans fit in if agentic systems can doAllTheThings?
01:02:37 - Wrap up: Take 1 - Designing small targeted tests vs designing small targeted tasks
01:04:46 - Wrap up: Take 2 - Agents delegating tasks to other agents. Er... WTF?!
01:06:00 - Wrap up: Take 3 - How is Rich feeling about AI & AI tools?
01:09:45 - Wrap up: Take 4 - Testers ASSEMBLE! How we're going to contribute in a world of AI
