6 AI Tool Ideas That Will Transform How You Test

Episode 34 · 51:12
In this episode, Richard and Vernon explore the evolving concept of automation in quality, especially in the context of AI and Gen AI. They discuss how new technologies are blurring the lines between testing and quality, and what this means for the future of software development and testing practices.

00:00 - Intro
00:52 - Welcome and weekly catch-up
01:11 - Vern's deep dive into the AI rabbit hole
02:39 - Rich’s quiet(er) work week, new threads, and dentists
04:15 - Richard buys a domain and we start the pod proper
06:09 - Tool idea #1: Using an LLM to evaluate user stories and acceptance criteria automatically
07:35 - Is analysing a story "testing" or "quality"? The ISTQB static analysis debate
10:27 - Vernon's diabetes analogy: AI is forcing us to finally do what we always said we should
12:19 - Better stories = better testing: how quality work amplifies everything downstream
13:11 - Tool idea #2: "If we made this change, what areas of the system would be impacted?"
14:23 - Distilling years of system knowledge into 5–10 questions an agent could ask
18:37 - Tool idea #3: The PR Analyser — summarising code changes through a testing and quality lens
21:45 - Vernon's "1 unit of effort, 5 units of testing" — the quality multiplier effect
23:29 - Comparing story analysis to actual implementation: where did understanding diverge?
24:43 - Tool idea #4: Dynamic test selection — cherry-picking the right tests to run first
27:05 - Tool idea #5: An agent that analyses failed builds and attempts to fix them
27:28 - Why Richard's first attempt always "fixed" the test instead of the code (and what was missing)
29:21 - Dan's AI agents: one thinking partner, one employee monitoring production
32:42 - The documentation goldmine: why AI-generated RCA notes might matter more than the fix
33:39 - Tool idea #6: A holistic quality dashboard pulling insights across stories, code, tests, and process
36:43 - John Cutler on context: it's not data you pass around — it's formed through interaction
40:43 - More options than ever: whether it's testing, quality, or static analysis — you can do it differently now
41:56 - The real skill: spotting the opportunity to make yourself more effective
42:30 - GeePaw Hill's Lump of Coding Fallacy and why task analysis matters
43:34 - Why Richard got into automation: efficiency, not because he was told to
45:03 - Vernon's big question: in a world where agents can do everything, what's your performance review about?
46:52 - Context, craft, and product knowledge can't be delegated to tools yet
48:29 - Call to action: What are you building? What tools couldn't you build before that you can now?
49:29 - Upcoming: Test Automation Days and PeerCon Live in Nottingham

Links to stuff we mentioned during the pod:

Listen to The Vernon Richard Show on Apple Podcasts, Spotify, Overcast, Pocket Casts, or YouTube.