
Six Principles of Automation in Testing: Still Relevant in 2026?

Episode 33 · 01:03:17

In this episode, Richard Bradshaw and Vernon Richards revisit the six principles of Automation in Testing (AiT) and ask how they hold up in 2026 amid rapid AI advancements. They discuss how each principle applies today, the challenges teams still face with automation, and where testing strategies go next.


00:00 - Intro
01:47 - Welcome (Richard is not at home 👀)
02:07 - Ramadan, cooking without tasting, and plastic teeth 🦷
04:01 - Today's topic: revisiting the AiT principles ahead of a keynote
04:58 - What is Automation in Testing (AiT)?
06:49 - Principle 1: Supporting Testing over Replicating Testing
07:01 - Vernon's take: testing is a performance, not a click sequence
08:22 - What the industry promised vs what automation actually does
08:49 - The serendipity you lose when a human isn't testing
09:59 - Agentic testing: observing more, but still not replicating humans
10:56 - The danger of anthropomorphising AI output
12:10 - LLMs always give an answer — and that's the problem
13:03 - Principle 2: Testability over Automatability
13:14 - Vernon's take: narrow vs broad — operate, control, observe
14:38 - Making apps automatable for the robots but not the humans
15:37 - The shiniest framework in a broken testing context
16:40 - If it's testable, it's probably automatable — but not vice versa
16:55 - Automation strategy vs testing strategy: when they compete, everyone loses
17:46 - The problem has always been testing, not automation
19:57 - Principle 3: Testing Expertise over Coding Expertise
20:18 - Vernon's take: testing expertise lets you leverage the tools
21:47 - The spoonfed tests problem: great at automating, lost without guidance
22:36 - The "code school" era: everyone told to learn to code
22:51 - Coding agents have changed the maths on this
26:01 - The new nuance: test design and framework knowledge over writing the code
28:44 - Evaluating code is a testing problem — and LLMs can help you do it
30:43 - Are agents as good as a junior developer?
31:42 - Outcome Engineering (O16G) and the race to write the AI principles
32:13 - Simon Wardley: we're in the wild west again
33:22 - Principle 4: Problems over Tools
33:29 - Vernon's take: the hammer and the nail
34:07 - Don't let your problems be shaped by the framework you have
34:36 - New automation opportunities beyond testing: PRs, logs, story review
35:30 - Principle 5: Risk over Coverage
36:12 - Vernon's take: 100% coverage ≠ 100% risk coverage
38:00 - The one test case, one automated test fallacy
39:04 - Where in the system is the risk? Do you even know your layers?
39:49 - Probabilistic vs non-deterministic: refining the language around AI
40:53 - Coverage as intentional vs coverage as a number someone picked once
43:15 - Principle 6: Observability over Understanding
43:24 - Vernon's take: just-in-time understanding vs reading everything upfront
44:12 - What the principle was actually about: making automation results observable
47:00 - Does this principle belong in testing, or has it grown into quality?
49:00 - So... what's missing?
50:00 - The four pillars: Strategy, Creation, Usage, and Education
57:05 - Automation in Quality: the bigger opportunity
01:01:00 - Wrap up + Vern's Lead Dev panel

Links to stuff we mentioned during the pod:
