How No-Code Test Tools, Technical Leadership & Glue Work Impact Software Quality

Episode 6 · 57:17

Richard (00:00)
Hello everybody and welcome back to The Vernon Richard Show. I am one of your hosts, Richard, a supporter of an FA Cup winning team. And I am joined by my esteemed co-host who is called...

Vernon (00:15)
Vernon Richards, the supporter of a team that is at least 20 points superior to the FA Cup winning team that Richard just mentioned.

Richard (00:23)
And you won absolutely nothing this year. Did you win anything this year? The League Cup? The trophy that doesn't count? Is that the one, you know? Yeah.

Vernon (00:25)
Pardon me? Excuse me? We won nothing? Correct.

Absolutely counts

Richard (00:36)
There you go, that's the football banter over. So Vernon, how you doing mate? How are you?

Vernon (00:43)
I am all right, man. I am glad to be doing this podcast, let's put it that way. It's definitely given me an energy boost that is required for me right now. So, yeah, good to see you, man.

Richard (00:54)
You too. So firstly, before we dig into today's topics of conversation, thank you to everyone who's been sharing, subscribing, all that fun stuff. We've made a few tweaks to a few episodes. We've added chapters in there. So if you like some of those things, do let us know. And yeah, again, any feedback, we'd love to hear it.

Vernon (01:01)
Yeah.

Or feedback. Or, as per my favorite Diary of a CEO episode: why don't you give us some advice about things we could do better in the next episode? Can we have some advice, please? Advice or feedback, whichever one you wish.

Richard (01:25)
Okay, yeah.

Advice?

You know the first one we're going to get is the one with no action yet, which is buying Vernon a new camera. Which we've still not done.

Vernon (01:40)
Listen, this is operation get Vernon a new camera. So please like, share and subscribe, because then we can do more good stuff with YouTube and we can get some more sponsors. And by more, I mean any sponsors of any kind, because we have none.

Richard (01:53)
We've not tried yet. Right, so first topic of conversation is a question that I got asked on LinkedIn, which I've answered, but I think it might trigger some interesting conversations. So I'm gonna just quickly switch to the message that I got. So the question I got asked was, with the huge boom in no code automation,

I'm wondering if there is still a need for manual test cases and manual testing in the future. I believe there is. What do you think? That's it, that's what I got, that's what got sent to me. So yeah. Is there a need for manual test cases and manual testing in the future?

Vernon (02:31)
How did you?

How did you answer that? Do you want me to answer it, or do you want to share how you answered it? Because you answered it, right?

Richard (02:41)
I did answer it. Yeah. Well, not to cut the podcast short and end it now, but the answer is obviously: of course we do. End of podcast, job done. Right. So I think those two words themselves have stigma attached to them in the industry, right? So, test cases

Vernon (02:51)
Thanks for tuning in.

Richard (03:07)
make a lot of people wince. To be honest, literally at work today I had a task to write some test cases, and I joked to one of my colleagues that the thought of it makes me want to vomit, right? Which, you know, was said in the moment to generate a bit of levity and a bit of relaxation as well, right? But they feel so alien to me sometimes in the way that I want to test. But,

Vernon (03:20)
Yeah!

Richard (03:33)
What I shared in this answer was: manual test cases, and a lot of the ways they're used, are an artifact of the thinking and analysis that gets done. So commonly, a way of capturing that analysis and thinking has been to document it in a very detailed test case. You and I, on the other hand, might decide that's actually not a good idea. I might use some charters. I might create some cheat sheets. I might capture that information in, I don't know,

Gherkin-style acceptance criteria, right? There's loads of ways to capture said information, but test cases became a very common one. But really it is about documenting that discovery, those questions, and all the answers to the questions that we asked.

So yeah, I said to them that that thinking and analysis still has to happen when you automate. So if you're using a low-code, no-code, any automated tool, that thinking and analysis still has to take place; you're just not capturing it in a test case anymore, you're capturing it straight into whatever this tool offers you. So therefore, maybe the test case concept,

as we traditionally view it, might go away, but the effort and the work that goes into making them is still very much relevant and still needs to happen, by somebody. Not saying who, but by somebody. So yeah, I don't know if you want me to carry on with the rest of the answer. That was the first paragraph of the answer. Okay,

Vernon (04:59)
yeah, carry on man.

Richard (05:00)
Yeah. So the second part of the answer was about manual testing, right. Which we both know is a hugely problematic term and means many different things to many different people, right. But I explained it in this answer as the act of uncovering information about a system manually, usually by following

test cases, as in a very traditional view of it; or it still sometimes means a phase of testing that's taking place. So they're both very different things, but I've heard that term used for a phase, or for the actual act of doing some, quote, manual testing. But whether it's a phase or not, the underlying thing is we're getting information about a system.

We're getting information from a system that we're then computing and understanding and analyzing and putting into context, to understand whether this system is behaving as we expect it to or not, or opening up new avenues for more exploration and more questions. So again, whether we call it manual testing or not, if we're going to automate something, we still need that process of understanding whether what this system does

is desirable and accepted behavior before we can automate it. If you just blindly start automating, you're going to automate a bunch of noise that no one might care about. So therefore, yeah, I said these things still happen when we automate, and we need to make sure we're automating behaviors and things that the business cares about. And therefore, in order to do that, we need to apply critical thinking and analysis,

Vernon (06:22)
Yeah.

Richard (06:38)
and get information, which in my opinion are the underlying things that happen when we create test cases and when we execute those test cases manually. Those things still need to happen, but I think the mechanisms we use to get them don't have to carry those labels. So my answer, in conclusion, was yes, I think those things are still needed.

But I think it's more the underlying reasons we do those things and what we get from them, as opposed to that specific way of doing it.

Vernon (07:15)
I think I get what you're getting at there. I like that answer, because it felt like you were trying to sidestep the noun, the label "manual testing", and talk about the value that manual testing provides, or the purpose that you use manual testing for. And it's like, well,

Do we still need to do those things? Yes. So if those things are what you call manual testing, then yes, we still need them. The obvious question, though, and this is kind of predictable, so sorry, is: how does AI change your answer, Mr. Bradshaw? Does it change it? Sorry, sorry. I can hear.

I can hear people yelling at me. So let me just change that: generative AI, so large language models, is what I'm actually talking about. Trying to channel my inner Dr. Tariq King, and I can hear my other friend, Mr. Eden, yelling at me. So, does

Richard (08:23)
Yeah, I can hear Mark shouting at me in my ear.

Vernon (08:27)
that change the answer though?

Richard (08:28)
I don't think so, no. My current view of those, are you talking about using them to help me or are you talking about testing an AI?

Vernon (08:38)
I'm talking about both.

Richard (08:40)
I think the gen AI usage to help me is still very apt. So that gathering of said information and that analysis and thinking still needs to take place. So if an AI tool can help me, happy days. That's how I still view that. Maybe I can probably do more analysis quickly because I could send it all off to said AI and get a load of stuff back.

I could ask it to generate things in formats that help me. I could ask it to summarize things. I'm basically still doing the same task, but I'm getting the tools to make me more efficient. And that's how I've always viewed these tools, any tool to be honest with you, not even just the AI tools: anything that gives me more super tester abilities, I want in my tool belt. Testing AI though, again, I've done very little of it, right. But

you know, you're testing something that's non-deterministic. So that's always going to be the challenge, right? Our systems usually are deterministic. Given the same inputs, they do the same thing over and over again. Whereas these systems don't. So, you know, the idea of automating it, I know there's lots of things you can do. I've read lots of chapters of Mark's book. Don't tell him, because he thinks I don't read, but I've read some chapters

Vernon (09:46)
Mm.

Richard (09:59)
of the book, right? And I know there are lots of things you can do these days to test these LLMs from that perspective. But again here, I don't have the right words. I've been thinking about this all week, and I've been trying to think of my own model for the underlying core activities that always take place, no matter what we call the thing above it.

And I think if I had time and I sat down, I would come up with kind of five or six core activities that are very related to testing. And it doesn't matter what the label above it is. It doesn't matter the tools that you're using. You're still doing the same. You're still doing the same core actions for this, usually for the same goal, which is to get information that you can present to the team to help them make informed decisions.

Vernon (10:53)
I wonder if this might be related to knowledge work. I wonder if the topic of knowledge work would help give you your answer to those, you know, it almost sounds like you're looking for some, what are some foundational attributes or things that are always present when we're doing testing, regardless of the tool.

Richard (11:12)
Yeah.

or the approach or the technique. Like, what I'm trying to get to, I think, is that some of these techniques would yield more of one characteristic than another, or lean on one of those foundational stones more than another. And I don't know what they are. I'm just going to make some up now, but anyone listening who's done this work, share it. But I'm thinking of things like curiosity, analysis, critical thinking,

Vernon (11:17)
for the technique.

Richard (11:44)
communication, right? It's these kinds of core things, but I don't want them to be too generalized. You know, communication is a very broad topic. But I want to start thinking, like, take critical thinking, right? If you had critical thinking on there, and you're doing exploratory testing, you're probably using a lot more of that skill than you would be if you were executing manual test cases.

Vernon (11:55)
Yeah.

Richard (12:13)
Right. You're still testing the system to get information out of it. But I would say your level of critical thinking is probably a lot less in one than it is in the other. Not all the time, I know there are exceptions where testers... it depends, blah, blah, blah, right. But generally, with exploratory testing we're creating, executing and analyzing in real time, right. Whereas when following a test case, we usually don't have to do so much of the analysis, because it's been done

prior. So when generating the test case, we do a lot more critical thinking, in the traditional way of doing it. But once it's been written and codified and my job is just to execute it, I would say there's less of that skill taking place. So that's what's in my head. It's like, what's that foundation layer? And then these activities kind of dip into those things, increasing the percentage depending on the type of testing you're doing.

Vernon (13:07)
How much less do you think it is though? Because those results still need to be interpreted. And there are stories and experiences that we've all had and some of them are more public than others where the results said one thing, they were green, everything was wonderful, but doom and disaster were just around the corner, even though things were green. So that kind of, what that says to me is there's still a level of...

cognitive work required sometimes, even when we have outputs that have passed, that are green.

Richard (13:46)
Yeah, yeah. No, I don't disagree on that. Yeah.

Vernon (13:47)
At the same time, you don't want to keep double-checking the bloody... do you know what I mean? Like, all the unit tests have passed, so everything's fine; you don't want to keep doubting those tools. It's kind of

So some years ago I did a workshop, which you were at, at the European Testing Conference, 2018 I wanna say, and it was about scripting versus exploring. At the end of the workshop, I had all these different characteristics of scripting, which I, you know,

Richard (14:10)
yeah.

Vernon (14:19)
I would put manual test cases and automation over there. And then you've got the exploratory side of things, which is using charters and session-based test management and, you know, playing around with the computer as people say, or the app. And I put that over on the exploring side. And it was...

Richard (14:39)
Yeah.

Vernon (14:43)
The point I was trying to get at with that is that these things have different characteristics, and the situation you're in will determine which approach is more appropriate, let's say. The reason I came up with the workshop is because I had this bias where I realised I was thinking and behaving as if test cases were bad.

Richard (15:07)
Yeah.

Vernon (15:09)
If you're using test cases, that's wrong, that's just stupid. Instead, I should have been thinking: actually, when is it appropriate to use these things? So the workshop was almost a challenge to myself. So that's pretty cool. I'll share the slides for that in the show notes so people can let me know what they think.

Richard (15:29)
I think that's kind of what I'm hinting at. Like I said, I don't know what to call this bottom layer of skills, or hats, or whatever it is, right? But you know, there's not an infinite number of them. Let's use the hats for now, right? So I think if you took a thousand QAs, QEs, testers, right,

and got them to fill in some very smartly designed survey, right, and mapped it out, you would see certain hats that are common throughout all those roles. And then there'll be a couple of specialized hats, right. That's kind of what I want to do with this bottom layer: to show people that you kind of dip in and out of them, depending on the testing activity, like you say. But I think in this example of the question,

Vernon (16:12)
Mm -hmm.

Richard (16:26)
And I've done this in workshops myself: detaching the activities of automating and breaking them down, right. So it's not just about writing the code, right? There's the bit before it where you need to think about the state that you need. You need to think about the assertion that you're going to make. Does it actually mitigate the risk that you think you're mitigating? Right. And then you've got what environment you need, and you've got the design of the code, before you even sit down and start implementing it.

So that thing on the left is very much, again, critical thinking, analysis, systems thinking, understanding oracles, heuristics, right? There's lots of what some people would class as, you know, manual or exploratory testing skills. But obviously that same person is then also going to use their tool knowledge and their coding knowledge to implement said test in whatever tool it is. And that's what I was trying to answer with that question we got asked. It's like,

having that low-code, no-code tool in front of you is great, but you need to tell it what to do, and what you tell it to do needs to result in stuff that matters and that's valuable to the other person. And I think that side of it is often undervalued. And I also believe, just to go on a separate topic, I've seen the trend in the community that we went very heavy on the technical implementation side of these hats, right?

Vernon (17:51)
Mm -hmm.

Richard (17:51)
And we ended up with a lot of SDETs, automators, whose other hats were severely lacking. Like they didn't have hats, they had visors for those other skills, right? They were missing 90% of the hat, right? And that resulted in a lot of automating for the sake of automating, right? Because they didn't have the skills on the other side, the other hats, to decide what should be automated and what shouldn't,

Vernon (18:03)
Why is this?

Mm-hmm.

Richard (18:19)
what layer it should be done at, does the business care about this. It became a case of: I can do it, so I'm going to do it. And computers are cheap, and it doesn't matter if I've got 10,000 tests, I can still run them in 10 minutes. And now we've started to see a lot more talks at conferences about, you know, how do I deal with 10,000 tests? How do I delete tests? How do I decide which test is valuable? And I think it's because we got ourselves into that trap of automating because we can, and not using these other hats.

Vernon (18:31)
Yeah.

Yeah. I think at best it's an uninteresting question: should we or should we not have automated tests, or should we or should we not have manual tests? And in the worst cases, it's almost the wrong question. That's how I position it with people I talk to. Well, I don't know if it's the right question or not,

because we should be talking about what problem we're trying to solve and trying to understand that. And then as we explore the answer to that question, we'll start to answer the question of should we automate it or not, or should we, you know, manually test this or not. Because it will become clear what problem we're trying to solve and what the appropriate solution would look like. You know, I might've said this before, but if I said to you, I use a helicopter to get to work,

Richard (19:17)
Yeah.

Vernon (19:43)
you instantly have an opinion about that. I didn't have to say anything else. Everybody has an opinion about that. Some of them probably said, hang on, I know Vernon works from home. What the hell is he talking about?

But other people might be thinking, well, where does he work? Why has he taken a helicopter to work? And what does he do for a living? And how rich is he? And this, that, and the other. So maybe I work in Paris and it's a quick helicopter flight. There's a little airfield near my home, so I can just take a quick... I don't know how long it would take. Couple of hours? I don't know. Fly over the Channel in a helicopter to work, land on top of a skyscraper, and everything's awesome. Or maybe I work downstairs in my house. So why am I going to the airfield,

the helicopter pad, to get a helicopter to go up and down and land back in my back garden? God, that's just crazy. It wouldn't make any sense. You know, so understanding the context of what problem I'm trying to solve would probably help avoid getting into that situation. But I tell you what else it makes me think about, man. This is so random. So there's this really cool short story that I read years ago. I was sick

Richard (20:26)
Yeah.

Vernon (20:52)
and I was in the hospital, and then I was home, and I had to be home for a long time. So a friend of mine got me a collection of short stories by Isaac Asimov. Epic stories. If you haven't read any Isaac Asimov, fix that now. If you like science fiction and you haven't read Isaac Asimov, get on it ASAP. So there's this one story in the book... I'm reluctant to say this. Okay, first spoiler alert warning of

the Vernon Richard Show podcast, right? So I'm gonna talk about this now and we'll flag it in the chapters, to make sure you can avoid any spoilers. All I will say at this point is, if you don't want any spoilers but you wanna know what I'm talking about, find the short story by Isaac Asimov called Profession. And this speaks to what Rich has been talking about and what I'm trying to talk about now. So,

in three, two, one, spoilers incoming. Are you still here? If you're still here, it's spoiler time. Okay, so the story is called Profession and it's about a dude. It's set, you know, thousands of years in the future. And in this society,

your profession is determined algorithmically, by the VAC system or whatever. There's some computerized system, and at 18 it says, hey, Rich, you are gonna be an aesthete. That's what you're gonna be. We've decided, we've analyzed your brain, you are gonna be an aesthete, awesome. And it's like, Vern, we've analyzed your brain and you're going to be an astrophysicist. And we've analyzed your brain, Jane, and you're gonna be

like an engineer who builds combine harvesters for this planet that we're gonna go and terraform, and all the rest of it, and so on and so on. And it's a big day, right? So the main character basically gets assigned as a computer programmer, funnily enough, I think. And he's like, what? Computer programmer? I didn't wanna be a computer programmer. I wanted to be, I don't know.

Richard (22:59)
I should not.

Vernon (23:00)
I wanted to be a tester. This is crazy. I don't want to be. So the story is him trying to escape his fate. Because the way that you get the profession, think of The Matrix: you just get the information downloaded into your brain immediately. So he's trying to avoid that. And he ends up at this school for people who are too dumb to take the downloading. And then he escapes from there. He's like, no, I don't want to do that.

Anyway, long story short, which is already too long: it turns out the school that is apparently for the dunces is where the new knowledge comes from in the first place. So think about it: like Jane, she's the engineer person who's an expert in these particular machines; well, who built the goddamn machines in the first place? These people. All these people who rebel against getting constrained,

they sit and they learn stuff like we do now. They sit there figuring stuff out: well, how does this work, and this, that and the other. And when you were talking, that's what this reminded me of. It was this whole aspect of: we're just gonna automate this thing, and whatever it tells us, awesome, win. Like we're not gonna think. And then there's another side to it where it's like, well, is that really correct? What are we really doing? Why are we even doing this? Let's play around. How do I even know what to automate in the first place? How do I learn about this application? How do I learn the tool?

And I love that story. So if you haven't read it for a while, which I haven't, go read it again, because that's exactly what I'm going to do. And now that's the end of the spoiler about the Isaac Asimov story. I don't even know if that's going to survive the goddamn edit, because it was so long. But we shall see. We'll find out what Rich decides later.

Richard (24:40)
So, well, yeah, I'd love to see how other people would answer that question. Like, you know, if you've got opinions, you want to write, I don't know, blog posts or comments on this, whatever, honestly, please share them, because I'm not claiming to be right or wrong or anything. I just shared my opinion. Go on then.

Vernon (24:57)
And I have an interest in considering this. I especially want to consider the AI thing, if you've got any opinions on that. I think that just changes the nature of a lot of things. You know, we mentioned determinism earlier, whether a system is deterministic or not. If you're testing what we're calling AI, a large language model, generative AI type application, how would you do that?

The whole nature of right or wrong is different there. So what would the tooling look like there? What would the approach look like there? And would you class that as manual testing or not? I'd love to know the answers to that from people who are far more expert than me.

Richard (25:39)
Just to answer that myself, cause I can as well, and it's our show. I remember me and Mark chatting a lot when he started, Mark Winteringham, when he started writing his AI book, and it reminded me of the Automation in Testing work that we did for a number of years. And it was always about this.

Like you say, you know, you go "automated testing", but for us we always made the distinction, you know, not that there isn't any automated testing, I'm not going to go into that kind of area, but that it's just offloading that effort to a set of... I call it my robot army. That's how I talk about automated tests. It's basically a little mini Richard army that goes off and runs all these tests for me.

Vernon (26:22)
Sick.

Richard (26:29)
Or not even, probably army is not the right word. Like my little detective investigators, right? They go off and they've got a mission to come back with a certain piece of information and tell me if it's changed or not. And I think that nature of how I view tools is still the same in this AI space. It's just using it to get me information. So even if I'm testing it, I might build a tool that sends a thousand prompts, right, and outputs

a report with a thousand summaries, which I might send to another tool and ask, you know, how close are these answers? I don't know, based on sentiment, right. And then that AI might tell me, you know, 98 out of a hundred look the same, but these two look really odd. And then I'd be like, right, let's go look at those two, and go, well, actually, in the wording of those two we included the word, I don't know, Smith, right, and that's thrown it off.
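[A minimal sketch of the idea Richard describes here, assuming a hypothetical get_completion wrapper around whatever LLM is under test: fire the same prompt many times, score how similar each response is to the rest, and flag the outliers for a human to look at.]

```python
from difflib import SequenceMatcher
from statistics import mean
from typing import Callable


def flag_odd_responses(
    prompt: str,
    get_completion: Callable[[str], str],  # hypothetical wrapper around the LLM under test
    runs: int = 1000,
    threshold: float = 0.6,
) -> list[tuple[int, str, float]]:
    """Send the same prompt many times and flag responses that look unlike the rest."""
    responses = [get_completion(prompt) for _ in range(runs)]

    flagged = []
    for i, response in enumerate(responses):
        # Average textual similarity of this response to every other response.
        others = responses[:i] + responses[i + 1:]
        score = mean(SequenceMatcher(None, response, other).ratio() for other in others)
        if score < threshold:
            # Low similarity to the rest of the pack: queue it for a human to investigate.
            flagged.append((i, response, score))
    return flagged
```

[The similarity measure here is just a plain text-diff ratio; in practice you might swap it for sentiment, embeddings, or another model acting as a judge. The point is the workflow: the tool does the bombardment, and the human only looks at the handful of responses it flags.]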

Vernon (26:58)
Yeah.

Mm.

Yeah.

Yeah.

Richard (27:24)
Right. That's, that's messed it up. And it reminds me of the Doug Hoffman when he, he spoke about what he called high vac automation, high, high volume automated testing or something high that I think it was anyway. And then I think he, he had an example or someone gave an example a long time ago of the very similar concept that I put into these AI models, which is the way they tested Google maps.

Vernon (27:43)
Yeah, yeah.

Richard (27:54)
So they basically built a big algorithm that went: go from point A to point B, let's say Manchester to Leicester, right? And it tells you that it's 12.5 miles away or whatever. That's not true, we know it's not that close, it's more like 80 miles away, right? And then you do Leicester to Manchester, and it goes, well, that's 85 miles away. And it's like, is that an acceptable deviation or not,

going between the same two places? There could be one-way systems, there could be all sorts, right. But they basically did the reverse and spotted when it came up with silly misdirections. And they randomized this, so you could have got, you know, Seoul in South Korea to Land's End, right. It just ran them back and forth and then output the ones that were significantly different, and those significantly different ones went to a human's queue,

Vernon (28:27)
Yeah.

Richard (28:50)
who basically looked at, you know, the maps, the instructions, the numbers, to decide if there was a problem or not. And I think that's how I view it: when you think of these non-deterministic systems, my brain immediately goes to, how can I bombard it with so much information that I can get it back and start to see the trends and patterns by analyzing said information, to hopefully spot

Vernon (28:53)
Yeah.

Richard (29:19)
some problems. So yeah, that's what I think of with these systems. To wrap up my own thought, and I've probably gone the long way round with it, it's this idea that it doesn't matter whether I'm using tools or not. My main goal is to get information out of the system, either things that are new or confirmation that things I already know haven't changed, and it doesn't matter how it comes my way. I just need to get it as soon as possible. And then...

Vernon (29:35)
that's it.

That's it.

Richard (29:47)
One addition is, like, a good metric to measure, and one we adapted off the back of Accelerate, was MTTF: mean time to feedback. How can I reduce my mean time to feedback? So...
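[The Google Maps example above is a nice, concrete high-volume oracle, so here's a rough sketch of it, assuming a hypothetical get_route_distance call standing in for whatever routing service is under test: run randomly chosen routes in both directions, compare the distances, and queue anything outside an acceptable deviation for a human to review.]

```python
import random
from typing import Callable

CITIES = ["Manchester", "Leicester", "Leeds", "Paris", "Seoul", "Land's End"]


def reverse_route_check(
    get_route_distance: Callable[[str, str], float],  # hypothetical call to the routing service under test
    pairs: int = 1000,
    max_deviation: float = 0.10,  # treat anything within 10% as an acceptable difference
) -> list[dict]:
    """High-volume consistency oracle: a route and its reverse should be roughly the same length."""
    human_queue = []
    for _ in range(pairs):
        origin, destination = random.sample(CITIES, 2)
        there = get_route_distance(origin, destination)
        back = get_route_distance(destination, origin)

        # Relative difference between the two directions of the same journey.
        deviation = abs(there - back) / max(there, back)
        if deviation > max_deviation:
            # Significantly different: a human decides whether it's a one-way-system
            # quirk or a silly misdirection.
            human_queue.append({
                "route": (origin, destination),
                "there_miles": there,
                "back_miles": back,
                "deviation": round(deviation, 2),
            })
    return human_queue
```

[Notice the oracle isn't "the distance is correct", it's "the two directions agree within tolerance". Only the disagreements cost human attention, which is what keeps the mean time to feedback down.]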

Vernon (29:58)
Ugh.

Mean time to feedback. I like that, I vibe with that, because

often I'll try and encourage people I'm working with to almost forget the word testing and instead say: what do I need to learn about this application so I'll be happy to release it? I would like to know this. I'd like to learn about this. It's like, cool. How can we learn that? What do we need to do, and where and how, in order to be

comfortable with the information, and have faith in it and confidence and trust in it? The answer is: we need to do this, we need to go here, there and there. Okay, cool. And you haven't talked about testing; it's like the art of testing without talking about testing. Do you know what I mean? Number one rule of Fight Club: don't talk about Fight Club. It's that. Because I just find that sometimes people can get fixated on automated testing for the wrong reason. And if you reframe it into, I just need to go get some information,

Richard (30:52)
Yeah.

Vernon (31:04)
and think about why and what that is going to unlock, I find you can sometimes have more productive conversations instead of getting into a holy war about stuff.

Richard (31:17)
Absolutely.

But what you just said about wanting to learn things, I think this is a thing that's often misunderstood with automation, or not focused on enough, which is:

if I want to do more digging into a system and I want to learn something about it, the actions that I'm going to take are going to be influenced by how I believe the system already works, right. And what the automated testing that you have in place does is it gives you confirmation that the knowledge you believe you have is still correct. And therefore, when I go into that new piece of work, I'm doing it with

Vernon (31:51)
Mm -hmm.

Richard (31:57)
the confirmation that the knowledge I have is valid and still appropriate and correct. And I know for a fact that when I then start to design new test ideas or do exploration, I am 100% using that fact, that confirmation, to influence my testing. And I know that to be true because when tests fail, I get that icky feeling where I go: I need to do more testing in that area, I need to investigate that area, because just one test failed.

And I think it's the same, I know the same happens when they all go green. And you mentioned this before; I talk about it as the illusion of green. I know that the feeling isn't as strong for the illusion of green. So there's an element of wanting to automate to help you get the knowledge. But I think this confirmation that happens when things do remain as they are, that's important too.

Vernon (32:51)
You're giving me ideas for further podcast episodes here, Rich, like, you know, emotions in software testing and software development. We should definitely come back and do that episode. What do you think, folks? Talk about emotions in the... Okay.

Richard (32:57)
Write them down.

Absolutely.

Why not? Let's talk about anything. Right, so let's do the final topic of conversation, which I believe is coming from you

Vernon (33:16)
yeah, sorry.

Richard (33:16)
and the talk that you are doing tomorrow, but not the tomorrow when you listen to this, the tomorrow that happened when we recorded this.

Vernon (33:23)
However many days ago that is.

yeah, when you listen to this, I will have done a talk at the Agile Yorkshire Meetup, which is organised by an old friend of mine and an old colleague of mine called Roy de Roche. So I'll be up in Yorkshire. I think it's in Leeds I'm going up to tomorrow. And the topic of the talk is how we're setting QEs up to fail.

I started to have these ideas, I think it was the end of 2022. I was working with my friend Cassandra Leung, and we were talking about stuff. And she sent me this. So this is 2022, so the plague was, you know... people were talking about quiet quitting. That was a topic that

people were talking about. And then Cassandra sent me a talk by a legend called Tanya Reilly. You should definitely go and check out this talk. Her talk was about glue work. And it got me thinking; I started to have this idea around a relationship between glue work and what that is, quiet quitting and what that is, and quality engineering slash software testing

and what that is. And I wrote a little bit about it and then I turned it into a talk and that's the talk that I'm going to go and do in Yorkshire tomorrow. So yeah.

Richard (34:57)
Can we have some spoilers? Like, what are the comparisons? What did you learn?

Vernon (35:02)
Hey, man, okay. Okay, so.

Richard (35:05)
Or can you, like, for me: I've heard the term quiet quitting, but I probably wouldn't be able to define it.

Vernon (35:15)
Okay man, this is, ooh, alright. So, I'll start with glue work and then I'll come to quiet quitting.

Richard (35:21)
Well there's a podcast about glue work, I know what glue work is.

Vernon (35:26)
about glue.

Richard (35:27)
You

Vernon (35:33)
boy, what podcast is that?

Richard (35:35)
It's the Vernon Richard show.

Vernon (35:37)
Who's on there? Oh yeah. Anyway, anyway, moving swiftly on before Richard, you know, takes over completely. So, glue work: what is glue work? The way I interpret this, when I first heard about it, and this is what Tanya talked about in her presentation, is that it's the stuff that needs to get done in a given

piece of work, whether it's a discrete project or something that's ongoing. It's the stuff that needs to happen that is often not explicitly someone's job. But if it doesn't happen, whatever you're doing will not be successful. Or if it is successful, it will be successful at great cost and stress and extra energy for everybody involved, right?

Richard (36:28)
By chance as well. Look.

Vernon (36:31)
Chance. Oh god, yeah, indeed. And so...

And another characteristic of this work is that, because it falls outside of, you know, your quote-unquote job description, you don't always get the credit that you deserve when you do it. Because people are like, well, that's not your job. Why were you doing that? You should have been on this thing over here. And so if you want a pay rise or promotion, that's a no from me, because you didn't do your job, you were doing all this other stuff. And you're like, yeah, but that stuff was useful and valuable. Like, what's going on here?

And the key thing that Tanya said that set off the light bulb for me is that glue work is technical leadership. It's technical leadership. So remember that, because we're gonna come back to that later. Now, Richard, Rich, aka Mr. Bradshaw, friendly tester my ass, he asked: what is quiet quitting? So I will explain what quiet quitting is. Quiet quitting

Richard (37:15)
the dots, yeah.

I'm sorry.

Vernon (37:32)
is when you say to yourself, do you know what? I have been busting my ass, like I've been busting my tail at this job, doing all this stuff and I'm not getting anything out of it. Like I'm not getting any thank yous. I'm not getting, you know, people are not thanking me with, you know, pound sterling or dollar bills or yen or whatever the currency is in your territory. No promotion, nothing. It's just, hey, thank you for doing this work. Here's some more work. Excellent.

So what people started to do was kind of say, do you know what, I'm backing all the way off and I am going to stick to the letter of my job. So whatever's in my job description, that is what I'm doing, because it isn't worth my while to push out beyond those boundaries. It's too tiring, it's too exhausting, and I get no recognition and no reward. So Jenna Charlton talked... well, they weren't talking about, hmm,

Richard (38:21)
Yep.

Vernon (38:29)
I don't think they were explicitly talking about quiet quitting, but what they were talking about was defending boundaries. Defend your boundaries, establish them and maintain them. And that's how I like to think of what quiet quitting is. What quiet quitting is essentially doing is saying, do you know what, I need to maintain the boundary of what is work and what isn't work. And I'm gonna stay within these very

explicit parameters, and I'm not going outside of them, because every time I go outside of them, problem city for me. So that's quiet quitting. And then, what is quality engineering, I hear you say? Okay, so quality engineering is, you know, you'll hear different people call it different things. So my friend Stuart Day, by the way, Stuart Day and Chris Henderson have also started a podcast. You should go listen to it. It's called the Quality Talks podcast and it's sick.

Richard (39:03)
Yeah.

Vernon (39:24)
They've got such a better logo than us right now. It's massively annoying. Anyway, go listen to the thing. It's on YouTube. They are awesome. But Stuart talks about quality engineering.

Stuart talks about quality engineering.

Richard (39:37)
Oh my god, I'm sorry, I'm not letting that go. I'm not letting that go. Like, me and Vern had a lengthy conversation last week about our own initiatives in the industry, side hustles, all these kinds of ideas of what we're thinking of doing. And he was like, you don't need logos. Logos are the last thing you need. And now he's criticizing my logo.

Vernon (39:51)
Yeah, yeah. Yeah.

I'm not criticizing it. I'm just saying theirs is better. I'm trying to give Stu love, not hating on our logo. That's what I'm doing. Exactly. I'm not advocating we redo it, although I will say I do want some t-shirts, man. Maybe.

Richard (40:05)
You told me logos don't matter.

You told me, you told me logos don't matter, and yet immediately on this podcast you're going, ooh, really nice logo.

Vernon (40:23)
Yeah, but our logo is... like, we can take our logo and put it on a t-shirt. That's absolutely legit. We don't need a new logo. Like, for...

Richard (40:26)
Okay. All right. I'm going to spend a week making a new logo now, right, that's what's happening. Anyway, Stuart's got a brand new podcast, link in the description. We can't put the links up here yet because we need a thousand subscribers. So help us get a thousand subscribers and we'll put the links up here. Sorry, back to Vern.

Vernon (40:44)
So yes, Stu talks about quality engineering in terms of the culture. What he really focuses on is: you have things like DevOps and continuous delivery and continuous deployment,

which is culture but also has a lot of tooling involved. And so when he's talking about quality engineering, for him, he's really focused on the culture aspects of things. There are

other folks, like the Modern Testing crew, which I hope I'm part of. And they would talk about

this in terms of accelerating the achievement of shippable quality, or at least that is the last definition I remember. And that is around systems thinking and looking for bottlenecks and empowering the team. And again, cultivating the culture rather

than being the person who owns all the testing as the testing specialist. Or you might take a context-driven perspective on this, which is:

in order to do good testing, sometimes I have to demo the product. Sometimes I have to go and advocate for a bug. Sometimes I have to go and teach some developers how to do stuff. I have to go and learn the architecture, et cetera, et cetera. What all of that means, for me, is that we have testing, interacting with the product, as a core skill. But then it doesn't take long for you,

and to the testers in the crowd. It doesn't take long for you to start to say things like this to yourself. You'll go, man, it would be so much easier to test this if...

and then a whole bunch of things come out. It'd be so much easier to test this if these user stories weren't so vague and difficult to understand. It'd be so much easier to test if I had test data that I could just grab from production and use in my test environment. It'd be so much easier if this application was more testable and I was able to control it and observe it much more easily. And so on and so on. So why is all of this important? It's important because

we usually get positioned as testers. So we're interacting with the product and then we start pushing outside of that boundary.

Greetings, glue work. How are you? How's it going? Hey, what's happening? I'm glue work, nice to meet you. But then you don't get any reward for any of that shit.

Richard (43:13)
Yeah.

Vernon (43:17)
And so people start to say, man, I've had enough. I'm going to go back to testing. The problem is that increasingly these days, our job is positioned as.

all the things. And I keep seeing Maaret Pyhäjärvi quoting Anna Baik with a brilliant quote, which is something along the lines of: we are the only people who have to fix the organization in order to do our jobs. Man, we're the only people who have to fix the organization before we can do our jobs.

That presents.

Richard (43:52)
Now...

Vernon (43:53)
That presents a problem, because if the expectations have not been set properly, if we've not been positioned properly, it makes our job a lot more difficult, because people are expecting us to interact with the product and do some testing. And what we're saying is, you know, the testing is not the problem. The problem is we're not collaborating over here as a team. The problem is that we can't test in production when we need to. The problem is that we can't get access to these logs when I want them. The problem is that I can't spin up a test environment and bin it off quickly, et cetera, et cetera, et cetera.

Richard (44:19)
Yeah.

Vernon (44:20)
I want to help with those problems, because that amplifies everything. And people are like, no, no, no, tester, get back in your box. You need to just interact with the product and tell me if there's any bugs, god damn it. I want to know about bugs in the product. I don't want to know about bugs in the organization. Clear off.

Richard (44:37)
I've got two.

Vernon (44:39)
Thank you very much.

Richard (44:42)
I think that's going to be amazing. I wish I could go, but I can't. And if you hear this now, it's already happened, so it's not going to help you either. So there's two things that came up while you were talking, and then you kind of repeated it in a different sense that made me feel better about having my thought. You mentioned DevOps, right, as an example of a culture, as was Stu Day when talking about quality engineering.

Vernon (45:09)
Mm-hmm.

Richard (45:10)
And the first thing that got me thinking was, I remember all the tweets and posts about how you don't have a DevOps team, right? That's an anti-pattern, to have a DevOps team, right? You have to have, I don't know, a platform team, or engineers focusing on that space. Yeah. If you're saying quality engineering is a cultural kind of

Vernon (45:18)
Yeah.

Richard (45:33)
thing, it's not a role, yet we're all called quality engineers. You know, that's the thing my brain got me thinking of. We don't have DevOps engineers, but if DevOps is the same kind of example as quality engineering, why do we have quality engineers? That almost sounds, again, like the same anti-pattern as when you have a DevOps engineer, right? Is that, am I making sense?

Vernon (45:54)
You're making sense. I don't.

What I'll say is if I said it's not a role, I shouldn't have said that.

Richard (46:01)
No, no, you didn't say it's not a role. Sorry, people say DevOps is not a role. Right? So comparing DevOps to quality engineering.

And then we've already established as an industry that DevOps is not a role. Why is quality engineering a role? And then that led me to the second thing that you mentioned, which was: we're often labeled as testers and we do all this glue work. But then if quality engineering shouldn't be a role, because it's a cultural, whole-team thing, what the hell should we be called?

Vernon (46:28)
Mm -hmm.

Well, this is why I said I was gonna come back to the technical leadership thing. Because remember, glue work is technical leadership. So if you are doing quality engineering and quality engineering is glue work, then that means quality engineering is technical leadership. And if we were positioned differently. So if I went into a company as a quality engineer and started to...

Richard (46:38)
So...

Vernon (47:05)
challenge my product manager around the quality of the user stories coming out from them, let's say, I might get some pushback. If, same me, everything's exactly the same, no six-pack, not six foot tall,

but my role was engineering manager, or my role was senior developer, or my role was director of engineering, no one's questioning what I'm doing there.

And I think some of that is because, like the job title of tester before it, it's overloaded. It has a lot of baggage and interpretation and therefore expectation. And that's a problem, because sometimes we've been positioned as: Vern is going to come here and disrupt all the things. And much like your New Year's resolution on New Year's Eve

to go to the gym five times a week: it's all shits and giggles at the start, until you find your funky ass in the gym and the personal trainer is screaming in your face saying, that's light work, you've only done 20,000 reps, you need to do another 20,000. You're like, what? I've actually got to do some work? No. It's similar. People are challenged and they're like, you're gonna disrupt everything? That sounds great, we need that. And then when the disruption is

presented to them and it means they have to change something about how they're working, they're like, it sounded like a good idea at the time; actually, I'm not really up for that. So, Mr. Tester, you can go away. And that's kind of what I'm talking about. And so if we're not positioned as technical leaders,

this speaks to a lot of the problems that we run into or certainly I've run into.

So that's why all these things kind of coalesced together. Cause like, yeah, that makes sense. You know, this glue work, this is technical leadership. Like, you know, I've joined teams where people have asked me to join because they're adamant that there are testing problems. Absolutely: hey, testing is taking too long, it's too slow. Or, the testers don't have the right behaviors that we want, et cetera, et cetera. And then you go into the situation:

Richard (49:07)
Hmm.

yeah, yeah, yeah.

Vernon (49:28)
The tester is completely isolated, is getting no support. The test environments are trash. The user stories are vague. You know, just on and on and on and on and on it goes. But somehow all of that is the tester's problem. They're not empowered. They haven't been coached or supported to empower themselves. But like I say in the talk, there are two aspects to this. I am not trying to position us as these helpless individuals who cannot do anything about this situation. That's not what I'm saying.

So there's a whole bunch of stuff that we can do, and many of us have been talking about many of those things for years. But this is an organizational issue. That's the main thing that I want to say in this talk: this is an organizational issue. And that's why I talk about positioning a lot. It's like, how have you positioned your people in that organization? How are you, as an engineering manager, or a director of, or a VP of,

or a CXO, right, how have you positioned this role and these activities with your peers? Have you said: expect Vern to show up in meetings because he's trying to look for problems? Are you saying that? And, he's not trying to be an ass; here's the mission that I've established for this guy, he's trying to do this, he's trying to do that, right? Is that happening? Or do the Vernons of the world have to literally fight to get into these conversations?

Do they have to, you know, bring cake and bribe people? Please give me access to production. Oh my God. Yeah. Do you know what I mean? And all of these things. And this loops back to what Rich was talking about earlier, right, around the hats. If you overemphasize a given hat, a given skill set,

Richard (50:57)
I've done a bit of that. Yeah.

I've burst into meeting rooms before to get a seat at the table, yeah.

Vernon (51:25)
So you were talking about technical testing, and neglecting communication and neglecting sales. I'm learning that sales and entrepreneurship are the antidote, I think, to all our testing problems. It's ridiculous, and I'm going to try and explore that some more, but I'll probably talk about that another time. But yeah, it's a positioning issue, which means I'm actually talking to the technical leaders, you know, I'm talking to the engineering managers at organizations right now. I'm talking to

Richard (51:27)
Mm-hmm.

Vernon (51:53)
directors of, I'm talking to VPs of, I'm talking to CTOs, COOs, CEOs, CMOs, CPOs, all of you CXOs, you know, staff people, all of you with any leadership role: how are you positioning your quality engineering teammates? Yeah, so that's kind of the talk.

Richard (52:20)
Sounds mint. I'm going to encourage you to... let's talk about how it goes in the next episode. Maybe let's document that. Anyone watching me while Vernon was chatting there, you can tell my brain's like freezing, cause, maybe just to try and tie this up in my own head, I'm fixated on that DevOps example though. Because I think that was one shift

Vernon (52:24)
Ha ha!

Okay.

Yeah.

Richard (52:48)
that I'm not saying is finished, but it seems to be a shift that did require organizational change. I know that quote from Anna does make sense, but I think to go from releasing half-yearly or once a year to releasing multiple times a day potentially, or thinking about that, that definitely is also a shift. But also just the way the industry went with that route of...

Well, there's no such thing as a DevOps engineer; that's such a stupid title, an oxymoron, right? So we ended up with SREs, we ended up with platform engineers, we ended up with these dedicated roles. Yeah. In testing, I think we ended up with this weird shift towards everyone being a quality engineer. Yeah. Now we're discussing what is quality engineering and how it potentially is this cultural shift, right? Have we actually gone, not saying we've gone in the wrong direction, right, but does it make more sense to keep

Vernon (53:14)
See?

Richard (53:41)
the roles of SDET or automation engineer or test engineer, because they're very specific parts of that overall puzzle of getting a company to take quality engineering seriously, or take quality seriously? So anyway, that's what I was stuck trying to think about in my head.

Vernon (53:49)
Yeah.

We've been talking for a while, right? So I don't know, but maybe we should come back to this topic after I've done the talk and we can answer some of those questions. And if you have questions, comments, concerns about anything that I've just said, hit us up in the comments, DM us, and we will add that to the list of things we talk about in that episode, whenever we get to it.

Richard (54:10)
Yeah, we should. Yeah.

Yeah, absolutely. The more comments, the more you interact with us... you know, if you're listening to what we've just been saying for the last 50 minutes and you're like, I disagree, share your thoughts in the comments. You know, me and Vern, we're not going to come back at you in a bad way. We'll just use that information to fuel and continue our conversation. No, I won't, I'm very friendly. But honestly, having a dialogue underneath these videos is kind of the

whole reason me and Vern chat with each other, because as you just experienced now, we trigger each other to think of things, it helps cement our own knowledge, and that's value enough for us, but...

Vernon (55:02)
Just...

Just on that point, think about this: we have been recording the podcast, which is probably not all gonna survive editing, for an hour, right? So let's say it comes down to maybe an hour, maybe 45 minutes. But we spent over an hour talking before we started recording,

Richard (55:25)
Yeah, we got together at half eight and it's now quarter to 11. But the same goes for you lot, right? If you're listening to us and that's triggering you to have thoughts, reflections, honestly, ping them in the chat, ping them in the comments. They're only going to help me and Vernon, and they're probably going to help you as well, to think through some of your ideas and reply. We just want to start a dialogue.

Vernon (55:29)
which is why we thought we should do a podcast. It's like, we talk about this stuff anyway, so we might as well just record it. See?

Listen, listen man.

I've said this a bunch of times when I've been hosting conferences, and it is just as applicable to this. Me and Richard are not awesome because we're doing this podcast; we were already awesome. And the same applies to you. Like, we are not the flippin' experts on anything that we have spoken about. Man, listen, I am happy to hear from everybody who listens to this and get your input, get your advice, get your feedback.

And I'd love to hear your comments on what we've been talking about on this show, because that's, you know, one of the upsides of doing stuff like this is that you get to learn from the conversation. So take part in the conversation, throw some questions our way, comments, whatever, and we'll, you know, we'll share the good ones

Richard (56:35)
Absolutely. So thank you all for listening to us again, again, again, again, again, again, again, again. Share your comments, share your... what was it, the one thing we could do to be better next time, did you say? Yeah.

Vernon (56:42)
Yeah.

Yeah, give us some advice about how we can make the show better.

Richard (56:52)
Advice on how to make it better. The whole YouTube subscribe, share, like, all that jazz; copy links, DM it to people, whatever it is that makes the algorithm go, this is amazing. Definitely do that. And yeah, we hope you tune in for the next episode. So yeah, goodbye from me.

Vernon (57:02)
Yank. Get.

Thanks for listening. Rich, you're an absolute legend. And it's goodbye from me.

Richard (57:14)
Bye everyone.



Creators and Guests

Richard Bradshaw
Host
A true driving force in the software testing and quality domain. I’m a tester, automator, speaker, writer, teacher, strategist, leader, and a friendly human.

Vernon Richards | Ghostwriter & Coach
Host
I ghostwrite Educational Email Courses for Software Testing SaaS Founders | 20+ years testing & coaching in tech | Will Smith's Virtual Stunt Double
