Maslow's Hammer and Three Lies QA Tells Itself

It seems to me that there's a lot of anxiety at testing conferences these days.
Code velocity is ramping up while QA teams are getting downsized. Those who haven't been affected yet worry that they'll soon face the same scenario. It's clear that AI is having a massive impact, but there haven't been many concrete examples of what that impact actually is.
Speakers at these conferences mention AI, but they tend to stay vague about the parts that fuel this anxiety. Every Q&A session features the same question: "What do we do with all this?" I keep hearing three common answers.
The "things will get better" answer
The most common one is: "The golden age of QA is coming." The logic goes: AI makes mistakes -> everyone is using AI to write code -> the code needs to be checked -> we'll need more testers to do the job (mostly manual QA).
The "things may get worse" answer
Another one I hear is: "The SDLC will crumble without testing." The main arguments for this one point to the major incidents that tend to happen when software development moves too fast at the cost of proper validation.
The "things will stay the same" answer
The third common answer is that everything will eventually calm down and companies will start "doing things right". This encourages testers to keep doing what they're doing, because all of the changes happening out there ultimately have no real impact on their work.
My body goes into restless mode whenever I hear these answers, because I feel like all of them are a lie. Not necessarily a deliberate lie. Sometimes it's a lie that the testers themselves believe. As the old saying goes: "If all you have is a hammer, everything looks like a nail."
This saying is attributed to Abraham Maslow (you probably know his pyramid), who used it as a critique of how scientists and academics think. Maslow observed that:
- scientists tend to become over-specialized
- they rely heavily on the methods they already know
- they force-fit problems into their preferred frameworks
What leaves me restless is that I sometimes see the same pattern at testing conferences. It's something I've observed for a long time, but while three years ago I would just be slightly annoyed, these days I'm nervous and worried.
I'd like to spend a moment on each of these answers and discuss why I think they're wrong.
Golden age is coming
This is an idea that is nice to hear and nice to say. I should know, I've said it myself. But it's narrow-minded.
It's easy to fall for this idea when you notice the incredible speed that AI has brought. Speed is something that's discussed on social media and in conference talks. Speed is something we often see in demos. It's easy to demonstrate and measure.
AI has brought speed to development. Because of this, it's easy to predict that increasing the speed of development will create higher demand for testing. Even companies that downsize development teams because of AI efficiency seem to expect higher code output, which some expect will translate into more work for testing.
But as I mentioned, this line of thinking lacks dimension. The change in speed is not just about making teams code faster. It has a transformative property. With AI, we'll be looking at changes in how development teams look and what their roles and responsibilities are.
This is what drives many downsizing decisions in companies. I'm not claiming that these decisions are always the right ones. Sometimes, AI is the scapegoat for massive layoffs. But no matter what the real reason is, it's obvious that many roles are going through a re-definition period.
It seems unrealistic, even delusional, to think that this redefinition will affect virtually every part of development but skip over QA. The expectation that QA will be in higher demand because we can generate code faster ignores this transformation. It's anchored in a modus operandi that is disappearing. It's built on the idea that AI code generation only affects output velocity, without affecting anything else.
SDLC is crumbling down
Many will point to anecdotes where poor AI coding output caused incidents, outages, spikes in bug counts, or reliability and performance issues. Combined with layoffs that affect many QAs, the picture painted looks like a recipe for disaster.
Stories like these serve as a good argument for keeping the QA role as we know it today. It opens up a path that guides us to advocating for testers. These stories suggest that if companies want to keep quality, they need to keep their QA teams intact. This sounds reasonable on the surface. But it's built on a flawed assumption - that quality is primarily a result of having testers on the team.
In reality, keeping quality up is not just about having testers. Many of the most effective quality gates are systematic. Linters and type checkers in statically typed languages catch typos and type errors. Unit tests prevent regressions at the source. Code reviews catch issues before they ever reach a test environment. Monitoring and observability tools catch production issues in real time, often faster than any manual tester could. And more and more systems and tooling are being built around AI to help improve quality.
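To make "systematic quality gates" concrete, here's a minimal sketch of a gate runner, assuming a Python project. The tool choices (ruff, mypy, pytest) and paths are illustrative, not a prescription; the point is that each gate is a command that must pass before code moves forward, with no human in the loop.

```python
import subprocess
import sys

# Illustrative gate list: name + command. Swap in whatever your stack uses.
CHECKS = [
    ("lint", ["ruff", "check", "."]),
    ("type check", ["mypy", "src"]),
    ("unit tests", ["pytest", "-q"]),
]

def run_gates(checks=CHECKS) -> bool:
    """Run each check in order; return True only if every one passes."""
    for name, cmd in checks:
        try:
            result = subprocess.run(cmd)
        except FileNotFoundError:
            print(f"quality gate failed: {name} (tool not installed)")
            return False
        if result.returncode != 0:
            print(f"quality gate failed: {name}")
            return False
    return True

if __name__ == "__main__":
    # Demo with a harmless command so this file runs anywhere; in CI you
    # would call run_gates() with the real CHECKS and exit non-zero on failure.
    demo = [("noop", [sys.executable, "-c", "pass"])]
    print("gates passed" if run_gates(demo) else "gates failed")
```

In a real pipeline this would be wired into CI so a failing gate blocks the merge, regardless of who (or what) wrote the code.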
When we frame the conversation as "hire more testers or quality will suffer", we're ignoring all the other mechanisms that contribute to quality. And worse - we're positioning QA as the only thing standing between a company and disaster, which is a fragile place to be. If the only argument for your role is that things will fall apart without you, eventually someone will test that theory.
Things will not change
This answer is rooted in AI skepticism and a firm belief in one's own expertise. "I've been doing this for 15 years, I know what good testing looks like, and no AI agent is going to change that." There's a quiet confidence to it that can feel reassuring.
And look - expertise and self-worth are important. I'm not dismissing that. I've said repeatedly, on stage and off, that the need for expertise is not going anywhere. But I also see a weird paradox when talking to "things will not change" folks. If you ask any tester what the most important quality of a QA professional is, most will say critical thinking.
I'd agree with that. But critical thinking is not simply just something you have. It's something you constantly pursue. It requires learning, growing your knowledge, and expanding your competency. It means being willing to challenge your own assumptions - not just the assumptions in the software you test.
The biggest problem with the "things will stay the same" mindset is its anti-intellectualism. It shuts down curiosity. You'll hear things like "They said test automation was going to replace us, and look where we are" or "The bubble will pop soon". But more importantly, it says "I already know enough." And that's paradoxical coming from people who pride themselves on questioning everything. If your critical thinking stops at the boundary of your own role and career, it's not really critical thinking. It's self-preservation dressed up as a professional skill.
So - what do we do with all this?
Peers ask me this question too. I have lengthy discussions and arguments with friends and with members of the QA community I'm part of. I don't have a crystal ball. I can't tell you which roles will disappear or which ones will emerge. But I can share how I think about the direction this is heading.
The QA role is about to change. Not overnight, and probably not in a single dramatic shift. But the trajectory offers some hints.
The way we advocate for quality needs to change. "We need more time and people for testing" is not going to hold up as an argument. Quality needs to scale alongside code velocity. If code output doubles but your testing capacity stays the same, the answer isn't to slow down development. The answer is to find ways to make quality keep up. This calls for changes in how we work. Let me give you an example.
When a QA catches a bug today, there's really not much preventing the same bug from happening again tomorrow. A human found it, a human fixed it, and the system that produced it remains unchanged. This is something that needs to change. QA will have to become more of an engineering role - building, improving, and contributing to these systems, creating agents, and using quality engineering to build a harness for quality that scales beyond what a single person can check manually.
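Here's a hedged sketch of what "changing the system that produced the bug" could look like in the smallest possible case. All names are hypothetical, not from any real codebase: suppose a tester finds that a 100% coupon produces a negative total. Instead of the finding living only in a bug report, it gets encoded as a permanent regression check that runs on every change.

```python
# Hypothetical example: a manually found bug turned into an automated check.

def apply_discount(price: float, percent: float) -> float:
    """Apply a percentage discount; clamp so the price never goes negative."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    discounted = price * (1 - percent / 100)
    return max(discounted, 0.0)

# Regression test encoding the original bug report:
# "a 100% coupon produced a negative total".
def test_full_discount_never_goes_negative():
    assert apply_discount(19.99, 100) == 0.0

# A second check guarding the boundary the bug exposed.
def test_out_of_range_discount_is_rejected():
    try:
        apply_discount(10.0, 150)
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError for a 150% discount")

if __name__ == "__main__":
    test_full_discount_never_goes_negative()
    test_out_of_range_discount_is_rejected()
    print("all regression checks passed")
```

The bug was found once by a human; from then on the system itself refuses to regress. That's the shift from checking the output to improving the machinery.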
QA will become a highly collaborative role. Not a gatekeeper that blocks releases, but an enabler that unlocks the team. This has been a reality in many teams I've had the chance to work with, but unfortunately it's not a rule. A gatekeeper says "you can't ship until I say so." An enabler says "here's how we can ship faster and with confidence." QAs take part in post-mortems, collaborate on building solutions, co-create standards and rules for projects, define and re-define quality gates, and open up important team discussions.
"AI has made code cheap to generate. The scarce resource that the market needs now is trust." - Itamar Friedman, Cofounder & CEO at Qodo
Quality engineers help build that trust. The role is well equipped for this task, but that doesn't mean there's nothing new to learn. Which brings me back to Maslow's hammer. The answer to the over-specialization problem is to become multidisciplinary. QA needs to become a multidisciplinary role. Not just writing test cases. Not just automating scripts. But understanding systems, building tooling, working with data, and yes - using AI to amplify the work.
We should absolutely be advocates for quality. But it matters what we advocate for. Advocating for bigger teams that move slowly, block releases, and demand more time and resources is going to be an uphill battle. This battle fuels the anxiety, and as we can see, it's a battle we seem to be losing by sticking to our old guns. Instead, we should advocate for smarter systems, better feedback loops, and quality that's built into the process rather than bolted on at the end.