Month: November 2025

  • Egoless Testing

    Jerry Weinberg wrote about egoless programming decades ago. The idea was simple. The code isn’t you. It doesn’t express your worth. It isn’t evidence of your brilliance or your failure. It’s just something that needs to work, and you can let others help you make it better. Lately, I’ve been wondering what egoless testing might look like.

    Because this week, I asked more “dumb questions” than I have in a long time. I stared at flows that should have been obvious. I felt confused about basics. I took nearly an hour just to understand what I was actually supposed to be testing. And I could feel that old internal narrative starting to whisper.

    “You should be better than this.”

    “You’ve been told you’re a good tester. Prove it.”

    “Everyone else gets this already.”

    It’s amazing how quickly you can go from feeling competent to feeling fraudulent in your own mind. I didn’t come to this industry as a developer. Testing was my way in. And from day one, it taught me something that has shaped everything since: you often begin with confusion, and you work faithfully toward clarity.

    That is not a flaw in the work. It is the work.


    The Work Is Hard Because It’s Hard

Testing in a new system, with a new data model, new flows, new expectations, and new rhythms, can feel disorienting. Even when you’re good at this work. Even when you’ve shipped major features. Even when you’ve carried the responsibility of quality in high-pressure environments.

    The difficulty is not a reflection of your intelligence. The difficulty is a reflection of the problem. And pretending otherwise never helps anyone.


    Questions Aren’t a Sign of Weakness

    When I’m tired or overwhelmed, asking questions feels like exposing a flaw. Like I’m announcing, “I don’t belong here.” But the truth is different. Questions are the work. Questions are how testers build mental models. Questions are how risk becomes visible. Questions are how teams get safer.

    Egoless testing means letting the question be the question, without loading it with shame. Sometimes the “dumb” question is the one everyone else was silently avoiding.


    Your Confusion Isn’t You

    This is where Weinberg’s spirit still applies. Egoless programming says: the code isn’t you. Egoless testing says: the confusion isn’t you. If it takes you an hour to understand something, that hour wasn’t wasted. It was invested in clarity.

    If you ask something basic, you didn’t embarrass yourself. You surfaced an assumption that needed to be named. Your value has never been the speed of your comprehension. Your value is the honesty and care with which you help the team see risk.


    Testing Requires a Self You Must Protect

    Being the person who says “I don’t understand this yet” can feel vulnerable. Being the person who asks the questions no one else is asking can feel lonely. Being the person responsible for the final call can feel heavy. Egoless testing means tending to the part of yourself that actually makes good testing possible. You cannot test well if you mistreat the person doing the testing.

    When I feel lost, I try to remember something simple: My questions aren’t signs of incompetence. They are signs that I am doing the slow, faithful work of making sense of something that matters. Egoless testing is not about pretending you have no ego. It is about refusing to punish yourself for being human in a very human job. It is about accepting that confusion is not a sin. It is part of the craft.

    Beau Brown

    Testing in the real world: messy, human, worth it.

  • Taking Testing Seriously (Still)

    AI can process an entire codebase in seconds. It can trace dependencies, generate test cases, and even simulate user behavior with near-human fluency. Sometimes it feels like the machine is thinking faster than I can blink.

    The language model may be doing something approximating human thinking, but that doesn’t mean it’s thinking for me.

    I’ve been reading James Bach and Michael Bolton’s new book, Taking Testing Seriously, and it reminded me how much testing has always depended on human judgment, context, and responsibility, no matter how powerful the tools become.


    Learning Through Experience

    Bach and Bolton describe testing as a process of learning about a product through direct engagement—by exploring, experimenting, and experiencing it.

    AI can help me explore faster, but I still have to learn. There’s no shortcut for that kind of embodied understanding. It’s the learning that implants memory in a tester’s brain, memory about how the system behaves, where it creaks, and where it hides risk.

    That’s something no model can outsource.


    Automation Isn’t the Whole Story

    The Rapid Software Testing methodology emphasizes that testing is not defined by the artifacts we produce—like test cases or automation reports—but by the activity of investigation and evaluation.

    Automation and AI can generate code, plans, and data. But the essence of testing is in the performance: observing, questioning, interpreting, and making sense of what we see.

    I think about this whenever I read an AI-generated report. It often looks complete, but something in me always asks, Is that actually what we needed to know?


    Responsibility Still Belongs Somewhere

    According to RST, tools and automation can help us check software, but they cannot test on their own. Testing requires interpretation, judgment, and context awareness—qualities that remain distinctly human.

    Even if AI executes every scenario, someone still has to take responsibility for what those results mean. Someone has to say, “I understand what this automation did. I see its limits. I’ll stand behind this call.”

    It’s not about taking blame when things go wrong. It’s about stewardship. It’s about deciding who will train the AI, interpret its findings, and ensure that testing continues to serve the purpose of quality, not just completeness.


    The Human Role Isn’t Diminishing; It’s Deepening

    Testing today looks nothing like it did even five years ago. We’re surrounded by tools that can analyze faster, reason more broadly, and write with startling accuracy. But Taking Testing Seriously helped me realize something important.

    AI expands what is possible, but it does not expand our wisdom automatically.

    The work still requires the same human qualities it always did: curiosity, accountability, systems thinking, empathy. The tools change, but the craft endures.

    We don’t stop being testers when the bots arrive. We become the ones who decide how they test, why they test, and what success means.

    That’s what it means to take testing seriously, still.

    Beau Brown
