Category: Uncategorized

  • Quality in the AI Era: Why QA Roles Are More Strategic Than Ever

    Quality in the AI Era: Why QA Roles Are More Strategic Than Ever

    Executive Summary

    Artificial intelligence is reshaping the economics of software development. Automation of routine tasks, from code generation to test data creation, is changing how companies allocate their budgets. Yet, despite predictions of a shrinking role for quality assurance (QA), the evidence shows the opposite: investment in quality remains essential, and its scope is expanding.

    Recent industry studies (2024–2025) show that while AI is making software teams more efficient, it is also introducing new categories of risk—prompt injection attacks, model drift, unsafe outputs—that require more sophisticated oversight. As a result, quality professionals are not being displaced. They are being asked to step into more strategic, governance-driven roles that safeguard both innovation and revenue.

    The Changing Landscape of Software Quality

    Efficiency Gains Do Not Erase Quality Needs

    A 2025 SaaS Capital study found that SaaS companies using AI in operations reported lower R&D and G&A spend but higher allocations to customer support and marketing—a sign that AI is changing where money flows, not eliminating the need for quality-related investments.

    AI Is Already Part of QA Practice

    The 2024 World Quality Report found that 68% of organizations are either actively using generative AI for quality engineering or have concrete roadmaps following pilots. Meanwhile, QA Tech’s 2024 statistics report showed 78% of software testers now use AI tools in their workflows, with common use cases including test data generation (51%) and test case creation (46%).

    Persistent High QA Investment

    Despite AI efficiencies, large enterprises continue to spend heavily on testing. A 2024 TestingMind Future of QA Survey reported that 40% of large companies dedicate more than 25% of development budgets to testing, and nearly 10% invest over 50%. These figures confirm that quality is not being deprioritized—if anything, the risks of AI adoption are expanding the scope of QA.

    Why Quality Roles Matter More in the AI Era

    Automation ≠ Autopilot

    AI can accelerate regression testing, but it introduces new risks such as bias, hallucination, and security vulnerabilities. Skilled professionals must design evaluation pipelines, adversarial tests, and governance checks to keep systems safe.

    Budgets Are Shifting, Not Shrinking

    AI may reduce traditional R&D costs, but companies are reinvesting in customer-facing reliability and AI safety measures. Quality professionals play a pivotal role in ensuring adoption doesn’t spike churn.

    Governance and Compliance Are Front and Center

    McKinsey’s 2024 report on AI-enabled product development emphasizes the need to integrate risk, compliance, and accessibility requirements earlier in the lifecycle—placing QA at the heart of strategic decision-making.

    The ROI of Modern QA

    The value of QA is measurable and directly tied to SaaS economics:

    Escaped defect reduction: Companies adopting AI-aware test strategies report up to 30% fewer post-release defects, reducing support load and protecting NRR. Faster detection and resolution: Continuous AI-driven monitoring reduces mean time to detect (MTTD) and mean time to resolution (MTTR) by double digits. Customer retention: Every percentage point of churn prevented translates directly into millions in preserved ARR—a metric leadership understands.

    In short: QA is no longer just about catching bugs; it is about protecting revenue.

    The Strategic Future of QA

    Forward-looking QA professionals are already moving beyond “test execution” into areas like:

    AI Evaluation Pipelines: building continuous safety and bias tests into CI/CD.

    Data Quality Ownership: monitoring for drift and contamination in training data.

    AI Release Governance: ensuring new AI features meet safety bars before launch.

    Support Telemetry Loops: connecting customer incidents back to failed tests and reinforcing the system.

    These are not “overhead” functions—they are growth enablers, safeguarding adoption and brand trust in an AI-saturated market.

    Conclusion

    The data is clear: AI is transforming QA, but not by making it irrelevant. Instead, it is making QA indispensable to business outcomes.

    Budgets remain high (25–50% of development spend in many enterprises). AI adoption is driving a reallocation of resources, not a reduction. The strategic role of QA professionals—designing guardrails, ensuring compliance, and protecting revenue—is expanding.

    For software companies, the choice is not whether to invest in quality, but how to evolve quality functions to meet the demands of the AI era.

    Acknowledgment

    This white paper was drafted in collaboration with ChatGPT (OpenAI’s GPT-5, August 2025), which assisted in sourcing recent studies, structuring the argument, and refining the narrative for clarity.

    References

    1. SaaS Capital. AI Adoption Among Private SaaS Companies and Its Impacts on Spending and Profitability. July 2025. https://www.saas-capital.com/blog-posts/ai-adoption-among-private-saas-companies-and-its-impacts-on-spending-and-profitability

    2. Capgemini, Sogeti, Micro Focus. World Quality Report 2024–25. https://www.worldqualityreport.com

    3. QA Tech. AI in Quality Assurance Statistics 2024. June 2024. https://qa.tech/blog/ai-in-quality-assurance-statistics-2024

    4. TestingMind. Future of Quality Assurance Survey Report. 2024. https://www.testingmind.com/future-of-qualityassurance-survey-report

    5. McKinsey & Company. How an AI-Enabled Software Product Development Life Cycle Will Fuel Innovation. May 2024. https://www.mckinsey.com/industries/technology-media-and-telecommunications/our-insights/how-an-ai-enabled-software-product-development-life-cycle-will-fuel-innovation

  • The Value of Being Known by a Few

    The Value of Being Known by a Few

    There is a kind of recognition that feels good but does not go very deep. A Slack emoji reaction. A round of applause in an all-hands. A line in a quarterly update. These moments matter—they affirm that your reputation is intact, that people generally see you as smart, kind, and hardworking. And it is good to maintain that reputation with as many people as possible.

    But I am learning that real growth often comes not from widespread acclaim, but from directed and contextual feedback from a few key people.

    Widespread Recognition Is Hard to Measure

    Unless you have a PR team managing your image, trying to keep track of how the whole organization sees you can be a frustrating and fuzzy metric. You cannot know what everyone thinks. You cannot control every impression. And the effort to do so often pulls energy away from the work itself.

    What you can do is cultivate a circle of people whose perspective you trust, who know what you are actually working on, and who are close enough to see both your strengths and your blind spots.

    Context Matters

    Generic praise feels nice, but it is not always actionable. “Great work!” is encouraging, but it does not help you know whether your test suite design, your release notes, or your leadership approach is actually hitting the mark.

    The best feedback is directed and contextual:

    – From the colleague who reviewed your code and noticed how you structured your assertions.

    – From the manager who saw how you facilitated a tense conversation without shutting anyone down.

    – From the teammate who watched you debug a thorny issue and appreciated your calm approach.

    That kind of recognition, rooted in specific contexts, tells you what to repeat and what to improve.

    Leadership Parallel

    The same principle applies in leadership. Leaders who chase broad acclaim often miss the signals that matter. But leaders who cultivate trusted feedback loops—whether from their immediate reports, peers, or mentors—are better equipped to guide with clarity.

    Widespread recognition is not wrong, but it is fragile. Directed feedback is durable. It forms the bedrock of real trust.

    A Gentle Encouragement

    So yes, keep your reputation healthy. Do your work with integrity so that people in every corner of the organization know you can be trusted. But do not measure your worth by the volume of applause. Measure it by the depth of the conversations with the few people who really see your work.

    Because in the end, being known deeply by a few is more valuable than being vaguely recognized by many.

    Beau Brown

    Testing in the real world: messy, human, worth it.

  • The Gift of Not Being the Smartest in the Room

    The Gift of Not Being the Smartest in the Room

    For a long time, I carried a quiet insecurity into my work. As someone with a perfectly average IQ, it was intimidating to sit side by side with people who are “Mensa-level” smart. In the world of enterprise software, many engineers operate at a level of pattern recognition, systems thinking, and technical problem solving that is almost breathtaking to watch.

    In my early days, that brilliance felt like a shadow over me. I worried that because I couldn’t see around corners the way they could, or architect a solution in an afternoon that might take me a week to grasp, I didn’t belong.

    But lately I’ve begun to see it differently.

    Different Brains, Different Gifts

    One of the privileges of testing is the variety of perspectives you get to bring into the room. Software engineering teams need that variety, because no one brain type can cover all the ground.

    Some people see elegant patterns where others see noise. Some can hold an entire system in their head and shift its pieces like a chessboard. My brain seems to work in another direction: connecting dots between people, clarifying processes, asking the kinds of “simple” questions that keep us grounded in what the user actually needs.

    For a long time, I dismissed that as “lesser intelligence.” But I’ve started to understand that it’s not lesser—it’s different. And difference is exactly what makes a team strong.

    The Tester’s Role

    In testing, this difference shows up in practical ways. While others may dive deep into code optimizations or architectural elegance, I find myself tracing the human story of the product:

    How will this behave for the accountant logging in after a long day? What happens if the user clicks the wrong thing at the worst time? Is the process simple, or are we expecting people to think like engineers to get through it?

    The “dumb” questions—what happens if I do this? why does it feel confusing here?—often lead us to discover edge cases, usability snags, and even data integrity issues that otherwise slip through.

    That doesn’t make me less valuable than my “genius” colleagues. It makes me complementary.

    Building Smarter Rooms

    The truth I’m learning is that the magic isn’t in being the smartest person in the room. The magic is in building a room where different kinds of intelligence get to play together.

    The engineer who can juggle patterns. The tester who can feel the friction points. The designer who can see beauty in simplicity. The customer who can tell us what matters most.

    When these gifts combine, software not only works—it breathes.

    Closing Thought

    If you sometimes feel average in a world of brilliance, don’t rush to trade your perspective for someone else’s. Instead, notice where your mind naturally goes, and offer that gift fully.

    Because software needs not only genius, but also empathy, persistence, creativity, and curiosity.

    And when those things work together, the whole is always smarter than the sum of its parts.

  • Driving Without a Dashboard: Why Instrumentation Matters

    Driving Without a Dashboard: Why Instrumentation Matters

    Imagine driving your car without a dashboard.

    No speedometer, no fuel gauge, no warning lights—just the hum of the engine and the scenery rushing by.

    At first, it might feel fine. The car starts, it moves, you reach your destination. But you have no idea how fast you’re going, whether you’re almost out of gas, or if the engine temperature is creeping toward disaster. The first sign that something’s wrong? When the car sputters to a stop on the side of the highway, or worse, the engine seizes up.

    That’s what it’s like running software in production without proper instrumentation.

    What Happens When You Skip Instrumentation

    When code is built without logging, metrics, and health checks baked in, you’re essentially shipping a black box into production. You can’t see what’s happening inside. The application might work perfectly—until it doesn’t. And when it doesn’t, your team is left diagnosing in the dark.

    No logs? You can’t trace the root cause. No metrics? You don’t know whether the slowdown started minutes or months ago. No alerts? You only find out something is broken when users complain.

    Sometimes, teams skip instrumentation because of external pressures: tight deadlines, client demands, leadership urgency. And in those moments, the instinct is to “just get it out the door.” But cutting this corner almost always costs more later, both in firefighting time and in user trust.

    Building It Right the First Time

    Instrumentation is like a dashboard for your code. It’s not a nice-to-have—it’s essential for safe, reliable operation.

    Key benefits of building instrumentation in from day one:

    Faster troubleshooting – You know where to look before the problem spirals. Proactive fixes – Metrics and alerts let you address issues before they affect users. Confidence in releases – You can measure the impact of new code with real data.

    Practical Recommendations

    You don’t need a huge framework overhaul to start instrumenting better. Begin with these simple steps:

    Log important events and errors. Include enough context (user ID, request ID, timestamp) to trace issues. Keep logs clean—no spammy debug statements drowning out the signal. Track key performance metrics. Monitor response times, error rates, and resource usage. Focus on the metrics that actually matter to user experience. Set up alerts that are actionable. Avoid “alert fatigue” by triggering only on issues worth waking someone up for. Tie each alert to a clear response playbook. Make instrumentation part of your definition of done. Code isn’t “done” until it’s observable. Review PRs not just for functionality, but also for visibility.

    Leadership Lesson: This Isn’t Just About Code

    The same principle applies to organizations. Leaders who run without visibility—no feedback loops, no performance indicators, no clear communication channels—are essentially “driving without a dashboard.” Problems surface late, at higher cost, and trust erodes.

    Instrument your organization:

    Define key indicators of health (team morale, delivery velocity, customer satisfaction). Create channels to surface small issues before they become crises. Make feedback a built-in part of your process, not an afterthought.

    Conclusion

    Building it right the first time isn’t about perfection—it’s about visibility. Just as a dashboard keeps you aware on the road, instrumentation keeps your engineering and leadership efforts on track. Skip it, and you’re flying blind. Build it in, and you’re in control of where you’re going and how you’ll get there.

  • Strong Pipelines, Strong Leadership

    Strong Pipelines, Strong Leadership

    I’ve been thinking about what makes a good pipeline.

    Not just one that “runs” or “deploys,” but one that actually does its job:

    It performs the right tasks. It provides the right feedback. It does it with as little overhead as possible. And most importantly—it doesn’t deliver the wrong signal.

    Because a pipeline that produces misleading outcomes—or slows the team with noise—is worse than no pipeline at all.

    The Job of a Pipeline

    A strong pipeline isn’t about complexity or flash.

    It’s about fitness for purpose. It answers the question: Can we deliver this safely and with confidence?

    A good pipeline:

    Runs quickly enough to be part of the team’s rhythm. Automates what should be automated, without over-engineering. Catches issues early, without blocking unnecessarily. Surfaces accurate, actionable information when something breaks.

    When it’s healthy, the team trusts it.

    When it fails, it teaches.

    When it gives false confidence, it’s dangerous.

    That’s why fragile, slow, or noisy pipelines are so costly.

    They don’t just waste time—they erode confidence in every release.

    Leadership Is the Same Way

    The more I think about it, the more I realize the same principles apply to organizational leadership.

    A leader is, in many ways, like a pipeline.

    We exist to move work forward smoothly, safely, and with clarity.

    We create feedback loops.

    We remove friction.

    Strong leadership:

    Automates the right things – setting up systems where the team can operate without micromanagement. Surfaces issues early – naming risks before they become crises. Keeps overhead low – processes support progress, not stifle it. Provides accurate signals – decisions that reflect reality and build trust.

    But leadership can “fail” the same way bad pipelines do:

    Slow leadership – decisions bottlenecked at the top. Flaky leadership – inconsistent signals, shifting priorities. Noisy leadership – so many processes and communications that no one knows what matters. False-positive leadership – giving the impression everything’s fine when it’s not.

    When a pipeline (or leader) gives the wrong signal, trust breaks down—and the system slows to a crawl.

    Building for the Right Outcomes

    Whether we’re talking about pipelines or leadership, the job is the same:

    Perform the right jobs, with minimal friction, and give signals the team can trust.

    That means:

    Be clear about purpose. What’s this pipeline—or this process—here to achieve? Be accurate. No false greens, no empty approvals. Be efficient. Remove steps that don’t add value. Be trustworthy. Every signal builds or erodes confidence.

    Good pipelines and good leaders both protect the team’s momentum. They make it easier to move forward boldly.

    The Parallel That Stuck With Me

    Over time, I’ve realized:

    A good pipeline doesn’t exist to prove how sophisticated it is. It exists to help the team deliver. A good leader doesn’t exist to prove how necessary they are. They exist to help the people and the mission thrive.

    When either starts signaling for its own sake, the system suffers.

    Moving Toward Stronger Signals

    This reflection leaves me with two questions:

    Which parts of our pipeline add friction without value—and how can we simplify? Where in my leadership am I creating noise instead of clarity—and how can I course-correct?

    Because in the end, whether it’s a pipeline or a leader, the goal is the same:

    Make it easier for the team to do the right thing.

    That’s the kind of signal worth building.

    Beau Brown

    Testing in the real world: messy, human, worth it.

  • Faith, Technology, and the Grace of AI

    Faith, Technology, and the Grace of AI

    I feel a lot of positivity about AI these days.

    It’s not that I’m blind to the concerns. I know there are environmental costs, social implications, and fears about job displacement. I’ve read the warnings about bias, misinformation, and the potential for misuse. These are serious, and I don’t dismiss them.

    But I also can’t ignore what I’ve experienced firsthand:

    AI has helped me become light years more productive in my quality work.

    It’s made me faster, sharper, and freer to focus on the parts of my job that actually require human judgment.

    It’s helped me in my ministry—preparing worship, thinking through pastoral concerns, even writing prayers.

    It’s allowed me to work from home, reducing my commute and saving not just time but energy.

    These aren’t small things. They’re gifts.

    And in my theology, gifts deserve to be received with gratitude—while still being tested.

    A Theology of Tools

    In the Christian tradition, tools have always been a double-edged sword.

    The plow breaks the ground to grow food, but it can also exhaust the soil. The printing press spreads the gospel, but it can also spread lies. The internet connects, but it also isolates.

    Technology amplifies human intent—for good or ill.

    AI is no different.

    My faith teaches me that creation is good, but fallen. Human ingenuity reflects God’s image, but also human brokenness. So every new tool invites two questions:

    How can this serve life, love, and flourishing? How might this harm, distort, or enslave?

    Holding those questions together—that’s the work of discernment.

    The Luxury Question

    There’s another tension I feel:

    I don’t want my luxury to be at the expense of another’s suffering.

    AI makes my life easier. But what about the unseen costs—energy consumption, labor practices, job impacts? These aren’t hypothetical. They’re real.

    And yet, the connection between my use of AI and someone else’s suffering isn’t always clear. The world is complex. The lines are tangled.

    In the face of that complexity, I try to stay humble:

    Grateful for the benefits. Open to hearing the critiques. Willing to change my habits if the harm becomes clearer.

    Faith doesn’t give me a simple answer here. It just calls me to stay awake—to love my neighbor, even when that neighbor is far away and hidden in the supply chain.

    Skepticism Without Cynicism

    I’m skeptical of both extremes:

    The prophets of doom who see only destruction. The evangelists of progress who see only salvation.

    AI is neither angel nor demon.

    It’s a tool.

    And like any tool, it needs wise use.

    The Bible often talks about discernment—testing the spirits, weighing the fruit, watching what something produces over time. That’s how I want to approach AI:

    Skeptical enough to ask hard questions. Hopeful enough to use it well. Faithful enough to see it as part of God’s unfolding story.

    What AI Has Shown Me So Far

    For me, AI has been less about replacing my work and more about redeeming my time:

    It takes care of the tedious parts so I can focus on the meaningful parts. It opens space for creativity and thoughtfulness in both my engineering work and my pastoral work. It reminds me that technology can be used to serve people, not just profits.

    I don’t think that’s accidental.

    I think it’s a sign of what’s possible when we use technology as stewards, not masters.

    Moving Forward With Hope

    So where does that leave me?

    Grateful for what AI enables. Alert to its risks. Committed to using it in ways that build trust, not fear. Rooted in a theology that sees every tool as something to be used for the good of others, not just myself.

    My cautious optimism doesn’t come from naïveté.

    It comes from faith—a belief that God’s Spirit is at work even in our imperfect creations, guiding us to use them with wisdom.

    Because at the end of the day, AI is not ultimate.

    God is.

    And that means we’re free to use it boldly, humbly, and with love.

    Beau Brown

    Testing in the real world: messy, human, worth it.

  • Leading Quality When You’re Selling a Service, Not a Product

    Leading Quality When You’re Selling a Service, Not a Product

    Quality as a discipline started in the factory.

    It was about making sure that when you produced ten thousand widgets, every single one met the spec.

    Consistency was king. Defects were the enemy.

    Deming, Juran, Crosby—all emphasized reducing variation and building trust through repeatability.

    But in SaaS, the game is different.

    We’re not stamping out physical products.

    We’re providing a service—one that changes every week, maybe every day.

    And our customers don’t experience it as something they buy once and hold. They experience it as something they rely on—something that either works for them, or doesn’t.

    So what does it mean to lead quality in that context?

    Quality as Experience, Not Output

    In a SaaS world, the thing customers value isn’t just “the product” (in the traditional sense). It’s the whole experience:

    How reliably it works when they need it. How easy it is to do what they came to do. How quickly they get help if something goes wrong. Whether they trust updates to make things better, not worse.

    That’s why I think of SaaS quality like this:

    Quality is the degree to which the service consistently enables the customer to achieve their desired outcomes with confidence and ease.

    It’s not just fewer bugs.

    It’s fewer surprises.

    It’s less friction.

    It’s more trust.

    What Makes This Harder (and More Interesting)

    In manufacturing, you can inspect the product before it ships.

    If it meets the standard, it’s done.

    In SaaS, nothing is ever done.

    Your code is deployed to production and immediately starts living a new life—interacting with real data, real users, real constraints.

    You can’t inspect your way to quality. You have to design for it, observe it, and adapt continuously.

    This changes the role of QA and quality leadership.

    You’re not just testing features—you’re stewarding an experience.

    How I Think About Leading Quality in SaaS

    Here’s how I’m learning to approach it:

    1. Expand the Horizon Beyond Code

    Yes, bugs matter. But so do outages, confusing workflows, slow support responses, and brittle integrations.

    Leading quality means caring about the entire journey, not just what’s in Git.

    2. Make Value the North Star

    Defect rates tell part of the story, but not the whole story.

    The real question is: Does this change make our customer’s life better?

    That’s the metric that matters.

    3. Treat Releases as Living Things

    Shipping isn’t the finish line—it’s the starting point.

    We need monitoring, telemetry, feature flags, and fast feedback loops to keep learning from what’s in production.

    4. Coach Teams to Own Quality

    In a service, quality is everyone’s job. Developers, designers, support, marketing—they all touch the experience.

    My role is less about gatekeeping and more about shaping a shared mindset.

    5. Design for Resilience

    Manufacturing aimed for zero defects. SaaS should aim for graceful failure and fast recovery.

    Rollbacks, fallbacks, clear communication—these are features, too.

    Where AI and Continuous Delivery Fit

    AI is changing the game.

    Tools like Playwright MCP or automated observability help us find issues faster, automate the boring stuff, and spot patterns humans miss.

    But AI is still just a tool.

    It doesn’t define quality for us—it amplifies whatever system we already have.

    Deming would still ask:

    What’s the system that produces these results?

    In SaaS, that system isn’t a production line—it’s a living network of code, people, and feedback.

    AI fits when it supports learning and improvement, not just speed.

    The Leadership Shift

    Leading quality in SaaS means shifting from inspecting outputs to shaping a system.

    It means asking:

    Where are we building trust? Where are we losing it? How can we learn faster?

    And it means connecting dots across the organization:

    From engineering to customer outcomes. From features to value. From incidents to learning.

    Because in SaaS, what customers buy isn’t code.

    It’s confidence.

    Moving Toward Service-Quality Leadership

    If I had to sum it up, I’d say quality leadership in SaaS is about:

    Seeing the whole picture – not just defects, but outcomes. Building resilience – not just preventing failure, but recovering with grace. Inviting ownership – making quality a shared mindset, not a department. Measuring what matters – focusing on value, not vanity metrics. Keeping the customer at the center – because they’re not buying your features; they’re buying what those features let them do.

    In manufacturing, quality was about building the same thing right every time.

    In SaaS, it’s about continually earning trust.

    That’s a harder problem. But it’s also a more meaningful one.

    Beau Brown

    Testing in the real world: messy, human, worth it.

  • Quick Wins with Playwright MCP + Cursor

    Quick Wins with Playwright MCP + Cursor

    I’ve been experimenting with Playwright’s MCP server tool inside Cursor, and honestly—it’s kind of magic.

    I spun up the classic TodoMVC app just to test things out, and within minutes I had working UI tests:

    Add a to-do item Remove a to-do item Check that it stuck

    No fuss. No deep dive into selectors. Just fast, clean interaction with a real app.

    Where MCP Shines

    Here’s where the Playwright MCP + Cursor combo really shines:

    When you know what you want the test to do, but you don’t want to burn 30 minutes getting the TypeScript syntax just right.

    In other words:

    You’re not designing a robust suite (yet).

    You’re just trying to get real signal fast.

    That’s where this workflow flies.

    AI + Testing = Acceleration (Not Replacement)

    To be clear, architecting a solid test suite still takes work—strategy, structure, edge cases, naming conventions, cleanup logic. You know… all the stuff that makes tests worth running in the long term.

    But that’s the beauty here:

    AI isn’t replacing any of that. It’s just accelerating the ramp-up.

    You can sketch ideas, try things, refine.

    Then build the real suite once you know what matters.

    My Takeaway So Far

    Tools like this change how I think about test scaffolding:

    Short feedback loops: Try something, see it run, improve it. AI as a testing assistant: Not writing everything, but getting you started. Speed without sloppiness: When used well, these tools speed you up without skipping important thinking.

    If you’ve ever put off writing a test because setting up the test felt harder than the test itself… try this combo.

    It’s not perfect, but it’s fast.

    And sometimes, that’s exactly what you need.

    ⚡ Quick Start Guide: Playwright MCP + Cursor

    🛠️ Step 1: Install the TodoMVC App

    Clone the classic TodoMVC example or use your own small app to experiment.

    git clone https://github.com/tastejs/todomvc.git cd todomvc/examples/react npm install npm start

    This gives you a local app to write and run UI tests against.

    🧪 Step 2: Add Playwright + MCP Support

    If you haven’t already, install Playwright:

    npm install -D @playwright/test npx playwright install

    Then, to enable MCP in Cursor:

    Visit https://docs.cursor.com/tools/mcp Scroll to the Playwright card Click “Add Playwright to Cursor”

    I didn’t even need to run anything manually. I just restarted Cursor once, and it connected automatically. Your mileage may vary, but the setup was impressively smooth.

    💡 Step 3: Write and Run Tests (with AI Help)

    Once MCP is active, open a new tab in Cursor and run this prompt to explore your local app:

    use the playwright mcp tool to explore and write tests for localhost:8080

    You can experiment with other versions of the prompt to add more detail, like:

    “Start by writing a basic test that adds a todo and checks that it appears.” “Write 3 test cases for deleting todos, including an edge case where the list is empty.”

    The MCP connection lets Cursor explore the running app, interact with it, and generate working Playwright test scripts—without you needing to wire everything up manually.

    ✨ Bonus: Sample .cursorrules Prompt

    @testcases - Playwright Test Skeleton You are a senior QA engineer using Playwright Test. Write a basic UI test to verify the following: [user adds a todo item, sees it listed, and deletes it]. Use TypeScript and the Playwright testing API.

    Or stick with direct prompting in the Cursor composer for more control.

    🧠 Pro Tips

    Don’t skip test teardown—AI might forget to clean up state. Keep a scratchpad folder for rough drafts and auto-generated tests. Use this approach to validate flows before you design a full suite.

    Final Thought

    You still need good judgment, especially when building long-term test infrastructure. But this tooling removes so much of the friction at the beginning of the process.

    It’s not about skipping craft.

    It’s about skipping tedium.

    And I’m here for that.

    Beau Brown

    Testing in the real world: messy, human, worth it.

  • You Didn’t Know You Could Do It (Until You Did)

    You Didn’t Know You Could Do It (Until You Did)


    There’s this weird phenomenon I keep bumping into—maybe you’ve felt it too.

    You walk into something not knowing how hard it’s going to be.

    You’ve got just enough confidence to say yes.

    And then it hits: this is way harder than I thought.

    It stretches you.

    It exhausts you.

    It humbles you.

    You almost walk away—or at least wonder if you should.

    But then, somehow… you get through it.

    And when you come out the other side, you realize something quietly life-changing:

    You’re more capable than you thought you were.


    The Arc of “I Can’t” to “I Did”

    It’s strange how growth sneaks up on you.

    You start off with one set of assumptions:

    • This will probably be manageable.
    • I’ll work within my current limits.
    • I’ll stay in the zone of what I know how to do.

    And then the challenge arrives and laughs at all of that.

    You have to learn faster.

    Stretch wider.

    Think deeper.

    Lead, even when you weren’t given the title.

    Test, even when no one defined the scope.

    Speak up, even when your voice shakes.

    It’s terrifying. And messy. And often unfair.

    But then—somehow—you do the thing.

    And it doesn’t kill you.

    And now you can’t unknow that you’re capable of more.

    That doesn’t mean you want to do it all again.

    But it does mean you carry a new kind of confidence—not the loud, flashy kind, but the grounded kind that says:

    “I’ve walked through fire before. I didn’t love it, but I’m still here.”


    The Danger of Underestimating Yourself

    When you’re just starting something—new role, new company, new toolset—it’s easy to look at your current skill set and assume, This is what I have to work with.

    But most of the time, what you can do tomorrow doesn’t show up on today’s resume.

    You only find out by being asked.

    By being stretched.

    By being given too much—just barely too much—and learning how to carry it anyway.

    The hard part is: you don’t know what you’re capable of until you’re already in it.

    There’s no shortcut to that.


    What I’m Learning to Trust

    I’m learning (slowly) that this kind of growth usually starts with a moment of ignorance—not in a shameful way, but in a pure, honest way:

    I don’t know how this will go.

    I don’t know what I’m capable of.

    I’m about to find out.

    That’s not a failure of planning.

    That’s the beginning of learning.

    And when I look back at the hardest, most important moments in my life and career, that’s the pattern:

    • Ignorance
    • Struggle
    • Breakthrough
    • Quiet, unshakeable strength

    If You’re In the Middle

    If you’re in the middle of that arc—where it feels like too much and you’re wondering whether you’re enough—I hope you’ll hear this:

    You don’t have to know yet.

    You’re allowed to struggle.

    And it’s entirely possible that the part of you that’s currently overwhelmed is also the part of you that’s growing stronger.

    You’re not stuck—you’re stretching.

    Give it time.

    Keep walking.

    And don’t be surprised when, later, you look back and say:

    “I didn’t know I could do that.”

    But you did.

    And now you know.

    Beau Brown

    Testing in the real world: messy, human, worth it.

  • When Respect Looks Like a Challenge

    When Respect Looks Like a Challenge


    I’ve been thinking about something lately—something I’ve felt but haven’t always known how to name.

    It’s this:

    Sometimes, the way you know someone respects you is when they come hard at you.

    Not in a cruel way.

    Not in a power play.

    But in that sharp-edged, test-your-thinking, defend-your-ground kind of way.

    They come at you because they think you can handle it.

    Because your ideas matter enough to push against.

    Because they see you not as fragile—but as a peer.

    It’s not the best way to approach people.

    It’s definitely not the gentlest.

    But sometimes—it’s real.


    A Hard Respect

    I’ve had people challenge me with more heat than I was expecting:

    • “Why did you do it that way?”
    • “Are you sure that’s the risk we should be prioritizing?”
    • “That feels like a half answer—what are you really saying?”

    And in the moment, it stings.

    I get defensive.

    My brain scrambles to explain.

    My heart wonders if I’ve lost their trust.

    But later—sometimes hours, sometimes weeks—I realize:

    They only challenged me that hard because they thought I had something worth challenging.

    They saw me as someone who could take it, wrestle with it, and sharpen back.

    And that kind of respect, while messy, is still respect.


    It’s Not Always Healthy

    Let’s be clear—this doesn’t mean aggression is leadership.

    Or that we should go around testing people’s worth by throwing elbows in meetings.

    Respect can also look like listening.

    Like collaboration.

    Like invitation instead of interrogation.

    But in some circles—especially in tech, especially in leadership, especially in fast-moving teams—respect sometimes shows up through pressure:

    • Prove it.
    • Justify it.
    • Show your reasoning.

    And if no one ever challenges you?

    That might feel polite—but it can also be a sign that people aren’t listening closely enough to your work.


    What I’m Learning to Do With It

    When someone comes at me hard, I try (emphasis on try) to:

    1. Pause instead of reacting Sometimes the heat in their tone isn’t about me—it’s about the stakes. Or their stress. Or their own discomfort with uncertainty.
    2. Hear the question behind the tone Is there a good challenge buried inside the delivery? Can I respond to the substance, not just the sting?
    3. Push back without burning bridges Respect goes both ways. I can hold my ground without having to mirror the intensity.
    4. Ask myself: would I rather be ignored or engaged? I’d rather someone argue with me than pretend I have nothing worth saying.

    A Better Way Forward?

    Of course, the goal isn’t to normalize hard-edged conversations as the only way to show respect.

    But it’s worth recognizing the pattern.

    And maybe it’s worth naming in ourselves, too:

    • If I push someone, is it because I believe in them?
    • Can I challenge without triggering?
    • Can I honor people not just by being kind—but by being engaged?

    Because sometimes, the hardest questions come from the people who are actually paying attention.

    And that’s a kind of respect I’m still learning to receive.

    Beau Brown

    Testing in the real world: messy, human, worth it.