Sometimes I sit back and wonder: are we solving real problems with AI—or are we just solving expensive ones?
Lately, I’ve seen AI used in ways that feel like trying to outrun the limits of being human.
Take longevity science. A field once rooted in slow observation and human care is now buzzing with machine learning models, digital twins, and predictive “aging clocks.” In an April 2025 Forbes article, Tomoko Yokoi describes how researchers are using AI to simulate decades of biological aging in silico—testing interventions without waiting for time to pass. What once took years in mice or decades in humans can now be compressed into hours of computation.
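For the curious, an “aging clock” is, at its core, a regression model that maps biomarkers to a predicted biological age. Here’s a minimal toy sketch of that shape of idea; every biomarker, coefficient, and relationship below is invented for illustration (real clocks, like Horvath’s epigenetic clock, regress age on hundreds of DNA-methylation sites):

```python
# Toy "aging clock": a linear model predicting age from synthetic
# biomarkers. Purely illustrative -- the features and their
# relationships to age are made up.
import numpy as np

rng = np.random.default_rng(42)

# Simulate 500 "patients": chronological age plus noisy biomarkers
# that drift with age.
n = 500
age = rng.uniform(20, 90, n)
biomarkers = np.column_stack([
    0.8 * age + rng.normal(0, 5, n),   # e.g. a methylation score
    -0.3 * age + rng.normal(0, 8, n),  # e.g. an inflammation marker
    rng.normal(0, 1, n),               # irrelevant noise feature
])

# Fit ordinary least squares: age ~ biomarkers @ w + intercept
X = np.column_stack([biomarkers, np.ones(n)])
w, *_ = np.linalg.lstsq(X, age, rcond=None)

predicted = X @ w
print(f"Mean absolute error: {np.abs(predicted - age).mean():.1f} years")
```

The model runs in milliseconds—which is exactly the point the researchers are making, and exactly the thing this essay wants to sit with.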
It’s impressive. It’s groundbreaking. It’s possibly even life-saving.
But part of me wonders: was it a problem that it took time?
Was it truly a flaw that understanding aging required patient, embodied presence across a human lifespan? Or was it simply expensive—and therefore labeled inefficient in a world that values scalability above all?
The Discomfort of Limits
This question has been on my mind a lot lately, not just in the context of medicine, but in tech and software as well. I’ve seen AI tools that generate code and write test plans, promising to accelerate everything from development to deployment. And they do. Until something breaks—and no one remembers how the system works.
The deeper assumption seems to be: if it takes a team of skilled humans a long time to build something, that’s a problem. But is it?
We’re increasingly told that speed is always good, and human limits are always bad. But slowness isn’t always a bug. Neither is mortality.
Sometimes the fact that things take time—that they require wisdom, conversation, conflict, or care—isn’t inefficiency. It’s reality. It’s part of what makes the work meaningful.
Not All Progress Is Healing
I’m not against progress. And I’m certainly not against tools that help people live longer, healthier lives. There are parts of the longevity movement that feel hopeful—especially when they’re focused on accessibility, dignity, and care.
But when the conversation shifts from “How can we help people age well?” to “How can we prevent aging altogether?”—I start to get uneasy. Not because I fear the future, but because I care deeply about what we’re willing to call “broken” in the first place.
In testing, in engineering, in caregiving—some of the best work happens in the friction. In the waiting. In the debugging. In the time it takes to really see what’s going on.
If we train AI to skip all that, what else are we skipping?
A Different Kind of Intelligence
I use AI almost daily. It helps me generate test cases, write bug reports, explore possibilities I hadn’t considered. I think it’s a gift—when used with humility. But I don’t want to build a world where human slowness, uncertainty, or mortality are treated like defects to be patched.
What if intelligence isn’t just about speed or predictive power?
What if intelligence includes acceptance, discernment, even grief?
What if a good system isn’t just one that runs smoothly—but one that allows space for the kind of deep, messy, unscalable wisdom only humans can offer?
Staying with the Trouble
To my fellow testers, engineers, and builders: I know it’s tempting to want everything frictionless. But some things are worth the trouble.
Slowness is not failure. Collaboration is not inefficiency. Limits are not bugs.
Sometimes what the world needs most is a few good humans, working together—asking questions that AI can’t quite answer, holding space for the kind of complexity that can’t be summarized in a model.
Let’s build tools that help us become more human, not less.
Let’s not be afraid to ask whether the problem we’re solving… was ever really a problem at all.
Reference:
Yokoi, Tomoko. “How AI Is Rewriting The Future Of Aging.” Forbes, April 26, 2025. https://www.forbes.com/sites/tomokoyokoi/2025/04/26/how-ai-is-rewriting-the-future-of-aging
