I watched it happen in about three minutes.
There was a software feature I'd been wanting to build for months—one of those things that kept getting bumped by higher priorities. Then, one afternoon, out of curiosity, I fed the developer spec I had created to Claude Code and let it plug away.
Three minutes. That's how long it took to build the feature. Bug-free. Clean code. Well integrated into the project architecture. It would have taken me many hours to build by hand.
I wasn't scared. I was impressed. And then something quieter crept in—a question I didn't expect.
If a machine can do this… then what, exactly, am I here for?
The Goalposts Keep Moving
If you've been paying attention, you've probably noticed something unsettling.
We keep drawing lines in the sand—things only humans can do—and AI keeps stepping over them.
Reason? AI reasons. It passes law exams, medical licensing tests, and graduate-level assessments.
Language? It writes poetry, translates across dozens of languages, and holds conversations that feel disturbingly real.
Creativity? It composes music, generates artwork, and writes stories that win literary contests.
Strategy? It beat the world's best chess players decades ago and has since conquered games we thought required human intuition.
Each time, we adjust. "Okay, but it can't really understand." Then it seems to understand. "Fine, but it can't create anything original." Then it creates. "Sure, but it doesn't have empathy." Then it responds to pain and loneliness convincingly enough that people start forming real attachments to chatbots.
Now, I can already hear the pushback—and it's fair. "But AI doesn't really understand. It's pattern-matching, not thinking. What it 'creates' isn't truly creative—it's remixing what it was trained on. It doesn't actually reason; it simulates reasoning."
And there's truth in that. I work with these tools every day, and I know they have real limitations.
But here's what I'd ask you to sit with: the fact that we're having that debate at all is the point. Twenty years ago, no one needed to argue that a machine wasn't truly creative because the question was absurd. Now we're splitting hairs over the difference between "real" creativity and something that looks, sounds, and functions almost identically. Whether AI truly reasons or merely simulates reasoning so well that most people can't tell the difference—the ground beneath us has still shifted. And it keeps shifting.
The goalposts keep moving. And the field is shrinking.
And here's what makes this more than an academic exercise: the pace is accelerating. That list of milestones I just walked through? Most of them happened in the last three years. We've already passed what many experts thought was possible, and there's more ahead. We're not just asking where AI is today. We're asking who we are in light of where it's headed.
The Questions We're Not Saying Out Loud
Here's what I notice in my work, in my church, and in conversations with people across all kinds of backgrounds: there's a quiet anxiety building.
It's not the loud panic you see in headlines about robots taking jobs (though that's real enough). It's something deeper. More personal.
People are wondering—sometimes without even putting it into words—what am I actually for?
If a machine can write my reports, analyze my data, draft my emails, counsel my clients, even create art that moves people… where does that leave me? What's left?
These aren't just interesting thought experiments. They're existential questions—the kind that follow you into the shower and wake you up at 2 a.m. They're the oldest human questions there are, reframed.
We've Been Answering It Wrong
Here's what I think is actually happening—and it's not what most people expect from a guy who works in technology and pastors a church.
I don't think AI is making us less human.
I think it's revealing that we've been defining "human" wrong all along.
For a long time—centuries, really—we've built our understanding of what makes humans special on what we can do. Our intelligence. Our creativity. Our ability to reason, communicate, build, and solve problems. We've treated capability as the core of our identity.
And as long as humans were the only ones who could do those things, the framework held up fine. It felt solid. Of course we're special—look at what we can do that nothing else can.
But now something else can.
And honestly? This framework hasn't just been failing since ChatGPT showed up. It's been failing people for a long time.
If we define human value by capability, then what happens when capability is diminished—or when it was never valued by the culture in the first place? When someone loses the use of their legs, lives with a cognitive disability, works a job the world considers "unskilled," never had access to the opportunities that showcase capability, or chose to pour themselves into something the culture doesn't bother to measure—like raising children? Under the capability framework, they become less. Less productive, less useful, less valuable. It's no wonder that a life-altering injury can shake someone's entire sense of identity—because our culture has been telling them, in a thousand ways, that they are what they can do.
And if your definition of human value is built on capability, then AI isn't just a technological disruption. It's an identity crisis.
What If There's a Different Starting Point?
This is where it gets interesting to me. Not as a technologist—though I'll be honest, the tech is fascinating—but as someone who's spent years studying an ancient story—found in the pages of Scripture—about who humans actually are and why they're here.
That story doesn't start with what humans can do. It starts with who they were created to be. It doesn't define humanity by capability. It defines humanity by something else entirely—something no machine can touch, not because the technology isn't advanced enough, but because it belongs to a completely different category.
The story says that what makes you you was established before you ever produced a thing. Before you accomplished anything. Before you proved your worth. Before anything went wrong.
If that's true—if human identity isn't rooted in capability but in something deeper and more enduring—then AI isn't a crisis at all.
It's a clarifier.
Let me say that a different way, because I think this is the crux of it:
If humanity's defining trait is capability, AI is a crisis. If it's something else—something more fundamental—then AI is simply clearing away the things we've been hiding behind and leaving us face to face with who we've always been.
Where We're Headed
That's what this series is about. And a quick note on what it is and isn't: I'm not going to try to sort out the ethics of AI development or the policy questions around automation and employment. Those are important conversations happening elsewhere, and they deserve serious treatment. This is about something more personal: what does it mean to be you in a world where machines keep redefining what's possible?
Over the coming weeks, we're going to explore what it actually means to be human in an age when machines can do so much of what we once thought only humans could do. We'll look at the false foundation we've built our identities on, and why it was always going to crack. We'll dig into an ancient answer that turns out to be strangely, almost perfectly suited for this technological moment. We'll get practical about what this means for your work, your relationships, your sense of purpose.
And we'll discover that the thing that makes you irreplaceably human isn't under threat from AI. It never was.
The answer has been waiting for us since the beginning.
This is the first post in the "Being Human in the Age of AI" series. Next up: "You Are Not Your Output"—where we'll look at the false foundation of productivity-as-identity and why Genesis 1 offers a radically different measuring stick.
Have thoughts? I'd love to hear them. And if this resonated, I'd encourage you to take the Imago Assessment—a free tool I built to help you discover how you uniquely reflect what it means to be human.

