As a developer who has recently managed to publish a blog post nearly every day, I should feel accomplished. Yet, a quiet sense of guilt lingers. With AI tools streamlining my work—writing blog drafts, generating 90% of my code through tools like Copilot or Cursor—the process feels almost too effortless. The deliberate rhythm of crafting each sentence or line of code by hand now seems like a distant memory. This ease prompts a deeper question: If AI does the heavy lifting, can I still call myself a developer?

The Ethics of AI-Assisted Creation

This dilemma reminds me of a concept from mountaineering: oxygen-free climbing. Elite climbers often forgo supplemental oxygen, not for lack of resources, but to test the limits of human endurance. In a documentary, one climber remarked, “If your only goal is to reach the summit, you could hire a helicopter. But then what’s the point?” Similarly, if AI generates most of my work—whether code or prose—am I truly creating, or merely directing?

I faced a similar question over a decade ago when I first learned Ruby on Rails. The framework’s scaffolding feature let me build a basic blog with just a few commands. Even then, without AI, I wondered: Is this still programming? Today, tools like Copilot amplify that question. They write faster, anticipate needs, and fill in gaps with almost magical precision. Yet the core concern remains: When creation becomes so automated, what role do we play?
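
For anyone who never ran it, scaffolding really was that terse. A couple of commands like these (the canonical Rails incantation of that era) produced a working blog with a model, migration, controller, views, and routes:

```shell
# Generate a complete Post resource: model, migration, controller, views, routes
rails generate scaffold Post title:string body:text

# Apply the migration to create the posts table
rake db:migrate   # on modern Rails: rails db:migrate
```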

From Horse Carriages to Jet Engines

Pete Koomen, YC Group Partner and co-founder of Optimizely, recently shared a thought-provoking metaphor in his article “AI Horseless Carriages.” He observed that while AI gives us unprecedented productivity leverage, many current AI-powered apps feel clunky and counterproductive, like early cars designed to look and function like horse carriages.

Take Gmail’s AI writing assistant. It’s supposed to help, but the prompts it requires are sometimes so elaborate that you’re better off just writing the email yourself. The issue, Koomen argues, is not the intelligence of the AI model but the outdated interaction paradigm it’s embedded in. We’re stuffing AI into frameworks built for manual tasks, just as early cars mimicked carriage design instead of embracing what an engine made possible.

This resonates deeply with my own discomfort. Perhaps the problem isn’t that AI makes things too easy, but that we haven’t yet reimagined what we should be doing with that ease.

Redefining Value Beyond Code

So what is our value as developers in this age? Maybe it’s not about lines of code or hours spent. Maybe it’s about design choices, framing the right questions, and defining workflows that allow others (and our future selves) to move faster.

Koomen suggests that the real power lies in shifting from using AI tools to building AI agents: configurable assistants that actually understand our intent, workflow, and personal style. This means letting users craft not just prompts but system-level behaviors. His example is customizing a “system prompt” to match his own writing voice, which yields dramatically better output from Gmail’s AI.
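
To make that concrete, here is a minimal sketch of the pattern, assuming the OpenAI Python SDK. The voice rules and names (MY_VOICE, draft_reply, the sign-off) are my own illustrations, not Koomen’s actual setup:

```python
# A user-editable system prompt carries the writer's voice, so the same
# model produces drafts that sound like you instead of a press release.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

MY_VOICE = """You draft emails as me. Keep them short and direct.
No corporate filler, no 'I hope this email finds you well'.
Sign off with 'Thanks, Sam'."""  # hypothetical user, hypothetical rules

def draft_reply(incoming_email: str) -> str:
    # The system prompt defines behavior; the user message supplies the task.
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": MY_VOICE},
            {"role": "user", "content": f"Draft a reply to:\n\n{incoming_email}"},
        ],
    )
    return response.choices[0].message.content
```

The specific model and wording are interchangeable; the point is that your voice lives in the system prompt, not in a laborious per-message prompt.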

This is design as leverage: not crafting every word yourself, but shaping the conditions under which the right words get written.

Beyond Tools: Toward AI-Native Thinking

If we take this seriously, being a developer means more than writing code—it means rethinking interaction models entirely. Most current apps limit users to surface-level inputs. But with large language models, we can (and should) let users define how their AI behaves. Developers shouldn’t just write apps—they should create agent-building kits, giving users the freedom to encode their personality, preferences, and priorities into intelligent systems.
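
What might an agent-building kit look like? Here is a hypothetical sketch: the app exposes a small profile that the user owns and edits, then compiles it into system-level behavior. Every name below is illustrative, not a real product’s API:

```python
# An "agent-building kit" in miniature: user preferences in, model behavior out.
from dataclasses import dataclass, field

@dataclass
class AgentProfile:
    tone: str = "direct, friendly"
    priorities: list[str] = field(default_factory=lambda: ["family", "urgent bugs"])
    never_do: list[str] = field(default_factory=lambda: ["send anything without my review"])

def compile_system_prompt(profile: AgentProfile) -> str:
    # The kit's job is translation: encode the user's intent as behavior.
    return (
        f"Write in a {profile.tone} tone. "
        f"Surface anything involving {', '.join(profile.priorities)} first. "
        f"Hard rules: {'; '.join(profile.never_do)}."
    )

print(compile_system_prompt(AgentProfile()))
```

The design choice that matters here is who edits the profile: the user, not the developer.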

Imagine an email client where your agent reads, prioritizes, and drafts replies, not from a generic corporate template, but from your tone and past behavior. Imagine task-management tools where you don’t input every detail but train an agent to understand and manage your flow.

That’s not just using AI. That’s building the future of human-machine collaboration.

A Future of Evolving Roles

One day, we might share bedtime stories with our children about a bygone era: “Before AI, human engineers used something called a keyboard—they typed every single line of code by hand.” Our kids will gasp in disbelief: “Wait, what? That’s how people used to code?”

By then, the title of “developer” may have evolved into something new—perhaps creators, problem-solvers, or designers of logic and leverage. We won’t be the ones doing all the typing—we’ll be the ones designing systems, workflows, and intelligent agents that do it for us, on our terms.

Embracing Our Human Core

In an AI-driven world, our value as developers lies in our ability to go beyond automation. We define problems, shape solutions, and ensure that technology serves a purpose. But now, more than ever, we must also rethink how software is structured—not just what it does, but how people interact with it, personalize it, and derive meaning from it.

So the next time you question your role amidst AI’s capabilities, don’t ask whether you wrote every line. Ask: Did I build leverage? Did I solve a real problem? Did I help someone express their intent more clearly or act more effectively?

What does being a developer mean to you in this evolving landscape? I’d love to hear your thoughts in the comments.