Yesterday I opened a frontend project in Replit that I had forgotten about for weeks. One of those projects you start on a whim, with a quick idea, no structure and no clear plan, just not wanting to let a creative moment slip by. I left it half-finished, like so many projects that end up competing with the urgencies of daily life.
It was total chaos, even more than I remembered: mixed-up components, poorly named functions, folders with no apparent logic, the kind of thing that happens when you start making big changes with an agent on an existing project. And agents aside, it’s a familiar scene for anyone who has started more ideas than they’ve managed to finish.
But yesterday I wanted to revisit a component I had liked, and to pick the project up again. This time with a different tool: the freshly released Claude 4. More out of curiosity than any expectation of results.
And what happened was strange. Not because it “worked well,” but because what it did was unexpected. Something in the way it approached the code, from the agent’s very first response, felt different. I don’t know if it was the model, the workflow it proposes, or maybe how I phrased the prompt… but it looked at that mess with a different logic. It didn’t try to simply fix the code. It read it as if trying to understand the intention behind the disaster.
That caught my attention, because it wasn’t like using a tool that obeys. It was like having someone who asks you, without asking, if there’s a better way to think about that project.
And that’s when I realized something important: this isn’t just about having access to the latest AI or keeping up with every release, but also about how we decide to use these tools and build a strategy around them.
I don’t have access to OpenAI’s Codex because I don’t have a Pro plan, and honestly, I don’t have the capacity to test everything at once. GitHub Copilot Agent is on the list, but for later, when the next deploy comes, in its own time. Because the reality is that you can’t absorb everything and still think clearly.
And that leads me to a feeling I’ve seen in many conversations lately: technical FOMO. That anxiety of falling behind, of not mastering the tool of the moment, of not integrating every new feature. And the truth is that this fear is also noise. Because we’re not in a speed competition. We’re facing a paradigm shift that forces us to stop and think.
Are our projects designed for AI to truly collaborate? Are we designing development environments where an agent can understand, interpret, and contribute without us having to micromanage every step? Do we give these tools enough context, or do we expect them to perform magic in the middle of nowhere?
What happened with Claude 4 wasn’t that it improved “my” code, it was that it interpreted it differently. It showed me things that were there, but I hadn’t seen. And in doing so, it pushed me to question not just how I program, but how I structure my work, what I leave explicit and what I assume “is self-evident.”
And that becomes even more evident when you realize that these agents don’t fix the mess. They reflect it. If your repo is a maze, all you’ll have is an assisted maze. If your structure is ambiguous, the AI’s output will be too.
That’s why I think we need to talk about strategy. Not in the inflated sense of the word, but as something pragmatic. As a way to decide with intention:
What tools to use, and why.
When to use them, without it being by reflex or habit.
How to integrate them without destroying the flow we already had.
And what an agent really needs in order to help us, without going overboard with the context.
AGENTS.md, for example, seems brilliant to me. It’s like writing a welcome note for the AI, explaining how to navigate your project, as if it were a new team member. Cursor is doing something similar, and I think these kinds of practices are the near future: not just writing code for humans, but also for the intelligences that accompany us in real time.
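To make that idea concrete, here is a minimal sketch of what an AGENTS.md could look like for a small frontend project like mine. Everything in it is hypothetical, the folder names, the scripts, the conventions; the point is only to show the kind of context an agent can actually use.

```md
# AGENTS.md

## Project overview
A small React frontend started in Replit. Components live in src/components/,
shared hooks in src/hooks/, and API helpers in src/lib/api/.

## Conventions
- Function components in TypeScript, one per file, named in PascalCase.
- Styling with Tailwind utility classes; avoid adding new CSS files.

## How to run and verify changes
- npm install to set up, npm run dev to start the dev server.
- Run npm run lint and npm test before proposing changes.

## What to avoid
- Do not touch src/legacy/ without asking; it is waiting for a rewrite.
```

None of that is code. It’s exactly the context a new teammate would otherwise pick up in a hallway conversation, written down where the agent can read it.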
I don’t have all the answers; in fact, the more I explore these tools, the more questions come up… but maybe that’s the point: that we stop looking for absolute truths in model rankings and start having better conversations about what we bring to the table.
Because this moment is strange: we’re leaving behind the era where everything depended on what we could do with our own hands, and entering one where what matters most is how we design systems that help us. We’ve been through something like this whenever our teams grow and we have to delegate, but this time it’s different. That’s uncomfortable, but it’s also powerful.
It’s no longer about writing more code or launching the prototype faster. It’s about thinking differently and thinking more deeply. About paying attention to what we used to ignore because we were too busy doing what AI now does for us.
Is your code readable for a human? Good. But is it also readable for an AI?
Is your work environment collaborative? That’s good. But is it prepared to collaborate with an autonomous agent that can’t “intuit” your intentions or chat with you over coffee?
Are you leaving space for these tools to contribute, or are you expecting them to solve things without understanding them?
I still don’t have clear answers, but I’m sure we need to ask better questions.