2026-03 - W2

AI Doesn’t Reduce Work — It Intensifies It

I’m finding more and more blog posts and articles about how AI is a real double-edged sword for productivity.

I believe this will have a large impact on mental health. I do think we’re at the starting point of the “collapse” of systems fully generated by AI.

Or take coding. Everyone brags that 50% of their code is now written by Cursor or Claude. But does that mean your engineers only work three hours a day? Of course not. They ship more. They take on more projects. They build features that would have been deprioritized six months ago. The AI didn’t buy them free time. It raised the bar on what “shipping fast” means.

You might know the feeling. Maybe you’ve stayed up too late pursuing a project that started as a quick experiment, or caught yourself prompting during lunch or in the last few minutes before you told yourself you’d be done for the day. “One more prompt” turns into 50. “I’ll just fix this one bug” turns into a vibe coding marathon.

Testing Microservices, the sane way

Old article, but still quite relevant! I need to re-read this article twice, or even thrice!

Compound Engineering: How Every Codes With Agents

This is a very interesting concept that should improve the agent’s output in the long run.

A compound engineer orchestrates agents running in parallel that plan, write, and evaluate code. The process happens in a loop that looks like this:

  1. Plan: Agents read issues, research approaches, and synthesize information into detailed implementation plans.
  2. Work: Agents write code and create tests according to those plans.
  3. Review: The engineer reviews the output itself and the lessons learned from the output.
  4. Compound: The engineer feeds the results back into the system, where they make the next loop better by helping the whole system learn from successes and failures. This is where the magic happens.
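The loop above can be sketched in a few lines of Python. This is my own toy rendering, not Every’s actual implementation: the function bodies are placeholders where real agent calls would go, and the names (`plan`, `work`, `review`) are just labels for the four steps.

```python
def plan(issue: str, lessons: list[str]) -> str:
    # Step 1 — Plan: agents read the issue and past lessons,
    # then synthesize a detailed implementation plan.
    return f"plan for '{issue}' (informed by {len(lessons)} lessons)"

def work(plan_text: str) -> str:
    # Step 2 — Work: agents write code and tests from the plan.
    return f"code implementing: {plan_text}"

def review(output: str) -> list[str]:
    # Step 3 — Review: the engineer reviews the output and
    # extracts lessons learned from it.
    return [f"lesson learned from: {output[:40]}"]

lessons: list[str] = []  # persists across iterations — the compounding part
for issue in ["fix login bug", "add CSV export"]:
    p = plan(issue, lessons)
    out = work(p)
    # Step 4 — Compound: feed the lessons back so the next
    # iteration of the loop starts from a smarter baseline.
    lessons.extend(review(out))

print(len(lessons))  # one lesson accumulated per iteration
```

The key detail is that `lessons` lives outside the loop: everything else is stateless per issue, but the feedback channel persists, which is what makes each pass through the loop better than the last.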

Still, I wonder where the fine line is between always adding previous failures/successes and accumulating too much useless context… At some point there will be so many “lessons learned” that the agent will need a “best practices” catalog, with a search engine…
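To make that concern concrete, here is a deliberately naive sketch (my own assumption, not something from the article) of what such a catalog might do: instead of stuffing every lesson into the prompt, score them against the current task and keep only the top few. Real systems would use embeddings rather than word overlap, but the shape is the same.

```python
def score(lesson: str, task: str) -> int:
    # Naive relevance metric: count words shared between
    # the lesson and the task description.
    return len(set(lesson.lower().split()) & set(task.lower().split()))

def top_lessons(lessons: list[str], task: str, k: int = 2) -> list[str]:
    # Return only the k most relevant lessons, keeping the
    # agent's context small no matter how many lessons exist.
    return sorted(lessons, key=lambda l: score(l, task), reverse=True)[:k]

lessons = [
    "always run migrations in a transaction",
    "mock the payment gateway in tests",
    "prefer cursor pagination for large tables",
]
print(top_lessons(lessons, "add pagination to the users tables"))
```

With a selector like this, the catalog can grow without bound while the context fed to the agent stays fixed at `k` entries.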

Production query plans without production data

That’s like a dream come true! It’s always a pain to debug a performance issue that only shows up in production (as always), and most of the time the only way to test, experiment, or verify a fix is to deploy and pray. Now it seems possible to have a much shorter feedback loop!

The 8 Levels of Agentic Engineering — Bassim Eledath

An interesting take that splits “Agentic Engineering” into several levels:

  1. tab complete
  2. agent IDE
  3. context engineering
  4. compounding engineering
  5. MCP and skills
  6. harness engineering & automated feedback loops
  7. background agent
  8. autonomous agent team

That reminded me of Steve Yegge’s post on Gas Town, where he described the 8 stages of developer evolution with AI.

I think “levels” might not be the right word, because it implies there’s only one way to “evolve” or “level up” — a single path forward, which is obviously false. As always, it depends on the context, and going all in on an autonomous agent team for everything is prone to failure.