2026-03 - W3

Are AI agents slowing us down?

Quite interesting, and maybe a bit sad, that some big tech companies are baking AI usage into their performance and hiring strategies:

Meta is taking token usage into account during perf reviews. Inside large tech companies, it’s becoming a career risk to not use AI at an accelerated pace, regardless of output quality.

Context Anchoring

A priming document captures project-level context: the tech stack, architecture patterns, naming conventions, code examples. It is relatively stable, updated quarterly, or when significant architectural changes occur. It is shared across all features and all sessions. It tells the AI “here is how this project works.”

A feature document captures feature-level context: the specific decisions made during development, the constraints that shaped them, what was considered and rejected, what remains open, and the current state of progress. It evolves rapidly, potentially every session. It tells the AI “here is where we are on this specific piece of work, and how we got here.” … Because code captures outcomes, not reasoning. … There is a practical byproduct worth noting. A feature document of fifty lines carries the same decision context that hundreds or thousands of lines of implementation code cannot express at all, and it does so at a fraction of the token cost. … The feature document fills this same gap for AI collaboration. It is, in essence, a living ADR, one that evolves in real-time as decisions are made, rather than being written after the fact.

Quite an interesting way of keeping track of a feature, like an ephemeral artifact. It could be useful for long or big features, although I’m not sure I’ll ever have to implement a feature that big: most of the time we split projects into tasks as small as possible, so that the AI agent can tackle each one by itself.

Rob Pike’s 5 Rules of Programming

  • Rule 1. You can’t tell where a program is going to spend its time. Bottlenecks occur in surprising places, so don’t try to second guess and put in a speed hack until you’ve proven that’s where the bottleneck is.
  • Rule 2. Measure. Don’t tune for speed until you’ve measured, and even then don’t unless one part of the code overwhelms the rest.
  • Rule 3. Fancy algorithms are slow when n is small, and n is usually small. Fancy algorithms have big constants. Until you know that n is frequently going to be big, don’t get fancy. (Even if n does get big, use Rule 2 first.)
  • Rule 4. Fancy algorithms are buggier than simple ones, and they’re much harder to implement. Use simple algorithms as well as simple data structures.
  • Rule 5. Data dominates. If you’ve chosen the right data structures and organized things well, the algorithms will almost always be self-evident. Data structures, not algorithms, are central to programming.
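Rule 2 is easy to state and easy to skip. As a minimal sketch (my own example, not from Pike’s text), here is what “measure before tuning” looks like in Python with the standard `timeit` module, comparing two candidate implementations before deciding which one deserves attention:

```python
# Rule 2 in practice: measure two candidate implementations before
# "optimizing" either one. Uses only the stdlib timeit module.
import timeit

def join_concat(words):
    # Naive string concatenation in a loop.
    out = ""
    for w in words:
        out += w
    return out

def join_builtin(words):
    # The idiomatic str.join.
    return "".join(words)

words = ["x"] * 10_000
t_concat = timeit.timeit(lambda: join_concat(words), number=100)
t_join = timeit.timeit(lambda: join_builtin(words), number=100)
print(f"concat: {t_concat:.4f}s, join: {t_join:.4f}s")
```

The point is not which variant wins, but that the decision comes from a number, not a guess.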

To be a better programmer, write little proofs in your head

Good, timeless principles for crafting solid systems.

When you’re working on something difficult, sketch a proof in your head as you go that your code will actually do what you want it to do. A simple idea, but easier said than done: doing this “online” without interrupting your flow takes a lot of practice. But once you get really good at it, you’ll find that a surprising amount of the time your code will work on the first or second try. It feels a little magical.

The Robotic Tortoise & the Robotic Hare

Interesting benchmark comparing Claude Opus 4.5 and a “dumber” model on a task, especially regarding the time spent.

With 3x faster responses, I could add an extra cycle: “critique the plan and address the critiques.” In the time the hare was still thinking, the tortoise ran another lap.

It is indeed better to choose the right model for the job at hand. BUT, I do think the benchmark was biased, because the author used opencode for the smaller model and claude code for Opus 4.5. And I have observed that claude code’s system prompt was tweaked recently (maybe just my feeling, as I did not measure…), and it is way slower than other harnesses.

Thinking—Fast, Slow, and Artificial: How AI is Reshaping Human Reasoning and the Rise of Cognitive Surrender

Research paper about how AI is affecting the human mind.

A key prediction of the theory is “cognitive surrender”: adopting AI outputs with minimal scrutiny, overriding intuition (System 1) and deliberation (System 2). Across studies, participants with higher trust in AI and lower need for cognition and fluid intelligence showed greater surrender to System 3. Tri-System Theory thus characterizes a triadic cognitive ecology, revealing how System 3 reframes human reasoning and may reshape autonomy and accountability in the age of AI.

The term “cognitive surrender” seems a bit pejorative, but maybe that’s for the best, so that we are reminded not to surrender our critical thinking…