Jeremy Keith writes:
But you lose the learning. The idea of a cybernetic system like, say, agile development, is that you try something, learn from it, and adjust accordingly. You remember what worked. You remember what didn’t. That’s learning.
Outsourcing execution to machines makes a lot of sense.
I’m not so sure it makes sense to outsource learning.
I think this is already becoming a real issue in many workplaces experimenting with agentic development.
When agents generate or modify code, we can test the output. We can verify that the bug is fixed or that the feature works. But the learning loop becomes blurry: how do humans actually understand why the fix works and how the system reached that solution?
On our team, this has already led to entire review sessions where we read through the agent's session logs and the final code diff after deployment. Not just to confirm the bug was fixed, but to reconstruct the reasoning behind the fix.
That reconstruction step matters because it’s where human learning normally happens in software development.
If execution becomes automated but understanding disappears, we risk building systems that work but that fewer and fewer people can actually explain.
There is probably a real need for new tooling here: something that treats agentic development sessions as first-class artifacts. Not just code diffs, but structured traces explaining decisions, iterations, and why certain approaches were abandoned.
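To make the idea concrete, here is a minimal sketch of what such a trace might look like as a data structure. Everything here is hypothetical: the class names, fields, and example task are illustrative, not taken from any existing tool.

```python
# Hypothetical sketch of an agent session trace as a first-class
# artifact: decisions, their rationale, and abandoned approaches.
from dataclasses import dataclass, field


@dataclass
class Decision:
    summary: str               # what the agent decided to do
    rationale: str             # why it chose this approach
    abandoned: bool = False    # True if this path was later dropped
    abandoned_reason: str = ""


@dataclass
class SessionTrace:
    task: str
    decisions: list[Decision] = field(default_factory=list)

    def record(self, summary: str, rationale: str) -> Decision:
        decision = Decision(summary, rationale)
        self.decisions.append(decision)
        return decision

    def abandon(self, decision: Decision, reason: str) -> None:
        decision.abandoned = True
        decision.abandoned_reason = reason

    def surviving_approach(self) -> list[Decision]:
        # The path a human reviewer would walk through afterwards.
        return [d for d in self.decisions if not d.abandoned]


# Illustrative session (the task and decisions are invented):
trace = SessionTrace(task="Fix null-pointer crash in checkout")
first = trace.record("Patch the symptom in the handler",
                     "Fastest visible fix")
trace.abandon(first, "Crash reappeared via a second code path")
trace.record("Add a null check at the data-loading boundary",
             "Addresses the root cause upstream of both crash sites")
```

A reviewer could then read `surviving_approach()` alongside the abandoned entries, which is exactly the reconstruction work described above, but captured at generation time instead of reverse-engineered after deployment.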
In other words: if AI handles more of the execution, we need better ways to preserve the feedback loop for humans.
