Software Practices from the Scrap Heap?
I’m going to keep writing half-baked things about AI, because it’s what I’m spending a noticeable number of hours thinking about these days, and because I don’t think it’s possible to be fully baked on the topic. Apologies in advance to those who find it irritating.
Had a call with Jesse the other week and we discussed how we’re speed-running the last few decades of engineering management. The swarm craze of the last few weeks brings us solidly up to Fred Brooks.
Writing about software development over the years has always suffered from at least two key challenges:
- the vast majority of the writing suffers from survivor bias and serves as a record of something that worked for this particular set of humans in this particular set of circumstances.
- most of the writing has implicitly assumed a highly rational, implacable, rule-following software engineer, completely at odds with all the software engineers I’ve known (and wildly at odds with the best of them).
But if our software theories often assumed emotionally and morally stunted automatons with limitless patience, might they be useful in today’s context?
- TDD, always hit and miss for humans, clearly works well for agents.
- The folks I’ve spoken with who are experimenting with maximalist approaches to agentic coding talk a lot about components and isolation in ways that remind me of microservices (terrible for humans), and CORBA before them.
- Bake-offs (the agent might get attached to its implementation, but only for the lifetime of that shell)
- Sub-agents have some of the insights of pair programming encoded in the practice, enough that it makes me wonder about the rest of the XP toolkit.
- UML and Rational Rose coming back?
I wonder what else from the scrap heap is worth poking at?
Update: Big Up Front planning was pointed out to me as another classic technique that worked poorly for humans and is coming back as planning becomes more central.