You Ain't Gonna Need It, YAGNI. A helpful tool to remind us to carefully consider building no more than we need to solve the problem in front of us. I see it mostly applied to software development questions, but the same tension between investment cost, flexibility, and eventual value applies elsewhere, and it's on my mind because I've been thinking about two very different experiences with internal process.
Without going into too much depth, there was an ancient process that my team ran infrequently and which had been slated for removal for a long time. At one point it had probably been a sleek and streamlined racing yacht but, by the time I encountered it, it handled like a barnacle-covered leaky skip.
Big changes, however, were always postponed in favour of more pressing concerns and, when I last worked my way through it, I found that a whole new outrigger had been bolted onto the side to support another process.
It can be tempting to think of this crustiness as YAGNI at a kind of macro level: the process is not worth investing in, given what we know. We ain't gonna need to defoul this boat because we can live with the frustration and messiness at its current level for as long as we expect the process to be needed.
Martin Fowler notes that this is a common confusion:
> Yagni only applies to capabilities built into the software to support a presumptive feature, it does not apply to effort to make the software easier to modify. Yagni is only a viable strategy if the code is easy to change, so expending effort on refactoring isn't a violation of yagni because refactoring makes the code more malleable.
Generalising, I read this as: YAGNI does not simply mean adding debt. Of course, our process was not easy to change or extend, and it's clear today that it would have been much better to improve it earlier.
I said I was thinking about two very different processes. The second was so deeply buried that I didn't even know it existed and that we were responsible for it.
When it came to light, all that we had were the artefacts from the previous time it had been run. Actually, that makes it sound better than it was: we had some source files and some output artefacts, the two did not match, and the route from the former to the latter was not clear. Oh yes, and there was content that had to be kept consistent between output artefacts, which meant editing multiple source artefacts.
I helped to do an emergency round of editing and assumption-making to produce an acceptable update and then we stepped back. Much like the first process, this was something that we thought would only need to be run occasionally, so how much should we invest in it?
We looked for existing tooling that would help but didn't find anything. On the back of that research, though, I thought that I could build something that would take a single source input and generate all of the required outputs.
So I prototyped it, evaluated it, and then built the smallest program that I could to reproduce what we'd made by hand. This included resisting requests to generalise the nascent tool for others to potentially use later.
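To make the shape of that tool concrete, here is a minimal sketch of the "single source, many outputs" idea. The post doesn't say what the real source or output formats were, so every file name, field, and format below is hypothetical; it's only meant to show how keeping one source of truth makes regeneration cheap.

```python
# Minimal sketch: one source file, several generated output artefacts.
# All names, fields, and formats here are assumptions for illustration.
import json
from pathlib import Path
from string import Template

# Templates keyed by output file name; each pulls from the same source fields,
# so content that must stay consistent is edited in exactly one place.
TEMPLATES = {
    "summary.txt": Template("$title (owner: $owner)\n$description\n"),
    "register.csv": Template("title,owner\n$title,$owner\n"),
}

def generate(source_path: str, out_dir: str) -> None:
    """Read the single source file and regenerate every output artefact."""
    fields = json.loads(Path(source_path).read_text())
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    for name, template in TEMPLATES.items():
        (out / name).write_text(template.substitute(fields))

if __name__ == "__main__":
    generate("source.json", "generated")
```

Adding a new output format in this shape is just another entry in the template table, which is roughly why the later requests for variants were cheap to absorb.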
When the next request came in for a slight variant and some different output formats, those changes were simple to add. And when the next set of content changes was requested, that was an edit to the single source file, and regenerating the output took mere seconds. In the event, many more changes were requested, and they were turned around much faster than they would have been if we hadn't had the tool.
I think I applied YAGNI at the code level here, and to very good advantage. I delivered the thing we needed today, today, and I was able to make the changes that tomorrow requested, tomorrow. I even refactored once I'd made those changes.
But notice that we made a significantly different choice at the process level. Why didn't we simply write a process doc and live with the hand-editing, making another clunky-but-occasional procedure?
And that's where I came in. I was reflecting on that choice. I think there were some key differences between the two processes. For the second:
- there was no legacy process, so we had to do something
- it would be very easy to do the wrong thing by hand, so automation could add a lot of value
- there were no external dependencies to consider
- an audit trail looked important and was missing; code version control gave that for free
- I didn't believe that future changes would be rare
- I had a gut instinct that it was "worth it"
At the code level, Martin Fowler notes that YAGNI is not always the right play either:
> Having said all this, there are times when applying yagni does cause a problem, and you are faced with an expensive change when an earlier change would have been much cheaper. The tricky thing here is that these cases are hard to spot in advance, and much easier to remember than the cases where yagni saved effort. My sense is that yagni-failures are relatively rare and their costs are easily outweighed by when yagni succeeds.
At this point I feel like we made the right choice in building a tool for the second process and I think, as objectively as I can, that I made the tool well. By working through this blog post, I also feel like I would be more cautious about accepting YAGNI for process in future because, in my experience, we tend to be less aggressive about refactoring processes, and so "YAGNI" there is more like "yes, let's add some debt."
I'm now interested in another question though: what are reasonable heuristics for YAGNI to avoid finding ourselves crying "We Actually Did Need It"?
Image: https://flic.kr/p/eziRa