This post is some over-analysis to reach a pretty obvious conclusion: It makes sense to fix the really terrible parts of your workflow, but be very careful of going overboard.
The programming workflow has a very nice property: since the tools you use are mostly software, and since you write software, whenever something is slow or annoying, you can go in and fix it for yourself.
This isn’t always practical - for example, if compilation is really slow, most people don’t have the expertise to dig into the guts of LLVM and come up with a 50% speedup. However, there’s often an approachable angle. Rather than fixing the compiler, maybe rewrite the slowest-to-compile chunks of the codebase.
Let’s say you succeed, and your time usage goes from looking like this
And since that was such a great success, let’s say you go through this a couple more times, until you reach this
and by now, the time cost of improving the workflow is getting high enough that it doesn’t really make sense to go any deeper. So, what have you accomplished?
Well, you’ve made your time something like TODO% more efficient, which is great. But on the other hand, there’s still a problem. You’re spending TODO% of your time, the biggest chunk of it, on an annoying, repetitive task. This didn’t come out of nowhere - you had to do the same thing before all these improvements, but at the time it was overshadowed by enough other stuff that it wasn’t really a problem.
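To see why this happens, here’s a small sketch with entirely made-up numbers (the task names and hours are hypothetical, not measurements): repeatedly halve whatever the current worst offender is, and watch how the biggest remaining chunk still claims a large share of the smaller total.

```python
# Hypothetical time breakdown (hours per week) before any fixes.
# All task names and numbers are invented for illustration.
tasks = {"compiling": 10, "flaky tests": 6, "code review": 4, "typing code": 3}

def fix_worst(tasks, speedup=0.5):
    """Halve the time spent on whichever task is currently the worst offender."""
    worst = max(tasks, key=tasks.get)
    tasks[worst] *= speedup
    return worst

total_before = sum(tasks.values())
for _ in range(3):  # three rounds of workflow improvement
    fix_worst(tasks)
total_after = sum(tasks.values())

print(f"total: {total_before} -> {total_after} hours")
worst = max(tasks, key=tasks.get)
print(f"biggest remaining chunk: {worst}, {tasks[worst] / total_after:.0%} of the total")
```

With these numbers, three rounds cut the week from 23 hours to 12.5 - a big win - yet the largest remaining task still eats roughly a third of the total, because "largest" is always measured against whatever is left.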
So basically, you can fix the slowest and most annoying parts of your workflow, but whether you do or not, you’ll still end up spending a bunch of time on the slowest and most annoying things - because that’s always defined relative to where you’ve ended up, not relative to the more-broken situation where you started.
* Maybe I should instead title this The Hedonic Treadmill of Nuisance? But that’s not quite right either.