You can't build everything: opportunity cost in software development
Why can't you just add this new feature or fix that lingering bug? After all, it only costs the time and effort for that one change - right? But the cost isn't just the work you choose to tackle - you're also paying the price of all the other work you didn't do instead. That's the opportunity cost.
Opportunity cost is the recognition that we have limited resources, and that by choosing one option we exclude the others. And the biggest constraint in software development is often engineering time. By choosing to tackle a given development task - implement a certain feature, fix a specific bug, run a given experiment, tackle an ops issue, clean up tech debt, etc. - we are also choosing not to tackle the other possible tasks we could work on instead.
One of the things I wondered about earlier in my career: if a given feature seemed like a good idea, why didn't we implement it? Or if a part of the codebase was messy, why didn't we clean it up? Over time I learned from experience that it's not just whether something is a good idea - it's whether it's worth doing that work over all the other work we could do instead.
Your software development teams only have so much bandwidth to build, and how they spend that limited time and focus is a key tradeoff. The cost of tackling a project isn't just the time and effort of the project itself - it's also all the other projects you didn't tackle.
Where to spend your time?
Knowing that by choosing to work on certain tasks you're also not working on all the other tasks you could - how do you then decide what to work on? Which new feature do you build? Or do you focus on fixing bugs reported by customers? Or taming unruly alerts? Do you spend time learning the hot new tech, or use tech you already know to build the capabilities your users are asking for?
Relying on the whims of individuals, whichever customer is yelling the loudest, or other ad-hoc decision-making practices won't cut it. The work you do tackle won't be worth the work you give up.
You also don't want to end up in analysis paralysis, where you spend so much time deciding that minimal time is left for execution. And you don't want to just throw darts at a board to decide.
Instead, having a defined approach for choosing what work to tackle with your teams' limited time and focus is key. Spending a bit of time to define the impact of each piece of work - plus the rough level of investment that makes sense for it (e.g. two weeks) - can help you decide what is most valuable to execute.
And using real-world data and research from users to help define the impact makes that analysis more valuable.
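As a rough illustration, here's a minimal sketch in Python of what an explicit impact-vs-investment comparison could look like. The tasks, the 1-5 impact scale, and the value-per-week scoring are made-up assumptions for the example, not a prescribed formula - any consistent heuristic your team agrees on works.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    impact: int    # estimated value to users, 1 (low) to 5 (high) - a made-up scale
    weeks: float   # rough engineering investment in team-weeks

    @property
    def score(self) -> float:
        # Simple value-per-week ratio; swap in whatever heuristic your team prefers
        return self.impact / self.weeks

# Hypothetical backlog items with guessed impact and investment
backlog = [
    Candidate("New export feature", impact=4, weeks=6),
    Candidate("Fix flaky checkout bug", impact=3, weeks=1),
    Candidate("Tame noisy alerts", impact=2, weeks=2),
]

# Highest value per week of team time first
for c in sorted(backlog, key=lambda c: c.score, reverse=True):
    print(f"{c.name}: impact {c.impact}, {c.weeks} weeks, score {c.score:.2f}")
```

The exact numbers matter less than making the comparison explicit: whichever item you pick, you're implicitly passing on the others.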
Validating ideas
To help choose the most impactful work to tackle, validate your ideas through qualitative and quantitative research. On the qualitative side, activities like user interviews, feedback surveys, email questions, etc. can help you hear directly from users what problems they need to solve. (The book The Mom Test is a great resource for conducting user interviews.)
And on the quantitative side, adding user analytics to your research can help you understand how users are using the existing capabilities of your software. Are users getting the value you expect out of the capabilities you've built? Or is it more valuable to iterate on those features so they are more helpful to your users, vs moving on to build the next capability?
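As a sketch of what that instrumentation can look like, here's a hypothetical usage event recorded when a user exercises a feature. The event name, fields, and record_event helper are assumptions for illustration, not any specific analytics product's API.

```python
import json
import time

def record_event(event_name: str, properties: dict) -> None:
    # Hypothetical helper: in practice this would send to whatever
    # analytics pipeline or product you already use.
    payload = {"event": event_name, "ts": time.time(), **properties}
    print(json.dumps(payload))  # stand-in for an actual network call

# Emit an event when the user runs the export feature, so you can later ask:
# how many users actually use it, and how often?
record_event("report_exported", {"user_id": "u_123", "format": "csv"})
```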
Scale your investigation to the investment level of the work in front of you. Investing two weeks in a tweak to an existing capability? Not much team capacity is at risk, so you'll likely need only a minimal level of upfront validation. But if you're looking to spend an entire year of a team's time building a large new capability, more upfront user validation can help increase the chance your investment will pay off vs being wasted on a capability your users don't need.
Smaller steps
Once you've researched and chosen the work, it's time to execute. But even with the upfront research you've put in, there is still a risk the work won't have the impact you expected. Is there a way to reduce that risk even further?
There is! Work in smaller increments. And deliver those increments to your users.
As we touched on above, there is less risk in a smaller investment. If the project ends up not being as valuable to your users as you thought, you've spent less time on it than you would have on a larger project.
So if you break projects down into smaller increments where you can learn whether the project is valuable, less of your time is at risk than if you spent months or years of development before the project reached the hands of users. Working in smaller steps also gives you the chance to make smaller course corrections in response to user feedback as the project goes along - if needed. And if the project's impact is exceeding your expectations, it gives you the opportunity to double down on the work and get even more value out of it.
Summary
Being conscious of the opportunities you're passing on when you decide what work to tackle can hopefully help you prioritize the most impactful of the possibilities you could choose. By working in smaller increments vs big-bang projects, you reduce the risk that the work's impact falls short of what you expected. And then you can double down on the work that is impactful to further enhance its benefits.
I spent time and focus writing this blog post over other topics I could have covered. (Or, let's be real, over playing the new factory-building game I'm obsessed with.) Hopefully it's helpful to you, and was worth the opportunity cost of the other posts I could have written instead!