Miroslav Nikolov

How To Kill A Fly With A Shotgun

April 25, 2023

Or building an optimal tech stack and doing so continuously.

Use a small set of well-known tools to solve as many problems as possible. Set a high bar for adding new technologies. Your tools should be battle-tested, and you need to master them. There should be a common understanding (a process) within the team for “buy vs. build”. If there isn’t, a single person must have the final word.

The Problem #

Cost (hidden). Every tech decision carries a cost. That cost splits across time, coordination/communication, learning effort, cognitive load, etc. (sometimes generalized as maintenance cost). It isn’t evenly distributed, but it has to be paid anyway. More code, more layers of complexity, more modules = higher cost.

Well-known tech people tend to say most of the cost relates to communication, coordination, and time. You talk more when the code is unclear, unfamiliar, or new, and it takes longer to get started and get a good grip on your tools if they constantly change. Conquering such a fluid tech stack becomes a causa perduta.

To make it worse: the cost is not always visible upfront, and sometimes it arrives with a delay. This inevitably tricks us into the false belief that engineering choices are free — “We are using these technologies and everything is working ok”. Such an attitude forms “thanks” to the lack of decision traceability in the team and the lack of honesty when evaluating the tech setup, aka — “No one counts how many times we banged our heads against this wrong or premature abstraction, so we will continue to do it”. Adding more tools looks cheap at first, while the long-term cost is hard to track (but usually high). Every line of code is pricey, and maybe you don’t need to write it.

Let’s see what happens in reality.

Metaphorical Reality Check #

  1. “This library is popular and solid, so we don’t have to reinvent the wheel”.

And it goes on: a package for this task, another for that one. Until your backpack is full while the project has barely begun. Why do we tend to form such complicated tooling? Why don’t we start lean? Partly due to assumptions. We devs like to assume things. “What if this piece of UI is used in another place — I’ll extract it into a component”? Or “this form looks simple, but we’ll surely have more complicated ones, so I’ll bring in a form-handling lib now”. This would be perfectly fine if at least 70% of our assumptions (“what if”, “just in case”, “maybe”) got fulfilled in the end. But that’s not the case; otherwise, agile methodologies wouldn’t have emerged and pointed out: base assumptions on facts, data, or something empirical. Look at the past and present. No wishful thinking. We devs like to assume things but are overall bad at predictions. Most of the time we guess or hope (and call it a fact-based decision).
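To make the form assumption concrete: a simple form rarely needs a form-handling library on day one. Here is a minimal sketch in plain JavaScript — the field names and validation rules are hypothetical, purely for illustration:

```javascript
// Minimal validation for a hypothetical two-field signup form.
// A few lines of plain JavaScript often cover the "simple form" case
// long before a form-handling library earns its place in the stack.
function validateSignup({ email, password }) {
  const errors = {};
  // A deliberately simple email check — enough for a first iteration.
  if (typeof email !== "string" || !/^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(email)) {
    errors.email = "Please enter a valid email address.";
  }
  if (typeof password !== "string" || password.length < 8) {
    errors.password = "Password must be at least 8 characters long.";
  }
  return { valid: Object.keys(errors).length === 0, errors };
}
```

When the “more complicated forms” actually arrive — if they ever do — that is the moment to reopen the library conversation, this time with facts in hand.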

While on the topic, I remember once having to deal with two popular React libs — SWR.js and React Router — just to discover how challenging their interplay is. Particularly when it comes to error handling, as both want to own it. No docs, no GitHub issues. You are on your own. The rabbit hole is how modules interact with each other, not their number. But that’s another story. The lesson: start your journey with a 10 kg backpack instead of a 30 kg one — that’s what a professional hiker will tell you.

  2. “We chose the Ferrari for off-road because it’s fast and shiny”.

Starting a greenfield project is both exciting and full of unknowns. You can (re)build everything from scratch. At the same time, you need to be flexible and perhaps less ambitious, leaving room for changes and uncertainty. MVPs are risky. They are like long trips where you start from point A and need to finish at B but don’t know much about the roads in between. It might be a highway (rarely), off-road, climbing, or a combination. In terms of toolsets, you get to drive either something like a Ferrari — fast, comfortable, and full of electronics/features — or a pickup truck — basic, easy, and multipurpose. I often see people choosing the Ferrari.

  3. “Our spaceship enters the meteor ring at high speed, what should we do? Turn on the autopilot”.

You always need a good understanding of your tools. It allows you to remain confident when things get out of control. That happens. When things go bad, it’s like entering a meteor ring with your spaceship. You have to either switch to manual control or turn on the autopilot. I call “autopilot” a highly ambitious tech stack that makes many promises but is clumsy when you need to steer away from its narrow direction. Manual control requires you to master your tools to such an extent that no combination of (n) external libraries can outperform your dexterity. Manual control also makes conversations about docs, community, and package popularity obsolete. Don’t switch to autopilot when flying in uncertainty.

  4. “We need to do simple task A, so let’s pick lib X, which can handle it together with 99 more use cases we may face at some point”.

Indeed, in some situations we don’t account for the burden of a general-purpose library. Even a router lib can be general-purpose. Allow several of those to slip in, and the chances increase that you start serving the tech stack instead of it serving you. When similar features from different modules overlap or step on each other’s toes, that adds to the hidden recurring cost you must pay. Similarly, what starts as a simple task may end with you obeying a whole framework and its specific rules, in such an elusive way that you only regret the choice when it’s too late. I can recommend “The best code is the code you don’t have to write” as a good source of ideas on the topic. Or simply try to use your existing tools to do task A.
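As a sketch of “use your existing tools to do task A”: suppose task A is just reading a query parameter. The platform’s built-in URL API already handles parsing and decoding, so no query-string or routing library is needed for this narrow job (the parameter name and default below are hypothetical):

```javascript
// Reading a query parameter with the built-in URL API — a narrow task
// that doesn't justify pulling in a general-purpose routing library.
function getPageParam(href) {
  const url = new URL(href);
  // URLSearchParams takes care of decoding and repeated keys.
  return url.searchParams.get("page") ?? "1"; // default to page 1
}
```

If the routing needs genuinely grow later, the library conversation can happen then, based on an actual problem rather than an anticipated one.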

What about the "how"s behind our non-optimal decisions?

Overcomplicate Your Tools #

We often overcomplicate the problem. There is usually a narrow domain problem that we then extrapolate into the future to make bigger. This is where package conversations start, and I can testify how furiously people refuse to address the task with the existing setup or to build something small in-house. This is wrong. You need common ground when it comes to problem definitions.

Upfront (waterfall) planning is another reason to start with heavy tools. We devs expect things to become complex and prepare for it in advance. That’s bad for at least two reasons. First, things very often don’t get complex. And second, if they do, it’s not soon. In both situations, your team is doomed to carry all that heavy tech without really using it.

Magpies like shiny objects, and so devs like shiny new tools. A new, well-marketed technology with a certain hype attached is much more likely to end up in your stack than a tool you would build yourself. Such libraries and frameworks promise to solve a bunch of problems. One thing to note, though: they are rarely built for the specific problem you have, nor are they built to interact with the set of tools you have already chosen for the project. This is where much of the later headache comes from. No one advertises these things.

There is a common engineering mindset I call let’s use it now and maybe remove it later. It’s when you are so eager to try a package that when someone questions your intention, you say, “ok, but let’s try it now and replace it later if we’re not happy”. “Later” never comes, and “later” is expensive. I have stories in my pocket about modules never being dropped (“later”) just because people’s emotional attachment was so strong at the moment. If we’re still at the emotional level, we can’t talk pragmatically. Don’t pick that module in the first place.

It's not rare that being negligent about tech choices is fueled by the business missing instruments to audit our work — I don't mean micromanaging — but, in contrast, fully trusting us to do the best thing. You need a good work ethic/moral to resist the urge to “do whatever you want” vs “do whatever is best”.

A few words on how to make things optimal.

Simplify Your Tools #

Use a small set of well-known tools to solve as many problems as possible. Set a high bar for adding new technologies. Your tools should be battle-tested, and you need to master them. There should be a common understanding (a process) within the team for “buy vs. build”. If there isn’t, a single person must have the final word.

Count on well-known (to you) tools that are proven in real-life projects (battle-tested). If something has been useful in the past and makes sense today — pick it without a doubt.

Try to squeeze the maximum out of your tools before reaching for anything else. You will be surprised how capable a small set of tools can be. Don’t pick the Ferrari.

Consider adding new tech to your stack only as a last resort. Third-party libs shouldn’t be your default choice. They increase the number of interactions within your app and tools and constrain development within a certain frame. That’s not ideal, especially for MVPs and risky areas. You need flexibility. You need manual control in the meteor ring.

Become a craftsman with your tools. Nothing compares to mastery, and no AI can beat it. No doubt about that. To achieve high expertise, you need critical thinking about how and when you write code. Every new tech you add postpones the point of mastery. Do invent your small wheel from time to time.

How we miss the point of mastery

It’s good if your team has a shared mindset when it comes to technology. If that’s true, most of this blog post may be irrelevant to you. But sometimes it’s not the case (I don’t want to claim “often”), and then you will argue with your colleagues endlessly.

To avoid arguing endlessly, you need someone to make the final decision. At this point, you should say NO to democracy. Consensus === (often) compromise, which leads to non-optimal decisions the whole team will have to live with. The business too (though they don’t know it). It’s what “group responsibility is personal irresponsibility” produces in the end. Have a tech lead. Discuss. Don’t vote.

Usually we learn things the hard way. That’s not necessarily bad, as such experience is very valuable — you can’t buy it with money. Making reasonable decisions becomes almost a gut feeling as you go through more code and more situations that spark tech conversations. Anyway, I have put Dan McKinley’s “Buy vs Build” strategy to the rescue below. It resonates with me; I hope it makes sense to you too.

Buy vs Build Stress Test #

  1. What is the problem we really have? Often there is no problem at all. If there is, we have already solved it, or it's not our business.
  2. How would we solve the problem with the current setup? Think twice. Most of the time you don’t have to continue to the next question. What happens instead is you hear arguments like “Tech A is very popular, well tested, with a community and docs, etc. We should use it”. These arguments are sound, but they belong to question no. 3, not here.
  3. How do we add the new tech in a low-risk way? This usually means some kind of strategy for gradual integration. Full replacement is often not ideal.
  4. What is the plan in case of failure? In reality, there is no plan, but you should have one.
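To make question 2 concrete, here is a hypothetical run of the stress test (the scenario is mine, not from Dan McKinley): suppose the “problem” is debouncing a search input. The current setup — plain JavaScript — already covers it in a few lines, so the conversation never needs to reach question 3:

```javascript
// A debounce helper in plain JavaScript — the kind of narrow problem
// the "current setup" can usually solve before a utility library is added.
function debounce(fn, delayMs) {
  let timer = null;
  return (...args) => {
    clearTimeout(timer);                            // cancel the pending call
    timer = setTimeout(() => fn(...args), delayMs); // schedule a fresh one
  };
}
```

A dozen lines you fully understand and can debug is often a better trade than a dependency whose 99 other features you never asked for.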

Disclaimer #

I laid down some bold statements in this post that you may argue with or perhaps fully disagree with. My goal is not to convince you but rather to give these personal observations a form, send them out to a wider public, and maybe get brave opinions back — be they critical.

Many of the thoughts expressed touch on greenfield projects (MVPs, PoCs), where you need flexibility and speed, and not so much quality (at least not in the beginning). An area where we devs like to imagine the bright future and figure things out beforehand. Meanwhile, MVPs get parked, and that’s sad.

I don’t want to insist on the universality of the ideas above either, but building a solid, manageable tech stack (not necessarily a perfect one) is a challenge in itself and, as such, requires effort. In that regard, if something here is reusable, it will hopefully be of value to you.

The writings below have influenced this brain exercise in one way or another and may sound familiar to you. Worth (re)checking.

Resources #

💬 Discussion on Reddit