Development Strategies
Awoo! Finally an article devoted to the wheel-dogs of the software development world! This page describes some of the GoHusky methodologies that apply to software development. Read carefully! You will learn to make the best decisions to make it to the end of each safari!
The Chibanga Method
“Professor Chibanga makes no predictions for 2007, only predictions for 2070. Because in 2070 nobody will be around to check whether they were true or false, and it is much harder to pester and sue Professor Chibanga. Professor Chibanga does not like lawsuits.”
-- Gato Fedorento
The Chibanga Method describes a way of getting things done without worrying about foreseeable consequences, and proposes that this is acceptable as long as those consequences don’t actually affect the outcome of your work. It is pointless to spend time and resources on concerns that won’t affect your intended purpose.
Some examples of practices that yielded bad results and, as such, would be wrong applications of the Chibanga Method:
- You will remember the time Teslas started frying the SSDs on their MCUs due to excessive logging. Wear is a known limitation of SSDs, but no one thought it was a concern until enough time went by and the drives actually started to wear out.
- The sheer number of implementations based on Unix timestamps on 32-bit systems, which should make some people sweat bullets as we approach 2038 and they realize that their code base has not been updated since the nineties. All because someone decided to Chibanga the design of the UNIX timestamp (see the sketch right after this list).
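To make the 2038 rollover concrete, here is a minimal sketch in C (the variable names are mine and purely illustrative): one second past 03:14:07 UTC on 19 January 2038 no longer fits in a signed 32-bit time_t.

```c
#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

int main(void) {
    /* Seconds since the Unix epoch at 2038-01-19 03:14:07 UTC,
     * the last moment a signed 32-bit time_t can represent. */
    int64_t rollover = INT32_MAX;
    int64_t one_second_later = rollover + 1;

    if (one_second_later > INT32_MAX) {
        printf("%" PRId64 " seconds no longer fits in 32 bits\n",
               one_second_later);
    }
    return 0;
}
```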
Now, I am not saying that the Chibanga Method is bad. In fact, GoHusky encourages the use of the Chibanga Method. All I am saying is that it is not to be applied blindly. If there is even a remote chance that your “chibangering” will come back to bite you, it’s better to apply the DIRTFEET method.
Consider the following scenario:
You are asked to develop a loyalty program for a bank. Given that banks usually run on systems older than the CEO’s grandma, you are asked to develop this in C. You start panting at the sheer idea of having to calculate leap years. But how are leap years calculated?
A leap year occurs when the year is a multiple of 4, except when it’s a multiple of 100, unless it’s also a multiple of 400.
This is a bit of obscure knowledge, but let’s consider your options for implementing this logic:
- You can apply the Chibanga Method and just test whether the year is a multiple of 4. After all, the year 2000 already fell under the 400 exception, so the naive check still gets it right, and by 2100 the bank will most likely have gone bankrupt after all the investors have moved to newer fintech platforms. No one will know, correct results will be had, and you get away with performing two fewer operations. Even the infrastructure team will thank you for the efficiency.
- Otherwise you can apply the DIRTFEET method and implement it right. It sure will be prettier, but you will end up with a slightly less efficient solution that takes more time to write, and you will find yourself staring at the bottom of a whiskey bottle after doing unpaid overtime to get that leap year check in. Both checks are sketched right after this list.
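For reference, a minimal sketch of both checks in C; the function names are mine and purely illustrative.

```c
#include <stdbool.h>
#include <stdio.h>

/* Chibanga Method: multiples of 4 only. Correct until 2100, since the
 * year 2000 already falls under the 400 exception anyway. */
static bool is_leap_year_chibanga(int year) {
    return year % 4 == 0;
}

/* DIRTFEET method: the full Gregorian rule, valid for any year. */
static bool is_leap_year_dirtfeet(int year) {
    return (year % 4 == 0 && year % 100 != 0) || (year % 400 == 0);
}

int main(void) {
    /* The two checks only disagree on years like 2100, 2200 and 2300. */
    printf("2024: %d vs %d\n",
           is_leap_year_chibanga(2024), is_leap_year_dirtfeet(2024)); /* 1 vs 1 */
    printf("2100: %d vs %d\n",
           is_leap_year_chibanga(2100), is_leap_year_dirtfeet(2100)); /* 1 vs 0 */
    return 0;
}
```

The Chibanga version costs a single modulo; the full rule needs up to two more, which is where the “two fewer operations” above come from.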
If we were to apply the infamous 80/20 rule to the Chibanga Method, I would say that 80 percent of all problems can be chibangered while 20 percent of them can be solved by DIRTFEET. And that is a fair cop.
The DIRTFEET Method
DIRTFEET stands for Do It Right The First Execution, Every Time. And it means exactly what it describes. It’s basically a fancy acronym for “do it once, do it right”, but it emphasizes that you need to get your paws dirty if you want to get stuff done. And doing it right the first time around always yields the best results: doing things the wrong way usually ends up costing double the resources, both in analyzing the issues and in redoing everything. When the Chibanga Method cannot be applied to your problem, the DIRTFEET method is your failsafe.
Always consider the importance of the outcomes of whatever implementation decision you're making. They all come at a cost.
Let's go back to the example of UNIX timestamps (someone will have my head for this one): You are tasked with developing a logging solution that monitors the time taken between steps of a process, in milliseconds.
The implementation seems straightforward at first. Except that when you log the very first entry, the delta between the previous (non-existent) entry and this one is the current UNIX timestamp in milliseconds minus zero, and you end up with a number that doesn't fit a 32-bit integer. And since you've just gone and coded the whole solution with 32-bit integers, it's going to take a LONG (pun intended) time to refactor everything.
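A minimal sketch of the trap, with made-up names: the very first delta is the entire current Unix time in milliseconds, which is already far beyond what a signed 32-bit integer can hold.

```c
#include <stdio.h>
#include <stdint.h>
#include <time.h>

int main(void) {
    /* No previous entry exists yet, so its timestamp defaults to zero. */
    int64_t previous_ms = 0;
    int64_t current_ms  = (int64_t)time(NULL) * 1000; /* Unix time in ms */

    int64_t delta = current_ms - previous_ms; /* the full current timestamp */
    printf("first delta fits in int32_t? %s\n",
           delta <= INT32_MAX ? "yes" : "no"); /* prints "no" */
    return 0;
}
```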
You are presented with two options to work around this issue in your code:
- Option A: You decide that if the delta value is greater than the maximum value of a signed 32-bit integer, you'll set it to zero. After all, there is no way your processes will have a delta of 25 days (that's roughly INT32_MAX in milliseconds) between steps; they time out after 8 hours at most. Thus, this is a technically safe approach.
- Option B: You decide that if the previous step's timestamp is zero, you'll set the delta to zero as well. After all, you only want it at zero when it is the first step. This leaves the door open to an integer overflow if your process somehow takes more than 25 days between steps, but then again, you know that can't happen. This is a functionally safe approach. Both options are sketched below.
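A sketch of the two workarounds, with hypothetical function names; both assume the rest of the code base insists on 32-bit deltas.

```c
#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

/* Option A: clamp any delta that would overflow down to zero. */
static int32_t delta_option_a(int64_t previous_ms, int64_t current_ms) {
    int64_t delta = current_ms - previous_ms;
    return delta > INT32_MAX ? 0 : (int32_t)delta;
}

/* Option B: treat "no previous entry" as a zero delta, and otherwise
 * trust that no step ever takes more than ~25 days. */
static int32_t delta_option_b(int64_t previous_ms, int64_t current_ms) {
    if (previous_ms == 0) {
        return 0; /* first step, nothing to compare against */
    }
    return (int32_t)(current_ms - previous_ms); /* can still overflow */
}

int main(void) {
    int64_t first_step_ms = 1700000000000LL;      /* some Unix time in ms */
    int64_t next_step_ms  = first_step_ms + 1500; /* 1.5 seconds later */

    printf("A: first=%" PRId32 " next=%" PRId32 "\n",
           delta_option_a(0, first_step_ms),
           delta_option_a(first_step_ms, next_step_ms)); /* A: first=0 next=1500 */
    printf("B: first=%" PRId32 " next=%" PRId32 "\n",
           delta_option_b(0, first_step_ms),
           delta_option_b(first_step_ms, next_step_ms)); /* B: first=0 next=1500 */
    return 0;
}
```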
Which one is the better option here, in the light of DIRTFEET? The answer is: neither. You are just chibangering the solution, and that is just fine. You should have gone with those longs from the start.