Thursday, March 27, 2014

Institutional Risk Aversion

I had a few chats today with people who got me thinking. What happens when an institution's practices entail so much delay, instability, and potential cost that failure is considered an excessively bad thing? I likened the situation to that of a patient splayed out on an operating table, chest split and spread for open-heart surgery: anything less than overwhelming success means death.

Do rational people in this situation become risk averse? Not exactly. Rational people are already risk averse and now seek to manage the risk. They may not have the tools to do that, but they'll try. People without the proper tools, though, may fall into a mindset I've observed firsthand: anti-risk fundamentalism.

Anti-Risk

Most rational people understand that with risk comes reward, and that the two are correlated to some degree. People who come to believe they cannot effectively manage risk may instead seek to avoid it altogether. In that futile effort, the costs of the insurance schemes they impose skyrocket, and risk accumulates instead of being diffused.

Fundamentalism

The anti-risk orientation radicalizes into a belief system and a persistent worldview. Blind to the extreme costs and consequences their avoidance will eventually bring, believers set themselves and others up for even bigger failures with colossal consequences. As the mindset infiltrates people and process, it becomes cliché: a dogma of accepted practice, followed as a matter of creed.

Life Finds a Way

We don't want to be the open-heart surgeon for whom one false move kills the patient. We want to be a microbe, rapidly spawning generations of offspring whose sole purpose is to test the boundaries of what can be done, spawn more potential solutions, and die cheaply. And to do it all as quickly as possible.

In other words, we shouldn't be avoiding failure. We should let the number of failures float freely upward while simultaneously pushing the individual cost of each failure as arbitrarily close to zero as we can practically get.

I won't use the "A" word here, but there are lots of tools available today that help keep risk from pooling at higher and higher levels. Service orientation. Migrations. Test-Driven Development. Version Control. Configuration Management. Virtual Machine environments. Automated Deployment. It isn't nearly as difficult as it used to be to get an isolated environment up and running.

Another code smell

I came across a little fragment of code with an all-too-familiar pattern:
if ( $complicated->test("one") && $complicated->test("two") && $complicated->test("N") || 1) {
  // complicated nested block of code
}
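
For the record, here's why that fragment is dead: && binds tighter than ||, so the whole condition reduces to "something || 1", which is always truthy. A quick sketch - JavaScript stand-ins with hypothetical names, not the original code - demonstrates it:

```javascript
// Stand-in for the smelly conditional above (hypothetical names).
// Because && binds tighter than ||, `a && b && c || 1` is always truthy,
// so the tests never matter and the block always runs.
function smelly(test) {
  if (test("one") && test("two") && test("N") || 1) {
    return "ran";
  }
  return "skipped";
}

// The honest rewrite: delete the dead guard and keep the block.
function cleaned() {
  return "ran";
}

console.log(smelly(function () { return false; })); // "ran" - the tests are ignored
console.log(smelly(function () { return true; }));  // "ran"
console.log(cleaned());                             // "ran"
```

The guard adds nothing but confusion; the behavior is identical with it deleted.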

I have seen le pet in code!

This little smell occurs whenever someone believes they shouldn't remove a bit of dead code. Perhaps it serves as some sort of misplaced placeholder. Perhaps they didn't write the line and feel a weird obligation to leave it alone, even after they've already defaced it. Maybe they don't feel "ownership" of the code. Or they just forgot to clean up a stray bit of crap they were slinging during debugging.

In this case, the programmer had left a comment, something to the effect that the rest of the conditional was left in because we might need it later. Unfortunately, there was no explanation of tests one through N, or of why we might ever need a dead and forgotten line of code.

Many real-world gas-passing events arise from overconsumption of low-quality Franken-food, or from poor adaptation to real foods that contain fiber. Code smells like this have analogous causes: coding without adequate version control, and attempting to tolerate uncertainty instead of resolving it.

Do you really need to hang on to that bit of cruft? Even if the code is that important, once it's in your source repository's history you should delete it. You can always find it again on the off-chance that you really do need to refer to it. More likely you'll forget about it, and if you ever do need to implement whatever the heck it was trying to do, you'll find you didn't need it anyway.

Tuesday, March 11, 2014

Farting with Code

Agile developers often talk about code smells. I would argue that a code smell can be strong enough to warrant a more pejorative name. I dub such a smell

The Code Fart

Here is a fine example of a programmer passing virtual gas, a real stink bomb:

/*! jQuery v1.7.1 jquery.com | jquery.org/license */
(function(a,b){
... stuff removed for brevity
})(window);

$(document).ready(function(){
 var screenWidth  =  (document.body.clientWidth != '') ? document.body.clientWidth : window.innerWidth;
 var screenHeight =  (document.body.clientHeight != '') ? document.body.clientHeight : window.innerHeight; 
 $.get("http://www.my.code.fart.com/appname/images/somefakeimage.jpg?s="+screenWidth+'x'+screenHeight);
});

I want to point out what was done here, because most of the stink is non-obvious.
  • Using Apache MultiViews to pretend that a PHP script is actually a JPG file.
  • Using an AJAX get to load an image that is never used.
  • Using GET for a request that isn't idempotent. It stores the data, and GET is supposed to be a safe, side-effect-free read.
  • jQuery is loaded on many application pages, and every time it loads - even from the cache - it spuriously dumps an entry into a log table.
  • Hard-coding the domain name. Seriously, who does that now?
  • The most obvious problem: hacking the jQuery distribution file itself. Seriously? And none of this was under source control.
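
For contrast, here's a less gassy sketch of the same screen-size logging. Assumptions are mine, not from the original: a relative /appname/log endpoint and the screenPayload helper are hypothetical names, and the snippet lives in its own file instead of inside the jQuery distribution:

```javascript
// Build the log payload explicitly instead of smuggling it into a fake image URL.
// screenPayload is a hypothetical helper name, shown for illustration only.
function screenPayload(width, height) {
  return { s: width + "x" + height };
}

// Only wire up the handler when jQuery is actually present, so this file can be
// loaded, linted, or tested on its own.
if (typeof $ !== "undefined") {
  $(document).ready(function () {
    var w = document.body.clientWidth || window.innerWidth;
    var h = document.body.clientHeight || window.innerHeight;
    // POST, not GET: the request stores data, so it isn't a safe, idempotent read.
    // Relative URL: no hard-coded domain.
    $.post("/appname/log", screenPayload(w, h));
  });
}
```

$.post and $(document).ready are standard jQuery APIs; the endpoint, of course, is made up. The point is structural: separate file, honest HTTP method, no fake image, no domain baked in.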

This sort of coding is too clever by half, and half again. It has the rank smell of putrescine and cadaverine, and it made me so sick I had to vomit up this posting just to flush it from my system.