Monday, December 31, 2012

Guinea Pig B

This is from R. Buckminster Fuller's book, Guinea Pig B:

I happen to have been born at the special moment in history in which for the first time there exists enough experience-won and experiment-verified information current in humanity's spontaneous conceptioning and reasoning for all humanity to carry on in a far more intelligent way than ever before.

I am not being messianically motivated in undertaking this experiment, nor do I think I am someone very special and different from other humans. The design of all humans, like all else in Universe, transcends human comprehension of "how come" their mysterious, a priori, complexedly designed existence.

I am doing what I am doing only because at this critical moment I happen to be a human being who, by virtue of a vast number of errors and recognitions of such, has discovered that he would always be a failure as judged by society's ages-long conditioned reflexings and therefore a "disgrace" to those related to him (me) in the misassuredly eternally-to-exist "not-enough-for-all," comprehensive, economic struggle of humanity to attain only special, selfish, personal, family, corporate, or national advantage-gaining, wherefore I had decided to commit suicide. I also thereby qualified as a "throwaway" individual who had acquired enough knowledge regarding relevantly essential human evolution matters to be able to think in this particular kind of way. In committing suicide I seemingly would never again have to feel the pain and mortification of my failures and errors, but the only-by-experience-winnable inventory of knowledge that I had accrued would also be forever lost--an inventory of information that, if I did not commit suicide, might prove to be of critical advantage to others, possibly to all others, possibly to Universe. The realization that such a concept could have even a one-in-an-illion chance of being true was a powerful reconsideration provoker and ultimate grand-strategy reorienter.

The thought then came that my impulse to commit suicide was a consequence of my being expressly overconcerned with "me" and "my pains," and that doing so would mean that I would be making the supremely selfish mistake of possibly losing forever some evolutionary information link essential to the ultimate realization of the as-yet-to-be-known human function in Universe. I then realized that I could commit an exclusively ego suicide--a personal-ego "throwaway"--if I stopped responding to the voice of the wants only of "me" and instead committed my physical organism and nervous system to enduring whatever pain might lie ahead while possibly thereby coming to mentally comprehend how a "me-less" individual might redress the humiliations, expenses, and financial losses I had selfishly and carelessly imposed on all the in-any-way-involved others, while keeping actively alive in toto only the possibly-of-essential-use-for-others inventory of my experience. I saw that there was a true possibility that I could do just that if I remained alive and committed myself to a never-again-for-self-use employment of my omni-experience-gained inventory of knowledge. My thinking began to clear.

Friday, December 21, 2012

"GIT Suckiness" ~= /GIT's Success/ ?

As I was looking up some information about XINC, I came across a rant by Scott James Remnant and had to comment, if for no other reason than it sounded like I could have written it.

Anyway, I felt like I needed to save my comment here too, because it fits with my train of thought about Software as a culture.

Quoth He:

I've complained about GIT's idiosyncrasies myself more than a few times, and came to a subtle realization thanks to a quip I read somewhere by Linus T. GIT is not an SCM or VCS; GIT is a collection of tools and interface primitives from which to construct such a system.

People new to GIT think it is an engine. GIT is nuts and bolts and gaskets and pistons and carburetors. The collection is more than half assembled, so it looks like an engine. Popular articles promoting GIT, and to some extent the documentation pages, can perpetuate the misconception by talking about GIT from the bottom up. But GIT is more of a bare-bones kit.

The advantage of the primitives approach is that it allows a diversity of practices and promotes a kind of fecundity in its ecosystem. It is somewhat, but not completely, process agnostic. The disadvantage is that you have to grapple with working out a workflow yourself, and the universe of discourse includes a large amount of chatter at a low level of mechanical detail, which tends to look like noise when you're trying to address a process-level issue.
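
To see the nuts and bolts for yourself, you can assemble a commit in a fresh repo using only plumbing commands -- no `git add` or `git commit` porcelain (file name and message are illustrative):

echo "hello" > greeting.txt
git hash-object -w greeting.txt          # store a blob, print its SHA-1
git update-index --add greeting.txt      # stage it in the index
TREE=$(git write-tree)                   # write the index out as a tree object
COMMIT=$(echo "plumbing demo" | git commit-tree $TREE)   # wrap the tree in a commit
git update-ref refs/heads/master $COMMIT # point a branch at it
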
Edit: On the other hand, my son was gracious enough to point out that the Angry Hitler meme has been adapted for new GIT users ...

Wednesday, December 19, 2012

Unposed Questions -> Punted Answers

Gaps of comprehension exist in every communicative act, especially programming. The absence of questions on the part of the programmer should not be taken to imply certainty of understanding. More likely, it means the programmer is punting.

Saturday, November 17, 2012

Cultured Software, Part II

OK, I'm done quietly crying bitter tears of joy as PHP intrudes upon my thoughts, so let me continue with less rant but just as much conjecture.

As a profession, software struggles to remain relevant even as the technology becomes intractably entwined with our culture. Software ecosystems can build up code detritus so rapidly that it entraps their bat-like programmers as if it were some great pile of guano sealing off the mouth of their cave. There continue to be gaps between understanding, resourcefulness in construction, discipline in deployment, and proper accountability in maintenance. Some subcultures appear to be actively trying to shrink those gaps, yet they persist and increase both in quantity and scale. Like any infrastructure - roads, fuel and water pipelines, bridges, etc - software exists in an environment and it degrades over time unless sufficient back-pressure (work) is exerted to keep it together.

There is a kind of programmer with an unhealthy avoidance: an unwillingness to reconsider coding practices and to account for sufficient code churn, that is, refactoring. They don't value, count the costs of, communicate, or ethically set expectations for the effort required to make software. Such programmers may look at software as a one-shot effort rather than an on-going concern. We have probably all seen the ill-conceived notion of job security that holds work together with spit and baling wire - through regular but unpredictable manual intervention.

Now, I wrote "unhealthy," when the reality may be that such a programmer may be completely on top of their game, making money hand over fist, and having a large gaggle of more than slightly confused impressionable clients. "Unhealthy" is more a reference to the long term consequence to the profession of the uncounted costs, and to the client of the spit and baling wire that lie behind what might be a beautiful artifice. Nor is it necessarily a "fault" of the programmer: he is reacting organically to the culture that allows him (or her) to survive and thrive.

Refactoring is a software analog of composting: it breaks down code to make the primitive knowledge entrained in the source available to feed regrowth. In composting, some substances require special handling to reduce and lots of stuff just doesn't break down at all.  Unless conditions are insanely stable, there is a finite amount of time to use compost before it weathers and living creatures scavenge its useful energy. Reworking a software system, for instance, often surfaces hidden assumptions, wrong-headed wild guesses, and punts; the mind of the programmer reveals structure in part by ripping the system apart slowly and digesting the implications.  

A few substances that make their way into compost piles can poison and turn the pile putrid (alive, but fetid), even if in small amounts they are relatively inert and innocuous. Software has such seeds of disruption too. One class of putrefying software artifact I like to think of as a fixed point, to abuse a term from algebraic theory.

Cultured Software, Part I

With few exceptions, most software is crap. You know it. That makes sense: software originated as the study of life processes, and what is life but recycled crap? 

Human languages produce a lot of crap too. Clearly this is a side-effect of being a living process, but it isn't an accident. Side-effects are central to life processes.  Every generation of organism, every shift in language, produces detritus. In biology it takes the form of excrement and body mass; in language it is seen in archaic forms, marginal slang, and jargon. If a process is alive, it generates layers of crud.

Crud builds up over time unless broken down. In biological systems the breakdown products become the source from which new growth feeds. In language, memes beget metaphors and metaphors mixed, merged and partially forgotten by successive generations become the kernel of meaning in the etymology of new words and idiomatic phrases.

If you have ever walked an old forest, you've seen how layers of crud establish ecosystems. Centuries of dead plants, bugs, animal corpses and their excrement make up your typical forest floor. Ironically, old growth forests can be threatened by invasive earthworms, which strip the built-up humus, destabilizing the habitat in their wake. In practice, language too can be eaten away by ideological fundamentalists in an effort to return to roots that never were.

Software cultures unnaturally lack the level of fecundity found in living ecosystems. Vendors, language designers, and users have yet to hit upon the mix of value choices and whatever thresholds are necessary to establish a self-sustaining environment. It may well be that the tensions between the community and vendors inhibit or prohibit the system from reaching a sustainable state.

PHP (and perhaps to a lesser extent Perl) is a good example of an ecosystem gone septic, succumbing to its own popular success with a sea of craptasticism. You can find a lot of code there, but it is of a dubious, messy yet inorganic quality - like a garbage dump. The comparison cases - Ruby, Python, and Node.js - present smaller yet more vibrant and cohesive cultures; there is a lot of crud there too, but the public code is more organic and self-referential, akin more to a compost heap than a garbage pile.

I suspect a big part of the difference is in the recognition (or lack thereof) of the problem of factoring dependencies with solutions for package management and effective abstraction constructions. PHP just doesn't present an ecosystem that values this general class of problems, and this is reflected in the unusable crud it builds up over time.



Wednesday, October 31, 2012

phP is tWistEd nAIls

Suppose you've got a class that makes a database connection (forgive the mysql_ interface - alas, for I am stuck in 5.2.9 land):

class App {
  private $link;   // holds the connection resource
  public function __construct() {
    // Side effect: this also becomes PHP's implicit "default" link
    $this->link = mysql_connect( blah blah blah );
  }
}

Then suppose that we've just included this class into another application. Perhaps it is a component or a framework. In any case, we don't expect to have any interaction with the internals.

Tough luck. You've already violated encapsulation big time, introduced a hidden serial ordering dependency, and established a crunchy fixed point in the code.

You might do this:

require_once "klass.inc.php";

$link = mysql_connect( blah blah, true); // new link regardless of connect strings
$fu = new Klass();

require_once "bar.inc.php";

mysql_query("select something from somedb where blah blah");


Well, OK. If that's your idiom, then you can't do anything really interesting. It is going to fail as soon as anything deep within the bowels of bar.inc.php invokes mysql_connect. That's because mysql_connect scribbles all over the internal buffer that holds the last link connected, as if it were a single global variable.

PHP's default connect-link idiom is a little worse than useless, because it seduces naive users into writing a lot of this kind of fragile, easy-to-break, difficult-to-fix code. When invoking mysql_connect, you are not expressing the intent "use this link and database by default" - the built-in default behavior is merely a side-effectual accident of requesting another connection. Any subsequent call to mysql_connect changes the implicit connection link.
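
A minimal demonstration of the clobbering (hosts and credentials are placeholders):

$a = mysql_connect("hostA", "user", "pw");   // implicit default link is now $a
$b = mysql_connect("hostB", "user", "pw");   // implicit default silently becomes $b
mysql_query("select 1");                     // runs against hostB, not hostA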

You could instead use an explicit link in the call to mysql_query:

mysql_query("select something from somedb where blah blah", $link);

And that works. Sort of. But now we need to rewrite all queries to use a global value, provide gratuitous function arguments to pass gratuitous links around, or provide a centralized query object with a stack to manage the links ourselves. Each case has its shortcomings, and none is really sufficient to deal with the underlying structural flaws in the interface design.
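
For the record, here's a bare-bones sketch of that third option, a centralized query object (the class name and shape are mine, not a prescription):

class Db {
  private $link;
  public function __construct($host, $user, $pass) {
    // Pass true for the fourth argument to force a genuinely new link
    $this->link = mysql_connect($host, $user, $pass, true);
  }
  public function query($sql) {
    // Always pass the link explicitly; never lean on the implicit default
    return mysql_query($sql, $this->link);
  }
}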

Such structural flaws in PHP libraries make me imagine I'm framing a house with recycled nails. You can make them work if you whack the hell out of them, but even then they are twisted and you'll end up with a world of hurting fingers to show for it. It is just a stupid way to build stuff.

Wednesday, September 19, 2012

The energy cost of software language choices


After trying unsuccessfully to build a 32 bit version of Perl to work on OSX for testing a cron job to dump Oracle tables, I'm reminded yet again of how irritating Perl can be.

It isn't just the uglier-than-sin choices of syntax. It isn't just the utter inexpressiveness of the language. Perl ticks me off because Perl codebases seem to degrade rapidly into junk, defying attempts to repair and improve.

I'm conjecturing here that there is a more or less quantifiable floor of syntactic and semantic complexity inherent in the grammars and processing models assumed by any given computer language.

Perhaps that is too obvious. Or maybe it doesn't hold in any objective sense, but only in some fuzzy probabilistic sense. The key idea is that somewhere between the skills acquirable by your average joe (or jane) programmer and the expressiveness of the language, there is a best-fit curve whose slope describes the technical debt one incurs just by participating in the ecosystem.

Think of it this way: in order to stay alive an organism needs to expend some amount of energy on a daily basis. In peaceful environments with plenty of free food, the outlay may be small, whereas in hostile environments with limited food availability the energy demands may extinguish populations.

There is an energy cost associated with just existing. That cost is modified by the ruggedness of the terrain, by the existence or absence of predators, parasites, and symbiotes, the relative ease with which food/fuel can be acquired, and the relative instability of the environment.  In software, the landscape is formed of choices, of technologies and techniques and artifacts; the legacy of code is the crud that forms the ecosystem.

Computer languages were originally devised to study the behavior of living systems.  One does not normally associate "energy expenditures" with programs, except in a trivial sense of big-O run-time estimates or CPU cycles. The outlay is not (just) in the running of a program, but in the attention and on-going investment that programmers and other stakeholders must make just in order to stand still in the ecosystem.

Some choices, like Perl, show great fecundity and stability, but pose a high on-going cost as well in the form of near-gratuitous abuse of syntax and parasitic idioms. Others, like Ruby, show great attention to forming symbiotic DSLs and component acquisition, but often do so at the cost of living on the quasi-stable edge of package-dependency chaos, beyond which is the wild and woolly domain of PHP. The V8-Node-NPM-Coffeescript-esque ecosystem is still relatively young but is similar to the Ruby ecosystem. Still others, like Java and C#, require not-insubstantial inputs of capital to prevent the codebase from seizing up into a dead, crystallized mass.

Then again, there is a cost associated with leaving one ecosystem and entering another. That has implications on components and packaging, but I'll leave that musing for another day.

Friday, August 24, 2012

Autopoietic Code

Autopoietic means, roughly, self-creating. So a system that exhibits autopoiesis is, at least in part, self-maintaining.  Such a system feeds-back its own patterns to determine its future expressiveness.

I'm thinking now of things that many people own and maintain over long periods of time. A common view is that intentional human involvement somehow makes such situations non-autonomous and therefore less legitimate examples of self-ordering principles. But that is a rather anthropocentric viewpoint; if it were ants pushing around piles of dirt, no one would be questioning whether the bugs and soil together form mutually interdependent parts of the same system.

Seeing the pattern is really a matter of scale. If you look too closely and ignore the environment around the artifacts, you'll probably miss the connections, and misattribute the emergent side-effects of feedback loops. Old houses can start to look like Frankenstein monsters from years of small alterations; the feedback happens between the house and successions of owners. Software is like that too, except perhaps in an accelerated timeframe - where it may take decades for the architecture of a house to lose focus, a software project touched by multiple hands quickly turns into a Big Ball of Mud unless steps are taken to introduce positive feedback.

Sunday, August 19, 2012

The Purpose of Life

Evolutionary biology is often posed anthropomorphically in the popular media, as if every form and function had purpose, a result predicated upon an intent, an intent being a rationally contrived solution put together in a process that involves reflection. Given that a lot of evolutionary biologists would reject the notion of a God, it is hard to see this as anything but a kind of intellectual skeuomorph, meant to ease a bitter pill.

But what about the other way of thinking about the purpose of life, as in, what natural laws or universal properties are satisfied by the presence of life? Assuming that life need not exist is easy: we have not seen it anywhere other than here on Earth, and even here we know that extremes of temperature, pressure, and acidity are incompatible with life.  But life does exist here, so it is very safe to conclude that in some sense natural laws dictate that life must exist given the conditions reflected in the history of our environment.

People studying complex systems talk about phase transitions at the edge of chaotic regions. Somewhere along the way in this regime we call Earth, conditions went through a phase transition and life took shape as a result.

At that moment, the state of the system was at a fork in the road. One way led to life, and satisfied more laws with less energy or more entropy than the alternative.  But which laws?

And life will stop whenever some laws are no longer satisfied, for instance if there is too much energy, or too little. The threshold for continued life is probably different from the original formation, based on environmental changes introduced by the byproducts of living systems, but it is a similar question: what laws are being satisfied by the fact that a living system populates a medium, instead of it remaining sterile?

Edit: in case anyone is so befuddled as to wonder why anyone would ask such a question, the subjects involve autocatalytic chemical reactions, autopoietic structure, and the emergence of order among oscillating parts, among other very fundamental processes.

Friday, August 3, 2012

Storing Passwords in GIT

Eeek... made that mistake again: edited a test script config file that contained a username and password, and somehow got it committed and pushed up to a public repo. My Bad. 

What to do... well, first thing is: change the password and if I can, the username. Immediately. Done. 

Second thing: purge the repo of the offending file. 
git filter-branch --index-filter 'git rm --cached --ignore-unmatch MyBadPasswordFile.cfg'   --prune-empty --tag-name-filter cat -- --all

That will rewrite all my commits, but at this point I just don't care. If I did care, I might just leave the file up, since the login information is no longer valid anyway.

Typically I'd also throw in a line in .gitignore, to prevent the file from being seen again.
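
For completeness, the rewritten history still has to be forced up to the remote before the public copy is clean (assuming the remote is named origin), and then the ignore entry goes in:

git push origin --force --all
git push origin --force --tags
echo "MyBadPasswordFile.cfg" >> .gitignore
git add .gitignore && git commit -m "ignore the credentials file"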

Wednesday, April 18, 2012

Web Design Meetup Reminder Button

Google has some convenient tools. One that would be very convenient for emailing or posting on blogs is this little form for composing Google Calendar event reminder buttons.
[Google Calendar reminder button for the Raleigh Durham Web Design Meetup, May 8th, 6:30pm to 9pm, at Panera Bread at Brier Creek]
The button above is an example generated for the Raleigh Durham Web Design Group Meetup's May 8th event, where we will be discussing the use of the change management tool GIT.

Monday, April 16, 2012

Ever have one of those days?

I'm supposed to be doing my books today, running reports and paying bills -- that sort of thing. So the day wasn't off to a great start.

Then I found out that, after paying Intuit about $300 in February for the privilege of using their software to do my own payroll myself, I will not be able to pay myself because QuickBooks Pro 2009 is falling off the support list. My guess is that the software still works, but Intuit builds checks into their service that block access when the version is out of date. So I've got fully functioning software that I can't use.

Now, I hear a new QuickBooks for Mac 2012 is out, and it doesn't even provide a payroll feature. Instead, you're expected to sign up for anywhere from $300 to $600 in annual fees for a Web app. Seriously, $25 a month is a pretty hefty fee for a SOHO freelancer to pay for the privilege of writing out a single monthly paycheck for yourself.

Yak #1

So I update QuickBooks, finding an OK upgrade price on Amazon. Download, go run an errand, try to install...



Yak shaving time... or bang-head-on-keyboard time. [Regarding the screenshot that was here: I just noticed that the search box in the OSX Lion file dialogs doesn't work. The indexing service must have crapped out. Something else to fix...]

I'm running VirtualBox and the upgrade downloader says there isn't space on the device.

Yak #2

So how does one increase the size of the virtual disk in VirtualBox?  
  1. Shut down and clone the virtual machine into MyNewClone (or whatever). This will ensure that if something goes wrong, you haven't lost your original virtual machine or its snapshots.
  2. cd ~/VirtualBox\ VMs/MyNewClone     # or whatever you called it
  3. Do an ls -l to see the vdi file;
  4. VBoxManage modifyhd MyNewClone-disk1.vdi --resize 15360   # where new size is 15GB 
  5. Start your MyNewClone VirtualBox image.  
You get a virtual disk with one partition, followed by empty space. And it is time to bounce on to a new Yak: XP (yes, I'm still running XP; it is more stable and smaller than its successors) has no real built-in disk partition management that can resize partitions (diskpart won't work). Loading a live GParted CD and booting the vdi with it in VirtualBox may be an option, but it seems strange and unproductive.

So what to do? Windows freeware: EaseUS offers a home version of its partition management tools, MiniTool Partition Wizard does as well, and both support disk resizing. A quick download and several minutes later, the disk partitions are merged into one bigger partition. That's Yak #2 shaved, one to go....

Back to Yak 1

Time to restart the QuickBooks Pro 2012 upgrade. Fortunately, I keep my accounting files checked into a Git repo on a disk outside the VM, so I've always got backups; but that reminds me I should make sure I've committed... done. Go find the upgrade tool in a folder on my VM XP desktop, and restart it... looks like it could be a long process... time for another break. Hopefully I'll have that Yak shaved and can move on to shave a few Yaks for the government, and then can get on to the real task at hand which is to write a paycheck. 

On a related note, I looked on opensourcerails.com and elsewhere through Google, but not surprisingly found no Rails-based accounting software at all. There may be more out there in PHP land, but when I looked, what I saw in PHP was a mess. From reading forums and my own experience, this is a pain point for SOHOs and freelancers. Sure, we're cheap, but considering the expense and complexity of Intuit's service-bound fee-ware, there seems to be room for someone to eat into QuickBooks sales with a clean one- or two-person payroll tool with support for a 1099 or two.

[edit: OK, that was pretty cool. Apparently I bumped my power cord a few hours ago and detached the magsafe adapter. VirtualBox automatically saved the VM when I stepped away due to the low power warning. ]

Friday, April 6, 2012

Immature Cowboys vs Ranch Hands

Years ago a colleague, who had immigrated from Eastern Europe as a youth, expressed her frustrations about working with teammates, whom she referred to as "cowboy programmers".

I thought it might be that she didn't mesh with the tightly knit social fabric of the team. And that may be true to some extent, but it does not mitigate her observations.

A week ago, another colleague outlined similar difficulties coping with his developers. In a typical situation, one of the developers spent a large chunk of the budget re-implementing an unnecessary object-database mapping component in a Grails project, modelled after a Ruby component. Then the developer left, leaving the team with a piece of unfinished work that was unsupportable and fragile, and that locked the system into unnecessary dependencies on antiquated libraries.

As for myself, I appreciate a good John Wayne style epic, knowing that the real-life phenomenon is much dirtier and more mundane at the same time. I dislike the metaphoric reference for reasons that I shall for now only briefly summarize; perhaps a better model is the ranch hand.

The real nature of a cowboy is to work together as a team, utilizing the same tools and technologies anyone else has, to manage the same herd as everyone else. Yes, the cowboy personifies individualism, but that is much more about capability than direction. Real cowboys, that is, ranch hands, don't flout instructions, don't abandon the herd to ride off on their own whims, and never, ever, ever leave rope hitches half-tied.

I interviewed at an institution's development shop recently. As far as I could tell, the prospective employer's development practices were ambiguous and very reactive rather than well-defined and proactive. For instance, they did not test using any sort of automation, hadn't adopted any higher-level frameworks or tools, and could not communicate anything resembling a defined process. They also self-admittedly spent a good deal of time in fire-fighting mode. So their code was highly specialized, very one-off special-case stuff, and their processes non-existent. Consequently they don't mature as developers or as an organization. That's the real cost of substituting immature cowboys for ranch hands.

I'm fairly certain that the above describes the preponderance of development shops. It is a problem for our industry, one which reflects very poorly upon the status of programming as a profession.

Tuesday, March 20, 2012

Why bother?

I infrequently but regularly revisit the university and state job posting sites to see what's out there. Recently I noticed a shift in the way the sites were managed: they've gone to PeopleSoft applications. It is a real shame, because the new interface is crap.

[Screenshot: the NC university jobs search interface]
It looks good though, right?

Except that the layout components are fixed. To handle multiple devices, they just scale everything. Try to visit it on a smartphone and you get greeked type (too tiny to read).  Still, that is a minor ergonomic faux-pas, easily worked around with pinching and panning.

Except that it also has basic accessibility issues: missing labels for input fields, missing alternative text for an image, no skip-to links to avoid the menu and results links, and result link texts that are contextually inadequate ("view"). These are technical infractions that may appear minor, but remember that this is a public space run by a public institution, and as such it must conform to ADA accessibility requirements.
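
To make the first and last items concrete, the fixes cost almost nothing (markup invented for illustration):

<!-- before: unlabeled field, vague link text -->
<input type="text" name="kw">
<a href="/postings/1234">view</a>

<!-- after: an accessible name and contextual link text -->
<label for="kw">Keyword</label>
<input type="text" id="kw" name="kw">
<a href="/postings/1234">View the IT Analyst posting</a>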

Such technical issues in a template-driven site are a reasonable basis for asking whether any standards were followed at all. Given that a trivial check with WebAIM's WAVE toolbar shows their pages to have demonstrable issues, it is also reasonable to question the soundness of their conformance-checking procedures, or whether any such procedures exist.

As serious as the accessibility issues may be, what is even more telling is the omission of an important key detail: there is no way to filter results to those that are relevant to a job seeker.

I don't know of any job seeker who says "I wonder what kinds of jobs are available that are ARRA funded?" But the lack of a professional job categorization is an unbelievably irresponsible omission. Whether a PhD student looking for a research position or a technical professional looking for an IT position, making someone wade through 193 irrelevant posts for shipping clerks, educational specialists, and "Internal Only" postings is unnecessary and thoughtless.

Omissions such as these are a not-so-subtle suggestion that an institution does not value the time of others, and will waste your time from the get-go.

Was any attempt at all made to understand the needs of the process stakeholders? Was there any effort to assess the validity of the application and to schedule remedial corrective actions? In the drive to introduce "enterprise" processes, the stakeholders' interests seem to have gotten misplaced along the route.

When companies like PeopleSoft can be rewarded with contracts to deliver non-compliant software with less service-able interfaces, where is the motivation for an individual to be concerned about considerations such as Web standards, ADA compliance, or even stakeholder needs?  In other words, why bother?

Saturday, March 17, 2012

Git + Dropbox = bad

I thought about using Dropbox as a location for a repo. It doesn't take much thought to realize why this isn't safe.

If you push to a shared Dropbox-based repo, Dropbox will detect the changes and begin syncing everyone else's copy. If someone else tries to read at that moment, they will get an inconsistent view; and if they try to commit or push themselves, you are likely to see corruption as Dropbox's and Git's writes clobber one another.

Clobbering repos is bad. Don't do that. Use GitHub instead, or a virtual server running something such as Gitolite and GitLab.
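
If all you need is a private remote, any machine you can ssh into will do (host and path here are placeholders):

ssh myserver 'git init --bare repos/project.git'
git remote add origin myserver:repos/project.git
git push -u origin master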

Wednesday, March 14, 2012

HTML5 is simpler than possible

See anything wrong with this line of code?

  <link rel=”stylesheet” href=”stylesheets/agilemarkup.css” />

Neither did I. But after copying up a minor style change to my company site, this little line botched up the appearance of the whole site.

The company site was suddenly raw and styleless, as if a developer tools add-in had disabled all styles.

A previous post here, CSS Grammar Considered Wrong, remarked upon the muddy expressiveness of the syntax. As an expression of a solution, the syntax lacks Focal Alignment. HTML is in a similar condition, and HTML5 especially.

Take another look at the quote character in the HTML:

href=”stylesheets/agilemarkup.css”

Yep, that's a completely different quote character, not the usual double-quote. HTML5's new parsing rules state that we can omit quotes around contiguous attribute values. So when the errant quote character is used, the actual resource that the browser attempts to download is:

”stylesheets/agilemarkup.css”

That is, the not-really-double-quote characters are silently taken as part of the attribute value.
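
The fix, once seen, is trivial -- plain ASCII double quotes:

  <link rel="stylesheet" href="stylesheets/agilemarkup.css" />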

HTML5 browsers are not supposed to complain or flag errors, just gobble up characters and do something predictable. That's what makes HTML5's parsing a simpler solution than possible.

Friday, March 9, 2012

Git Tagging

So, I'm setting up a GIT repo to track my company accounting files. Why? QuickBooks. The files are a moral hazard when, say, the software clobbers the data file, or an update goes awry, or you need to take a snapshot and then the software forgets which one was open. It can get really ugly. GIT ensures that only one file name need ever be present by doing the snapshotting for you.

I know it is a lot of trouble, but there are two reasons that motivated me to do this: using file-name mangling is a wholly inadequate means of implementing revision control, and keeping the files locked down on one computer or on one network is risky and disruptive to getting things done when you move around to different machines. Under GIT, I fetch my accounting repo, make changes, commit and push the commit back to the remote. If I know I'm going to be moving around to another machine, I can clone into a Dropbox or other network folder and after I commit I can feel secure in removing the clone.
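
The round trip, sketched (the remote name and paths here are placeholders):

git clone myserver:repos/books.git books
cd books
# ... open the data file in QuickBooks, make the month's changes ...
git commit -am "post February payroll"
git push origin master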

Of course, GIT won't version the QuickBooks file itself in any meaningful way. You can't, for instance, go back to a previous commit, branch it, start applying journal debits and credits as corrections, and then merge back into the main line. That just won't work, because a QuickBooks data file is an opaque indexed binary structure, not a line-structured source file.

But we can avoid mangling the file name for snapshot purposes. Yet if we don't mangle names, how will we know which commit is which without relying upon the comments? The answer lies in GIT tagging.
<TAG> 
you're it

There are two types of tags in GIT, and either would work for this purpose.

The first is lightweight tagging. A lightweight tag is one without other metadata -- things like the tag creator, the date of tagging, and a GPG signature.  If all you care about is finding a particular commit given a label, then this is what you'll want:

git tag FiscalYear2011-Final

The only thing this does is point the tag to the current commit.

The second kind of tag is an annotated tag. Here, you tell GIT to make an object with the metadata, and/or a signature:

git tag -a FiscalYear2012-Start -m 'Beginning use of GIT'

We can list tags with:

git tag

and show the metadata for a tag with:

git show FiscalYear2012-Start
or use a script to give you a summary report:

for c in $(git tag -l)
do 
  git show --quiet "$c"
  echo
done


If you really want to go hog wild, you can apply a digital signature and verify it. We're really only making read-only snapshots here, and signing would be a useful property for internal data auditing.
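
The commands, for reference (signing assumes a GPG key is already configured for git):

git tag -s FiscalYear2011-Final -m 'signed year-end snapshot'
git tag -v FiscalYear2011-Final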

More info on tagging at GitHub.

Wednesday, March 7, 2012

Inaccuracies at LiveStrong.com


Read this article on LiveStrong.com, and see if you can spot the technical inaccuracy with serious medical consequences for anyone taking it seriously:


Comparison Of Sucrose, Glucose & Fructose. Sucrose, glucose and fructose are all kinds of sugar. As you might expect, they all taste sweet -- though to varying degrees. 

As the article states, fructose is indeed "absorbed by cells" without insulin. But only by liver cells, not by all cells, as the uninformed reader could easily infer from the article. The resulting hepatotoxicity is a driver of fatty liver disease, obesity, and diabetes. By omission the article misrepresents the metabolism of fructose and supports the dangerous myth that glucose is "bad" and fructose is "good".

See <http://en.wikipedia.org/wiki/Aldolase_B#Pathology> for the etiology of the disease mechanism. Overconsumption of fructose sets up an artificial relative shortage of Aldolase B.

And if you think this doesn't apply to you, think again: given the current formulations of industrialized synth-food and predominance of fructose in popular fruits and vegetables,  most of us are chronic fructose-aholics. 

Tuesday, March 6, 2012

The terrible iTunes Connect Application process

First, one needs an iTunes content provider account. If you already have one, and used it for publishing apps, you'll need a new one just to apply for iBooks.

Second, one needs to have a paid iBooks publishing account to sell iBooks.
Alternatively, one needs to have a free iBooks publishing account to give away iBooks.
You sign up here.

Apparently, it isn't in Apple's imagination to give away some iBooks free and sell others under the same account name. Perhaps it simplifies things for someone in the process, or keeps the legalese straight. You can start free, but if you want to sell you will need to apply for a new account. The two don't mix.

An aggregator can help ease the process of raw conversion to ePub format:

http://www.smashwords.com/about/how_to_publish_ipad_ebooks

Other aggregators are listed here.

Bear in mind the result is unlikely to be suitable for publishing unless the content was originally designed for the iPad.

Even if you get your content into ePub, you still need to validate it.

http://threepress.org/document/epub-validate

is one way to do so.

I resell the oXygen XML Editor, which is a somewhat more robust way of getting into the ePub format and validating it. Since ePub is XHTML-based, one can target an ePub for creation via a transformation process that performs composition on components. That is, you don't need to write chapters linearly, but can use topical chunks instead. This can have application to Teachers' Notes, Homework, Solved Problems, Problems to Solve, Quizzes and Tests, and other areas.

Importantly for solved problems, problems to solve, quizzes and tests, generic markup allows the materials to be parameterized. If one knows how the parameters of the problem relate to one another, one can provide a means of customizing materials by varying one or more parameters while also providing a level of sanity checking and guidance to the instructor via information contained in the markup.

For instance, probabilities are usually expressed as a decimal number between 0 and 1; this range could be specified as a limit of a parameter. Or suppose parameter B depends upon parameter A, in that any A in (0,1) maps to some particular B in (2,20).  Such relations can be expressed in markup and enacted on a device via a host language, for instance, EcmaScript, via browser/cloud-based customization Web tools,  or by a more traditional desktop application.
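
A sketch of the idea in generic markup (element and attribute names are invented for illustration, not part of any ePub standard):

<problem id="coin-bias">
  <param name="A" type="decimal" min="0" max="1"/>
  <!-- B's range depends on A; a host language enforces the mapping -->
  <param name="B" type="decimal" min="2" max="20" dependsOn="A"/>
  <statement>A biased coin lands heads with probability {A}...</statement>
</problem>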

The question arises: how do we then get these customized ePubs distributed???

Apple's official process, if you can call it a process, is by way of iTunes. But for my trouble of applying over a month ago I have received neither a confirmation nor any other communication about the application's status. Apple's approach to communication is quite opaque.

Checking just now, it looks like they've either changed my password or deleted my original account, because I cannot log in...  I do a password reset and change it back to what 1Password said it should be... and it says:

Apple ID does not have permission to access iTunes Connect.

ARGHHHHHH!!!!! I know that's the account I used to sign up! 1Password says so too.  So I pull a diagnostic trick: run the iTunes Connect account application again. Sure enough:


The following error(s) occurred:
  • The iTunes Store account entered has already applied to distribute content on the iBookstore. To continue with this application, you must enter a different iTunes account.


OK, whatever. I have an application, no way to log in, no way to get apprised of its status, and no reasonable expectation that I'll ever hear from Apple on it, since they never bothered to confirm the application. Silence is not a good way to communicate.

The process for apps is a game of ping-pong that takes several days between ball bounces, but at least they give feedback. A month is a little long to wait with nary a word either way. I've never seen a business process so craptastic from an organization renowned for products designed to be easy to use.

So, following a trick on Apple's user discussion forums, I give up on the free iBook publishing account option. It seems more like a honeypot meant to divert non-profit-oriented people than a legitimate application process.

I use an alternate account to apply for the paid iBooks option. Even the paid account application assumes that you've got several eBooks in waiting, just ready to be published. I put in 0 for all the numbers, since I don't yet even have the tools that I need the account to download. We'll see how long it takes, and whether it is even accepted or instead turns into a black hole like the last application. It is March 6th as I write this.



Update: On March 8th my paid iBooks account was approved. So the stats are in: 2 days to get a paid account vs 40+ days (and counting) for a free account. Lesson learned: use a credit card, and vendors will pay attention.

Monday, March 5, 2012

Personal Responsibility in Rails

A commenter on this Rails issue on the subject of securing resource access to model attributes posits:

"Rails is all about conventions. Broken by default is not a good convention."

Too true. The issue ping-pongs back and forth, but the core idea here is visibility of mechanisms and application behavior. Hiding implementation details can be too much of a good thing when people really need to know about the behavioral consequences of their actions.
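
For context, the issue is about mass assignment; at the time, the whitelist below was opt-in rather than on by default (attribute names invented for illustration):

class User < ActiveRecord::Base
  # Without this whitelist, update_attributes(params[:user]) will happily
  # set any column a request names, including one like `admin`
  attr_accessible :name, :email
end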

Friday, March 2, 2012

OSX Lion Productivity Wasters

Today's SAAS applications environment has one thing in common with the networked office environments of the early 1990's: the networks are just good enough to lull you into a state of complacency until you are in the middle of something important, and then they crap out.

Back then, we learned that, given a choice, we should not rely upon network services. It was just a recipe for continued frustration and disruption. The UNIX workstation/Windows PC was a godsend, because while it could be networked, it had enough brains and brawn to get interesting things done all by itself. As centralized governance was imposed, however, everything networked was subjected to a diseased regime of centralized, non-robust domain security. So everything would automatically lock down at the slightest hiccup.

An interruption to an otherwise unimportant DNS server could turn your $5k workstation into a warm brick for at least as long as needed to disrupt your train of thought.

Today, I'm trying to pull together a summary of expenses using a docs.google.com spreadsheet, using Chrome on an OSX Lion 10.7 MacBook Pro (e.g., a UNIX workstation). Sigh. I've been interrupted three times in the space of two hours, due to DNS lookup failures:

The server at docs.google.com can't be found, because the DNS lookup failed.

Restarting the modem/router/firewall doesn't help. A common solution is to flush the cache on OSX:

dscacheutil -flushcache


But sometimes this accomplishes nothing: DNS still comes up empty for sites like docs.google.com. Another solution is to restart the DNS responder:



sudo launchctl unload -w /System/Library/LaunchDaemons/com.apple.mDNSResponder.plist


sudo launchctl load -w /System/Library/LaunchDaemons/com.apple.mDNSResponder.plist



But this just punts the problem down the road, and maybe not even all that far. The real questions are, why does the cache suddenly go stale for big SAAS providers, and why is Chrome more frequently affected?

What I've read suggests that Chrome may have a brain-damaged DNS lookup algorithm: try the first DNS server, then vomit errors if that doesn't work. A quick check of System Preferences / Network settings / Advanced for the TCP/IP DNS servers shows that the wireless router is reporting itself as the first DNS server. I have a cheap Linksys WRT160Nv3 picked up from CompUSA as a refurb. Perhaps the real problem is that the router is brain-damaged, and is passing on its brain damage to Chrome.
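
One workaround to try is pointing OSX's resolver list past the router entirely (the network service may be named "Wi-Fi" or "AirPort" depending on the machine; Google's public DNS here is just an example):

networksetup -setdnsservers Wi-Fi 8.8.8.8 8.8.4.4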

I don't know. I'm still looking for resolution. If I have to, I'll jettison the cheap Linksys. OK, done, but Chrome has the same problem with a completely different Linksys. Maybe it is time to jettison Chrome?


Sunday, February 19, 2012

Deep Field Problems


A Deep Field Problem is one which at a glance appears trivial and solutions almost completely vacuous, yet upon longer observation reveals ancient embedded structures and an unbounded density of substance. 
Real world situations and phenomena are chock full of Deep Field Problems. Moving to the other end of the spectrum of scale, particle physics reveals boundaries of inspection below which we cannot go, yet even in the vacuum of space there is the zero point energy dance of virtual particles popping in and out of existence. The effect was used by Hawking to describe how black holes could evaporate. 
Human culture presents a Deep Field Problem in itself, as do the many societal domains layered upon it: economics, medicine, engineering, science, and governance to name a few. 
<colophon>
The background image was prepared from http://hubblesite.org/newscenter/archive/releases/1996/01 with The GIMP on a 1680x1050 resolution MacBook Pro, and placed into the background with a very nice CSS3 property (background-size: cover;).
GIMP preparations started with the high-resolution TIFF. The main image layer was duplicated, and a threshold of about 62% was applied to the foreground layer to mask out the fine galactic "noise". The foreground layer was then blurred slightly, and the mode changed to Soft Light. Finally, a rectangular area of the visible image was selected and pasted as a new image, scaled to fit my screen, and saved as a 68% quality JPEG.
</colophon>

Friday, February 17, 2012

A Call for IP Credit Unions


A message on a technology user group mailing list by a National Guardsman caught my eye one day. He wanted to promote the use of OpenSource within the government institution. The incorporation of OpenSource software was simply his way of saying "we can improve the institution by leveraging publicly available technology."

In my opinion, he was correct. But it is fascinating that our public institutions have placed themselves into the situation of depending upon closed source systems for mission critical services, and it is worse that the public doesn't get substantive benefit from offering protection to these closed systems. 

Original Intent

Both standards like POSIX and old laws like copyright seek a balance of power between wealth creators and wealth beneficiaries. The intent was to ensure that the creator could use his or her creations for profit while blocking others from doing so in an unauthorized manner, for a limited time. After some finite time period, the intellectual content of the work would enter into the public domain.

Undermining the Original Intent

The Court's Role in Undermining Public Standards

I wrote before how the courts undermined public efforts to require platform standards. In a nutshell, UNIX was standardized under the FIPS procurement requirements as a set of standards called POSIX. The Coast Guard required substantive POSIX compliance for RFP responses; the courts said no: constructive non-compliance was okey-dokey by them. And thus Microsoft Windows NT became a viable purchase option, despite the fact that its compliance was a farce.

NT no more satisfied the meaning of the law than the clerk of court would have by filing court records of decisions in Mattenänglisch. Yes, it does have the sound "english" in its name, but it is not even remotely among the languages used in the culture.

Legislative Undermining of the Public Interest

The Berne Convention entered into force in the United States on March 1, 1989. Under the convention, a copyright to a creative work is automatic and need not be registered or even declared. Recently, courts and the legislature acted to harmonize the treatment of formerly public-domain works in the US which had claims upon them of foreign copyrights.

This is all well and good, countries making treaties that simplify and straighten out differences in law. It would all be great, except that copyright has been extended to the point that the public no longer appears to have an interest in granting the protection.

The DMCA was also passed in 1998 to add punitive sanctions on technologies if they could be used to bypass copy protection schemes. It has been used to arrest researchers. The DMCA's criminalization of scientific discovery isn't at all in the public interest but a prosecutor will use the tools available. 

Technology's Role

In recent decades society has become ever more dependent upon software to operate, coordinate, communicate, and manage throughout business and private life. Yet software itself is not only highly complicated, it is intractably interdependent and highly sensitive to the conditions under which it is deployed. Wealth in the form of software is still somewhat difficult to create (though it is easier now than ever before), but it is even easier to destroy - and this is a key insight.

The public offers intellectual property protections for software wealth, in exchange for a future promise of release of that wealth into the public domain, but the public gets virtually no benefit by (a) extending protections indefinitely or (b) allowing the grant of such protection when no demonstration is made of the surety that the wealth will still be there, at least in as much as could be guaranteed at a technical level.

Modern professional software craftspeople practice test driven development, change management, and revision control.  Services like GitHub make some of these processes transparent for open source projects. But custom development projects and packaged deployments by closed-license vendors offer no such fiduciary-like accountability.  This places institutions and organizations at a distinctive disadvantage as vendors get sold, go belly-up, divest themselves of core competencies, or merely lose interest in the business. 

A Co-Operative as an Equation Balancer

There are numerous examples of co-operatives in modern society, ranging from credit unions to mutual insurers to agricultural and utility co-ops... and the list goes on. The question is not "do we need co-operative organizations," but rather: how can we leverage them in the future to strike a more even balance between private and public interests? Co-operatives give the community the power to sanction those who abuse the trust, but also help their members create more wealth. That sounds like a vehicle that could be leveraged to mitigate some of the corrosive influences of over-reaching copyrights and monstrous patents.

A Software Credit Union/IP Credit Union

Credit unions (CU's) arose around the 1850's in Germany, spreading through Europe and then the Americas around the beginning of the previous century. Organized around co-operative principles, CU's traditionally served poor and middle class populations, allowing them to pool resources and build wealth as a community. The movement was instrumental in providing microfinance, in a manner reminiscent of the resurgence in crowdsourcing today.

I chose the Credit Union as a model for two reasons. First, even if you don't include wetware, software is certainly one of, if not THE chief representation of wealth in our society. Try to run a phone without software, or a car, or a television, or a debit card, or a CAT scan machine... the stuff is, like, freaking everywhere. Software is the stuff of real wealth, the asset that literally makes everything go.

Second, while other forms of co-operatives may have application to this arena, Credit Unions best capture the concepts of investment and maintenance of assets with accountability and fiduciary responsibility. Software is an asset, and those who pay money or put out effort to maintain the software are making investments. Those that utilize software assets gain from their access to it. By combining the valuation of the software with GitHub-style accountability, such an institution could conceivably provide a means of implementing limited exclusivity and limited timeframes on licensing, while ensuring that the public's ongoing interest in protected IP is itself protected.

Finally, public institutions and small organizations with limited capital must find ways to mitigate the concern that Open Source projects may not provide demonstrable support capacity. While it often seems that the promise of support by commercial software organizations is an illusion, and that they often over-price and under-deliver in this respect, companies such as RedHat would have no business model were it not for this one issue. My thinking leans toward proposing Software CU's as a means to organize and ensure support, by constructively using the income of the CU to serve the needs of a broader community of businesses and individuals. Public institutions in particular could require levels of support that the CU could provide, and by investing their software projects with the CU the institution would be ensuring that it would not face a long-term imbalance of power due to one vendor's proprietary de facto ownership of the IP.

This is a work in progress. As I write, numerous other examples and overlapping concerns pop in and out like virtual particles, and the field of view suddenly seems as crowded as a deep field snapshot of a seemingly empty region of space. It can be overwhelming. Rather than continue to blather on, I proposed this as a discussion topic at the 2012 NCSU FOSS Fair.

Rails 3.2.1 with MongoDB quick checklist

I'm just writing this to remind myself of what I did to set up a baseline for Checkie, a checklist helper. Assumptions: you have git installed and you're a little familiar with RVM.


Install gcc or configure RVM to use clang, and install Ruby 1.9.3

The OSX compiler set in Xcode 4.2 swapped out gcc for an LLVM-based compiler called clang. You may encounter issues installing Ruby 1.9.3 with RVM until fixes are made permanent; I figured out what to do in the meantime on StackExchange (please look it up there, since YMMV).

I usually use an application-specific gemset. Assuming that 1.9.3 is current:

rvm gemset create checkie

I stick this sort of configuration into a local .rvmrc file in the working tree
echo "rvm use ruby-1.9.3-p0@checkie" > .rvmrc

The RVM install should have set up your .bash_profile or .bash_login so that the .rvmrc is automatically read when you cd to the directory. If not, read up on it at the RVM home page.
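
The loader line in question, in case it is missing (path per a default single-user RVM install):

[[ -s "$HOME/.rvm/scripts/rvm" ]] && source "$HOME/.rvm/scripts/rvm"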

Install mongodb

brew install mongodb

Then follow instructions:

    mkdir -p ~/Library/LaunchAgents
    cp /usr/local/Cellar/mongodb/2.0.2-x86_64/org.mongodb.mongod.plist ~/Library/LaunchAgents/
    launchctl load -w ~/Library/LaunchAgents/org.mongodb.mongod.plist

Check out some references and tutorials

A few I looked through.
  • http://railsapps.github.com/installing-rails.html
  • https://github.com/RailsApps/rails3-mongoid-devise/wiki/Tutorial
  • http://railsapps.github.com/rails-heroku-tutorial.html

Fixup the Gemfile

I add :production and :development groups with things like the debugger, and choose the server (thin, unicorn, whatnot...); a sketch of the groups follows below. I also use Compass, Sass, HTML5 Boilerplate, Compass-Less, Fancy Buttons, Devise, CanCan, and a few other gems. Bundle, check in to git, and move on.
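
The shape of those groups, sketched (gems as named above; versions omitted, and ruby-debug19 assumed as the 1.9-era debugger):

# Gemfile (excerpt)
group :development do
  gem 'ruby-debug19', :require => 'ruby-debug'
end

group :production do
  gem 'thin'   # or unicorn, whatnot...
end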

Initialize gems that need to be initialized

compass init rails .

vi app/views/layouts/application.html.haml   (add compass/sass stuff; round it out later)
%head
  = stylesheet_link_tag 'screen.css', :media => 'screen, projection'
  = stylesheet_link_tag 'print.css', :media => 'print'
  /[if IE]
    = stylesheet_link_tag 'ie.css', :media => 'screen, projection'


vi config/application.rb (add block to opened Application class):
    config.generators do |g|
      g.template_engine :haml
      g.test_framework  :rspec
      g.orm             :mongoid
    end  

vi config/compass.rb  (add fancy-buttons)
  require "fancy-buttons"

vi app/assets/stylesheets/application.css.scss
@import "fancy-buttons";

rails generate mongoid:config
rails generate barista:install
rails generate rspec:install

vi spec/spec_helper.rb   (add rspec config with cucumber, comment out activerecord stuff)
  config.mock_with :rspec

  require 'database_cleaner'

  config.before(:suite) do
    DatabaseCleaner.strategy = :truncation
    DatabaseCleaner.orm = "mongoid"
  end

  config.before(:each) do
    DatabaseCleaner.clean
  end

rails generate cucumber:install --capybara --rspec --skip-database  

cat <<EOT > features/support/database_cleaner.rb
require 'database_cleaner'
DatabaseCleaner.strategy = :truncation
DatabaseCleaner.orm = "mongoid"
Before { DatabaseCleaner.clean }
EOT

rails generate devise:install

vi config/environments/development.rb:
   config.action_mailer.default_url_options = { :host => 'localhost:3000' }

vi config/routes.rb:
       root :to => "home#index"

vi app/views/layouts/application.html.haml
%body
  %article
    %header
      %p.notice=notice
      %p.alert=alert

rails generate mongoid:devise User
vi config/routes.rb:
devise_for :users
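Taken together, config/routes.rb now boils down to something like this (the Checkie module name is assumed from the app's name):

  # config/routes.rb (sketch)
  Checkie::Application.routes.draw do
    devise_for :users

    root :to => "home#index"
  end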
 
rails generate cancan:ability
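The generator leaves a mostly empty Ability class in app/models/ability.rb. A minimal sketch of how it might be filled in; the admin? flag is hypothetical (Devise doesn't give you one):

  # app/models/ability.rb (sketch)
  class Ability
    include CanCan::Ability

    def initialize(user)
      user ||= User.new      # guest (not signed in)
      can :read, :all
      can :manage, :all if user.respond_to?(:admin?) && user.admin?   # hypothetical flag
    end
  end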

But wait...
rails g jquery:install --ui
 deprecated  You are using Rails 3.1 with the asset pipeline enabled, so this generator is not needed.
              The necessary files are already in your asset pipeline.
              Just add `//= require jquery` and `//= require jquery_ujs` to your app/assets/javascripts/application.js
              If you upgraded your app from Rails 3.0 and still have jquery.js, rails.js, or jquery_ujs.js in your javascripts, be sure to remove them.
              If you do not want the asset pipeline enabled, you may turn it off in application.rb and re-run this generator.

(no need to edit the application.js -- the lines are already there)

(copy a bunch of view templates from another haml app for boilerplate)

rails g controller home index
rails g controller vip index

vi app/controllers/vip_controller.rb:
  before_filter :authenticate_user!
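With the Devise helper in place, the whole controller ends up looking something like this:

  # app/controllers/vip_controller.rb (sketch)
  class VipController < ApplicationController
    before_filter :authenticate_user!   # Devise: send guests to the sign-in page

    def index
    end
  end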

There's a lot more to do...

CSS Stuff from the Function Pink Meetup

Object Oriented CSS
- good for themes
- typically feature based names (non-semantic): "circle", "rounded", "green"
- bad for finding stuff again (not DRY for the selectors)

CSS Reset
  Eric Meyer's
  Necolas' Normalize.css
    has much better defaults
    decreases clutter in cascade

Use deployment concatenation to deliver a single stylesheet
  - Sass as a preprocessor
  - no @imports (buggy in older browsers, and you still incur multiple requests)

SMACSS
  Kind of like OOCSS, but with four (logical) components: base, layout, module, states

  Uses name mangling/conventional prefixes on class selectors
   - e.g. layout-stuff to indicate a layout rule
   - Cautions against "id" attributes: use them only for things JS touches, or that are linked to
      - typically containers

   Layout typically contains... layout rules
     - and media queries are a logical thing to place here

  Modules typically use class selectors, not id's, representing the "skinning" of the site
    - try to use semantic class names
    - try to keep each selector's span of matching as short as possible, at most (parent child), to avoid side-effects
    - e.g. buttons, navigation menus, balloons

  States
    - use conventional naming "is-adjective" : .is-hidden, .is-open, .is-active, etc

Preprocessors
  SASS (and Compass)
    - can watch files (directories)
    - allows rule injection with @mixin, @include, and @content:
     @mixin ie6 { .lt-ie7 & { @content; } }
     @include ie6 { .btn { float: left; } }

  Less
    - Like SASS, but implemented in JavaScript rather than Ruby
    - can translate on the fly in the browser

  Stylus (via Node.js)

Cross browser rules (IE handling)
  <meta http-equiv="X-UA-Compatible" content="IE=edge,chrome=1">  // force IE standards mode
  <!--[if lt IE 7]> <body class="lt-ie9 lt-ie8 lt-ie7"> <![endif]-->

  <!--[if IE 7]> <body class="lt-ie9 lt-ie8"> <![endif]-->
  <!--[if IE 8]> <body class="lt-ie9"> <![endif]-->
  <!--[if gt IE 8]><!--> <body> <!--<![endif]-->  // avoid quirks mode

  Target these classes in your rules; it is a sandboxed alternative to "*"-style selector hacks.


Avoiding Side Effects
  Avoid over-specific addressing of markup fragments
     - addressing by id
     - addressing structural relations of the markup
  Keep the selectors flat
    not chained or nested deeper than two levels
    if you do chain selectors, keep them close together




Wednesday, February 15, 2012

DNS Cache Issues

It could be something in my router, or something to do with the way large-scale cloud services route for load sharing. Whatever it is, sites like plus.google.com keep going off-line because the certificate doesn't match "myshoppify.com", which seems to be another Google property.

So I set up a launchd job to flush the DNS cache every four hours. The following was saved at /Library/LaunchAgents/com.agilemarkup.fixDNSCachingIssues.plist, and loaded with launchctl load ...

Got to get me a new router...


<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
  <key>KeepAlive</key>
  <true/>
  <key>Label</key>
  <string>com.agilemarkup.fixDNSCachingIssues</string>

  <key>ProgramArguments</key>
  <array>
    <string>/usr/bin/dscacheutil</string>
    <string>-flushcache</string>
  </array>
  
  <key>RunAtLoad</key>
  <true/>

  <key>StartInterval</key>
  <integer>14400</integer>
  
  <key>StandardErrorPath</key>
  <string>/var/log/dnscacheutil.err.log</string>

  <key>StandardOutPath</key>
  <string>/var/log/dnscacheutil.out.log</string>
  
  <key>WorkingDirectory</key>
  <string>/tmp</string>
</dict>
</plist>

Wednesday, February 8, 2012

Install Git under Windows


Install Git 


Windows installer here.

Under Windows, use the Bash shell or the GUI app.
You can also fetch TortoiseGit to work with the Windows GUI environment.

Menu -> Git -> Git Bash

You will see:
Welcome to Git (version 1.7.8-preview20111206)

Run 'git help git' to display the help index.
Run 'git help <command>' to display help for specific commands.
$


Set up a proper workspace 


Please translate the username (whome) and pathnames as appropriate.

$ pwd
/c/Documents and Settings/whome

$ mkdir workspace
$ cd workspace

$ git config --global user.name "Who Me"
$ git config --global user.email "whome@some.domain.com"

$ git clone  some.repo.url/reponame.git

Cloning into 'reponame'...
done.

At this point, you'll have a working local git repo that can be pulled from the remote repo, and pushed (assuming the repo url allows write access).

More cheat sheets for git can be found here:
http://help.github.com/git-cheat-sheets/

Friday, January 27, 2012

Why we should care about server side JavaScript

A software practitioner recently asked me what the benefits of a functional language are, and why one would want to use, say, JavaScript instead of a stack like Ruby on Rails for an MVC Web application. Clearly, developers in the Node.js community have their eyes set on that problem space.

I wasn't prepared to answer his question, at least not adequately.

JavaScript certainly offers interesting things like closures and anonymous functions, which make event-driven programming interesting. Closures, my friend pointed out, are like deep class hierarchies in an OO system, and can obscure the flow of control. It is true that closures come at a cost, but there are benefits as well: a reduction in code, less time spent allocating temporary variables, and less copying. But closures are just one construct among several primitives in functional programming that work together to form an extensible system of logic. A grammar, if you will.

Why should we care?

Improved engines, especially Google's V8, have brought speed characteristics to rival those of C++. If JavaScript were still as slow and memory-intensive as, say, Java, it might have survived only in the browser space. Yet V8 brings JavaScript benchmarks into the same order of magnitude as statically compiled languages. It still isn't as fast as C or Perl 5, but it is on par with PHP and C++ and edges out Ruby (sorry, Ruby, I do love you, but you're not quite as fast as V8). That is one characteristic that has some Web app developers all hot and bothered about Node.js.

And for whatever faults it has, JavaScript has less cruft than PHP, C++, or Java. Functional primitives combined with a kind of Lamarckian inheritance and minimal data types make it a very cohesive language despite its half-baked flaws. It doesn't have classes, but then again, it doesn't have classes. JavaScript does have prototypal inheritance. Modules and packages have much more practical benefit, and you can make those in JavaScript. With cleaned-up syntaxes such as the one offered by CoffeeScript, some of the worst flaws can be entirely elided from the coding experience while making programs even shorter.

As a language, JavaScript leaves some things to be desired. The lack of tail-call optimization, while generally treated as YAGNI, prohibits some interesting implementation techniques. The widely used long-running-script detection makes sense for browsers and servers, but not so much for background processing tasks and monitoring. There are some interesting ways of managing blocks of code, formulating methods, and handling control flow in Ruby and Python that I sometimes wish were in JavaScript. The weirdness of falsiness and truthiness and == is eye-rollingly campy.

But in spite of its flaws, JavaScript is still a much saner language than Java and unlike Java/Ruby/Python/C++/Perl/PHP/whatever, it is part of almost every Web browser. Node opened up the path to a coding experience in which the seams between deployment environments are much cleaner and tighter, allowing them to be increasingly well-defined, well-factored, well-integrated, and well-tested with less code.

JavaScript is a spiritual descendant of Scheme, and it is difficult to stay mad at it for very long, even when the browser environment makes simple tasks grueling; eventually the elegance and simplicity of the language draw you in.

Approachable beauty intrinsically engenders creativity and productivity. That, in a nutshell, in my very humble and only poorly-informed opinion, is why programmers are noticing JavaScript.



Thursday, January 26, 2012

Class Variables versus Class Instance Variables in Ruby

I'm going to do a code dump and annotate.

#!/usr/bin/env ruby

class Something
  @@class_variable = 0

  def initialize( name )
    @@class_variable += 1
    @name = name
  end
  def value
    "#{@name},#{@@class_variable}"
  end
end

class SomethingElse < Something
  def initialize( name )
    super(name)
  end
end

joe = Something.new("Joe")
puts "Joe = #{joe.value}"
mary = Something.new("Mary")
puts "Mary = #{mary.value}"
sam = SomethingElse.new("Sam")
puts "Sam = #{sam.value}"
puts "Finished creating. Now Joe = #{joe.value}, Mary = #{mary.value}, and Sam = #{sam.value}"


Class variables are shared among all subclasses.

class SomethingEntirelyDifferent < SomethingElse
  @@class_variable = 0
  def initialize( name )
    super(name)
  end
end

puts
ferdinand = SomethingEntirelyDifferent.new("Ferdinand")
puts "Ferdinand = #{ferdinand.value}"
puts "Created subclass with same classvariable. Now Joe = #{joe.value}, Mary = #{mary.value}, and Sam = #{sam.value}"


Class variables are not shadowed. They are scoped wrt the inheritance chain.
Class variables are candidates for unintentional side-effects.

class SomethingAgain < SomethingElse
  @class_instance_variable = 0

  class << self; attr_accessor :class_instance_variable end

  attr_accessor :instance_variable

  def initialize( name )
    self.class.class_instance_variable += 1
    super(name)
  end
end

albert = SomethingAgain.new("Albert")
puts "Albert = #{albert.value}"
puts "Albert's class = #{albert.class.name} && class instance = #{albert.class.class_instance_variable}"
jane = SomethingAgain.new("Jane")
puts "Created subclass with class instance variable, and two instances."
puts "Jane's class = #{jane.class.name} && class instance = #{jane.class.class_instance_variable}"
puts "Albert's class = #{albert.class.name} && class instance = #{albert.class.class_instance_variable}"


Class instance variables are attached to an object's class object.

class SomethingMore < SomethingAgain
  @class_instance_variable = 1138
  def initialize( name)
    super(name)
  end
end

mike = SomethingMore.new("Mike")
puts "Mike = #{mike.value}"
puts "Mike's class = #{mike.class.name} && class instance = #{mike.class.class_instance_variable}"
puts "Created subclass of class with class instance variable, with its own class instance variable"


Class instance variables aren't visible to subclasses.
Class instance variables must be initialized on subclasses whose base-class methods read or write them.

puts "Albert's class = #{albert.class.name} && class instance = #{albert.class.class_instance_variable}"


Class instance variables are not visible to other subclasses in the inheritance chain.


All in all, the @@class variables encourage collusive coding and appear to carry a high risk of causing race conditions and other unintentional side-effects. The @class instance variables carry a somewhat lesser risk. Barring some obscure trick, a class instance variable is always associated with precisely one class. But even with class instance variables, multiple object instances can gain access to the variable through their own "class" property, with the potential for unintended side-effects.
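A minimal sketch of that last point, with hypothetical names:

class Counter
  @count = 0
  class << self; attr_accessor :count end

  def bump
    self.class.count += 1   # every instance reaches the same state via its class
  end
end

a = Counter.new
b = Counter.new
a.bump
b.bump
puts Counter.count   # => 2 -- two instances mutated one variable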

Wednesday, January 25, 2012

Simpler than possible

A scientific theory should be as simple as possible, but no simpler
- A. Einstein

I'm not sure of the precise context of Einstein's words, but they seemed to have to do with deflecting criticisms of one of the relativity theories.

A kid with a magnifying glass intuitively understands the meaning, with the focal length of the lens standing in for a theory: too close in or too far away both give rise to fuzzy representations that aren't too bright.

By DrBob via Wikimedia Commons
The question is one of focus. Literally, not figuratively.

Considering that vision originates in the brain and its purpose is to create a predictive theory of the world around us, it is unremarkable that lenses are incorporated into our biology. The lens reduces the scale of the external visual field, concentrating signals in the process, and makes a projection onto a concavely curved surface covered with photoreceptors. The lens is an image transfer device.

The brain, then, is a device onto which images are transferred. Into, onto, it is hard to express: memories modify the fine-grained structure of neural dendrites, which incorporate the sensory inputs as analog gradients, and do so more or less as a whole.

So I propose an idea of Focal Distance and a degree of Focal Alignment when considering how fit a software language, idiom, framework, system, or platform is for a particular purpose or set of stakeholder needs.

This assumes that the needs are in some manner self-consistent -- that they lie along parallel trajectories. It may be that due to conflicting interests between stakeholders, the solutions deemed acceptable will never, ever, approach Focal Alignment. There could be orthogonal components to the needs, causing the lens -- and by extension the solution-image -- to skew. There could be absolute differences in stakeholder positions along the same trajectory, or orientations in opposite directions along the same trajectory, giving a compromised Focal Distance and solutions that are blurry.


Thursday, January 19, 2012

What Windows POSIX Compliance Teaches Us: a Wink and a Nod


Many years ago, the Federal government put out a series of requirements, the FIPS standards. IIRC, the POSIX specs are part of FIPS.  To generalize, Linux is an implementation of POSIX. 

POSIX was pushed because of a few factors:
  • vendors' operating systems are divergent, making it harder to migrate programs between systems
  • vendors go out of business, shut down product lines, and make radical changes to them, so development using their APIs is an unsound investment over time
  • government institutions have to carry the burden of systems they buy into for decades

FIPS are procurement standards. This means that in order to sell a computer solution to the Federal government, a vendor must satisfy the POSIX requirements. The intent is clear: to safeguard public investments. An improper balance of power in the hands of a supplier inevitably leads to deleterious actions against consumers. POSIX has the effect of making investors out of consumers.  

What happened afterward is a travesty: Microsoft Windows/DOS-based PC clones had picked up steam in the consumer market, and NT was being aimed at enterprises, including government. Microsoft gave NT a partial POSIX subsystem, which just about nobody used, to get a rubber stamp for sale into government accounts. A panel of judges gave the nod, and forced the Coast Guard to accept Windows-based proposals in a 1995 case.

Apparently, the NT POSIX subsystem has been replaced a few times, and it was crippled from the start. That's why everyone and their brother uses Cygwin, UnixUtils, or MinGW for porting Unix apps to Windows. But due to Windows' non-compliance, it doesn't run Unix-style applications all that well.

credit: Zombie classified by bloodredrapture on Flickr
Microsoft only added the DOA POSIX subsystem so they could claim compliance, when their compliance was a sham on its face. The subsystem was virtually a zombie interface.  

The Coast Guard lawyers in the 1995 case would have done well to ask a multilingual colleague to assist, presenting some portion of their argument in a deliberately broken grammar of a non-English language. Just prior to being cited for contempt, they could then have argued that their compliance with the court's requirements for language interfaces was similar in kind to NT's conformance to POSIX interface requirements, and with identical outcomes.

Government purchases of closed systems like Microsoft Windows amount to collusion with vendors in constructive non-compliance: apparently in conformance, but with precisely the opposite effects from those intended by the standards' authors. Such is the power of the judiciary to rewrite law.


One way to re-approach the original intent of FIPS is to reformulate compliance in terms of public trusts, or something akin to a credit union in which software is the primary asset. We have very good institutional precedents in the form of non-profit organizations like the Apache Software Foundation and the Mozilla Foundation. (Indeed, these two alone account for a huge amount of the Web infrastructure that drives our economy.)

For software to be purchased by a government entity, its assets and its dependencies should be escrowed in the public trust, largely if not completely. In the case of open source software using distributed version control services, this could be accomplished easily: just identify yourself and the repositories. Private concerns would have to accept that the public's interest in not losing access to the intellectual property outweighs their interest in keeping it private, and trust the escrow service not to leak their IP prematurely; or choose not to play in the public space.

Those institutions that adopted closed systems early got quick benefits, particularly in the predictability of the user interface and plug-and-play commodity hardware peripherals. But those same institutions are now bumping up against the inevitable consequences of the strategy. Adopting a strategy of developing for privately held operating systems is a good way to disadvantage yourself over the long term. A nod is as good as a wink to a blind horse.

Monday, January 16, 2012

Compass imports

Yeah, some articles on this blog are a dumping ground for when a crib is needed.
The Compass docs are not particularly easy to scan through quickly.

Compass imports
ex: @import "compass/layout"

compass/

  layout
    grid-background
    sticky-footer
    stretching

  css3
    appearance – Specify the CSS3 appearance property.
    background clip – Specify the background clip for all browsers.
    background origin – Specify the background origin for all browsers.
    background size – Specify the background size for all browsers.
    border radius – Specify the border radius for all browsers.
    box – This module provides mixins that pertain to the CSS3 Flexible Box.
    box shadow – Specify the box shadow for all browsers.
    box sizing – Specify the box sizing for all browsers.
    columns – Specify a columnar layout for all browsers.
    font face – Specify a downloadable font face for all browsers.
    gradient – Specify linear gradients and radial gradients for all browsers.
    images – Specify linear gradients and radial gradients for many browsers.
    inline block – Declare an element inline block for all browsers.
    opacity – Specify the opacity for all browsers.
    text shadow – Specify the text shadow for all browsers.
    transform – Specify transformations for many browsers.
    transition – Specify a style transition for all browsers.

  typography
    links
      hover-link
      link-colors
      unstyled-link

    lists
      bullets
      horizontal-list
        bullets
        clearfix
        float
      inline-block-list
        bullets
        inline-block
        float
        horizontal-list
      inline-list

    text
      ellipsis
      force-wrap
      no-wrap
      text-replacement
    vertical-rhythm

  utilities
    links – Tools for styling anchor links. (from typography)
    lists – Tools for styling lists. (from typography)
    text – Style helpers for your text. (from typography)

    color – Utilities for working with colors.
     color-contrast

    general – Generally useful utilities that don't fit elsewhere.
      Clearfix – Mixins for clearfixing.
      Float – Mixins for cross-browser floats.
      Hacks – Mixins for hacking specific browsers.
      Minimums – Mixins for cross-browser min-height and min-width.
      Reset – Mixins for resetting elements (old import).
      Tag Cloud – Mixin for styling tag clouds.
    sprites – Sprite mixins.
      sprite-image
    tables – Style helpers for your tables.
      table-striping
      table-borders
      table-scaffolding