Sunday, March 27, 2016

Mint without the bitter aftertaste

Mint looks pretty nice, but one thing I find irritating to the point of abandoning it is the way it botches the transaction descriptions read in from my credit union's records.

Consider some typical transaction descriptions from my statement, the way they are listed by Mint:

FEB 29   L Time Date   $43.41
FEB 29   L Time Date   $17.83
FEB 29   L Time Date   $6.43
FEB 29   L Time Date   $11.38
The "L Time Date" just happens to be the same on most of the transactions, and comes from a parse of strings that look like the following:
Statement Name: Point of Sale Debit L118 TIME 03:35 PM DATE 02-16 WM SUPERC WAL-MARRALEIGH NC
It isn't too difficult to see how the uselessly ambiguous description is derived, at least functionally. The actual implementation steps may differ, but the net effect is that the first couple of phrases are excluded, and the next 30 characters or so are chopped out and then stripped of non-alphabetic characters. I won't call that simplistic, because for all I know they've got some pretty significant inference algorithms going on back in the weeds somewhere. But when one size does not fit all, there needs to be a way to tweak the methods used.
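As a rough illustration only - the boilerplate phrase, the chop length, and the regular expressions below are my guesses, not Mint's actual code - the net effect looks something like this:

```javascript
// Mimic the apparent munging: drop the leading boilerplate phrases,
// keep the next 30 characters or so, then strip anything non-alphabetic.
// The phrase list and the chop length are assumptions for illustration.
function mintStyleDescription(statement) {
  var stripped = statement.replace(/^Statement Name: Point of Sale Debit\s*/, '');
  var chopped = stripped.substr(0, 30); // "the next 30 characters or so"
  return chopped.replace(/[^A-Za-z\s]/g, '').replace(/\s+/g, ' ').trim();
}
```

Run against the statement string above, this yields "L TIME PM DATE" - the same sort of useless residue as "L Time Date".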

The pain point here? Intuit has had years to address the issue, but still provides no alternative to manually editing in a meaningful transaction description.

One might conjecture that Intuit has decided that nobody with technical skills is going to spend time on that particular issue. That would be rather stupid, if it were true. Judging from the Mint Community comments, they've been ignoring this obvious and painful flaw for years - and grabbing this information is exactly the kind of killer benefit at which Mint is supposed to excel.

The problem cannot be that hard to address, with just a smidgen of imagination. In fact I did a proof of concept that solves the problem just fine for my purposes. If you are a Mint user, or a disgruntled Intuit developer who thinks the people making decisions are idiots, take a look:

jQuery('#txnEdit-merchant_input').on('click', function(e) {
  var truncateAfter = 66; // grab the text after this many characters
  this.value = jQuery('table').first().attr('title').substr(truncateAfter);
});

This JavaScript fragment was typed into the developer console in Chrome, while viewing the problem transactions page on Mint.com. A click on a description highlights it and configures the merchant input for editing - as Mint normally works. This handler catches any subsequent click on the input, and stuffs the rightmost part of the corresponding statement description into the input. A little tweaking and a little Greasemonkey is all it would take to make this a full-blown solution, sans Intuit's helpless support.

Now, I have no idea how long this little snot of a handler will continue to work, but it took me just a few minutes to put it together after poking around on the page to see what worked and what didn't. It is a trivial adaptation to use a regexp-based match, or to provide any other parsing and automation for editing the string, for that matter. The information is all there in the HTML page.
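As a sketch of that regexp-based variant - the function name, and the assumption that the merchant text always trails the "DATE mm-dd" field, are mine - it might look like:

```javascript
// Pull the merchant text that follows the "DATE mm-dd" field in a raw
// statement description. Assumes the layout shown earlier; falls back
// to the whole string when the pattern doesn't match.
function extractMerchant(description) {
  var match = description.match(/DATE\s+\d{2}-\d{2}\s+(.*)$/);
  return match ? match[1].trim() : description;
}
```

Wired into the click handler above in place of the substr() call, this would stuff "WM SUPERC WAL-MARRALEIGH NC" into the input instead of a fixed-offset chop.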

Sunday, January 24, 2016

Killing yourself with "good" supplements and protein sources

None of this is medical advice. It is simply a series of notes I took while reading on the subject of supplements, including choline and L-carnitine. I learned about trimethylamine N-oxide (TMAO), a primary causative factor implicated in several metabolic syndrome related consequences - atherosclerosis, fatty liver, heart disease - the effects are broad and pervasive.

One study suggests that in meat eaters (red, white, fish, eggs, but not dairy), dietary choline feeds gut bacteria that produce TMAO, which damages the liver and promotes atherosclerosis; this is mitigated in vegetarians, who consume little L-carnitine and choline in their diet.

Eating foods rich in these nutrients (including supplements) would seem to have some serious known negative effects along with the postulated positive effects. Continuous dosing may lead to continuous high TMAO levels, and with it the stereotypical cascade of "Western" metabolic disorders. 

In one study I read that Anaerococcus hydrogenalis, Clostridium asparagiforme, Clostridium hathewayi, Clostridium sporogenes, Escherichia fergusonii, Proteus penneri, Providencia rettgeri, and Edwardsiella tarda were implicated in TMA production from choline. These germs can be pathogenic on their own.

  • A. hydrogenalis eats meat
  • E. fergusonii eats glucose and other L-sugars, produces flatulence, and has been detected in clinical settings in human blood and spinal fluid
  • Clostridium species seem to eat meat, and the lab culture medium is usually cooked meat
  • P. penneri eats meat and sugar, prefers an alkaline environment in contaminated meat products, is implicated in kidney stone formation, and doesn't metabolize citric acid; E. tarda is a meat eater and infects fish
  • P. rettgeri eats glucose, other sugars, and sugar alcohols, and has been found on meat products

Prebiotics such as inulin (artichokes, garlic, leeks, chicory, asparagus) may suppress these organisms by promoting other strains of microorganisms. Maintaining an acidic gut via acid-producing beneficial microorganisms, keeping to a more consistently vegetative diet, and/or shifting protein to vegetative sources and dairy may shift the balance away from the pathogenic TMA-producing strains and toward more beneficials.

Sorry not to cite all my sources in the text, but as I said these are just my notes. I did jot down a few of the papers, but any search engine can show you more, particularly if you search PubMed for papers.

Tuesday, January 5, 2016

Is naming everything in programs an evolutionary dead end?

We program largely by using systems whose discrete components are bound by naming conventions.

Living systems don't use names much, at the scale of cellular biology anyway. Names are an effect of emergent brain behavior - the assignment of symbolic communicative phonemes to what a brain experiences, and their serialization as text, vocalizations, and actions.

Cellular biology has interfaces. But those constructions don't act like the interfaces we use in programming. The code interfaces we use are defined on the basis of symbolic names and structural signatures, and they are forced to bind deterministically rather than probabilistically. Our code signatures also largely conflate the structure with the payload (e.g., the argument parameters). Cell biology may make packages in which the payload acts as the signature, but often uses complexes whereby the key signature is distinct from the payload: the interface acts as a carrier or wrapper or adjunct whose role diminishes once binding is accomplished. That is, they often use monads.

Our code comprises a living system, for no other reason than it is an extension of us. No, it is not quite the same thing as dead skin cells, hair, or fingernails. Code is more akin to the living arrangement of dendrites in our brains, or the skeletal muscle... it can be remodeled, sometimes merely by existing.

Brains and musculoskeletal structures have been around for a very long time by comparison to code. Brains and skeletons scale as solutions, both in terms of performance and numbers of applications. Yet neither system architecture makes use of names as a principal organizing device. Names are created by brains, for the brain to recursively model, and muscles don't use naming systems at all. Molecular biology also makes do without the use of names, even though quite a lot of activity in that realm appears to be discrete - even tokenized.

Somehow these systems are able to perpetuate, grow, search solution spaces, and address novel problem domains in ways that make our computational machinations look like toddler's toys. We know why we follow the currently popular programming language models - there exist effective processes for expressing solutions and analyzing them under these models - but it raises an important strategic question. If evolutionary pressures push our software systems toward closer and closer approximations of the real world, explicit names will necessarily become secondary, or disappear entirely from the architectural approach.

Monday, September 28, 2015

Emotion is the basis for knowing, not intellect

Society tries to teach us that intellectual pursuits will necessarily make you smarter - that "building your brain" around facts and problem solving tactics is the way to improve your understanding.

That's so wrong, it's a special kind of stupid.

Emotional capacity, not intellectual content, is the carrier wave of intelligence; emotional response is the method by which that carrier is modulated to encode intellectual content. Without the emotional carrier and modulation, intellectual content is indistinguishable from background noise or static.

That's why emotional events can be so disruptive to clear thinking, and why activities, both intellectual and athletic, that allow one to reach a flow state also make one feel "in the groove".

Thursday, September 17, 2015

Comprehensive Sense Making

Trying to comprehend a process consisting of multiple stakeholders in multiple workflows is akin to a kid watching the guard-rails pass the car window on a long road trip.

When your reference frame is traveling at a different speed and direction than the frame of the thing you're observing, you may get 20-20 vision, but the whole thing is still going to be a blur.

Such is the problem of trusting the viewpoints of individual users reacting to day to day business activities: their view is blurred by the pace of life. And so is yours.

Want to align visions better? Reduce the number of players in the game at any one moment. Allow them to ramp up their speed by communicating exclusively together.  When business needs push them apart, allow them to disengage from the process instead of thrashing against it.

Recognize that until the game is fully underway, what the players see of each other will be blurred, and any tooling created to support the process will necessarily incorporate the blurred understanding.

Sunday, July 19, 2015

Set -o vi FTW!!!

A programming buddy recently reposted some tips on using the OSX Terminal, from Tech Republic.
My terse response was

set -o vi

Which, of course, calls for some explanation. This configuration option to a POSIX shell like Bash sets it up to use the behavior of the ancient and ever-popular "vi" editor as a command-line terminal interface.
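For anyone who wants to try it before committing, the option can be switched on in a running shell and verified immediately; the startup-file locations in the comments are the usual Bash conventions, not something from the Tech Republic article:

```shell
# Switch the current shell's line editing to vi keybindings.
set -o vi

# Confirm the option took effect: the listing should show "vi  on".
set -o | grep '^vi'

# To keep it across sessions, append the same "set -o vi" line to your
# startup file (~/.bash_profile for the OSX Terminal, ~/.bashrc on most
# Linux systems).
```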

Transliterating most of the Tech Republic article, here are the roughly equivalent shortcuts:

Escape
Puts the terminal command line into "Command Mode". Single-letter commands from VI will now work, including searches.

I or i or a or A
Puts the terminal command line into "Insert Mode".
Uppercase I puts the cursor at the beginning of the line, lowercase i leaves the cursor where it is.
Uppercase A puts the cursor at the end of the line, lowercase a puts the cursor after the current character.
Hit Escape to leave insert mode.

w and b
Command Mode: These shortcuts allow you to move the cursor between separate words in a command line.
Use 'b' to move back and use 'w' to move forward down the line.

e
Command Mode: This shortcut moves the cursor to the end of the current word on the command line.

d b
Using this key sequence will delete the word immediately before, or to the left of, the cursor.

d b b p
This will swap the two words that appear immediately before the cursor.
So, if "this is" sits before the word the cursor is on, using d b b p will change that to "is this."

/somesuch RETURN
If you need to locate a previously used command in Terminal,
use /somesuch and hit the RETURN key.
It will perform a search on the command history and allow you to find a previously used command
that you may need to access again.
Hit 'n' to find the next match.

?somesuch RETURN
Same as /, except searches history in reverse order.

$
Using '$' will take you to the end, or the far right, of the line where your cursor is.

0
This shortcut is the opposite of '$'. Typing '0' will take you back to the beginning, or the far left,
of the line you are currently working on.

Control + C
If you need to kill what's currently running, use Control and C in Terminal to abort the current application.

d 0
This shortcut clears the entirety of the line before the cursor.
So, if you get to the end of a line and realize the whole thing is wrong, use d 0 to delete it all.

0 D
These shortcuts will clear the entire line.

Using D will have a similar effect as using 0 D, but will only clear the part of the line after the cursor.
It is helpful if you need to change or delete the latter half of a line.

Command + Q
This will quit Terminal entirely, killing the underlying process.
Let's face it, you weren't getting anything useful done anyway.

Control + L
This will clear the entire Terminal screen you're working on, wiping everything from view.
The same thing can be accomplished by typing "clear" into Terminal.

Control + D
This will cause you to exit the current shell in Terminal.
The same thing can be accomplished by typing "exit" into Terminal.

Control + Z
Suspends what you are currently running in the foreground, turning it into a sleeping background job.
Type 'bg' and hit Return to resume the suspended job in the background.
Type 'fg' to bring it back to the foreground.
Type 'kill %1' to send the kill signal to the first suspended job.

!!
This executes the last command entered. If you run into permission issues, try entering sudo before !!.

Control + H
Essentially the same thing as backspace, but it is useful if you want to stay anchored on the home row keys.

top
Typing "top" into Terminal will display all of your active processes.
Similar to what you'd get from Activity Monitor, but within Terminal. Press "Q" to quit.

history + a number
If you've lost track of a command you typed earlier, you can type "history" into Terminal to retrieve a history of your commands. However, if you want to view a specific number of past commands simply type a space then a number after history. So, "history 5" would show you the last five commands you typed.

bind -p | grep -v '^#\|self-insert\|^$'
This command lists the key bindings active in the current editing mode (Command Mode and Insert Mode have different bindings).

ls -ltr
Show an extended listing of files, ordered by modification time in reverse (most recently modified last).

find . -name "*.php" -exec grep -l "somesuch" {} \;
Do a deep search for the string "somesuch" in all files ending in ".php", starting from the current "." directory, listing only the names of matching files.

Sunday, May 24, 2015

Has the Time Come for Software Cooperative CUs?

As we were departing php[Tek] 2015 last week, I asked my fellow attendees where they were heading. "London," replied Derick Rethans cheerfully, to which I replied with mock seriousness "Ah... that's a long drive." The look of confusion on his face told me that a crucial element was missing from the conversation - he didn't know me well enough to tell that I was joking.

Open source is a lot like that. It can be difficult, for small businesses in particular, to distinguish between technologies - and technologists - that present a lasting opportunity, and those that may fall flat and kill a business model in the process.

At php[Tek], as with most of the grass-roots conferences I've attended over the past decade, I recognized an emerging phenomenon. It may not be so much a trend as a series of pieces falling into place in the economy and the community at large. Similar to what must have preceded the establishment of credit unions in the 1850s, there is an increasingly undercapitalized population relying ever more on applied software technologies.

Open Source was a major component falling into place, not merely because it made certain software technologies cheap, but because it democratized access to, and learning about, managing the technologies. Much in the same way, credit unions made it possible for individuals and small businesses in impoverished communities not just to self-finance, but to learn to oversee and manage their own growth.

Regardless of how accountants see software, creating and curating it is a primary factor in the success or failure of modern business operations. Yet even for the technologically skilled, the tangible and intangible capital costs can overwhelm the ability of any one business to maintain. Again, Open Source helps by lowering costs and making the acquisition of skills feasible. Yet even Open Source can present too high a cost of adaptation and configuration management over time.
What Open Source does not yet do, and seems to be about to do, is provide a way for neighbors in the technology community - providers and consumers - to secure the future of a software technology together. That is, I think formal cooperatives, or "Software Credit Unions," are about to emerge from the economic primordial soup we call the Market.

Many if not most of the core technologies have already found homes in foundations, consortiums, non-profit charities, and public corporations. That's not what I'm pointing out. The applications of these technologies, which provide real value to business process stakeholders, are assets frequently constructed with non-trivial personal or business funding. A cooperative form of business would provide pooling of capital investments, sharing of risks, and amortizing of maintenance costs for members, as well as avoiding the "razing" of small intellectual properties when such small businesses close.

If such organizations were formed under the same kinds of fiduciary ethics and practices as a credit union, it would be a boon to the future of technological small-business clients and open source contributors alike. It would help define financial and legal standing for popular projects, help quantify the value of contributions, and support necessary but otherwise marginal projects for their members. A cooperative form can also help ensure that peer-review processes (already a part of open source culture) are enforced to members' quality standards, and not to the requirements of some ill-conceived third party.

Software has already irreversibly infiltrated our lives. I don't think cooperatives akin to software credit unions can be avoided. It is happening now. The question is not "if" organizations that hold our software assets will exist. They do now.
The question is whether any such organizations will hold a fiduciary role for consumers and producers alike as a membership organization in a credit union model, or leverage us all at a disadvantage like a bank.