Tuesday, March 20, 2012

Why bother?

I infrequently but regularly revisit the university and state job posting sites to see what's out there. Recently I noticed a shift in the way the sites were managed: they've gone to PeopleSoft applications. It is a real shame, because the new interface is crap.

[Screenshot: NC University Jobs Search Interface]
It looks good though, right?

Except that the layout components are fixed. To handle multiple devices, they just scale everything. Try to visit it on a smartphone and you get greeked type (too tiny to read). Still, that is a minor ergonomic faux pas, easily worked around with pinching and panning.

Except that it also has basic accessibility issues: missing labels for input fields, missing alternative text for an image, no skip-to links to bypass the menu and results links, and result link text that is contextually inadequate ("view"). These are technical infractions that may appear minor, but remember that this is a public space run by a public institution, and as such it must conform to ADA accessibility requirements.

Such technical issues in a template-driven site are a reasonable basis for asking whether any standards were followed at all. Given that a trivial check with the WebAIM WAVE toolbar shows their pages to have demonstrable issues, it is also reasonable to question the soundness of their conformance-checking procedures, or whether any such procedures exist at all.

As serious as the accessibility issues may be, what is even more telling is the omission of a key feature: there is no way to filter results to those that are relevant to a job seeker.

I don't know of any job seeker who says "I wonder what kinds of jobs are available that are ARRA funded?"  But the lack of a professional job categorization is an unbelievably irresponsible omission. Whether it is a PhD student looking for a research position or a technical professional looking for an IT position, making someone wade through 193 irrelevant postings for shipping clerks, educational specialists, and "Internal Only" positions is unnecessary and thoughtless.

Omissions such as these are a not-so-subtle suggestion that an institution does not value the time of others, and will waste your time from the get-go.

Was any attempt at all made to understand the needs of the process stakeholders? Was there any effort to assess the validity of the application and to schedule corrective actions? In the drive to introduce "enterprise" processes, stakeholders' interests seem to have gotten misplaced along the way.

When companies like PeopleSoft can be rewarded with contracts to deliver non-compliant software with less serviceable interfaces, where is the motivation for an individual to be concerned about considerations such as Web standards, ADA compliance, or even stakeholder needs?  In other words, why bother?

Saturday, March 17, 2012

Git + Dropbox = bad

I thought about using Dropbox as a location for a repo. It doesn't take much thought to realize why this isn't safe.

If you push to a shared Dropbox-based repo, Dropbox will detect the changes and begin syncing everyone else's copy. If someone else tries to read at that moment, they will get an inconsistent view, and if they try to commit or push themselves, you are likely to see corruption as Dropbox's and Git's writes clobber one another.

Clobbering repos is bad. Don't do that. Use GitHub instead, or a virtual server running something such as Gitolite or GitLab.
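
If all you need is a private shared remote, a bare repository on any machine you can reach over SSH avoids the syncing race entirely. A minimal sketch, using a hypothetical server name and path:

  # on the server: create a bare repo (no working tree for Dropbox-style syncing to clobber)
  ssh myserver 'git init --bare /srv/git/project.git'

  # on each client: add it as a remote and push/pull as usual
  git remote add origin myserver:/srv/git/project.git
  git push origin master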

Wednesday, March 14, 2012

HTML5 is simpler than possible

See anything wrong with this line of code?

  <link rel=”stylesheet” href=”stylesheets/agilemarkup.css” />

Neither did I. But after copying up a minor style change to my company site, this little line botched up the appearance of the whole site.

The company site was suddenly raw and styleless, as if a developer tools add-in had disabled all styles.

A previous post here, CSS Grammar Considered Wrong, remarked upon the muddy expressiveness of the syntax. As an expression of a solution, the syntax lacks Focal Alignment. HTML is in a similar condition, HTML5 especially.

Take another look at the quote character in the HTML:

href=”stylesheets/agilemarkup.css”

Yep, that's a completely different quote character, not the usual straight double quote. HTML5's parsing rules allow quotes to be omitted around attribute values that contain no spaces. So when the errant quote character is used, the actual resource that the browser attempts to download is:

”stylesheets/agilemarkup.css”

That is, the not-really-double-quote characters are silently taken as part of the attribute value.
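
The fix, of course, is to retype the attribute with plain ASCII double quotes:

  <link rel="stylesheet" href="stylesheets/agilemarkup.css" />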

HTML5 browsers are not supposed to complain or flag errors, just gobble up characters and do something predictable. That's what makes HTML5's parsing a simpler solution than possible.

Friday, March 9, 2012

Git Tagging

So, I'm setting up a Git repo to track my company accounting files. Why? QuickBooks. The files are a moral hazard when, say, the software clobbers the data file, or an update goes awry, or you need to take a snapshot and then the software forgets which one was open. It can get really ugly. Git ensures that only one file name need ever be present by doing the snapshotting for you.

I know it is a lot of trouble, but two reasons motivated me to do this: file-name mangling is a wholly inadequate means of implementing revision control, and keeping the files locked down on one computer or one network is risky and disruptive to getting things done when you move around to different machines. Under Git, I fetch my accounting repo, make changes, commit, and push the commit back to the remote. If I know I'm going to be moving around to another machine, I can clone into a Dropbox or other network folder, and after I commit I can feel secure in removing the clone.
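
The day-to-day routine looks something like this (a sketch; the remote and the QuickBooks file name are hypothetical):

  # grab the repo on whatever machine I'm working from
  git clone myserver:/srv/git/accounting.git
  cd accounting

  # work in QuickBooks, which rewrites Company.qbw in place, then record the new state
  git add Company.qbw
  git commit -m 'Post February invoices'
  git push origin master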

Of course, Git won't version the QuickBooks file itself in any meaningful way. You can't, for instance, go back to a previous commit, branch it, start applying journal debits and credits as corrections, and then merge back into the main line. That just won't work, because a QuickBooks data file is an opaque indexed binary structure, not a line-structured source file.

But we can avoid mangling the file name for snapshot purposes. Yet if we don't mangle names, how will we know which commit is which without relying upon the commit messages? The answer lies in Git tagging.
<TAG> you're it

There are two types of tags in Git, and either would work for this purpose.

The first is lightweight tagging. A lightweight tag is one without other metadata -- things like the tag creator, the date of tagging, and a GPG signature.  If all you care about is finding a particular commit given a label, then this is what you'll want:

git tag FiscalYear2011-Final

The only thing this does is point the tag to the current commit.
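
You can confirm that nothing more than a pointer was created; the tag resolves to the same commit as HEAD:

  git rev-parse HEAD
  git rev-parse FiscalYear2011-Final
  # both print the same commit SHA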

The second kind of tag is the annotated tag. Here, you tell Git to make a tag object carrying the metadata and/or a signature:

git tag -a FiscalYear2012-Start -m 'Beginning use of GIT'

We can list tags with:

git tag

and show the metadata for a tag with:

git show FiscalYear2012-Start
or use a script to give you a summary report:

# print the metadata recorded for each tag, without the diffs
for c in $(git tag -l)
do
  git show --quiet "$c"
  echo
done


If you really want to go hog wild, you can apply a digital signature and verify it. We're really only making read-only snapshots, and adding signing would be a good property for internal data auditing.
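
A quick sketch, assuming a GPG key is already configured for Git to use (the tag name here is just an example):

  # create a GPG-signed annotated tag
  git tag -s FiscalYear2012-Audit -m 'Signed snapshot for audit'

  # verify the signature later
  git tag -v FiscalYear2012-Audit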

More info on tagging at github.

Wednesday, March 7, 2012

Inaccuracies at LiveStrong.com


Read this article on LiveStrong.com, and see if you can spot the technical inaccuracy with serious medical consequences for anyone taking it seriously:


Comparison Of Sucrose, Glucose & Fructose. Sucrose, glucose and fructose are all kinds of sugar. As you might expect, they all taste sweet -- though to varying degrees. 

As the article states, fructose is indeed "absorbed by cells" without insulin. But only by liver cells, not by all cells, as the uninformed reader would easily infer from the article. The resulting hepatotoxicity is a driver of fatty liver disease, obesity, and diabetes. By omission, the article misrepresents the metabolism of fructose and supports the dangerous myth that glucose is "bad" and fructose is "good".

See <http://en.wikipedia.org/wiki/Aldolase_B#Pathology> for the etiology of the disease mechanism. Overconsumption of fructose sets up an artificial relative shortage of aldolase B.

And if you think this doesn't apply to you, think again: given the current formulations of industrialized synth-food and the predominance of fructose in popular fruits and vegetables, most of us are chronic fructose-aholics.

Tuesday, March 6, 2012

The terrible iTunes Connect Application process

First, one needs an iTunes content provider account. If you already have one, and used it for publishing apps, you'll need a new one just to apply for iBooks.

Second, one needs to have a paid iBooks publishing account to sell iBooks.
Alternatively, one needs to have a free iBooks publishing account to give away iBooks.
You sign up here.

Apparently, it isn't in Apple's imagination to give away free iBooks and sell others under the same account name. Perhaps it simplifies things for someone in the process, or keeps the legalese straight. You can start free, but if you want to sell you will need to apply for a new account. The two don't mix.

An aggregator can help ease the process of raw conversion to ePub format:

http://www.smashwords.com/about/how_to_publish_ipad_ebooks

Other aggregators are listed here.

Bear in mind that the result is unlikely to be suitable for publishing unless the content was originally designed for the iPad.

Even if you get your content into ePub, you still need to validate it.

http://threepress.org/document/epub-validate is one way to do so.
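
If you would rather validate locally, the EpubCheck tool is another option (the jar and file names below are just examples; the exact jar name varies by version):

  # validate an EPUB locally with EpubCheck, a Java command-line tool
  java -jar epubcheck.jar mybook.epub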

I resell the oXygen XML Editor, which is a little more robust way of getting into the ePub format and validating it. Since ePub is XHTML-based, one can target an ePub for creation via a transformation process that performs composition on components. That is, you don't need to write chapters linearly, but can assemble them from topical chunks instead. This can have application to Teachers' Notes, Homework, Solved Problems, Problems to Solve, Quizzes and Tests, and other areas.

Importantly for solved problems, problems to solve, quizzes and tests, generic markup allows the materials to be parameterized. If one knows how the parameters of the problem relate to one another, one can provide a means of customizing materials by varying one or more parameters while also providing a level of sanity checking and guidance to the instructor via information contained in the markup.

For instance, probabilities are usually expressed as a decimal number between 0 and 1; this range could be specified as a limit on a parameter. Or suppose parameter B depends upon parameter A, in that any A in (0,1) maps to some particular B in (2,20). Such relations can be expressed in markup and enacted on a device via a host language such as ECMAScript, via browser/cloud-based customization Web tools, or by a more traditional desktop application.
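
There is no standard vocabulary for this; the markup below is purely hypothetical, just to illustrate how a range limit and a dependency between parameters might be encoded:

  <!-- hypothetical markup: a probability parameter constrained to (0,1),
       and a dependent parameter whose allowed range follows from it -->
  <problem id="coin-toss">
    <parameter name="p" type="decimal" min="0" max="1" exclusive="true"/>
    <parameter name="trials" type="integer" min="2" max="20" dependsOn="p"/>
    <statement>A biased coin lands heads with probability <value-of param="p"/>...</statement>
  </problem>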

The question arises: how do we then get these customized ePubs distributed?

Apple's official process, if you can call it a process, is by way of iTunes. But for my trouble of applying over a month ago I have received neither a confirmation nor any other communication about the application's status. Apple's approach to communication is quite opaque.

Checking just now, it looks like they've either changed my password or deleted my original account, because I cannot log in...  I do a password reset and change it back to what 1Password said it should be... and it says:

Apple ID does not have permission to access iTunes Connect.


ARGHHHHHH!!!!! I know that's the account I used to sign up! 1Password says so too.  So I pull a diagnostic trick: run the iTunes Connect account application again. Sure enough:


The following error(s) occurred:
  • The iTunes Store account entered has already applied to distribute content on the iBookstore. To continue with this application, you must enter a different iTunes account.


OK, whatever. I have an application, no way to log in, no way to get apprised of its status, and no reasonable expectation that I'll ever hear from Apple on it, since they never bothered to confirm the application. Silence is not a good way to communicate.

The process for apps is a game of ping-pong that takes several days between ball bounces, but at least they give feedback. A month is a little long to wait with nary a word either way. I've never seen a business process so craptastic from an organization renowned for products designed to be easy to use.

So, following a trick on Apple's user discussion forums, I give up on the free iBook publishing account option. It seems more like a honeypot meant to divert non-profit oriented people than a legitimate application process.

I use an alternate iTunes account to apply for the paid iBooks option. Even the paid account application assumes that you've got several eBooks in waiting, just ready to be published. I put in 0 for all the numbers, since I don't yet even have the tools that I need the account to download. We'll see how long it takes, and whether it is even accepted and doesn't turn into a black hole like the last application. It is March 6th as I write this.



Update: On March 8th my paid iBooks account was approved. So the stats are in: 2 days to get a paid account vs. 40+ days to get a free account. Lesson learned: use a credit card, and vendors will pay attention.

Monday, March 5, 2012

Personal Responsibility in Rails

A commenter on this Rails issue, on the subject of securing access to model attributes, posits:

"Rails is all about conventions. Broken by default is not a good convention."

Too true. The issue ping-pongs back and forth, but the core idea here is visibility of mechanisms and application behavior. Hiding implementation details can be too much of a good thing, when people really need to know about the behavioral consequences of their actions.

Friday, March 2, 2012

OSX Lion Productivity Wasters

Today's SaaS application environment has one thing in common with the networked office environments of the early 1990s: the networks are just good enough to lull you into a state of complacency until you are in the middle of something important, and then they crap out.

Back then, we learned that, given a choice, we should not rely upon network services. It was just a recipe for continued frustration and disruption. The UNIX workstation/Windows PC was a godsend, because while it could be networked, it had enough brains and brawn to get interesting things done all by itself. As centralized governance was imposed, however, everything that was networked was subjected to a diseased regime of centralized, non-robust domain security. So everything would automatically lock down at the slightest hiccup.

An interruption to an otherwise unimportant DNS server could turn your $5k workstation into a warm brick for at least as long as needed to disrupt your train of thought.

Today, I'm trying to pull together a summary of expenses using a docs.google.com spreadsheet, using Chrome on an OSX Lion 10.7 MacBook Pro (i.e., a UNIX workstation). Sigh. I've been interrupted three times in the space of two hours due to DNS lookup failures:

The server at docs.google.com can't be found, because the DNS lookup failed.

Restarting the modem/router/firewall doesn't help. A common solution is to flush the cache on OSX:

dscacheutil -flushcache


But sometimes this accomplishes nothing: DNS still comes up empty for sites like docs.google.com. Another solution is to restart the DNS responder:

sudo launchctl unload -w /System/Library/LaunchDaemons/com.apple.mDNSResponder.plist
sudo launchctl load -w /System/Library/LaunchDaemons/com.apple.mDNSResponder.plist

But this just punts the problem down the road, and maybe not even all that far. The real questions are: why does the cache suddenly go stale for big SaaS providers, and why is Chrome more frequently affected?

The latter link suggests that Chrome may have a brain-damaged DNS lookup algorithm: try the first DNS server, then vomit errors if that doesn't work. A quick check of System Preferences / Network / Advanced / DNS shows that the wireless router is reporting itself as the first DNS server. I have a cheap Linksys WRT160Nv3, picked up from CompUSA as a refurb. Perhaps the real problem is that the router is brain-damaged, and is passing its brain damage on to Chrome.
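
One thing worth checking, and a workaround I haven't fully vetted, is what DNS servers the machine is actually using, and whether pointing it at an explicit server instead of the router helps. On Lion the wireless service is usually named Wi-Fi, but confirm the name first; the Google public DNS addresses below are just one example:

  # list service names, then inspect the DNS servers in use for the wireless service
  networksetup -listallnetworkservices
  networksetup -getdnsservers Wi-Fi

  # bypass the router's DNS by setting explicit servers
  sudo networksetup -setdnsservers Wi-Fi 8.8.8.8 8.8.4.4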

I don't know. I'm still looking for resolution. If I have to, I'll jettison the cheap Linksys. OK, done, but Chrome has the same problem with a completely different Linksys. Maybe it is time to jettison Chrome?