Archive for the ‘Bugs’ category

The Technical Story of the Adium Fixes in Version 0.32

December 31, 2008

Here are the problems I had updating my Adium plugin for version 1.3 of Adium, in the order they were discovered.

First, Adium plugins now must declare a minimum version of Adium. If you don’t say “I require at least version x.x of Adium” then Adium assumes the plugin is too outdated to work and won’t load it. (Turns out that was the right assumption for my plugin!) You declare a minimum supported version by adding a value for AIMinimumAdiumVersionRequirement in your Info.plist.
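For reference, it’s a single extra entry in the plugin’s Info.plist. The snippet below is only a sketch: the value is a placeholder, and I’m assuming the requirement is expressed as a string, so check an existing 1.3-compatible plugin for the exact form.

<!-- In the plugin bundle's Info.plist, next to the other top-level keys -->
<key>AIMinimumAdiumVersionRequirement</key>
<string>1.3</string>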

Second, AppleScript support for getting the current status message is broken. So I had to add a way for IMLocation to ask the plugin for the current status message. (It also means that I’m stuck requiring a plugin for any integration with Adium, which is a shame.) For what it’s worth, I prefer the new AppleScript interface for Adium (modulo the “not working right now” bit).

Third, the way plugins are written for Adium has changed. From a software-engineering perspective the changes are for the best, and will make a better Adium. But of course I’m complaining because I had to do more work :-).

Plugins used to inherit from AIPlugin, which had a field named adium that was analogous to NSApp. Now plugins conform to the AIPlugin and AIPluginInfo protocols, and can inherit from anything. (Hint: #import <Adium/AISharedAdium.h> will give you adium back, but now as a proper global.)
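In code, the new shape looks roughly like the sketch below. MyExamplePlugin is a made-up name, and the header and method names are from memory, so treat them as assumptions and check the real AIPlugin and AIPluginInfo declarations before copying anything:

#import <Cocoa/Cocoa.h>
#import <Adium/AISharedAdium.h> // declares the adium global mentioned above
#import <Adium/AIPlugin.h>      // assumed header name for the AIPlugin protocol

@interface MyExamplePlugin : NSObject <AIPlugin>
@end

@implementation MyExamplePlugin

// -installPlugin and -uninstallPlugin are the classic Adium plugin entry
// points; I believe the protocol still requires them, but verify against
// the header.
- (void)installPlugin
{
    // Set up observers here. Shared Adium state is reached through the
    // adium global instead of an inherited field.
}

- (void)uninstallPlugin
{
    // Tear down whatever installPlugin set up.
}

@end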

Fourth, the way I was updating the status message for Adium wasn’t working so well. And the more I looked into it, the more it looked like very little (if anything) had changed in Adium. Adium just dealt with statuses in a way that wasn’t working out all that well for me.

Solving this issue took me longer than solving all the other problems combined. But in the end it was definitely worth it. I have better integration with Adium now. And I was able to get rid of some update-throttling code of mine that prevented too many status changes from being made too quickly (Adium is now smart enough to do that internally!).

Fifth, testing, testing, testing. All code that’s written must be tested. And because so many changes were made to Adium as well as my plugin’s code, I waited a few days before officially pushing my changes, to allow more time for bugs to be discovered.


Adium Support is Broken

December 24, 2008

Regretfully, integration with Adium isn’t working right now.

I apologize for releasing without adequate testing.

I did not change any of the Adium-related code from version 2.9 and I foolishly assumed it all still worked because nobody had complained. But in the 13 months since I had last tested my Adium plugin, Adium was updated in a way that broke it. I had gotten out of the habit of using Adium (I prefer iChat’s simplicity and video capabilities) and so I missed this issue completely.

Unfortunately, it looks like getting the status message from Adium with AppleScript is broken, so I don’t yet have a fix out.

I can get the status type (e.g. away, available), but without a way to get/set the status message there’s not much useful I can do.

I am working on updating my Adium plugin so I can ask it for all the details of the current status. I expect a fix shortly, holidays permitting.

Messages to Nowhere

January 3, 2008

This article has been updated, and moved here

NSWorkspace Bug in -openFile:?

December 7, 2007

EDITED TO ADD: I could reproduce this bug in a large project, but not isolate it in a smaller one. It is much more likely that my code has a bug than that NSWorkspace does. I’m still not 100% certain that this issue wasn’t my fault in some way I don’t understand. But no matter what caused the bug, the work-around I describe here has been working for me. Please let me know if you have this issue as well, or have any insight into what I could have done to cause it.

The Problem:
I need to launch a background application from inside the Application Support directory. The OS is Mac OS X 10.5.1, build 9B18.
[[NSWorkspace sharedWorkspace] openFile:path];
where path is the correct path to the application, in my case “/Users/user/Library/Application Support/IMLocation/IMLocationHelper.app”, works about 3 out of 4 times, but intermittently fails. It also reports success without actually launching IMLocationHelper.app, but this is far more rare.

Details:
When openFile: explicitly fails, it prints, “LSOpenFromURLSpec() returned -600 for application (null) path /Users/user/Library/Application Support/IMLocation/IMLocationHelper.app.” to stderr.

Error -600 is procNotFound /*no eligible process with specified descriptor*/.

The path is correct; the application exists there. I verified this by
assert([[NSFileManager defaultManager] fileExistsAtPath:path]);
before calling openFile:.

Calling openFile: again immediately after a failure often works.

Also, while(![[NSWorkspace sharedWorkspace] openFile:path]);
will terminate, meaning it reported success at some point, but it will not always have actually launched the application at path.

(EDITED TO ADD: At first I thought that NSWorkspace was probably opening IMLocationHelper, but not blocking until it had finished launching. But sleeping, to give IMLocationHelper time to finish launching, did not solve the problem.)

performSelectorOnMainThread did not solve anything either.

I verify that IMLocationHelper has been launched by
assert([[ShellTask executeShellCommandSynchronously:@"ps -axww | grep IMLocationHelper | grep -v grep"] length] > 0);

A Workaround:
[[NSTask launchedTaskWithLaunchPath:@"/usr/bin/open" arguments:[NSArray arrayWithObject:path]] waitUntilExit];

This calls the open command to launch the application. I have not been able to make it fail when path exists.
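Wrapped up as a small helper, the workaround looks something like the sketch below. LaunchHelperWithOpen is a made-up name, and the ps-based verification from above is left out:

#import <Cocoa/Cocoa.h>

static BOOL LaunchHelperWithOpen(NSString *path)
{
    // Don't even try if the bundle isn't where we expect it.
    if (![[NSFileManager defaultManager] fileExistsAtPath:path])
        return NO;

    // Shell out to /usr/bin/open instead of -[NSWorkspace openFile:], which
    // was failing intermittently with LSOpenFromURLSpec() error -600.
    NSTask *task = [NSTask launchedTaskWithLaunchPath:@"/usr/bin/open"
                                            arguments:[NSArray arrayWithObject:path]];
    [task waitUntilExit];
    return [task terminationStatus] == 0;
}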

Hick’s Law

November 21, 2007

I do not put much faith in Hick’s Law. I’ve seen it misapplied and drastically misinterpreted. Its limits, and edge-cases, are not widely known. I am convinced that it is generally not a dominant factor, even when it is relevant. I don’t agree with many design choices it is used to justify. In the past 50 years, exceptions to Hick’s Law have been found.

Hick’s “Law” is simply the observation that the time it takes a person to make a decision is proportional to the information-theoretic entropy of the decision. Put another way, reaction-time = a-constant-delay + O(entropy of possible responses) ≤ a-constant-delay + O(log(number of choices)). So it takes longer to decide between more options. But adding an option increases the time sub-linearly (at least with a “few” options) — and adding a likely choice slows down the decision time more than adding a few unlikely choices.
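Written out with the standard definition of entropy, where p_i is the probability that option i is the one chosen and a, b are the usual empirically determined constants, that statement is just:

\[
  T = a + b\,H, \qquad H = \sum_{i=1}^{N} p_i \log_2 \frac{1}{p_i}
\]

When every one of the N options is equally likely, H collapses to lg(N), which is where the common formulation below comes from.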

Write it right

Unfortunately, most people do not have a good understanding of what entropy is in information theory. Interaction designers and programmers should at least understand the concept, but they don’t always.

When every option has the same probability of being chosen, entropy is maximized. Recall that lg(N) is the entropy when every one of N options is equally probable. So lg(N) is the maximum possible entropy involved in selecting one of N options. (The minimum possible entropy, 0, occurs if one item is always chosen 100% of the time, or no item is ever chosen.) Owing to its simplicity, and attractive (but misleading) similarity to Fitts’s Law,
reaction_time = a + b*lg(N), where a and b are empirically determined positive constants, has become the most common formulation of Hick’s Law.

I am not fond of that formulation.

It implies a connection to Fitts’s Law, when it’s pretty clear to me that none exists. Hick’s Law deals with the cognitive processes of decision-making, but Fitts’s Law deals with the iterative physical action of pointing to an object. The two equations are not related, except that they appear together in HCI literature, and both model a human completing some task. Logarithms also appear in equations modeling radioactive decay — but have no connection to either Hick’s or Fitts’s law.

Stating Hick’s Law in terms of entropy gives better intuition about the decision-process. It shows that the time to make a decision depends as much on the qualities of the alternatives as on how many of them there are. For example, imagine you’ve just won a new sports-car on a game show — now you have to pick one of several different paint-jobs, and drive it off the set. Your choices are: a classic red, safety-green, neon-pink, or chartreuse and violet tiger-stripes. Like most people, you will probably choose red, and quickly. Now imagine that the choices are: this elegant silver-blue, or classic red. Even though there are only half as many options, it’s clearly a much harder decision that will take more time. This contradicts the “reaction-time ~ lg(N)” model, but is clearly explained by the entropy-model, because two equally-likely options have a higher associated entropy than one popular option and several very unpopular options.
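To put made-up but plausible numbers on that: say red gets picked 85% of the time and each of the other three paint-jobs 5% of the time, while the silver-blue-versus-red choice is a 50/50 split. Then:

\[
  H_{\text{four options}} = 0.85 \log_2 \tfrac{1}{0.85} + 3 \times 0.05 \log_2 \tfrac{1}{0.05} \approx 0.20 + 0.65 = 0.85 \text{ bits}
\]
\[
  H_{\text{two equal options}} = 2 \times 0.5 \log_2 \tfrac{1}{0.5} = 1 \text{ bit}
\]

So the two-way choice carries more entropy, and should take longer, than the four-way choice, even though it has half as many options.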

A bad justification for bad ideas

Hick’s law has been used to argue that “giving a user many choices simultaneously is usually faster than organizing the same choices into hierarchical groups. Making choices from one menu of eight items is faster than is making choices from two menus of four items each.” (The Humane Interface, page 96). Sometimes this is called the Rule of Large Menus. I strongly disagree with this rule of thumb.

The decision that Hick’s Law models is only made after the user has divined enough relevant options. Hierarchically organizing options makes it easier, and faster, for the user to find relevant options. And this makes the whole process faster. Even when Hick’s Law is applicable, it’s not necessarily dominant. Other factors, such as whether the user has to scroll, have a far greater impact on how fast, and how ergonomically friendly, completing a task is. But we can have our cake and eat it too.

A hierarchically organized presentation does not mean people will build a hierarchical mental-model. For example, the word processor I am typing this in has hierarchically-organized menus. The Edit menu has top-level commands, including cut/copy/paste, and a sub-menu called Find that has 6 different commands to search for strings in a document. Each command has a keyboard shortcut: ⌘C for copy, ⌘F to enter a string to search, ⌘G to select the next occurrence of the string, and so on. Any of these shortcuts can be used at any time to initiate any of the commands. When I decide what shortcut to use, I am selecting one shortcut out of all possible shortcuts that I know.

People will string together multiple commands, making them one action in their head. For example, if a “delete” command is always followed by a confirmation dialog, users will learn to automatically hit Enter after hitting Delete. So the two actions, “delete” and “confirm delete”, become one action, “delete and confirm”. (This is why confirmation dialogs are a bad idea.) So as long as commands exist to navigate a hierarchy, they can be strung together to make a “flat” command that directly selects an option. A user can consider all “flattened” commands at the same time.

I am not aware of research into the limits of Hick’s Law — that is, what happens when there are a lot of choices? People simply can’t hold 4 billion choices in their head, yet Hick’s Law tells us that choosing between 4 billion equally-likely options should only take about 16 times as long as choosing between 4 (lg of 4 billion is about 32, versus 2 for four options, ignoring the constant delay). And I just can’t accept that as true. At some point, the number of options exceeds a person’s mental capacity — and I would expect that to affect reaction time. But exactly what this limit is, or if it even matters, is not commonly known.

Whisky. Tango. Foxtrot.

I’ve come across some amazingly … incorrect … takes on Hick’s Law. And that makes me even more skeptical of its utility.

Here’s a video of Tony Blauer (a self-defense instructor) claiming that Hick’s Law means:

If I add more choices, I slow down response time. And if I add more stimuli, I slow down response time. Exponentially.

Exponential growth is of course the exact opposite of what actually happens, which is logarithmic growth. Yet according to Hock Hochheim, “Many modern instructors just associate a doubling ratio to Hicks-that is, for every two choices, selection time doubles per added choice.” His rebuttal of that exponentially-wrong take on Hick’s Law is interesting reading, if for no other reason than that it shows just how prevalent a bit of bad science can become in a field. It also touches on the notion that the brain has a “fast-track” for dealing with sudden “fight or flight” situations.

Exceptions

I don’t know enough about research into the amygdala and the brain to give any hard facts. But it is my understanding that current research suggests instinctual responses to danger can occur much faster than deliberate thought. Humanely tapping into this stress-response seems difficult, though…

Another “I don’t know for sure, but it’s worth keeping an eye on” is muscle memory and sports. Athletes seem to be able to respond to a stimulus (a flying ball, a punch, etc.) with blinding speed and without conscious thought.

A phenomenon that Hick’s Law does not account for is habituation. If there is one option, A, in a menu that is chosen many times in a row, the user can not help but develop an automatic response to select A after clicking on the menu.

Conclusions
Hick’s law is best stated as: “Reaction-time = a-constant-delay + O(entropy of possible responses)”.
Hick’s law has been totally misunderstood, and used to draw some very strange conclusions.

Leopard Unready

October 24, 2007

EDITED TO ADD: IMLocation now runs on Leopard.

I thought this situation might receive more attention if you, the Mac OS X software-buying public, were aware of it. The third-party software that you’re paying for, depending on, and hoping to run on Leopard cannot be tested on the final release build until we run down to the Apple Store and pick it up, hopefully at least a few minutes before you do.

Our only two other options for quality assurance of our software on the final Leopard build are:

1. Cross fingers and hope nothing that affects us has changed between the most recent seed and the one that went to production.

2. Something illegal.

One of the most negative things about Apple is their secrecy. But they get a lot of mileage out of the buzz it generates, though it’s difficult to quantify exactly how much. They wouldn’t be the Apple we know and love without it. They operate on a strictly need-to-know basis, even internally. I wouldn’t be surprised if GM builds are withheld from some software developers inside Apple until after they’ve been publicly released.

It makes an interesting contrast with Microsoft, which is a much more open company. As an intern, I got to actually play with a prototype of Surface (then known as PlayTable) a couple of years before it was announced. At Apple, many interns can’t tell their friends what they’re working on until it’s out.

I believe the indie developer situation is more a symptom of Apple’s secretive culture than a case of them thumbing their noses at the little guy. But I agree with Steven Frank that the public should understand the implications of Apple’s decision here. Leaks still happen, so I think Apple is making a mistake by not letting indie developers provide the quality assurance that their (and Apple’s) customers deserve. It would make upgrading to Leopard a better experience for Apple’s customers.

So is IMLocation ready for Leopard? *shrug* maybe. The truth is, I haven’t tested on any seed of Leopard, because the Student ADC membership does not give me access to it, and I don’t have the cash to shell out for a “premier” membership.

My educated guess is that iChat support may break, because it’s experiencing a major revision. There is also a chance that everything will break if Apple has changed the private libraries I use to find the MAC address of wireless access points. But support for Adium should be unaffected.

We’ll know for sure in a few days.
EDITED TO ADD 2007-10-26: It turned out that IMLocation v0.27 was NOT Leopard ready. An issue with the new version of iChat caused a background process to silently deadlock, meaning nothing worked. So I was right about iChat breaking, but wrong about its scope, and right that everything could break, but wrong about the cause.

When Computers Kill: Radiation Overdose

October 22, 2007

I was watching BBC News on EyeTV this morning, and caught the tail end of a horrific story about hundreds of French patients who received crippling, and sometimes fatal, overdoses of radiation.

Earlier this year, a major scandal erupted in France when it was discovered that between 1989 and 2006, two radiotherapy units had accidentally given hundreds of cancer patients too high a dose of radiation. Five patients have since died and many others have been left in crippling pain.

My first thought was how eerily similar this is to Therac-25. But this incident could be worse once all the facts are out. Five people are already dead, and hundreds affected, according to the BBC.

A major investigation is now under way to try to establish how so many mistakes could have been made…. Incredibly, one of the lines of inquiry will be why the instruction booklets that accompanied the equipment were in English when the hospital staff of course were French.

This investigation is very much worth following. A lot can be learned about designing safe and usable systems from this disaster. Cynically, I worry that the massive liability involved will lead to politics and cover-ups instead of a thorough investigation. Be prepared to read between the lines.

… staff then explained to newcomers how to operate the programmes, who later explained to subsequent trainees, and so on. To add to the confusion, the procedures were all in English.

Eventually, an incorrect default setting was made that resulted in a number of patients being given overdoses of between 20% and 40%.


Poor training is an issue, sure. But the real question I have here is, how could the software be designed so that it could possibly be rendered lethal by default?

According to the AP “In both the Epinal and Lyon incidents, hospitals blamed the problems on human error.” I agree, but I think the humans at fault were the designers, not the operators. “Human error” is usually a euphemism for “operator error”, or “customer error”, or “blame them”. Disasters are a chain of failures; operators are only one link in that chain. The system as implemented in the hospitals included hardware, software, training, and standard operating procedures. From all accounts, it looks like there were systematic errors, over a period of years — about the strongest indicator you can have that the system was deeply flawed.

What Therac-25 was to engineering, this could be to interaction design. I think there were probably engineering mistakes made, but if the instructions weren’t even in the right language, chances are usability was a bigger factor. Actually, the similarities to Therac-25 still bother me. It’s a bit of history that should not be repeated.

I’ve said it before and I’ll say it again: these incidents are worth following. I just wish more hard facts were public (and in English as well; I can’t read French).