This article has moved to a new location.
Archive for December 2007
EDITED TO ADD: I could reproduce this bug in a large project, but not isolate it in a smaller one. It is much more likely that the bug is in my code than in NSWorkspace. I’m still not 100% certain that this issue wasn’t my fault in some way I don’t understand. But whatever caused the bug, the work-around I describe here has been working for me. Please let me know if you have this same issue, or any insight into what I could have done to cause it.
I need to launch a background application from inside the Application Support directory. The OS is Mac OS X 10.5.1 build 9B18.
[[NSWorkspace sharedWorkspace] openFile:path];
Here path is the correct path to the application, in my case “/Users/user/Library/Application Support/IMLocation/IMLocationHelper.app”. The call works about 3 out of 4 times, but intermittently fails. More rarely, it also reports success without actually launching IMLocationHelper.app.
When openFile: fails explicitly, it prints “LSOpenFromURLSpec() returned -600 for application (null) path /Users/user/Library/Application Support/IMLocation/IMLocationHelper.app.” to stderr.
Error -600 is
procNotFound /*no eligible process with specified descriptor*/.
The path is correct and the application exists there. I verified this with:
assert([[NSFileManager defaultManager] fileExistsAtPath:path]);
Calling openFile: again immediately after a failure often works. The loop
while(![[NSWorkspace sharedWorkspace] openFile:path]);
will terminate, meaning it reported success at some point, but the application will not always actually have been launched at that point.
(EDITED TO ADD: At first I thought that NSWorkspace was probably opening IMLocationHelper, but not blocking until it had finished launching. But sleeping, to give IMLocationHelper time to finish launching, did not solve the problem.)
performSelectorOnMainThread did not solve anything either.
I verify that IMLocationHelper has been launched with:
assert([[ShellTask executeShellCommandSynchronously:@"ps -axww | grep IMLocationHelper | grep -v grep"] length] > 0);
[[NSTask launchedTaskWithLaunchPath:@"/usr/bin/open" arguments:[NSArray arrayWithObject:path]] waitUntilExit];
This uses the open command to launch the application. I have not been able to make this approach fail.
Actually I do have one big issue with the article:
If there’s one behavior of your application that you should focus on eliminating, it’s the behavior of crashing. Above all other “nuisances,” this is the one that is absolutely unacceptable.
But preserving someone’s data is more important than not crashing. Having to rewrite your paper because your PC devoured it is worse than a crash. Crashing may be the worst “nuisance”, but there are more important bugs to squash first. However, that is a topic for another time — we all agree that crashes are a problem that should be fixed.
Although Daniel shows how to synthesize debugging symbols from hex addresses, I think it’s worth considering leaving debugging symbols in your shipping app.
The reasons for [building applications without debugging symbols] are mostly to keep the shipping application as small as possible, but there are also some good reasons to hide the symbols if they reveal too much about your business strategy.
I can’t say anything about your business strategies, but removing information that can help you diagnose problems “in the field” seems like a very bad trade-off for slightly smaller files.
Hard-drives cost about $0.30 per gigabyte (GB), and the price is still falling fast*. Because the GB is the unit hard-disks are sold by, I am going to use it instead of MB or KB; I think it puts file-sizes into the right perspective.
Today’s applications are “big” for a very good reason. That article says it better than I can, but the gist is that megabytes are cheaper than air, and bigger programs can do more, making them more useful (the cost of a GB of storage space has fallen over 20-fold since the article was written, by the way).
The largest program I use every day that was built with debugging symbols is EyeTV. It weighs in under 0.11 GB, and I don’t consider that “bloated”, because I get good value for my 3 cents of disk space. Stripping debugging symbols with
strip makes it 0.0076 GB smaller. That translates into $0.002 worth of hard disk, which could store 13.7 seconds of TV. And that is insignificant. A few thousandths of a GB make little difference, and that’s all stripping debugging symbols will get you.
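To sanity-check the arithmetic above, here is a quick back-of-the-envelope calculation using the article’s own figures (the sizes and price are approximations, not fresh measurements):

```python
# Back-of-the-envelope check of the figures above.
PRICE_PER_GB = 0.30        # dollars per GB, the article's 2007 figure
APP_SIZE_GB = 0.11         # EyeTV, as measured above
STRIP_SAVINGS_GB = 0.0076  # size reduction reported for `strip`

# Dollar value of the disk space the debugging symbols occupy.
savings_dollars = STRIP_SAVINGS_GB * PRICE_PER_GB
print(f"stripping saves ${savings_dollars:.4f} of disk")  # ~ $0.002

# What fraction of the whole application are the debugging symbols?
print(f"symbols are {STRIP_SAVINGS_GB / APP_SIZE_GB:.1%} of the app")  # ~7%
```

The point stands either way: the savings are thousandths of a dollar per install.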
Of course, this is all academic if no one ever sees the crash logs. Unfortunately, as developers know, that’s the current crappy state of affairs. Crash reports are sent to Apple, and only Apple. The developers who wrote the program — the ones who could best fix the problem, and who desperately want to know about it — are completely out of the loop.
If Apple passed crash logs on to developers, everyone would win. Developers would be able to squash more bugs in less time. Users would have a better, more productive, and more bug-free experience. Apple could sell those improvements. Microsoft already does this, and it seems to work well for them. Most people are unaware of this SNAFU, and probably think that reporting the crash to Apple gets the information to the right people. I don’t know if educating people about the issue would light a fire under Apple, but it might.
If enough people start using flash memory over current magnetic-platter hard drives, then the price-per-GB ratio could change, because flash is currently about 100x more expensive per GB. But the trend of the current storage medium’s price falling exponentially will continue.
But by the time flash-based computers become popular, their cost-per-GB will probably be as good as, or better than, today’s full-sized hard drives. Tiny hard drives using conventional magnetic platters, like the ones in the iPod, are also a compelling alternative to flash.
Less than half the population of the world has the manual dexterity to wiggle their fingers at the speed of 50 words per minute or better.
–Dr. Alan Lloyd, seminal typing instructor.
Computer professionals often seem to have unrealistically high expectations of what the “average” typist can do. For example, according to this Wikipedia article (as of 2007-12-04)
An average typist reaches 50 to 70 wpm, while some positions can require 80 to 95 (usually the minimum required for dispatch positions and other typing jobs), and some advanced typists work at speeds above 120.
But as we shall see, 70 WPM is an absurdly high “average”. 120 WPM means 12 keystrokes a second (at six keystrokes per word: five characters plus a space), or a split-time of 83 msec between keypresses. That borders on the physically impossible.
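The conversion behind that split-time figure is simple; here it is worked out, assuming the standard typing convention of six keystrokes per “word” (five characters plus a space):

```python
# Convert a words-per-minute score into the split-time between keypresses.
# Assumes six keystrokes per "word" (five characters plus a space) -- the
# usual convention in typing tests; with five, the times come out larger.
def split_time_ms(wpm, strokes_per_word=6):
    strokes_per_sec = wpm * strokes_per_word / 60
    return 1000 / strokes_per_sec

print(f"{split_time_ms(120):.0f} ms")  # 120 WPM -> 83 ms between keypresses
print(f"{split_time_ms(40):.0f} ms")   # the 40 WPM average -> 250 ms
```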
As Teresia R. Ostrach, President of Five Star Staffing, Inc. says,
“After 27 years in the Staffing Industry, I’ve encountered just about every misconception regarding the performance of office workers. The most frustrating of these is the belief in what constitutes ‘average’ typing scores.
For years I tried to explain that 65 WPM is a lot faster than average, but I had no proof. After all, everybody knows what an average typist is, right? Somebody who types between 50 and 60 WPM? Well, isn’t it? Well, NO, it’s not!”
Here are her findings:
Mean = 40 WPM = 240 characters/minute
Median = 38 WPM = 228 characters/minute
Standard Deviation = 16.7 WPM = 100 characters/minute
Notice that out of the 3,475 applicants, not a single one could manage 120 WPM. And only the top 5% of applicants could manage 70 WPM or higher.
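As a rough sketch of just how rare the Wikipedia figures would be under her numbers — assuming, purely for illustration, that typing speeds follow a normal distribution (the real distribution is likely skewed):

```python
from statistics import NormalDist

# Ostrach's findings: mean 40 WPM, standard deviation 16.7 WPM.
speeds = NormalDist(mu=40, sigma=16.7)

# Under a (simplifying) normal assumption, what fraction of typists
# would meet or beat each Wikipedia figure?
for wpm in (70, 120):
    frac_faster = 1 - speeds.cdf(wpm)
    print(f"{wpm} WPM: top {frac_faster:.4%} of typists")
```

70 WPM comes out around the top 4% — consistent with her data — while 120 WPM is so many standard deviations out that essentially no applicant should reach it, which is exactly what she found.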
Typing Speed: How Fast is Average
4,000 typing scores statistically analyzed and interpreted
It’s an excellent paper. Short and accessible, yet relevant, authoritative, and eye-opening. Well worth the read. (Unfortunately it’s laid out poorly in the linked PDF. If someone has a more readable source I’d love to link to it.)
But what’s more interesting to me is this chart:
Which shows an average error rate of about 6% per word. Put another way, more than 1 out of every 17 words has a typo in it, which is kind of a big deal.
The error rate is probably artificially high, because subjects were taking the test under a lot of pressure — it determined whether they got a job or not! But even the best group of over-qualified typists still had a 4% error rate, or a fumble on 1 out of every 25 words. And that’s significant.
The implications of a 4%-6% error rate are enormous. If people are making that many errors, then good spellcheckers and autocorrectors are essential. If one out of every 17-25 words is mistyped, then long command lines seem like a very bad idea, because something like one out of every 20 commands would be in error. Systems should be able to gracefully recover from bad input, because they will be inundated with it.
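The “one in N words” figures follow directly from the error rates, and a simple independence assumption (a simplification, since errors likely cluster) shows why long command lines fare so badly:

```python
# A per-word error rate of p means, on average, one typo every 1/p words.
for p in (0.04, 0.06):
    print(f"{p:.0%} error rate -> a typo every {1/p:.0f} words")

# Chance that an n-word command line is typed with no errors at all,
# assuming each word's error is independent (a simplification):
def clean_probability(error_rate, n_words):
    return (1 - error_rate) ** n_words

print(f"{clean_probability(0.05, 5):.0%}")  # a 5-word command at 5%: 77%
```

So even a modest five-word command fails outright nearly a quarter of the time, which is why graceful recovery from bad input matters so much.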
It looks like the average typist is much slower, and makes more mistakes, than “folk-wisdom” leads us to believe.