Better than Syllogisms
According to the very latest research:
Natural Sciences and Engineering Research Council grantee Peter Austin and three other researchers at the Institute for Clinical Evaluative Sciences in Toronto have just completed a survey of hospital visits in Ontario, showing that, compared to people born under other astrological signs, Virgos have an increased risk of vomiting during pregnancy, Pisces have an increased risk of heart failure, and Libras have an increased risk of fracturing their pelvises.
In fact, each of the 12 astrological signs had at least two medical disorders associated with them, thus placing people born under a given sign at increased risk compared to those born under different signs.
The study, which used data from 10,000,000 Ontario residents in 2000, was conducted with tongue firmly in cheek.
“Replace astrological signs with another characteristic such as gender or age, and immediately your mind starts to form explanations for the observed associations,” says Austin. “Then we leap to conclusions, constructing reasons for why we saw the results we did. We did this study to prove a larger point – the more we look for patterns, the more likely we are to find them, particularly when we don’t begin with a particular question.”
I'm reminded of the following quotation:
One horse-laugh is worth ten thousand syllogisms. It is not only more effective; it is also vastly more intelligent.
Peter Austin could have discussed the problems of data mining or retrospective studies and been ignored. Instead he came up with a memorable horse-laugh that made the same point.
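To make the point concrete, here is a minimal sketch of the multiple-comparisons effect behind Austin's joke. The number of candidate diagnoses and the significance threshold below are assumptions chosen for illustration, not details of the actual study:

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed numbers for illustration (not the actual study design):
n_signs = 12          # astrological signs
n_disorders = 200     # candidate diagnoses tested against each sign
alpha = 0.05          # conventional significance threshold

# If every null hypothesis is true, the expected number of false positives
# is simply (number of tests) * alpha.
print("expected by chance:", n_signs * n_disorders * alpha)      # 120.0

# Simulation: under the null, p-values are (approximately) uniform on [0, 1].
p_values = rng.random((n_signs, n_disorders))
print("simulated false positives:", int((p_values < alpha).sum()))
```

With a couple of thousand true-null tests, roughly a hundred come out "significant" at the 5% level by chance alone, which is more than enough to hand every sign a couple of disorders.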
Addendum: I just remembered that I once tried making a similar point in response to a Usenet post from Phideaux:
I'm taking a very informal look at a strange little phenomenon which I don't want to describe in plain terms because I really don't believe that some of the apparent parts are truly related, and I don't want to be lumped with the crackpots.
In general terms:
Event A occurs in a definite cycle. (As reliable as sunrise.)
Event B can only occur when Event A happens, and the chance of it happening has been worked out to slightly more than 2.08% (this is not observation, but calculated. My personal observation puts it slightly less, but that's using less than a four year timeline).
Event C is totally unpredictable and the conditions surrounding it are beyond the scope of study, so there is no hope of reproducing it under controlled conditions. It has happened >.214% of the time. There are no near-misses where C could be said to have happened: it either did or it didn't, and always between one A and the next.
However, and here comes the sticky bit, every time C has happened, B has also occurred. (C can't be triggering B unless you believe in astrology, telekinesis, little green men, or other nonsense.)
Now I would like to continue to observe this and make no definite conclusions until I've got hundreds of examples of concurrence, but that would take thousands of years.
Now here's my question:
At what point do you start to believe that C is a reliable predictor of B when there is no known science that could possibly link the two?
My response:
Let's consider how many events you could find possible correlations between. If there are 1000 possible events resembling B (e.g., a hurricane hitting North Carolina or a large uptick in the stock market) and 100 possible events resembling C (e.g., three typos in The New York Times or your cat throwing up), and you look for all possible correlations between those types of events, that's 100,000 pairs to check, so you're likely to find at least one correlation with odds of 100,000 to 1 against.
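A quick way to see this is to simulate that setup. The counts and per-period probabilities below are assumptions chosen only to roughly match the numbers above; the point is that with 100,000 candidate pairs, some pairs will show a perfect "every time C happened, B happened too" record purely by chance:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed numbers, loosely matching the text: 1000 candidate B-type events,
# 100 candidate C-type events, all independent of one another.
n_trials = 500            # observation periods (assumed)
n_b, n_c = 1000, 100      # candidate event types
p_b, p_c = 0.02, 0.002    # per-period probabilities (assumed)

B = rng.random((n_trials, n_b)) < p_b   # B[t, j] = did B-event j occur in period t?
C = rng.random((n_trials, n_c)) < p_c   # C[t, k] = did C-event k occur in period t?

# Count (B, C) pairs where B occurred in *every* period in which C occurred.
perfect_pairs = 0
for k in range(n_c):
    periods = C[:, k]                   # periods in which C-event k occurred
    if periods.any():
        perfect_pairs += int(B[periods, :].all(axis=0).sum())

print(f"{perfect_pairs} of {n_b * n_c} pairs look like 'C always predicts B'")
```

Most of the "perfect" pairs come from C-events that only happened once or twice, which is exactly the situation in the question: a handful of concurrences is easy to come by when you are implicitly scanning a huge number of candidate pairs.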
I have read that critics of the "efficient market hypothesis" frequently find non-random correlations in stock prices. For some reason, most of those correlations stop working after a while.
Addendum II: Medgadget points out the horrible possibility that some people might take this seriously.