Nickeled and Millionaired
I started reading two books in Barnes & Noble today, both of which I plan to read in full and have heard fairly-to-very good things about. And yet within ten minutes of starting each one, I noticed it making claims that struck me as a mixture of odd, misleading, and plain wrong.
On the third page of the introduction to Nickel and Dimed: On (Not) Getting By in America, Barbara Ehrenreich cites the statistic that, according to the National Coalition for the Homeless, "in 1998...it took, on average nationwide, an hourly wage of $8.89 to afford a one-bedroom apartment, and the Preamble Center for Public Policy was estimating that the odds against a typical welfare recipient's [sic] landing a job at such a 'living wage' were about 97 to 1." Not having done any studies on housing affordability in 1998, I obviously won't say that this is wrong. But it seems designed to mislead, since what it measures isn't really relevant. This is probably well known, but briefly: Ehrenreich's project was to go out and get a series of jobs at around minimum wage and write up the experience as investigative journalism. Part of the inspiration was the changes then under way in the welfare system, which would leave many former recipients to go it alone. But the statistics above won't help figure out how they would fare. The problem is using a national average to measure people who will be systematically and predictably not average. The question isn't whether people coming off welfare can afford the average nationwide one-bedroom apartment, but whether they can afford apartments in the areas where low-income people actually live. Since those aren't average areas, using the statistics this way disguises more than it illuminates.
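To make the averaging problem concrete, here's a toy sketch in Python with entirely invented numbers (none of these figures come from the studies Ehrenreich cites): a nationwide average "housing wage" can sit well above the figure that matters in the areas where welfare recipients actually live.

```python
# All numbers below are made up, purely to illustrate the statistical
# point: a national average mixes high-cost and low-cost housing
# markets, while former welfare recipients cluster in the cheaper ones.
areas = {
    # area: (share of recipients living there, hourly wage needed for
    #        a one-bedroom apartment there) -- hypothetical figures
    "high-cost metro": (0.2, 14.00),
    "mid-cost city":   (0.5, 8.00),
    "low-cost town":   (0.3, 6.00),
}

# Simple (unweighted) average across areas -- the "nationwide" figure:
simple_avg = sum(wage for _, wage in areas.values()) / len(areas)

# Average weighted by where the recipients in this toy model live:
weighted = sum(share * wage for share, wage in areas.values())

print(f"unweighted national average: ${simple_avg:.2f}")
print(f"recipient-weighted average:  ${weighted:.2f}")
```

With these made-up shares, the unweighted average comes out noticeably higher than the recipient-weighted one, which is the sense in which the national figure can overstate the hurdle facing the specific population in question.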
The other book is James Surowiecki's The Wisdom of Crowds. It's about how, by aggregating the information held by a group, you can, for certain types of questions, get more intelligent answers than you would from the smartest members of the group. He has lots of good anecdotal evidence for this, and since I've only read the introduction and the start of the first chapter, I'm sure there's plenty more evidence to come. But one piece of evidence he uses is wrong. On the third and fourth pages of the first chapter, he discusses the show "Who Wants to Be a Millionaire?" and the different "lifelines" which were available on the show. He describes the "phone a friend" lifeline as calling "a person whom, before the show, she had singled out as one of the smartest she knew, and ask[ing] him or her for the answer," and the audience in the "ask the audience" lifeline as "random crowds of people with nothing better to do on a weekday afternoon than sit in a TV studio." If you were a regular viewer of the show and understand the thesis of his book, you should already see where this is going and why it's wrong. He goes on to give the percentage of questions the friends got right (65%) and the percentage the audience got right (91%). Surowiecki then graciously admits that these results "wouldn't stand up to scientific scrutiny. We don't know how smart the experts were, so we don't know how impressive outperforming them was. And since the experts and the audience didn't always answer the same questions, it's possible, though not likely, that the audiences were asked easier questions." There are a couple of problems with this last part.
First, while it is true that something that happened almost never also happened "not always," it's odd to say it that way. Contestants could use two lifelines on one question, but using "ask the audience" and "phone a friend" together was exceedingly rare. I don't have access to the actual numbers, but trust me: I watched the show a lot. Second, he claims it is not likely that the friends' questions were harder than the audience's questions, when I think it is almost certainly the case. Based on my extensive viewing of the show, I would wager a large sum that "phone a friend" was systematically used later in the questioning than "ask the audience." Since the questions got more difficult as the show progressed, this means that the experts were getting harder questions. That was also the strategy I always advocated for the contestants I watched, since the early questions were more likely to be general knowledge or pop culture, while the later ones tended toward specific sub-fields. Since the experts really were answering harder questions the vast majority of the time, the difference in rates of success proves nothing.
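The confound is easy to demonstrate with a quick simulation. In this toy model (every number here is invented, not drawn from the show's real statistics), the friend and the audience are equally skilled, but the friend is asked only later, harder questions, which alone is enough to produce a large gap in success rates.

```python
import random

random.seed(0)

# Toy model with invented numbers: question difficulty runs from 1
# (easy) to 15 (hard), and the chance of a correct answer falls as
# difficulty rises. Both helpers share the exact same skill curve.
def p_correct(difficulty):
    return 0.95 - 0.05 * difficulty  # 0.90 at question 1, 0.20 at 15

# Hypothetical usage pattern: "ask the audience" is spent on the
# earlier questions, "phone a friend" is saved for the later ones.
audience_questions = [random.randint(1, 7) for _ in range(10_000)]
friend_questions = [random.randint(8, 15) for _ in range(10_000)]

def success_rate(questions):
    hits = sum(random.random() < p_correct(q) for q in questions)
    return hits / len(questions)

aud = success_rate(audience_questions)
fri = success_rate(friend_questions)
print(f"audience success rate: {aud:.0%}")
print(f"friend success rate:   {fri:.0%}")
```

Despite identical skill, the audience comes out far ahead simply because it faces easier questions, which is exactly why the 91%-versus-65% comparison, on its own, proves nothing about crowds versus experts.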