
Saturday, January 12, 2013

Algebra is a liberal plot



Well, of course algebra is a liberal agenda! It's an Arabic word, after all. No doubt it's just another example of creeping Sharia law, huh?

Seriously, I'd been hoping that The Young Turks would cover this, because I thought the whole idea was just hilarious! Not to mention as batshit crazy as Fox 'News' ever gets (and that's pretty crazy).

Don't any of these idiots know that the distributive property is a real thing in algebra? That illustration, on page 78 of the worksheet, was just a clever play on the phrase - an attempt to make math a little more interesting to children.

"Liberal agenda"? Well, yes, if you consider math to be a liberal agenda, if you consider knowledge to be a liberal agenda, if you consider reality to be a liberal agenda.

Of course, these right-wingers do think that. Here's a description of math in a Christian textbook funded by the taxpayers of Louisiana, thanks to Bobby Jindal's voucher scheme:
Unlike the "modern math" theorists, who believe that mathematics is a creation of man and thus arbitrary and relative, A Beka Book teaches that the laws of mathematics are a creation of God and thus absolute. ...

A Beka Book provides attractive, legible, and workable traditional mathematics texts that are not burdened with modern theories such as set theory.

Yes, because we all know that set theory is just a tool of the Devil, right? And if you think their math is loony, you should see their history and their 'science'! It's bad enough that these things are taught to children at all, but to use public money? In America?

Anyway, getting back to that Fox program, this part (as TYT also points out) was even crazier, if that's possible:
[Fox 'News' host Eric] Bolling advised parents to read their children’s history books because his son’s textbook addressed the Iraq war “and they were very, very liberally biased, saying George Bush went in there because he heard there were weapons of mass destruction and they were never found. It was a very liberal bias to the history books.”

Say what? I thought that was being generous to Republicans. The liberal take on that is probably that Bush knew there weren't weapons of mass destruction in Iraq, but just used it as an excuse to invade the country (either for oil, domestic political benefit, or bizarre neocon theories about bringing Christianity to the Middle East, take your pick).

The textbook's explanation, on the other hand, gives President Bush the benefit of the doubt by suggesting that he was simply mistaken. If they think that's an example of "very liberal bias," what in the world do they think the truth is? That Satan hid the WMDs by covering them with government-issued condoms, once we invaded? That gay people disguised them with glitter and smuggled them out of Iraq during a gay pride parade? The mind boggles, doesn't it?

I know I keep saying this, but these people just get crazier and crazier. Is there no limit to their insanity?

Thursday, March 18, 2010

I think my head is going to explode

I'll be honest. I don't understand statistics. I don't think most people - even most scientists - understand statistics, but I know that I don't. So I had a hard time wrapping my head around this article in Science News.

Frankly, I wouldn't even inflict it on you, if it weren't for the "boxes" at the end of the article, where they give some examples of how misleading statistics can be. For example:

One set of such studies, for instance, found that with the antidepressant Paxil, trials recorded more than twice the rate of suicidal incidents for participants given the drug compared with those given the placebo. For another antidepressant, Prozac, trials found fewer suicidal incidents with the drug than with the placebo. So it appeared that Paxil might be more dangerous than Prozac.

But actually, the rate of suicidal incidents was higher with Prozac than with Paxil. The apparent safety advantage of Prozac was due not to the behavior of kids on the drug, but to kids on placebo — in the Paxil trials, fewer kids on placebo reported incidents than those on placebo in the Prozac trials. So the original evidence for showing a possible danger signal from Paxil but not from Prozac was based on data from people in two placebo groups, none of whom received either drug.

Get that? If you compared the "statistical significance" of the two studies, you might come to exactly the opposite conclusion from what the evidence showed. One study just happened to have more incidents among kids in the control group - the ones who didn't get the drug at all - which made its drug look better by comparison.
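To see how that works, here's a tiny sketch in Python. The rates below are made-up numbers chosen only to reproduce the pattern the article describes - they are not the real trial data:

```python
# Hypothetical suicidal-incident rates (percent), invented for illustration.
# They match the article's pattern, NOT the actual trial results.
paxil_drug, paxil_placebo = 4.0, 1.8
prozac_drug, prozac_placebo = 4.5, 5.0

# Within each trial, each drug is compared to its OWN placebo group:
print(paxil_drug / paxil_placebo)    # ~2.2 -> Paxil looks dangerous (more than double)
print(prozac_drug / prozac_placebo)  # 0.9  -> Prozac looks protective (fewer than placebo)

# But comparing the two drug groups to each other directly:
print(prozac_drug > paxil_drug)      # True -> Prozac's rate is actually the higher one
```

The flip comes entirely from the placebo groups: Paxil was measured against an unusually quiet placebo group, Prozac against an unusually noisy one.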

Or try this:

For a simplified example, consider the use of drug tests to detect cheaters in sports. Suppose the test for steroid use among baseball players is 95 percent accurate — that is, it correctly identifies actual steroid users 95 percent of the time, and misidentifies non-users as users 5 percent of the time.

Suppose an anonymous player tests positive. What is the probability that he really is using steroids? Since the test really is accurate 95 percent of the time, the naïve answer would be that probability of guilt is 95 percent. But a Bayesian knows that such a conclusion cannot be drawn from the test alone. You would need to know some additional facts not included in this evidence. In this case, you need to know how many baseball players use steroids to begin with — that would be what a Bayesian would call the prior probability.

Now suppose, based on previous testing, that experts have established that about 5 percent of professional baseball players use steroids. Now suppose you test 400 players. How many would test positive?

• Out of the 400 players, 20 are users (5 percent) and 380 are not users.

• Of the 20 users, 19 (95 percent) would be identified correctly as users.

• Of the 380 nonusers, 19 (5 percent) would incorrectly be indicated as users.

So if you tested 400 players, 38 would test positive. Of those, 19 would be guilty users and 19 would be innocent nonusers. So if any single player’s test is positive, the chances that he really is a user are 50 percent, since an equal number of users and nonusers test positive.
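The arithmetic above is easy to check for yourself. Here's the same calculation in Python, using exactly the numbers from the article:

```python
# Reproduce the article's steroid-testing example.
players = 400
prevalence = 0.05        # 5% of players are users (the Bayesian prior)
sensitivity = 0.95       # actual users correctly flagged 95% of the time
false_positive_rate = 0.05  # non-users wrongly flagged 5% of the time

users = players * prevalence                      # 20 users
nonusers = players - users                        # 380 non-users
true_positives = users * sensitivity              # 19 correctly flagged
false_positives = nonusers * false_positive_rate  # 19 wrongly flagged

total_positives = true_positives + false_positives  # 38 positive tests
p_user_given_positive = true_positives / total_positives
print(p_user_given_positive)  # 0.5
```

Half the positive tests come from innocent players, so a positive result means only a 50-50 chance of actual steroid use - even though the test itself is "95 percent accurate." That's the effect of the prior: with only 5 percent of players using, the small error rate applied to the large non-user pool produces as many false alarms as true hits.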

Wild, huh? So what does all this mean? If you want to read the whole article and explain it to me, in very simple language, feel free. But I'll tell you what it doesn't mean. It doesn't mean that we can't trust anything scientists say. It doesn't mean that we're not learning more all the time - in every field of study. It doesn't mean that our gut is just as good at determining the truth as scientific research. Not at all.

But I would take most studies that rely on "statistical significance" - and especially meta-analyses - with a grain of salt. I'd be cautious about concluding anything based on research that shows only a slight, statistical effect. (I'd be even more cautious about accepting the accuracy of research as reported in the popular press, since the media have needs - and problems - of their own.) And certainly, I'd want multiple independent studies backing up any preliminary findings.

None of this is easy, and it's particularly difficult when we're talking about human health. We can't do research on human beings without being very careful not to cause harm. I would never want to change that, but it does make determining the truth more difficult than it might otherwise be. Statistics is a tool, but it's a tool that can easily be misused - and even more easily be misinterpreted. Lying with statistics is easy, even if it isn't always deliberate.