
10-Year Cell Phone / Cancer Study Is Inconclusive 248

Posted by kdawson
from the definite-maybe dept.
crimeandpunishment writes "A major international (retrospective) study into cell phones and cancer, which took 10 years and surveyed almost 13,000 people, is finally complete — and it's inconclusive. The lead researcher said, 'There are indications of a possible increase. We're not sure that it is correct. It could be due to bias, but the indications are sufficiently strong ... to be concerned.' The study, conducted by the World Health Organization and partially funded by the cellphone industry, looked at the possible link between cell phone use and two types of brain cancer. It will be published this week."
  • Re:Limited study (Score:5, Informative)

    by goose-incarnated (1145029) <lelanthran.gmail@com> on Sunday May 16, 2010 @05:29PM (#32230208) Homepage Journal
    To get statistical significance, you don't need to sample the entire population. Beyond a certain size, for a given confidence level, additional samples buy you very little.
  • Re:"Survey"? (Score:1, Informative)

    by Anonymous Coward on Sunday May 16, 2010 @05:51PM (#32230380)

    From the article:

    It was also based on people searching their memories to estimate how much time they spent on their cell phones, a method that can throw up inaccuracies.
    It analysed data from interviews with 2,708 people with a type of brain cancer called glioma and 2,409 with another type called meningioma, plus around 7,500 people with no cancer.

    They only asked people how much time they spent on their cell phones; the cancer status itself was based on hard data (medical diagnosis).

  • by Kohath (38547) on Sunday May 16, 2010 @05:53PM (#32230400)

    There's an even bigger difference between inconclusive and a strong positive result. If cell phones caused a huge number of cancers, studies would not be inconclusive.

  • Re:Limited study (Score:5, Informative)

    by SoVeryTired (967875) on Sunday May 16, 2010 @05:56PM (#32230420)

    The principle is correct, but you're failing to take into account the probability of the respective events. Given that winning 60% of the vote is considered a landslide, you can think of asking someone whether they're voting Republican or Democrat as a coin flip with a small bias one way or the other. Because the race is so close, a few extra Republicans or Democrats in your sample won't produce a huge error in your estimate.

    On the other hand, a brain tumor can be thought of as a rare event. If the true incidence rate of brain cancer is five occurrences per thousand people over ten years, and your sample of 1,000 people turns up six cases, your estimate is off by 20%. Because the event is rare, a small variation in the counts produces a large relative error. So if you want to accurately assess the rate of cancer, you need a much bigger sample size.
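    The back-of-the-envelope above can be checked with the normal approximation to the binomial (a sketch; the 5-per-1,000 incidence and the sample sizes are the hypothetical figures from the comment, not the study's):

    ```python
    import math

    def rel_error(p, n, z=1.96):
        """Approximate 95% relative margin of error for an estimated
        proportion p with sample size n (normal approximation)."""
        se = math.sqrt(p * (1 - p) / n)
        return z * se / p

    # Rare event: true incidence of 5 per 1,000 over ten years.
    p = 0.005
    print(round(rel_error(p, 1_000), 2))    # relative error near 90% at n = 1,000
    print(round(rel_error(p, 100_000), 2))  # under 10% at n = 100,000
    ```

    With a rare event, the relative error only shrinks with the square root of the sample size, which is why a rare-disease study needs far more subjects than an election poll.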

  • Re:Limited study (Score:4, Informative)

    by mmarlett (520340) on Sunday May 16, 2010 @05:57PM (#32230438)

    It really seems silly when, in America at least, age-adjusted rates of brain cancer have fallen or held steady since the 1990s. From the National Cancer Institute [cancer.gov]:

    From 1990 to 2002, the overall age-adjusted incidence rates for brain cancer decreased slightly; from 7.0 cases to 6.4 cases for every 100,000 persons in the United States. The mortality rate from 1990 to 2002 also decreased slightly; from 4.9 deaths to 4.4 for every 100,000 persons in the United States. The incidence and mortality rates for cancers that originate in the brain and central nervous system have remained relatively unchanged in the last decade.

    It would seem to me that falling cancer rates are no reason for assuming that widespread cellphone use has been a health concern.

  • Re:Limited study (Score:4, Informative)

    by goose-incarnated (1145029) <lelanthran.gmail@com> on Sunday May 16, 2010 @06:00PM (#32230458) Homepage Journal

    "If U.S. political polls select a sample size of between a few hundred and a thousand out of 300 million with only 3%..."

    I'm not so sure those percentages are accurate.

    They look accurate to me. From my undergrad stats classes, I seem to recall that to get a 95% confidence level out of a population of 10k, one needed a sample of around 850. For populations of 1000k, the sample size only went up by a few tens (perhaps to 900). Sampling is not linear, and the required size flattens out the higher you go - IIRC (and I think I do), there is very little difference between the sample size for a population of 100k and for twenty times that number.
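    The "sample size flattens out with population" claim can be illustrated with the standard finite-population correction (a sketch; the 5% margin of error and worst-case p = 0.5 are my assumptions, which is why the numbers land near 385 rather than the half-remembered 850):

    ```python
    import math

    def required_sample(N, moe=0.05, z=1.96, p=0.5):
        """Sample size for margin of error `moe` at confidence z,
        worst case p = 0.5, with the finite-population correction."""
        n0 = z**2 * p * (1 - p) / moe**2          # infinite-population size (~385)
        return math.ceil(n0 / (1 + (n0 - 1) / N))

    for N in (10_000, 100_000, 1_000_000, 300_000_000):
        print(N, required_sample(N))
    ```

    The required sample barely moves between a town of 10,000 and a country of 300 million, which is the non-linearity the comment is describing.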

  • Re:Limited study (Score:3, Informative)

    by thrawn_aj (1073100) on Sunday May 16, 2010 @06:25PM (#32230676)
    While people in large numbers are essentially predictable (and therefore boring, which is why statistics - for the most part - works), those theorems are strictly valid only for true random variables. As GP pointed out, the differences between different polls sometimes lie far outside the error bounds set by the polls themselves. Kinda makes the error bound meaningless once it has been contradicted empirically. As always, observations reign supreme, and if there's a conflict with theory, it is usually a case of unjustified assumptions - in this case, taking the approximate equivalence between mathematical random variables and real-world people to be exact.

    Also, you are right about more not being any better. At some point, you are just adding more and more precision to an inaccurate answer. It's like a calculator fetish - getting predictions to the 18th decimal point using a flawed model and wondering why they don't match reality.
  • by Joce640k (829181) on Sunday May 16, 2010 @08:22PM (#32231544) Homepage

    +1 Insightful, came here to say the same thing.

    DNA doesn't break until you get into the UV range of the electromagnetic spectrum; cell phone frequencies are orders of magnitude away from being able to do it... but don't let the pesky facts get in the way of anecdotes and scaremongering.

  • Re:Limited study (Score:3, Informative)

    by pipedwho (1174327) on Sunday May 16, 2010 @08:59PM (#32231750)

    Absolutely - the required sample size grows as the differential result gets closer to the 'noise floor'.

    In this respect, your first example is slightly flawed. As the expected determinant gets closer to the noise floor (i.e. if the margin of a Republican or Democrat victory is going to be 0.01%, or 50.01% vs 49.99%), then a much greater sample size is needed to maintain confidence in the resulting prediction.

    As you say, 60% is a landslide. So if that is the expected result, then a few percent error either way isn't going to change your final determination of the winner.
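    The noise-floor point can be made concrete with a rough calculation of how many respondents you need before the margin of error is smaller than the lead you're trying to detect (a sketch using the normal approximation; the two vote shares are the comment's own examples):

    ```python
    import math

    def n_to_detect(p, z=1.96):
        """Rough sample size needed so the 95% margin of error on an
        estimated vote share p is smaller than p's distance from 50%."""
        delta = abs(p - 0.5)
        return math.ceil(z**2 * p * (1 - p) / delta**2)

    print(n_to_detect(0.60))    # landslide: a small sample suffices
    print(n_to_detect(0.5001))  # 50.01% vs 49.99%: tens of millions needed
    ```

    A 60/40 race is callable with a hundred-odd respondents, while a 50.01/49.99 race would take a sample larger than many electorates.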

  • Re:It's all relative (Score:5, Informative)

    by AK Marc (707885) on Sunday May 16, 2010 @10:34PM (#32232388)
    Perhaps you could come up with an example where there is no correlation, but there is causation?

    I always wonder when I get these "challenges" whether someone really doesn't understand how statistics work, so they are wandering around in a constant state of confusion (or worse, confident ignorance). Or whether they are all pedantic asses who are too lazy and/or stupid to have an independent thought.

    I can think of trillions of examples of causation without correlation. I'll stick to something related to this topic. People who use cell phones have different habits than those without. Perhaps, because they aren't tethered to a desk phone, when they take calls at a desk they push away from it or are more likely to walk around. If the CRT radiation has a greater effect than the cell phone radiation, then you'll find a result that correlates cell phone usage with lowered cancer, even though cell phones cause cancer.

    The short answer is "confounds." They are everywhere, and you eliminate as many as possible in a study, but you never know what you missed, and you find what you can, publish what you find, and if anyone else identified a confound that wasn't accounted for, they can re-run the study with that in mind to see if it had any effect.

    But, that you can't think of even one possible solution to the question you asked means you are too narrow minded or too stupid to worry about. I'm just posting this for those that have reasoning skills left. It's like all the people here, especially when I see people talking about voting and balloting systems, where if they can't think of a solution to a problem, then it's somehow proof that the solution doesn't exist.
  • by bjourne (1034822) on Monday May 17, 2010 @04:01AM (#32234502) Homepage Journal
    Wrong. The intensity of the radiation decreases with the square of the distance. OP is rationally more concerned about cell phones than cell phone towers.
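    A quick inverse-square sanity check (the 1 W / 100 W powers and the distances below are made-up illustrative values, and real antennas are directional rather than isotropic):

    ```python
    import math

    def power_density(power_w, distance_m):
        """Free-space power density in W/m^2 from an idealized isotropic
        radiator: P / (4 * pi * r^2)."""
        return power_w / (4 * math.pi * distance_m ** 2)

    # Hypothetical numbers: a 1 W handset held 2 cm from the head
    # versus a 100 W tower transmitter 200 m away.
    phone = power_density(1.0, 0.02)
    tower = power_density(100.0, 200.0)
    print(phone / tower)  # the handset wins by a factor of a million
    ```

    The r-squared in the denominator is why the phone against your ear delivers vastly more power to your head than the tower down the street, despite the tower's higher transmit power.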
  • by Kilrah_il (1692978) on Monday May 17, 2010 @08:46AM (#32235808)

    They didn't take a random sample of 13,000 people and find ~5,000 cases of cancer among them (which would be a very high percentage). They took ~5,000 cancer patients and ~7,500 people with no cancer and surveyed their cell phone usage. This is why it is called a retrospective study.
    The major problem with this kind of research is that you ask people about their usage, and some don't remember correctly:
    Q: You have brain cancer?
    A: Yes.
    Q: Did you use the cell phone a lot?
    A: Well, now that you mention it, I did!

    Of course, I am oversimplifying it. The interviews are usually more elaborate, but the example serves to clarify the problem. And that's why the Europeans are launching a prospective trial (from TFA).
