## 10-Year Cell Phone / Cancer Study Is Inconclusive

crimeandpunishment writes

*"A major international (retrospective) study into cell phones and cancer, which took 10 years and surveyed almost 13,000 people, is finally complete — and it's inconclusive. The lead researcher said, 'There are indications of a possible increase. We're not sure that it is correct. It could be due to bias, but the indications are sufficiently strong ... to be concerned.' The study, conducted by the World Health Organization and partially funded by the cellphone industry, looked at the possible link between cell phone use and two types of brain cancer. It will be published this week."*

## Re:"Survey"? (Score:1, Informative)

From the article:

It was also based on people searching their memories to estimate how much time they spent on their cell phones, a method that can throw up inaccuracies.

It analysed data from interviews with 2,708 people with a type of brain cancer called glioma and 2,409 with another type called meningioma, plus around 7,500 people with no cancer.

They only asked people how much time they spent on their cell phones; cancer status was based on hard data (a medical diagnosis).

## Re:No answer is sort-of an answer (Score:2, Informative)

There's an even bigger difference between inconclusive and a strong positive result. If cell phones caused a huge number of cancers, studies would not be inconclusive.

## Re:Limited study (Score:5, Informative)

The principle is correct, but you're failing to take into account the probability of the respective events. Given that winning 60% of the vote is considered a landslide, you can think of asking someone whether they're voting Republican or Democrat as a coin flip with a small bias one way or the other. Because the race is so close, a few extra Republicans or Democrats in your sample won't produce a huge error in your estimate.

On the other hand, a brain tumor can be thought of as a rare event. If the true incidence rate of brain cancer is five occurrences per thousand people over ten years, and your sample of 1,000 people has six incidences, you have a sample error of 20%. It's because of this that a small variation in the numbers can produce a large error. Therefore if you want to accurately assess the rate of cancer, you need a much bigger sample size.
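The rare-event point above is easy to check numerically. A minimal Python sketch (the incidence figures are the hypothetical ones from this comment, not numbers from the study):

```python
import math

def relative_error(p, n):
    """Standard deviation of an estimated proportion, expressed
    relative to the true proportion p, for a sample of size n."""
    return math.sqrt(p * (1 - p) / n) / p

# Near-50/50 election question, sample of 1,000: a few percent relative error.
print(round(relative_error(0.5, 1000), 3))    # ~0.032

# Rare event (5 per 1,000 over ten years), same sample size:
# the relative error balloons to roughly 45%.
print(round(relative_error(0.005, 1000), 3))
```

The same absolute noise that is negligible against a 50% baseline is enormous against a 0.5% baseline, which is exactly why rare-event studies need much larger samples.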

## Re:Limited study (Score:4, Informative)

It really seems silly when, in America at least, age-adjusted rates of brain cancer have fallen or held steady since the 1990s. From the National Cancer Institute [cancer.gov]:

It would seem to me that falling cancer rates are no reason for assuming that widespread cellphone use has been a health concern.

## Re:Limited study (Score:4, Informative)

*"If U.S. political polls select a sample size of between a few hundred and a thousand out of 300 million with only 3%..."*

I'm not so sure those percentages are accurate.

They look accurate to me. From my undergrad stats classes, I seem to recall that to get a 5% margin of error out of a population of 10k, you needed a sample of around 850. For a population of 1,000k, the sample size only went up by a few tens (perhaps to 900). Sampling is not linear, and the required sample flattens out as the population grows - IIRC (and I think I do), there is very little difference between the sample size needed for a population of 100k and that needed for twenty times that number.
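The flattening described here falls out of the standard finite-population correction. A rough sketch, assuming a 3% margin of error at 95% confidence and worst-case p = 0.5 (so the absolute numbers differ from the half-remembered 850 above, but the shape is the same):

```python
import math

def required_sample(N, margin=0.03, z=1.96, p=0.5):
    """Sample size for a given margin of error at 95% confidence,
    with the finite-population correction applied for population N."""
    n0 = z**2 * p * (1 - p) / margin**2          # infinite-population size
    return math.ceil(n0 / (1 + (n0 - 1) / N))    # shrink for small N

for N in (10_000, 100_000, 2_000_000, 300_000_000):
    print(N, required_sample(N))
```

Running this shows the required sample climbing from roughly a thousand for a town-sized population to barely more than that for the entire U.S. - the population size almost stops mattering once it dwarfs the sample.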

## Re:Limited study (Score:3, Informative)

Also, you are right about more not being any better. At some point, you are just adding more and more *precision* to an *inaccurate* answer. It's like a calculator fetish - getting predictions to the 18th decimal point using a flawed model and wondering why they don't match reality.

## Re:Number of cellphones vs. Cancer patients (Score:3, Informative)

+1 Insightful, came here to say the same thing.

DNA doesn't break until you get into the UV range of the electromagnetic spectrum; cell phone frequencies are orders of magnitude away from being able to do it... but don't let the pesky facts get in the way of anecdotes and scaremongering.

## Re:Limited study (Score:3, Informative)

Absolutely - the required sample size is inversely related to how close the differential result is to the 'noise floor'.

In this respect, your first example is slightly flawed. As the expected determinant gets closer to the noise floor (i.e. if the margin of a Republican or Democrat victory is going to be 0.02%, or 50.01% vs. 49.99%), then a much greater sample size is needed to maintain confidence in the resulting prediction.

As you say, 60% is a landslide. So if that is the expected result, then a few percent error either way isn't going to change your final determination of the winner.
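The blow-up near the noise floor can be made concrete with the usual margin-of-error formula (a sketch assuming a 95% confidence level and worst-case p = 0.5):

```python
import math

def sample_for_margin(margin, z=1.96, p=0.5):
    """Sample size needed so the 95% confidence interval
    half-width equals `margin`."""
    return math.ceil(z**2 * p * (1 - p) / margin**2)

# Landslide-scale question (±3 points): ~1,100 people suffice.
print(sample_for_margin(0.03))

# Resolving 50.01% vs 49.99% (±0.01 points): tens of millions.
print(sample_for_margin(0.0001))
```

Halving the margin quadruples the sample, so a race decided by hundredths of a point is effectively beyond polling.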

## Re:It's all relative (Score:5, Informative)

*"Perhaps you could come up with an example where there is no correlation, but there is causation?"*

I always wonder when I get these "challenges" whether someone really doesn't understand how statistics works and is wandering around in a constant state of confusion (or worse, confident ignorance), or whether they are all pedantic asses who are too lazy and/or stupid to have an independent thought.

I can think of trillions of examples of causation without correlation. I'll stick to something related to this topic. People who use cell phones have different habits than people without them. Because they aren't tethered to a desk phone, perhaps they push away from the desk when they take calls, or are more likely to walk around. If CRT radiation has a greater effect than cell phone radiation, then you'll find a result that correlates cell phone usage with *lower* cancer rates, even if cell phones cause cancer.

The short answer is "confounds." They are everywhere. You eliminate as many as possible in a study, but you never know what you missed; you find what you can, publish what you find, and if anyone else identifies a confound that wasn't accounted for, they can re-run the study with that in mind to see whether it had any effect.
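The CRT scenario above is easy to simulate. Everything here is invented purely for illustration (the baseline rates, the "CRT" confounder, and both effect sizes), but it shows how a confounder can flip the sign of a raw correlation:

```python
import random

random.seed(0)

# Toy model (all numbers invented): cell phone use slightly raises risk,
# but phone users also spend far less time near a (hypothetically worse) CRT.
def simulate(n=100_000):
    counts = {True: [0, 0], False: [0, 0]}  # phone_user -> [people, cancers]
    for _ in range(n):
        phone = random.random() < 0.5
        # Confound: phone users sit near the CRT much less often.
        crt = random.random() < (0.1 if phone else 0.9)
        risk = 0.001 + (0.001 if phone else 0) + (0.010 if crt else 0)
        cancer = random.random() < risk
        counts[phone][0] += 1
        counts[phone][1] += cancer
    return {user: cancers / people for user, (people, cancers) in counts.items()}

rates = simulate()
print(rates)  # phone users show a LOWER raw cancer rate despite the causal term
```

In this toy world cell phones genuinely cause cancer (the +0.001 term), yet the raw rates point the other way - which is the whole point about confounds.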

But the fact that you can't think of even one possible answer to your own question means you are too narrow-minded or too stupid to worry about. I'm just posting this for those who have reasoning skills left. It's like all the people here, especially when I see people talking about voting and balloting systems, who think that if they can't think of a solution to a problem, then it's somehow proof that the solution doesn't exist.


## Re:Statistical significance (Score:2, Informative)

They didn't take a random sample of 13,000 people and find ~5,000 cases of cancer among them (which would be a *very* high percentage). They took ~5,000 cancer patients and ~7,500 people with no cancer and surveyed their cell phone usage. This is why it is called a retrospective study.

The major problem with this kind of research is that you ask people about their usage, and some don't remember correctly:

Q: You have brain cancer?

A: Yes.

Q: Did you use the cell phone a lot?

A: Well, now that you mention it, I did!

Of course, I am oversimplifying it. The interviews are usually more elaborate, but the example serves to clarify the problem. And that's why the Europeans are launching a *prospective* trial (from TFA).
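The recall-bias problem lends itself to a quick simulation. The over-reporting rates below are invented purely for illustration; the point is only the mechanism, not the magnitudes:

```python
import random

random.seed(1)

# Toy recall-bias model (numbers invented): true heavy usage is identical in
# both groups, but cancer patients searching their memories over-report
# heavy use more often than healthy controls do.
def surveyed_heavy_use(is_case, n):
    heavy = 0
    for _ in range(n):
        truly_heavy = random.random() < 0.30             # same for both groups
        overreport = random.random() < (0.10 if is_case else 0.02)
        heavy += truly_heavy or overreport
    return heavy / n

cases = surveyed_heavy_use(True, 5_000)
controls = surveyed_heavy_use(False, 5_000)
print(cases, controls)  # cases report more heavy use despite no true link
```

Even with zero real association, the retrospective survey manufactures one - which is why a prospective design, recording usage before anyone gets sick, is so much stronger.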