If you ever see a newspaper headline claiming that Activity X increases the chances of Bad Thing happening to you by 50% (or whatever), chances are it is based on risk (or incidence rate) ratios.
In this example, researchers might (for example) have determined that 15% of the people who indulge in Activity X suffer Bad Thing, whereas only 10% of the control group who do not indulge do so. The risk ratio is thus 15%/10% = 1.50. The actual calculation is rather more complicated than this, to allow for confounding variables such as drinking, smoking or taking exercise. As the incidence is 50% higher in the group undertaking the activity, this is presented as the risk being increased by 50%. This is fine for cohort studies – put simply, these are long-term studies in which the health of a group who undertake the activity under study is compared with that of a group who do not. This is one example, indicating an increased risk of diabetes amongst people who eat large quantities of processed meats. Another was the
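As a rough sketch of the arithmetic (ignoring the adjustment for confounders that a real analysis would make), the headline figure comes out like this:

```python
# Unadjusted risk ratio from a hypothetical cohort study.
# Real studies adjust for confounders such as smoking, drinking and exercise.
exposed_cases = 15    # per 100 people who indulge in Activity X
control_cases = 10    # per 100 people in the control group

risk_ratio = exposed_cases / control_cases
headline_increase = (risk_ratio - 1) * 100

print(risk_ratio)          # 1.5
print(headline_increase)   # 50.0 - the "risk increased by 50%" headline
```

The numbers here are the illustrative 15% and 10% figures from the text, not data from any actual study.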
Danish cohort study of autism incidence amongst children who had the MMR vaccine and those who did not. This one came out with a risk ratio of 1.0, i.e. the incidence was the same whether or not children had the vaccine.
It is also informative to look at the actual numbers involved, not just the risk ratios. Sometimes we see headlines claiming that a particular substance doubles the risk of some adverse outcome. Examination of the data might show that the risk amongst those not exposed is 0.0001% whereas it is 0.0002% amongst those who are exposed. This is indeed a risk ratio of 2.0 and thus a doubling of risk, but as the figures show, exposure represents an extra one chance in a million of suffering the adverse outcome. Despite the risk ratio, exposure is not a serious hazard.
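Putting those illustrative figures into a short sketch shows how a dramatic ratio can coexist with a trivial absolute change:

```python
# Doubling of a tiny risk: the ratio looks alarming, the absolute change is not.
risk_unexposed = 0.0001 / 100   # 0.0001% expressed as a fraction: 1 in a million
risk_exposed   = 0.0002 / 100   # 0.0002%: 2 in a million

risk_ratio = risk_exposed / risk_unexposed          # "doubles the risk"
absolute_increase = risk_exposed - risk_unexposed   # one extra case per million

print(round(risk_ratio, 1))                  # 2.0
print(round(absolute_increase * 1_000_000))  # 1 extra case per million exposed
```

Both statements ("doubles the risk" and "one extra case in a million") are true of the same data; only the second tells you how worried to be.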
Another form of study is the case control study, in which the incidence of Activity X amongst people who receive medical treatment for Bad Thing is compared with the incidence of Activity X amongst those who do not suffer Bad Thing. I looked at one example of this, and the misreporting of it, here. We might find that 15% of those hospitalised for Bad Thing undertook Activity X whereas only 10% of the control group did so. This could be presented as an incidence rate ratio of 1.5, and the press will then jump on it as evidence that Activity X increases the risk by 50% – which is totally wrong.
We are no longer looking at what fraction of a group suffered Bad Thing. They all did. We are now looking at different activities which might be a contributory factor, and comparing the incidence amongst a group all of whom have suffered Bad Thing with a control group none of whom have suffered Bad Thing. In this case it is not the ratio of incidence which is informative as to increased risk but the difference. Here, the extra 5% who undertook Activity X amongst the hospitalised group can be attributed to the activity. The remaining 95% had other causes and would have been hospitalised regardless of Activity X – which thus represents an increased risk of Bad Thing happening of 5/95 = 5.26%. This is a definite increase, so caution about the activity is warranted, but it is hardly a 50% increase.
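Following the arithmetic above with the same illustrative 15% and 10% figures, the case-control calculation looks like this:

```python
# Case-control sketch: why a 15% vs 10% gap is NOT a "50% increase in risk".
cases_with_x    = 0.15   # fraction of hospitalised cases who undertook Activity X
controls_with_x = 0.10   # fraction of the (non-suffering) control group who did

# The extra 5 percentage points of cases can be attributed to Activity X;
# the remaining 95% would have been hospitalised regardless.
attributable = cases_with_x - controls_with_x   # 0.05
background   = 1 - attributable                 # 0.95

extra_risk_percent = attributable / background * 100
print(round(extra_risk_percent, 2))             # 5.26, not 50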
The point to take away from this is that when you read articles about 50% (or whatever) increases in risk, check that it is a cohort study and that the interpretation of the ratio is justified. Even if it is, check the actual numbers involved to see how serious the increase in risk actually is.