# Effect size

An effect size is a standardised measure of the observed effect. Standardising makes the effect size dimensionless, which is helpful when pooling data across studies. The term is used in two related ways:

- A generic term for the estimate of effect for a study.
- A dimensionless measure of effect, typically used for continuous data when outcomes are measured on different scales (e.g. different scales for measuring pain). It is usually defined as the difference in means between the intervention and control groups divided by the standard deviation of the control group or of both groups combined.

The effect size can be as simple as the difference between the mean values of the two groups divided by the standard deviation, as below, although other calculations are appropriate in other circumstances.

Effect size = (mean of experimental group − mean of control group) / standard deviation
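The formula above can be sketched in code. This is a minimal illustration (the function name `cohens_d` and the use of the pooled standard deviation of both groups as the denominator are choices made here; the text notes the control-group standard deviation is another common choice):

```python
import statistics

def cohens_d(experimental, control):
    """Standardised mean difference: difference in group means divided by
    the pooled standard deviation of both groups."""
    n1, n2 = len(experimental), len(control)
    mean_diff = statistics.mean(experimental) - statistics.mean(control)
    # Pooled variance weights each group's sample variance by its
    # degrees of freedom (n - 1).
    pooled_var = ((n1 - 1) * statistics.variance(experimental) +
                  (n2 - 1) * statistics.variance(control)) / (n1 + n2 - 2)
    return mean_diff / pooled_var ** 0.5
```

For example, if the experimental group scores are one standard deviation above the control group's, the function returns an effect size of 1.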

Generally, the larger the effect size, the greater the impact of an intervention. Jacob Cohen wrote extensively on this topic. In his well-known book he suggested, somewhat ambiguously, that a correlation of 0.5 is large, 0.3 is moderate, and 0.1 is small (Cohen, 1988). The usual interpretation is that anything greater than 0.5 is large, 0.3-0.5 is moderate, 0.1-0.3 is small, and anything smaller than 0.1 is trivial. (Note that these benchmarks are for correlations; for the standardised mean difference described above, Cohen suggested 0.2 as small, 0.5 as medium, and 0.8 as large.) There is a good site that describes all this, worth a visit for those really interested (http://davidmlane.com/hyperstat/effect_size.html).
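The usual interpretation of Cohen's correlation benchmarks can be expressed as a simple lookup (a hypothetical helper written for illustration, not part of any standard library):

```python
def interpret_correlation(r):
    """Label the magnitude of a correlation using the usual reading of
    Cohen's (1988) benchmarks: >= 0.5 large, 0.3-0.5 moderate,
    0.1-0.3 small, < 0.1 trivial."""
    r = abs(r)  # sign indicates direction, not magnitude
    if r >= 0.5:
        return "large"
    if r >= 0.3:
        return "moderate"
    if r >= 0.1:
        return "small"
    return "trivial"
```

For example, `interpret_correlation(-0.4)` returns `"moderate"`, since only the magnitude of the correlation matters for these benchmarks.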

## Reference

- Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). New Jersey: Lawrence Erlbaum.