Back in 2008, Jan M. Hoem wrote an interesting reflection paper titled “The reporting of statistical significance in scientific journals” (Volume 18, Article 15, pages 437-442; 03 June 2008; http://www.demographic-research.org/volumes/vol18/15/18-15.pdf). The piece was an expanded version of a previous working paper (http://www.demogr.mpg.de/papers/working/wp-2007-037.pdf).

He wrote (and I quote):

“Scientific journals in most empirical disciplines have regulations about how authors should report the precision of their estimates of model parameters and other model elements. Some journals that overlap fully or partly with the field of demography demand as a strict prerequisite for publication that a p-value, a confidence interval, or a standard deviation accompany any parameter estimate. I feel that this rule is sometimes applied in an overly mechanical manner. Standard deviations and p-values produced routinely by general-purpose software are taken at face value and included without questioning, and features that have too high a p-value or too large a standard deviation are too easily disregarded as being without interest because they appear not to be statistically significant. In my opinion authors should be discouraged from adhering to this practice, and flexibility rather than rigidity should be encouraged in the reporting of statistical significance. One should also encourage thoughtful rather than mechanical use of p-values, standard deviations, confidence intervals, and the like. Here is why:”

Hoem then dissects five points related to the misuse of statistical significance results and automatic software solutions. I’m listing these below.

  1. The scientific importance of an empirical finding depends much more on its contribution to the development or falsification of a substantive theory than on the values of indicators of statistical significance.
  2. Measures of statistical significance may be misleading. When a model has been developed through repeated use of tests of significance to include and exclude covariates, to split or combine levels on categorical covariates, and to determine other model features, the user often loses control over statistical-significance values, and the values computed by standard software may be completely misleading.
  3. Standard p-values can be insufficiently precise indicators of statistical significance, particularly if their values are given only in grouped levels, which are often indicated by asterisks beside parameter estimates (“* = p<0.1, ** = p<0.05, *** = p<0.01”, and so on).
  4. It may be more important for an understanding of demographic behavior or other phenomena studied to know whether the inclusion of a categorical covariate in its entirety contributes significantly to an improvement of the model than to know the significance indicators of each of its levels.
  5. Standard deviations, when used, should be reported for interesting contrasts, not for features selected automatically by statistical software.

I completely agree with Hoem.

SEOMOZ and their statistical “studies”

These days, search engine optimization marketers (SEOs/SEMs) keep misinterpreting statistical results spat out by software without stopping to think about the significance-behind-the-significance, especially when it comes to a correlation coefficient, r (Pearson, Spearman, etc.).

When one reads SEO hearsay and urban legends at SEOMOZ presenting very small correlation coefficients (0.17, 0.32, etc.) derived from large sample sizes as evidence that variables are “highly correlated” or “well correlated”, it is time to stop and question such “studies”. For reference, see the following links:

http://www.seomoz.org/blog/lda-and-googles-rankings-well-correlated
http://www.seomoz.org/blog/lda-correlation-017-not-032
http://irthoughts.wordpress.com/2010/04/23/beware-of-seo-statistical-studies/  

Fortunately, leading search marketers like Danny Sullivan have questioned those “studies” at a recent search engine conference

http://outspokenmedia.com/internet-marketing-conferences/evening-forum-with-danny-sullivan/ ,

and that was even before SEOMOZ admitted the 0.32 result had to be recanted as 0.17.

Sean Golliher, founder and publisher of the Search Engine Marketing Journal (SEMJ.org), has also questioned their results (http://www.seangolliher.com/2010/uncategorized/185/), which Hendrickson from SEOMOZ still insists on defending.

Since then they have never disclosed the source of the mistake, dismissing it as just a programming error. Unfortunately, they are still claiming that a 0.15 – 0.30 range validates their “studies” (http://www.seo.co.uk/seo-news/seo-tools/the-seomoz-lda-tool-%E2%80%93-our-disappointing-findings.html).

Small and Large Sample Sizes

When William Sealy Gosset (aka “Student”) proposed, and Ronald A. Fisher expanded on, the test later termed Student’s t-test of significance, the test was meant to be used to assess information from small samples, not from large samples. I have discussed the case of small sample sizes in another post (http://irthoughts.wordpress.com/2010/10/18/on-correlation-coefficients-and-sample-size/ ).

In order to apply a t-test (and other small-sample analysis tests) to large samples, divide-and-conquer techniques, like stratification, were eventually developed. In the case of correlation and regression, the reason for doing this is that applying something like a t-test to, for instance, a single correlation coefficient computed from a huge sample can produce misleading results. Let’s see why.

For large enough sample sizes, any correlation coefficient, even the smallest, will eventually be significant (t-observed > t-table). At that point it might be tempting to assume that the variables in question are highly correlated. Wrong assumption!

The fact is that statistical significance does not necessarily equate to variables being highly correlated, and vice versa. Let’s address this point in two parts: (1) the question of statistical significance and (2) the question of high correlation.

Statistical Significance: Bigger is not always better

As noted in a Wikipedia entry, “given a sufficiently large sample size, a statistical comparison will always show a significant difference unless the population effect size is exactly zero” (http://en.wikipedia.org/wiki/Effect_size). I have discussed effect size and power analysis in a previous post (http://irthoughts.wordpress.com/2010/10/21/on-power-analysis-and-seo-quack-science/).

The reason for the above effect has a lot to do with the definition of statistical significance itself. Statistical significance is the confidence one has that a given result is not due to random chance.

In mathematical terms, the confidence that a result is not due to random chance is given by the following formula by Sackett (http://en.wikipedia.org/wiki/Statistical_significance , http://www.cmaj.ca/cgi/content/full/165/9/1226):

Confidence = (Signal/Noise)*Sqrt[Sample Size]

This simple expression, or some derivative of it, appears in many different scenarios and disciplines. It describes a generic Confidence Function, F, in terms of a Signal, a Noise, and a Sample Size; that is, F(Signal, Noise, Sample Size). In general, such a generic function tells us that:

  • Confidence is proportional to a Signal source (S).
  • Confidence is inversely proportional to a Noise source (N).
  • Confidence is proportional to a Signal-to-Noise ratio (S/N).
  • Confidence increases with the Sample Size (through its square root).
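
For readers who prefer to see this in code, here is a minimal Python sketch of the generic Confidence Function (the function name and the example numbers are mine, chosen purely for illustration). Note how the same weak Signal-to-Noise ratio yields an ever larger “confidence” simply because the sample grows:

    import math

    def confidence(signal, noise, sample_size):
        # Sackett-style confidence index: (Signal / Noise) * sqrt(Sample Size)
        return (signal / noise) * math.sqrt(sample_size)

    # Same weak signal-to-noise ratio (0.01 vs. 0.99), growing sample size:
    for n in (100, 10_000, 1_000_000):
        print(n, confidence(0.01, 0.99, n))   # approximately 0.1, 1.0, 10.1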

Let’s apply a version of this expression to correlation. To do this, let Y be the dependent variable and let X be the independent variable. Let’s also make the following substitutions:

  • Confidence: expressed as t²
  • Signal: expressed as r²; i.e., fraction of explained variations in Y (due to X).
  • Noise: expressed as 1 – r²; i.e., fraction of unexplained variations in Y.
  • Sample Size: expressed as degrees of freedom; i.e., n – 2 for a two-tailed test.

F(Signal, Noise, Sample Size) = t-observed² = [r²/(1 – r²)][n – 2]

Taking the square root (Sqrt) of both sides, we obtain the so-called formula for a two-tailed t-test.

t-observed = r*Sqrt[(n – 2)/(1 – r²)]
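
In Python terms (a small sketch; the function name is mine), the same relationship looks like this, and it already hints at the issue: a tiny r of 0.02 crosses the 1.96 threshold once the sample reaches about 10,000 observations.

    import math

    def t_observed(r, n):
        # t statistic for a correlation coefficient r from a sample of size n:
        # t = r * sqrt((n - 2) / (1 - r^2))
        return r * math.sqrt((n - 2) / (1 - r ** 2))

    print(t_observed(0.02, 10_000))   # about 2.0, already above 1.96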

Evidently, for a given r value, t-observed increases as n increases. By rearranging this expression, it is possible to compute, for a large enough sample size, a critical value above which r values will be significant. For very large samples at a 95% confidence level, t-table = 1.96 (http://en.wikipedia.org/wiki/Student%27s_t-distribution#Table_of_selected_values). Arbitrarily substituting this value into the above expression (t-observed = t-table = t = 1.96) and solving for r, we obtain that the critical r value is given by

r = t/Sqrt[(n – 2) + t²]
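
The critical r values discussed below can be reproduced with a few lines of Python (again, a sketch with names of my own choosing):

    import math

    def critical_r(n, t=1.96):
        # Smallest |r| that clears the critical t: r = t / sqrt((n - 2) + t^2)
        return t / math.sqrt((n - 2) + t ** 2)

    for n in (1_000, 10_000, 100_000, 1_000_000):
        print(n, round(critical_r(n), 4))   # 0.0619, 0.0196, 0.0062, 0.002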

The following table lists values for very small r values and huge sample sizes. I’m intentionally using several decimal places and ignoring significant-figure rules since I want to make a point about the small values involved. I’m also using a 0.95 confidence level for illustration purposes, but for large samples I could and should use other confidence levels as well.

n          n – 2     t      r        S = r*r    N = 1 – r*r   S/N
1,000      998       1.96   0.0619   0.003835   0.996165      0.003849
10,000     9,998     1.96   0.0196   0.000384   0.999616      0.000384
100,000    99,998    1.96   0.0062   0.000038   0.999962      0.000038
1,000,000  999,998   1.96   0.0020   0.000004   0.999996      0.000004

For a sample size of 10,000 observations the critical r is 0.0196, or about 0.02, meaning that for such a huge sample size any r value above this small critical value will be significant. However, something interesting is observed in this table (PS: see footnote update):

When one moves to large sample sizes, the Noise becomes far greater than the Signal. For instance, at n = 10,000 the amount of Signal is very small (0.000384) while the amount of Noise is above 0.9996, or 99.96%, giving a quite trivial S/N ratio. A similar reasoning can be applied to r = 0.17 (S = 0.0289, N = 0.9711) and r = 0.32 (S = 0.1024, N = 0.8976). The corresponding S/N ratios are trivial.
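
The Signal, Noise, and S/N figures quoted above are easy to verify (a minimal sketch; the helper function is hypothetical, not part of any package):

    def signal_noise(r):
        # Fraction of explained (Signal) and unexplained (Noise) variation for a correlation r
        s = r ** 2
        n = 1 - r ** 2
        return s, n, s / n

    for r in (0.0196, 0.17, 0.32):
        print(r, signal_noise(r))
    # 0.0196 -> S ~ 0.000384, N ~ 0.999616, S/N ~ 0.000384
    # 0.17   -> S ~ 0.0289,   N ~ 0.9711,   S/N ~ 0.0298
    # 0.32   -> S ~ 0.1024,   N ~ 0.8976,   S/N ~ 0.1141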

One can also solve the above expression for n to find the sample sizes at which some small r values become significant for an arbitrary t, as shown in the following table (PS: see footnote update).

t      r        S = r*r    N = 1 – r*r   S/N        n      n – 2
1.96   0.0200   0.000400   0.999600      0.000400   9,602  9,600
1.96   0.1500   0.022500   0.977500      0.023018   169    167
1.96   0.1700   0.028900   0.971100      0.029760   131    129
1.96   0.3000   0.090000   0.910000      0.098901   41     39
1.96   0.3200   0.102400   0.897600      0.114082   36     34
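
These sample sizes follow from rearranging the t-test expression as n – 2 = t²(1 – r²)/r². Here is a quick Python check (a sketch of mine; results are rounded to the nearest whole observation, as in the table):

    def sample_size_for_significance(r, t=1.96):
        # Smallest n at which a correlation r clears the critical t
        return round(t ** 2 * (1 - r ** 2) / r ** 2) + 2

    for r in (0.02, 0.15, 0.17, 0.30, 0.32):
        print(r, sample_size_for_significance(r))   # 9602, 169, 131, 41, 36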

Still, note that the amount of Noise completely overwhelms the Signal, producing trivial S/N ratios. In general, for small r and large n values, significance is achieved at the cost of the Noise masking the Signal. When this occurs, statistical significance is not a practical guideline for drawing useful conclusions from the data at hand.

This drives the present discussion to the substantive part of the problem missed by SEOs, and that is …

Statistical Significance Does Not Necessarily Mean Highly Correlated Results

Simply stated, statistical significance does not necessarily imply that the X, Y variables are highly correlated.

A simple scatterplot will convince anyone that for the above small r values there is no pattern or trend in the data set. The corresponding regression model will be useless for forecasting or inferring anything of value, except that the data spreads so wildly that there is no method to its chaos. What else is to be expected from a data set with a large Noise and a small S/N ratio?
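
A quick simulation makes the point. The sketch below (my own, using NumPy; the population correlation of 0.02 is just an illustrative choice near the critical value for n = 10,000) draws 10,000 pairs from a bivariate normal with that correlation. The sample r comes out near 0.02, yet a scatterplot of x versus y is a featureless cloud:

    import numpy as np

    rng = np.random.default_rng(42)
    n = 10_000
    r_true = 0.02   # population correlation near the critical r for n = 10,000

    cov = [[1.0, r_true], [r_true, 1.0]]
    x, y = rng.multivariate_normal([0.0, 0.0], cov, size=n).T

    print(np.corrcoef(x, y)[0, 1])   # sample r, roughly 0.02 give or take sampling noise
    # Plotting (x, y) with any charting tool shows no visible trend whatsoever.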

This is something that SEOs/SEMs still don’t seem to understand: t-observed > t-table does not necessarily mean high correlation, and vice versa. I don’t have any personal stake (or take) against them, but when folks like Hendrickson, Fishkin, and others from SEOMOZ ignore Signal-to-Noise ratios and start referring to small r values as evidence that experimental variables are “highly” or “well” correlated, it is more than fair to call such “studies” Quack “Science”. That label might sound harsh, but in this case it is appropriate.

Search engine marketers might be good at selling snake oil, publishing sloppy “studies”, or recanting overhyped statements, but not at doing real Science. They should know better; i.e., that

  • “significance” does not mean “correlation”.
  • “significance” does not mean “important”.
  • “insignificance” does not mean “unimportant”.

Statistical “significance” only means that we can be confident the observed result is not due to random chance. Therefore, a significant correlation does not necessarily mean a “high”, “well”, or “strong” correlation between variables.

To understand all this we need to distinguish between statistical significance and practical significance.

Statistical Significance vs. Practical Significance

As stated in this Wikipedia entry (emphasis added in boldface; http://en.wikipedia.org/wiki/Statistical_hypothesis_testing#Criticism):

A common misconception is that a statistically significant result is always of practical significance, or demonstrates a large effect in the population. Unfortunately, this problem is commonly encountered in scientific writing. Given a sufficiently large sample, extremely small and non-notable differences can be found to be statistically significant, and statistical significance says nothing about the practical significance of a difference.

Use of the statistical significance test has been called seriously flawed and unscientific by authors Deirdre McCloskey and Stephen Ziliak. They point out that “insignificance” does not mean unimportant, and propose that the scientific community should abandon usage of the test altogether, as it can cause false hypotheses to be accepted and true hypotheses to be rejected.

Some statisticians have commented that pure “significance testing” has what is actually a rather strange goal of detecting the existence of a “real” difference between two populations. In practice a difference can almost always be found given a large enough sample. The typically more relevant goal of science is a determination of causal effect size. The amount and nature of the difference, in other words, is what should be studied. Many researchers also feel that hypothesis testing is something of a misnomer. In practice a single statistical test in a single study never “proves” anything.

That pretty much settles the question of discerning between statistical significance and practical significance of correlation coefficients, but does not tell us how to quantitatively discern between the two concepts. In an upcoming article, I will derive expressions that might help to quantitatively assess these.

Since the tutorial on correlation coefficients (http://www.miislita.com/information-retrieval-tutorial/a-tutorial-on-correlation-coefficients.pdf) has been updated several times and is getting too long, I will put that upcoming material in a separate PDF file. As a sneak preview, we will be examining extreme cases (too high/low r values, too high/low sample sizes, too high/low signal-to-noise ratios, etc.).

PS. I updated this post to fix some little typos.

Footnote. I found that including the entries for n = 10 and n = 100 in the first table was erroneous, so I removed them altogether and limited the discussion to the large n values. A reader asked why I used t-table = 1.96 for all entries. I thought it was clear from the discussion that the above tables are meant to show calculations for arbitrarily set t values. In a real test, you would need to use the actual t values from statistical tables. For instance, for n = 10 you would have to use a t-table value of t = 2.306 at the 0.95 level. You should get

n    n – 2   t       r        S = r*r    N = 1 – r*r   S/N
10   8       2.306   0.6319   0.399293   0.600707      0.664705
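
For completeness, here is how one could obtain the proper critical t (and the corresponding critical r) for a small sample with SciPy instead of fixing t at 1.96 (a sketch, assuming SciPy is available):

    from scipy import stats

    df = 8                              # n = 10, so df = n - 2 = 8
    t_crit = stats.t.ppf(0.975, df)     # two-tailed critical t at the 0.95 level, about 2.306
    r_crit = t_crit / (df + t_crit ** 2) ** 0.5
    print(t_crit, r_crit)               # about 2.306 and 0.632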