New Relevant Tool: https://irthoughts.wordpress.com/2018/09/14/regression-correlation-calculator-updates-and-improvements/

I came across Professor R.J. Rummel's page on Understanding Correlation. It is an old, but still relevant, book-like Web page on how to interpret correlation coefficients properly.

In Chapter 4 he discusses the proper way of looking at correlation coefficient values. He writes (emphasis added in boldface):

“As a matter of routine **it is the squared correlations that should be interpreted**. This is because the correlation coefficient is misleading in suggesting the existence of more covariation than exists, and this problem gets worse as the correlation approaches zero. Consider the following correlations and their squares.”

“Note that as the correlation r decreases by tenths, the r² decreases by much more. A correlation of .50 only shows that 25 percent variance is in common; a correlation of .20 shows 4 percent in common; and a correlation of .10 shows 1 percent in common (or 99 percent not in common). Thus, squaring should be a healthy corrective to the tendency to consider **low correlations, such as .20 and .30,** as indicating **a meaningful or practical covariation**.”
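Rummel's drop-off from r to r² is easy to verify yourself; here is a minimal Python sketch (plain arithmetic, no libraries assumed):

```python
# Shared variance (r squared) for a range of correlation values,
# illustrating Rummel's point that r overstates the covariation.
for r in [0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1]:
    shared = r ** 2
    print(f"r = {r:.2f}  ->  r^2 = {shared:.2f}  ({shared:.0%} of variance in common)")
```

Note how the gap widens toward zero: r = 0.50 leaves 75 percent of the variance unaccounted for, and r = 0.20 leaves 96 percent.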

Rummel’s page is very relevant these days, when SEOs from SEOMOZ and a few other snake-oil marketing sites are buying the bogus claim from Fishkin and Hendrickson that low correlation coefficients in about that range are evidence that LDA scores and Google rankings are “highly” correlated.

As mentioned before on this blog, SEO marketers are good at selling that kind of snake oil or “quack” science.

Statistical significance does not equate to high correlation. For large enough sample sizes, even very low r values (0.1, 0.01, etc.) eventually become statistically significant, but significance alone does not make a correlation strong.
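A quick way to see this is the textbook t-test for a Pearson correlation, t = r·sqrt((n − 2)/(1 − r²)): hold r fixed at a weak 0.10 and let the sample grow (a sketch using only the standard formula):

```python
import math

def t_stat(r, n):
    """t statistic for testing H0: rho = 0, given a Pearson correlation r and sample size n."""
    return r * math.sqrt((n - 2) / (1 - r * r))

# An r of 0.10 (1% shared variance) becomes "significant" once n is large enough.
for n in [30, 100, 1000, 10000]:
    t = t_stat(0.1, n)
    print(f"n = {n:5d}: r = 0.10 gives t = {t:.2f}  (|t| > ~1.96 is significant at the 5% level)")
```

At n = 1000 the tiny r = 0.10 is already comfortably significant, yet the two variables still share only 1 percent of their variance.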

On a side note, I’m reading an IR thesis wherein Spearman’s and Kendall’s coefficients are used. Quite interesting.
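For readers unfamiliar with those two rank-based coefficients, here is a minimal pure-Python sketch (Kendall's tau-a and the classic Spearman d² shortcut; both versions assume no tied values):

```python
from itertools import combinations

def kendall_tau(x, y):
    """Kendall's tau-a: (concordant - discordant) / number of pairs. No tie handling."""
    concordant = discordant = 0
    for i, j in combinations(range(len(x)), 2):
        s = (x[i] - x[j]) * (y[i] - y[j])
        if s > 0:
            concordant += 1
        elif s < 0:
            discordant += 1
    pairs = len(x) * (len(x) - 1) / 2
    return (concordant - discordant) / pairs

def spearman_rho(x, y):
    """Spearman's rho via the 1 - 6*sum(d^2)/(n(n^2-1)) shortcut (assumes no ties)."""
    def ranks(values):
        order = sorted(range(len(values)), key=values.__getitem__)
        r = [0] * len(values)
        for rank, idx in enumerate(order, start=1):
            r[idx] = rank
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

print(kendall_tau([1, 2, 3, 4, 5], [2, 4, 6, 8, 10]))  # -> 1.0 (perfectly concordant)
print(spearman_rho([1, 2, 3], [1, 3, 2]))              # -> 0.5
```

For real analyses with ties, library implementations such as SciPy's `scipy.stats.kendalltau` and `scipy.stats.spearmanr` are the sensible choice; the sketch above is only to show what the coefficients measure.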

First PS

According to a Sloan Consulting article at the ISIXSIGMA.COM site (emphasis added):

“As a rule of thumb, **a strong correlation** or relationship has an *r*-value range of between 0.85 to 1, or -0.85 to -1. In a moderate correlation, the *r*-value ranges from 0.75 to 0.85 or -0.75 to -0.85. In a **weak correlation, one that is not a very helpful predictor**, *r* ranges from 0.60 to 0.74 or -0.60 to -0.74. Though an entirely random relationship equals 0.00, **any relationship that has a correlation** *r*-value that is 0.59 and below is not considered to be a reliable predictor.”

According to this Intel Teach Program chart, a correlation between 0 and 0.19 is a very weak one, while one between 0.2 and 0.39 is merely weak.

True, there are many correlation charts out there, and some do not agree on specific degrees or ranges, but they all tend to agree on one thing: a correlation value below 0.20 is a very, very weak correlation, never deemed evidence of variables being “highly correlated,” as claimed by SEOMOZ in their LDA fiasco posts.
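The rule of thumb quoted above is simple enough to write down as a small helper (the cut-offs follow the iSixSigma quote; other charts draw the lines differently):

```python
def isixsigma_label(r):
    """Map a correlation coefficient to the iSixSigma rule-of-thumb label quoted above."""
    a = abs(r)
    if a > 1:
        raise ValueError("a correlation coefficient must lie in [-1, 1]")
    if a >= 0.85:
        return "strong"
    if a >= 0.75:
        return "moderate"
    if a >= 0.60:
        return "weak (not a very helpful predictor)"
    return "not considered a reliable predictor"

# The r values from SEOMOZ's LDA posts (0.17 and the revised 0.32) both land
# in the bottom band:
print(isixsigma_label(0.17))  # -> not considered a reliable predictor
print(isixsigma_label(0.32))  # -> not considered a reliable predictor
```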

Second PS

Here is a list of reference links wherein these marketers make correlation claims based on quite weak correlation values, misleading naive peers and the public in the process. “Highly correlated”? “Remarkably well correlated”? Evidently Statistics is a Loss for SEOs.

https://irthoughts.wordpress.com/2016/04/18/virus-evolution-citation/

http://www.seomoz.org/blog/lda-correlation-017-not-032

http://www.seomoz.org/blog/lda-and-googles-rankings-well-correlated

http://www.seomoz.org/blog/google-vs-bing-correlation-analysis-of-ranking-elements