One of the trickiest aspects of publishing statistical studies is choosing the sample size. Failing to stipulate a valid procedure for estimating a proper sample size can hurt, for instance, a grant proposal. Ethics committees are concerned about the right number of observations in a study and ask submitters to justify, on statistical grounds, how they arrived at a given sample size. Research projects with too few or too many observations, or with no sample size methodology at all, often get rejected. This is something those conducting SEO quack “science” don’t seem to understand or be aware of.

Samples that are too small are unethical because the researcher cannot be specific enough about, for example, the size of a drug’s effect in a population. Samples that are too large are also unethical because they represent a waste of funding. It is true that a large sample improves precision, but it might involve an unjustified cost. Stratification is preferable, but it becomes too complicated with huge sample sizes, not to mention that statistical significance does not necessarily scale between samples.

As Rahul Dodhia from RavenAnalytics (http://ravenanalytics.com/Articles/Sample_Size_Calculations.htm) indicates, a sample of 2,000 might not behave very differently from a sample of 20,000, yet a sample of 200 may behave very differently from a sample of 2,000, even though in each case the sample ratio is 10. So a large sample is not always justified, even if such a sample size improves statistical significance and precision.
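To see the diminishing returns, here is a minimal R sketch. The 95% margin-of-error formula for a simple proportion and the worst-case p = 0.5 are my own illustrative assumptions, not figures from Dodhia’s article:

```r
# Margin of error (95% confidence, worst-case p = 0.5) for a simple random
# sample of a proportion, at the sample sizes mentioned above.
margin_of_error <- function(n, p = 0.5) 1.96 * sqrt(p * (1 - p) / n)

sapply(c(200, 2000, 20000), margin_of_error)
# ~0.069, ~0.022, ~0.007: going from 200 to 2,000 buys far more absolute
# precision than going from 2,000 to 20,000, even though both are 10x jumps.
```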

Consider the case of search engine ranking results. For a given query, search engines are capable of finding many results, frequently thousands or millions of them. Still, search engines and retrieval systems show users a limited answer set. For instance, Google limits its viewable answer set to a maximum of 1,000 results (100 pages at 10 results per page).

As in most retrieval systems, relevant results accumulate in the first few result pages, forming clusters. This is in agreement with van Rijsbergen’s Cluster Hypothesis, which states that documents that cluster together have a similar relevance to a given query. Moving down the list of search results, one often finds cluster transitions wherein the quality and aboutness of documents is polluted with off-topic content.

Documents buried deep in a list of results often contain content that is irrelevant to the initial query or riddled with spam techniques. If one wants to conduct a statistical study of ranking results versus a particular document feature, one does better to sample from the first few result pages than from the entire answer set of 1,000 results.

In general, in a non-search engine scenario one cannot just arbitrarily select large samples to “force” the statistical significance of very low correlation coefficients and then use those values to draw conclusions. Furthermore, what is the selection criterion for using 1,000 or 10,000 results?

Simply stated: if 10,000 observations are arbitrarily selected, why not use 100,000 or 1,000,000 instead? We already know that very small correlation coefficients between any arbitrary pair of random variables will be significant at those huge sample sizes anyway. And?
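The point is easy to demonstrate with a quick simulation. This is only a sketch; the “true” correlation of roughly 0.03 and the two sample sizes are illustrative assumptions on my part:

```r
# A trivially small population correlation (~0.03) is flagged as "significant"
# at n = 100,000 but usually not at n = 200.
set.seed(42)
simulate_cor_test <- function(n) {
  x <- rnorm(n)
  y <- 0.03 * x + rnorm(n)   # population correlation of roughly 0.03
  test <- cor.test(x, y)
  c(n = n, r = unname(test$estimate), p.value = test$p.value)
}

rbind(simulate_cor_test(200), simulate_cor_test(100000))
# At n = 100,000 the p-value is essentially zero even though r is only ~0.03;
# at n = 200 the same trivial effect is typically indistinguishable from noise.
```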

As noted in a Wikipedia entry, “given a sufficiently large sample size, a statistical comparison will always show a significant difference unless the population effect size is exactly zero” (http://en.wikipedia.org/wiki/Effect_size).

For example, a correlation coefficient of r = 0.04 would be significant at a 95% confidence level if it came from a sample of 10,000 (t-calc = 4.003 >> t-table = 1.96), while a correlation coefficient of r = 0.01 would be significant at a 95% confidence level if it came from a sample of 100,000 (t-calc = 3.162 >> t-table = 1.96). And? This proves nothing, especially when the magnitude of a “signal” approaches the magnitude of its “noise”.

As noted in the same Wikipedia entry, a correlation coefficient of 0.1 is strongly statistically significant when the sample size is 1,000 (t-calc = 3.175 >> t-table = 1.96), but reporting only the small p-value from this analysis could be misleading if a correlation of 0.1 is too small to be of interest in a particular application (http://en.wikipedia.org/wiki/Effect_size).

The statistical significance of extremely small r values is not surprising, as it is just a mathematical consequence of the fact that a t-value is a function F of a weighted ratio: the ratio of explained to unexplained variation, weighted by the number of degrees of freedom:

F(r, n) = t = SQRT[(r² / (1 – r²)) * (n – 2)]
F(r, n) = t = r * SQRT[(n – 2) / (1 – r²)]

For a given r value, increasing n increases t. No surprise here. What a math equation tells you is one thing; what the nature and obvious boundaries of a physical system tell you is another.
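As a sanity check, the t values quoted earlier drop straight out of this equation. A one-line R version:

```r
# t statistic for a correlation coefficient r computed from n observations:
# t = r * SQRT[(n - 2) / (1 - r^2)]
t_from_r <- function(r, n) r * sqrt((n - 2) / (1 - r^2))

t_from_r(0.04, 10000)    # ~4.00  > 1.96
t_from_r(0.01, 100000)   # ~3.16  > 1.96
t_from_r(0.10, 1000)     # ~3.17  > 1.96
```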

At trivially low r values, any claim regarding the statistical significance or strength of the results proves nothing, and one cannot do much with such trivial r values. For instance, for r = 0.04, r² = 0.0016, meaning that 1 – r² = 0.9984; that is, 99.84% of the variation in the dependent variable (y) is not explained by variation in the independent variable (x).

In such a scenario, assessing the effect of x on y is a futile exercise. Such a model would be useless for drawing conclusions or predicting anything. And here is the point that many SEOs at SEOMOZ (http://www.seo.co.uk/seo-news/seo-tools/the-seomoz-lda-tool-%E2%80%93-our-disappointing-findings.html , Fishkin, Hendrickson, and others elsewhere) don’t seem to grasp:

When a correlation coefficient is useless for all practical purposes.

If the raw data constantly changes, that’s another “Chaos Layer” that compounds the problem.

Enter Cohen’s Power

According to Cohen’s work, when conducting a sample size study of correlation coefficients, one needs to consider the required confidence level and power of the test, the desired probabilities of Type I and Type II errors, and the hypothesized or anticipated correlation coefficient (http://www.medcalc.be/manual/correlation_coefficient.php). One cannot just use an arbitrary sample size for testing things.
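One common way of combining those ingredients for a correlation test is the approximation based on Fisher’s z transformation; whether the MedCalc page linked above uses exactly this formula is an assumption on my part, but the formula itself is standard. A sketch in R:

```r
# Approximate sample size needed to detect an anticipated correlation r with
# two-sided significance level alpha and power 1 - beta, via Fisher's z:
#   n = ((z_{alpha/2} + z_{beta}) / atanh(r))^2 + 3
sample_size_for_r <- function(r, alpha = 0.05, power = 0.80) {
  z_alpha <- qnorm(1 - alpha / 2)
  z_beta  <- qnorm(power)
  ceiling(((z_alpha + z_beta) / atanh(r))^2 + 3)
}

sample_size_for_r(0.3)    # around 85 observations
sample_size_for_r(0.04)   # several thousand observations for a trivial effect
```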

In general, given any three of the following quantities, the fourth can be determined (http://www.statmethods.net/stats/power.html), as sketched in the example after this list:

1. sample size
2. effect size
3. significance level = P(Type I error) = probability of finding an effect that is not there
4. power = 1 – P(Type II error) = probability of finding an effect that is there
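As a sketch of what “given any three, the fourth can be determined” looks like in practice, the pwr package (which I believe is the library covered at the statmethods.net link above) lets you omit exactly one of these quantities and solves for it:

```r
# install.packages("pwr")   # if not already installed
library(pwr)

# Fix the effect size, significance level and power; solve for the sample size.
pwr.r.test(r = 0.3, sig.level = 0.05, power = 0.80)   # returns n (about 84)

# Fix the sample size, effect size and significance level; solve for the power.
pwr.r.test(n = 1000, r = 0.1, sig.level = 0.05)       # returns power (roughly 0.9)
```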

One also needs to consider which statistical parameter is undergoing the power analysis, asking questions like the following:

Are we testing means from a single group? (http://www.nss.gov.au/nss/home.nsf/pages/Sample+Size+Calculator+Description?OpenDocument)

Are we testing means from different groups? (http://www.ncbi.nlm.nih.gov/pmc/articles/PMC137461/)

Are we testing correlation coefficients? Read Simon’s take on the impact of sample size on the desired level of precision of correlation coefficients (http://www.childrens-mercy.org/stats/weblog2005/CorrelationCoefficient.asp).

Are we interested in the significance level, the effect size, the sample size, or the power?

When conducting an effect size analysis, one must keep in mind that effect sizes estimate the strength of a possible relationship rather than assign a significance level. Effect sizes do not determine significance levels, nor vice versa.

So, how do we go about implementing Power Analysis?

For those interested in implementing power analysis in the R language, I recommend the libraries described at http://www.statmethods.net/stats/power.html.
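As a hedged sketch (again assuming the pwr package is what those pages describe), the earlier questions map onto different functions:

```r
library(pwr)

# Comparing the means of two groups: pwr.t.test works with Cohen's d.
pwr.t.test(d = 0.5, sig.level = 0.05, power = 0.80, type = "two.sample")

# Comparing means across several groups: pwr.anova.test works with Cohen's f.
pwr.anova.test(k = 4, f = 0.25, sig.level = 0.05, power = 0.80)

# Testing a correlation coefficient: pwr.r.test works with r itself.
pwr.r.test(r = 0.3, sig.level = 0.05, power = 0.80)
```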

Software for conducting power analysis is also available elsewhere, as shown below. My favorites are G*Power and SPSS SamplePower (http://www.spss.com/software/statistics/samplepower/).

Power Analysis Software (source: http://www.epibiostat.ucsf.edu/biostat/sampsize.html)

G*Power (License: Free) – Uses both exact and approximate methods to calculate power. It handles sample size/power calculations for t-tests, 1-way ANOVAs, regression, correlation, and chi-square goodness of fit. For t-tests and ANOVAs you find the effect size by supplying mean and variance information; for correlation coefficients the effect size is a function of r². http://www.psycho.uni-duesseldorf.de/abteilungen/aap/gpower3/

PC-Size (License: Free) – Handles sample size/power calculations for t-tests, 1-way and 2-way ANOVA, simple regression, correlation, and comparison of proportions. http://www.esf.edu/efb/gibbs/monitor/usingDSTPLANandPCSIZE.pdf and ftp://ftp.simtel.net/pub/simtelnet/msdos/statstcs/size102.zip

DSTPLAN (License: Free) – Uses approximate methods to calculate power. It calculates sample size/power for t-tests, correlation, a difference in proportions, 2xN contingency tables, and various survival analysis designs. http://biostatistics.mdanderson.org/SoftwareDownload/SingleSoftware.aspx?Software_Id=41

PS (License: Free) – Performs sample size/power calculations for t-tests, chi-square, Fisher’s exact, McNemar’s, simple regression, and survival analysis. http://biostat.mc.vanderbilt.edu/twiki/bin/view/Main/PowerSampleSize

TIBCO Spotfire S+ (License: Paid) – Commercially supported statistical analysis software delivering a cross-platform IDE for the S programming language, the ability to analyze gigabyte-class data sets on the desktop, and a package system for sharing, reuse, and deployment of analytics; used widely in validated production environments (e.g., 21 CFR Part 11). http://spotfire.tibco.com/products/s-plus/statistical-analysis-software.aspx

NQuery Advisor (License: Paid) – Performs sample size/power calculations for t-tests, 1- and 2-way ANOVAs, tests of contrasts in 1-way ANOVAs, univariate repeated measures designs, regression (simple, multiple, and logistic), correlation, difference of proportions, 2xN contingency tables, and survival analyses. http://www.statsol.ie/nquery/nquery.htm

PASS (License: Paid) – Performs sample size/power calculations for z-tests, t-tests, 1-, 2-, and 3-way ANOVAs, univariate repeated measures designs, regression (simple, multiple, and logistic), correlations, difference in proportions, 2xN contingency tables, survival analyses, and simple non-parametric analyses. http://www.ncss.com/pass.html

Stata (License: Paid) – Has some simple built-in power and sample size functions. http://www.stata.com/

SPSS SamplePower (License: Paid) – If your sample size is too small you could miss important research findings; if it is too large you could waste valuable time and resources. SamplePower finds a suitable sample size in minutes, lets you test possible results before you begin the study, and strikes a balance among confidence level, statistical power, effect size, and sample size, with tools for comparing the effects of different study parameters. http://www.spss.com/software/statistics/samplepower/