Regardless of your research field, sooner or later you will need to compute summary statistics, for instance a weighted average of correlation coefficients between two variables, x and y.
Computing weighted averages of correlation coefficients depends on the weighting strategy used: unit weights, sample sizes, optimal weights, within- and between-study variances, and so on. Most textbooks advocate applying Fisher's z transformation first, for instance to compute confidence intervals and average correlations.
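For readers unfamiliar with that textbook approach, here is a minimal sketch of Fisher's z averaging using only the standard library. The function names are mine; the weights n_i − 3 are the usual inverse-variance weights for z, and the hard-coded critical value assumes a 95% interval:

```python
import math

def fisher_avg_correlation(rs, ns):
    """Average correlations via Fisher's z: transform each r with atanh,
    take a weighted mean using the inverse-variance weights n_i - 3,
    then back-transform with tanh."""
    zs = [math.atanh(r) for r in rs]
    ws = [n - 3 for n in ns]
    z_bar = sum(w * z for w, z in zip(ws, zs)) / sum(ws)
    return math.tanh(z_bar)

def fisher_ci(r, n, z_crit=1.959964):
    """Approximate 95% CI for a single r via Fisher's z
    (z_crit is the normal quantile for alpha = 0.05)."""
    z = math.atanh(r)
    se = 1.0 / math.sqrt(n - 3)
    return math.tanh(z - z_crit * se), math.tanh(z + z_crit * se)
```

Note that because the back-transform tanh is nonlinear, the result generally differs from a direct weighted mean of the raw r values, which is part of why the weighting strategy matters.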
One thing that has been bothering me for a long time now is this: how much discriminatory power would such weighting strategies have if we were presented with identical sets of correlation values, but coming from samples with different variability in the dependent variable?
Research conducted over the last four months led me to an alternative to the weighting strategies above.
At the time I was putting together a new tutorial series on meta-analysis, so this problem diverted my attention and was always in the back of my mind.
So after many meals and long nights, I finally decided to include my research findings as Part 1 of the tutorial series, which you can read here: On the Non-Additivity of Correlation Coefficients.
I hope you like it. Since this is relevant to many research areas, please send your feedback through private, confidential email rather than through this blog.
PS. If you are interested in testing how the proposed approach compares with other weighting strategies, feel free to contact me. I'm interested in testing with real (non-simulated) data from any field: science, engineering, education, behavioral & social sciences, allied health, literature, politics, marketing, etc.