For completeness, we have added the following content to the Exercises section of the Matrix Inverter tool, available at and mentioned in the post.
The following information was found online (Quora, 2013; StackExchange, 2013a; 2013b).
Let Σ be a covariance matrix and Σ⁻¹ its inverse, commonly referred to as the precision matrix.
With Σ, one reads off the unconditional covariance between variables i and j from the (i,j)-th entry (dividing by the two standard deviations gives the correlation).
It may be the case that two variables are correlated but do not directly depend on each other, with another variable k explaining their correlation. By computing Σ⁻¹ we can examine whether the variables are conditionally independent given all the others, or merely partially correlated.
Σ⁻¹ displays information about the partial correlations of the variables. A partial correlation describes the correlation between variables i and j once you condition on all the other variables. If i and j are conditionally independent, then the (i,j)-th element of Σ⁻¹ equals zero. If the data follow a multivariate normal distribution, then the converse is also true: a zero element implies conditional independence.
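Quantitatively, the partial correlation can be read directly off the precision matrix. Writing P = Σ⁻¹, the standard identity is:

```latex
\rho_{ij \mid \text{rest}} = -\frac{P_{ij}}{\sqrt{P_{ii}\, P_{jj}}}
```

so a zero off-diagonal entry of P corresponds exactly to a zero partial correlation.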
In general, Σ⁻¹ measures how tightly the variables are clustered around their means (diagonal elements) and the extent to which they do not co-vary with the other variables (off-diagonal elements). The higher a diagonal element, the more tightly that variable is clustered around its mean.
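As a quick sanity check of the claim above, here is a small self-contained sketch in plain Python. The chain X → Y → Z is our own illustrative example, not taken from the cited answers: Z depends on X only through Y, so X and Z are correlated yet conditionally independent given Y, and the corresponding precision-matrix entry vanishes.

```python
# Chain X -> Y -> Z with X ~ N(0,1), Y = X + e1, Z = Y + e2,
# where e1, e2 are independent N(0,1). The covariance matrix works out to:
SIGMA = [[1.0, 1.0, 1.0],
         [1.0, 2.0, 2.0],
         [1.0, 2.0, 3.0]]

def invert(a):
    """Invert a square matrix by Gauss-Jordan elimination with partial pivoting."""
    n = len(a)
    # Augment the matrix with the identity.
    aug = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
           for i, row in enumerate(a)]
    for col in range(n):
        # Choose the largest pivot in this column for numerical stability.
        pivot = max(range(col, n), key=lambda r: abs(aug[r][col]))
        aug[col], aug[pivot] = aug[pivot], aug[col]
        p = aug[col][col]
        aug[col] = [x / p for x in aug[col]]
        # Zero out this column in every other row.
        for r in range(n):
            if r != col and aug[r][col] != 0.0:
                factor = aug[r][col]
                aug[r] = [x - factor * y for x, y in zip(aug[r], aug[col])]
    # The right half of the augmented matrix is now the inverse.
    return [row[n:] for row in aug]

precision = invert(SIGMA)
# X and Z are correlated (SIGMA[0][2] = 1) but conditionally independent
# given Y, so the (0, 2) entry of the precision matrix is zero.
print(precision[0][2])  # → 0.0
```

The exact inverse here is [[2, -1, 0], [-1, 2, -1], [0, -1, 1]]: the zero in the corners reflects the conditional independence of X and Z, while the diagonal entries show that X (precision 2) is more tightly pinned down given the other variables than Z (precision 1).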
That is, in my opinion, the simplest explanation of the subject I have found so far. So there you have it: a good application for our Matrix Inverter tool.
- Quora (2013). What is the inverse covariance matrix?
- StackExchange (2013a). How to interpret an inverse covariance or precision matrix?
- StackExchange (2013b). What does the inverse of covariance matrix say about data? (Intuitively)