Here is the paper that started it all: A Protocol for Packet Network Intercommunication, by Vint Cerf and Bob Kahn.
The current issue of IRW is already out, a bit delayed due to reasons previously mentioned. Enjoy it.
A vector should be represented by a letter, in boldface or with a right arrow on top.
A caret should be used to indicate a unit vector.
An inner product should be indicated by placing a dot between two letters representing vectors.
Note that Dirac Notation is a different animal.
The APS Style Guide has additional guidelines.
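In LaTeX, these conventions can be sketched roughly as follows (the symbol names are illustrative; the APS Style Guide remains the authority on the details):

```latex
% A vector: boldface, or a right arrow on top
\mathbf{v} \quad \vec{v}
% A unit vector: a caret (hat)
\hat{n}
% An inner product: a dot between the two vectors
\mathbf{a} \cdot \mathbf{b}
```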
I came across Thesaurus as a complex network, a fascinating 2003 paper by Adriano de Jesus Holanda, Ivan Torres Pisa, Osame Kinouchi, Alexandre Souto Martinez, and Evandro Eduardo Seron Ruiz of the Universidade de São Paulo, Brazil, in which they model thesauri using graph theory. The abstract reads:
“A thesaurus is one, out of many, possible representations of term (or word) connectivity. The terms of a thesaurus are seen as the nodes and their relationship as the links of a directed graph. The directionality of the links retains all the thesaurus information and allows the measurement of several quantities. This has lead to a new term classification according to the characteristics of the nodes, for example, nodes with no links in, no links out, etc. Using an electronic available thesaurus we have obtained the incoming and outgoing link distributions. While the incoming link distribution follows a stretched exponential function, the lower bound for the outgoing link distribution has the same envelope of the scientific paper citation distribution proposed by Albuquerque and Tsallis. However, a better fit is obtained by simpler function which is the solution of Ricatti’s differential equation. We conjecture that this differential equation is the continuous limit of a stochastic growth model of the thesaurus network. We also propose a new manner to arrange a thesaurus using the “inversion method”.”
The study is important because it provides an interesting look at word relationships. They identify an underlying power law which, in my opinion, is worth investigating as to whether it lies at the core of semantic relationships.
They briefly mention the limitations of LSA:
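As a rough illustration of the paper's graph model (the toy thesaurus below is invented for illustration, not taken from their data), terms can be represented as nodes, relationships as directed links, and nodes classified by their incoming and outgoing link counts:

```python
from collections import Counter

# A tiny toy thesaurus as a directed graph: term -> related terms.
# All entries here are invented for illustration.
thesaurus = {
    "happy": ["glad", "joyful", "content"],
    "glad": ["happy", "joyful"],
    "joyful": ["happy"],
    "content": [],          # no outgoing links (a "sink" term)
    "serene": ["content"],  # nothing points to it (a "source" term)
}

# Outgoing links: the length of each term's entry.
out_degree = {term: len(links) for term, links in thesaurus.items()}
# Incoming links: count how often each term is pointed to.
in_degree = Counter(t for links in thesaurus.values() for t in links)

# Classify terms by node type, in the spirit of the paper's
# classification (nodes with no links in, no links out, etc.).
sinks = sorted(t for t in thesaurus if out_degree[t] == 0)
sources = sorted(t for t in thesaurus if in_degree[t] == 0)

print(sinks)    # ['content']
print(sources)  # ['serene']
```

With a full electronic thesaurus in place of this toy dictionary, histograms of `in_degree` and `out_degree` values would give the incoming and outgoing link distributions the authors study.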
“However, LSA has been criticized as a poor approach for predicting semantic neighborhood”.
Indeed, LSA (or LSI) does not necessarily describe or predict semantics, as originally thought. In my view, LSA/LSI is itself a misnomer. Research references can be provided to support this view.
I do have one additional comment on the paper. In it, LSA is described as a PCA technique. The authors write:
“Another interesting way to treat data is the Latent Semantic Analysis (LSA) which deals with word covariance in a corpus. LSA is a principal component analysis (PCA) technique, i.e., the covariance matrix is diagonalized and from the most important eigenvalues (around 300) the eigenvectors are considered to span an Euclidean vector space.”
This might not be entirely accurate. Let’s see why:
1. PCA was invented by Karl Pearson in 1901, so it is more than half a century older than Golub and Kahan’s SVD algorithm, which was published in 1965. See G. Golub and W. Kahan, J. SIAM Numer. Anal., Ser. B, Vol. 2, No. 2 (1965).
2. In 1988 Dumais et al. applied Golub’s SVD to text and called it LSA (LSI). See Dumais, S. T., Furnas, G. W., Landauer, T. K., Deerwester, S., & Harshman, R. (1988), Proceedings of the Conference on Human Factors in Computing Systems (CHI), 281-286. See also Deerwester, S., Dumais, S. T., Landauer, T. K., Furnas, G. W., & Beck, L. (1988), Improving information retrieval using Latent Semantic Indexing, Proceedings of the 1988 annual meeting of the American Society for Information Science.
3. In LSA (LSI), the SVD algorithm can be applied to matrices that are not necessarily populated with covariance values.
4. It was only later realized that SVD can be applied to a covariance matrix to obtain the PCA components.
5. See the PCA & SPCA Tutorial
6. PCA is not LSI. See http://irthoughts.wordpress.com/2007/05/05/pca-is-not-lsi/
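A small numerical sketch of points 3, 4, and 6 (the matrix values below are made up, and `numpy` is assumed to be available): applying SVD directly to a raw term-document matrix, as LSA does, generally does not give the same leading direction as diagonalizing the covariance matrix of the centered data, as PCA does.

```python
import numpy as np

# Toy term-document matrix (4 terms x 3 documents); values are illustrative.
A = np.array([
    [2., 0., 1.],
    [0., 3., 1.],
    [1., 1., 0.],
    [0., 2., 2.],
])

# LSA/LSI: SVD applied directly to the raw (uncentered) matrix.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
lsa_dir = U[:, 0]  # leading left singular vector

# PCA: diagonalize the covariance matrix of the *centered* data.
A_centered = A - A.mean(axis=1, keepdims=True)
cov = A_centered @ A_centered.T / (A.shape[1] - 1)
eigvals, eigvecs = np.linalg.eigh(cov)
pca_dir = eigvecs[:, -1]  # eigh returns eigenvalues in ascending order

# Because PCA centers the data and LSA does not, the two leading
# directions generally differ: |cosine| < 1 for this toy matrix.
cosine = abs(lsa_dir @ pca_dir)
print(cosine)
```

The two coincide only in the special case where SVD is applied to the covariance matrix of centered data, which is the later realization mentioned in point 4, not what LSA/LSI does in general.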
1. The current issue of IR Watch will be out over the weekend, a bit delayed due to getting ready for school, preparing lessons, and research projects. If things go as expected, my academic schedule will be a bit busy between teaching and research at two different universities.
2. I’m doing research for a manuscript that deals with affine transformations applied to several IR problems. It expands on Vector Space Theory and allows one to think outside the “term-document” box. Great stuff.
4. Talking about semantically motivated topics: sentiment analysis, spam, etc. It is funny how some folks in the SEO world like to damage the reputation of others without presenting any evidence. This time the trolls took on Kim Krause Berge ( http://cre8pc.com/archives/1489 ). I have always admired Kim’s work, consider her a usability icon, and had the privilege of meeting her back in 2005. I was surprised to see these folks having a field day at her expense at Rand’s site. Kim, I feel your pain. However, more than one SEO forum/blog has lost credibility by allowing these folks, most of whom think they can be socially “ranked” by attacking whoever is at the “top”. The fact is that most trolls are paper tigers that go hiding at the first Cease & Desist or defamation lawsuit.