Note: I added this post content to the Stochastic Matrix tutorial.

The spreading of incorrect knowledge, or at best inaccurate representations of concepts, is prevalent in circles associated with search engine optimization (SEO). This is a social phenomenon most notorious in the blogosphere and in public forums (sites and discussion boards). Because of this, we call these a bunch of “blogonomies”.

Many of these are promoted by well-known SEO and SEM specialists. These folks are called “experts” by their followers and pose as such at their SEM conferences. They often like to walk the fine line of fallacy, producing material in which false concepts are decorated with scientific terms.

We are not interested in investigating what motivates the phenomenon of blogonomies. However, we want the reader to be aware of them. As a sample of what you could expect to see listed in our SEO Blogonomies, here is one: the Search Engine Markov Chain Blogonomy.

Some SEOs have written, giving readers the impression, that search engines use a mythical Markov Chain to find patterns in search results or sites, as if such a chain were a special kind of detection instrument, tool, or technique applied to find keyword patterns in a web page or to detect how a document was optimized. This is pure nonsense.

There is no such thing as a mythical Search Engine Markov Chain; it exists only in the minds of these folks and their followers, who often misquote research articles. A Markov chain is simply a random process that evolves over time according to some transition probabilities. To illustrate, suppose we run an experiment that has N possible results (states). Suppose that we keep repeating the experiment and that the probability of each result or state occurring on the (n+1)th repetition depends only on the result of the nth repetition. This is called a Markov chain.
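To make that definition concrete, here is a minimal sketch in Python of a three-state Markov chain. The states and transition probabilities are invented purely for illustration; nothing here reflects how any search engine works.

```python
import random

# A minimal sketch of a Markov chain with three made-up states.
# Each row of the transition table gives the probabilities of moving
# from the current state to each possible next state; each row sums to 1.
states = ["A", "B", "C"]
transition = {
    "A": [0.5, 0.3, 0.2],  # P(A->A), P(A->B), P(A->C)
    "B": [0.1, 0.6, 0.3],
    "C": [0.4, 0.4, 0.2],
}

def step(current):
    """Pick the next state using only the current state's probabilities."""
    return random.choices(states, weights=transition[current], k=1)[0]

# Run the chain for a few steps: each outcome depends only on the
# previous one, which is exactly the Markov property described above.
state = "A"
chain = [state]
for _ in range(10):
    state = step(state)
    chain.append(state)

print(" -> ".join(chain))
```

Note that the `step` function never looks at anything but the current state: the history of how the chain got there is irrelevant, which is the whole point of the Markov property.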

Thus, a Markov chain is not an instrument, technique, tool, or the like that is allegedly used by search engines to rank web pages or to find word patterns in documents. It is true that there is a lot of research in which user behaviour and link graphs have been modeled as Markov processes in an attempt to understand them better, but the analogy stops there.

It is true that there is something called an absorbing Markov chain, but this is a specific case involving random walks with absorbing states. Perhaps it might be a good idea to write a tutorial on regular Markov chains and absorbing Markov chains or, better, recommend that readers take a look at James T. Sandefur's book, Discrete Dynamical Systems: Theory and Applications (Oxford University Press; Chapter 6, Absorbing Markov Chains) (3). If you like fractals, chaos, and iteration, this book is for you.
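As a rough illustration of the idea (and, again, not of anything a search engine does), here is a sketch of a one-dimensional random walk with two absorbing endpoints. The walk, its positions, and the simulation parameters are all invented for this example.

```python
import random

# A minimal sketch of an absorbing Markov chain: a random walk on the
# positions 0..4, where 0 and 4 are absorbing states. Once the walker
# reaches either end, it stays there forever.
ABSORBING = {0, 4}

def walk(start=2):
    """Walk left or right with equal probability until an absorbing state is hit."""
    position = start
    steps = 0
    while position not in ABSORBING:
        position += random.choice([-1, 1])
        steps += 1
    return position, steps

# Simulate many walks to estimate how often each absorbing state is reached
# and how long absorption takes on average.
results = [walk() for _ in range(10_000)]
absorbed_at_0 = sum(1 for pos, _ in results if pos == 0)
print("Fraction absorbed at 0:", absorbed_at_0 / len(results))
print("Average steps to absorption:", sum(s for _, s in results) / len(results))
```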

Meanwhile, if while drunk you have ever walked randomly from one point to another, chances are you have already “Markov-chained yourself”.

This is a legacy post originally published on 7/8/2006.
