
IR Thoughts

~ Thoughts on Information Retrieval, Search Engines, Data Mining, and Science & Engineering


Monthly Archives: January 2008

Back Mapping for the Masses

31 Thursday Jan 2008

Posted by egarcia in Data Mining, IR Tutorials, Machine Learning

≈ Leave a comment

In a recent tutorial on association and scalar clusters, http://www.miislita.com/information-retrieval-tutorial/association-scalar-clusters-tutorial-1.pdf, I introduced a back mapping technique: once the features that make up clusters are extracted from objects, the clusters are mapped back to the objects.

The technique works well with clusters of terms extracted from documents. The reverse case is also possible: given a cluster of documents extracted from terms, it is possible to map these back to terms.

What do we gain from such two-way manipulations? A lot. Consider the first scenario, mapping term clusters back to documents; a tutorial on the second scenario will be available soon.

Back Mapping Term Clusters to Documents

A document is just a distribution over topics, while topics are distributions over words. Thus, across a collection of documents there are hidden (latent) topics waiting to be uncovered. Back mapping allows us to recover these precisely.

Combinations of terms that do not amount to topics across the collection are discovered as well. One would reasonably expect these to be the least relevant term combinations across the collection. In addition, one would expect documents traced back to clusters to be the most relevant documents in the collection with respect to those topics.

The implications of this for search engine optimization and keyword bidding are quite obvious. Implementation is straightforward. To learn more about it, read Part 1 of the tutorial.
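The back mapping step itself can be sketched in a few lines. Below is a minimal sketch, assuming a toy term-document matrix and a term cluster already identified by some clustering method; the terms, documents, and weights are invented for illustration only:

```python
import numpy as np

# Toy term-document matrix: rows = terms, columns = documents.
# All labels and weights are made up for illustration.
terms = ["gold", "silver", "truck", "shipment"]
A = np.array([
    [1, 0, 1],   # gold
    [1, 0, 0],   # silver
    [0, 1, 1],   # truck
    [0, 1, 1],   # shipment
], dtype=float)

# Suppose cluster analysis grouped "truck" and "shipment" together.
cluster = [terms.index("truck"), terms.index("shipment")]

# Back mapping: score each document by its total weight over the
# cluster's terms; documents with high scores are traced back to
# the cluster, i.e., to the latent topic it represents.
scores = A[cluster, :].sum(axis=0)
ranked = sorted(range(A.shape[1]), key=lambda j: -scores[j])
print(scores)   # per-document pertinence to the cluster
print(ranked)   # documents ordered by pertinence
```

Here documents 1 and 2 are mapped back to the {truck, shipment} cluster, while document 0 is not; the scoring rule (a plain sum of weights) is one simple choice among several possible pertinence measures.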

Web Mining Week 9

28 Monday Jan 2008

Posted by egarcia in Homeland Security, Web Mining Course

≈ Leave a comment

Week 9 Agenda

Intelligence Searching for Penetration Testers (PPT Presentation)
Searching for Terrorist Threats and Identity Thefts, the SSN Way (PPT Presentation)
Mining VIN numbers, Email Headers, and other Undocumented Commands (PPT Presentation)

Required Reading Material

Provided during lecture.

The Power of Document Linearization

25 Friday Jan 2008

Posted by egarcia in Marketing Research, SEO Myths

≈ 2 Comments

In http://www.miislita.com/fractals/keyword-density-optimization.html I explained to the SEO community the concept of document linearization as part of document gap analysis. Marketers learned what IR graduate students already know: that document linearization (i.e., markup removal) is just one component of document indexing.

Keyword distribution, word distances, phrase matching, etc. are obtained from the text stream that results from linearization, not from the apparent position of text as rendered by a browser and visually inspected by average end users. Document linearization thus debunks the common SEO keyword density myth: the apparent distribution of words perceived when end users visually scan a document is one thing; the actual word distribution parsed by a search engine is another. The futility of computing KD values is quite obvious.
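A minimal linearizer illustrates the point: markup is stripped and only the text stream, in source order, survives. This is a sketch using Python's standard-library HTML parser; the sample HTML is invented:

```python
from html.parser import HTMLParser

class Linearizer(HTMLParser):
    """Strip markup and emit the document's text stream in source order."""
    def __init__(self):
        super().__init__()
        self.chunks = []

    def handle_data(self, data):
        # Only text nodes survive linearization; tags are discarded.
        if data.strip():
            self.chunks.append(data.strip())

    def text(self):
        return " ".join(self.chunks)

html = "<html><body><h1>Gold silver</h1><p>A <b>truck</b> shipment.</p></body></html>"
p = Linearizer()
p.feed(html)
print(p.text())  # "Gold silver A truck shipment."
```

Indexing statistics (word distances, phrase matches, etc.) are computed over this stream, not over the page as visually rendered.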

Here is a report of another recent SEO that discovered the power of document linearization:

http://seo-gw.blogspot.com/2008/01/fractal-semantics-linearization.html

The testimonial is worth reading.

The post https://irthoughts.wordpress.com/2007/12/20/from-keyword-density-to-william-tuttes-legacy/  is also relevant these days.

Search for posts on keyword density: https://irthoughts.wordpress.com/?s=keyword+density

Microsoft’s Black Cloud on Yahoo! & SEO Tag Clouds

23 Wednesday Jan 2008

Posted by egarcia in Miscellaneous, SEO Myths

≈ 2 Comments

From time to time rumors spread of the black cloud of Microsoft over Yahoo!; i.e., of Microsoft buying Yahoo!. This time things are less cloudy, especially now that Yahoo! is about to cut jobs.

Early this year, Jeremy Zawodny from Yahoo!, wrote:

“Sure, there would be cultural problems, integration challenges, and many people who’d likely walk. But at the end of the day, Microsoft would end up with a much larger set of online services, a better advertising network, and people who know how to build, brand, and market web stuff that people actually use.”

Talking about clouds:

A student asked me about some SEOs claiming that text tag clouds are a kind of LSI technology.

Pure nonsense coming from many SEOs, as usual.

These clouds are easy to construct. No LSI is needed:

1. Sort terms from a document or lookup list by frequency.
2. Normalize the frequencies to the [0, 1] interval.
3. Use the normalized frequencies as parameters to be passed as font sizes.

For pizzazz, store the terms in an array that can be sorted or randomized, and/or use some CSS.

We can do the same with hit counts assigned to blog categories, links, etc. No special technology is needed.
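The three steps above can be sketched in a few lines of Python; the term frequencies and the pixel range are arbitrary choices for illustration:

```python
# Toy term frequencies; the terms and counts are invented.
freqs = {"search": 40, "mining": 25, "lsi": 10, "cloud": 5}

lo, hi = min(freqs.values()), max(freqs.values())
min_px, max_px = 10, 32   # font-size range in pixels (arbitrary choice)

def font_size(f):
    # Step 2: normalize the frequency to the [0, 1] interval;
    # then rescale it to the chosen font-size range (step 3).
    norm = (f - lo) / (hi - lo) if hi > lo else 1.0
    return round(min_px + norm * (max_px - min_px))

# Step 1: sort terms by frequency (descending) and emit markup.
for term, f in sorted(freqs.items(), key=lambda kv: -kv[1]):
    print(f'<span style="font-size:{font_size(f)}px">{term}</span>')
```

The most frequent term gets the largest font, the least frequent the smallest; no LSI, or any other "special technology", is involved.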

Association & Scalar Clusters Tutorial – Part 1

22 Tuesday Jan 2008

Posted by egarcia in Data Mining, Latent Semantic Indexing, Web Mining Course

≈ Leave a comment

I am writing a tutorial series on Cluster Analysis. It is my pleasure to announce that the Association and Scalar Clusters Tutorial – Part 1: Back Mapping Term Clusters to Documents was uploaded a few days ago.

Online publication was announced in advance to subscribers of IR Watch – The Newsletter, so they already have an edge over regular readers and visitors of Mi Islita.

Abstract follows:

In this tutorial you will learn how to extract association and scalar clusters from a term-document matrix. A “reaction” equation approach is used to break down the classification problem to a sequence of steps. From the initial matrix, two similarity matrices are constructed, and from these association and scalar clusters are identified. A back mapping technique is then used to classify documents based on their degree of pertinence to the clusters. Matched documents are treated as distributions over topics. Applications to topic discovery, term disambiguation, and document classification are discussed.
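The pipeline outlined in the abstract, from term-document matrix to two similarity matrices, can be sketched as follows. This is one common formulation (co-occurrence counts for the association matrix, cosine similarity between its rows for the scalar matrix); the data are invented, and the tutorial itself should be consulted for the exact construction used there:

```python
import numpy as np

# Toy term-document matrix (rows = terms, columns = documents);
# the weights are invented for illustration.
A = np.array([
    [2, 0, 1],
    [1, 1, 0],
    [0, 2, 2],
], dtype=float)

# Association matrix: term-term co-occurrence across documents.
S = A @ A.T

# Scalar matrix: cosine similarity between rows of the association
# matrix, so two terms are similar when their term "neighborhoods"
# (rows of S) point in similar directions.
norms = np.linalg.norm(S, axis=1, keepdims=True)
scalar = (S / norms) @ (S / norms).T

print(S)
print(np.round(scalar, 3))
```

Clusters are then read off each matrix by thresholding the similarities, and back mapping classifies documents by their pertinence to those clusters.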

During last night's lecture (Web Mining Course), I applied the back mapping technique to scalar clusters generated with LSI. The technique provides additional information and reasons as to how and why documents score as observed after implementing SVD. A clear connection with Fuzzy Set Theory was made.

Students taking the Web Mining Course will find this tutorial quite handy.

Web Mining Week 8

21 Monday Jan 2008

Posted by egarcia in Latent Semantic Indexing, Web Mining Course

≈ Leave a comment

Week 8 Agenda

Take-Home Work 3 and Web Mining Course FAQs
LSI and Scalar Cluster Analysis: An EXCEL Spreadsheet Approach (PPT presentation)
LSI and Fuzzy Sets = Fuzzy LSI
Introduction to Intelligence Searches (PPT presentation)
Bonus: My IPAM Lost Pictures at the 2006 Document Indexing Workshop

Required Reading Material

http://www.miislita.com/information-retrieval-tutorial/singular-value-decomposition-fast-track-tutorial.pdf
http://www.miislita.com/information-retrieval-tutorial/latent-semantic-indexing-fast-track-tutorial.pdf 
http://www.miislita.com/information-retrieval-tutorial/lsi-keyword-research-fast-track-tutorial.pdf

Finding Topic-Specific Posts

18 Friday Jan 2008

Posted by egarcia in Latent Semantic Indexing, SEO Myths

≈ Leave a comment

This post is for those interested in finding all my posts on a particular topic; for example, LSI SEO myths or keyword density myths. These are two topics I have debunked many times. Spammers and unethical SEOs still cling to these myths, whether to promote their image as "seo experts", to serve vested interests, or purely for money.

To find all posts on a specific topic, either (1) click on a category link or (2) use this blog's search box.

For those lazy enough, here are some posts on LSI SEO myths:

https://irthoughts.wordpress.com/2007/12/11/perpetuating-lsi-misconceptions/

https://irthoughts.wordpress.com/2007/09/03/lsi-according-to-an-seomoz-glossary/

https://irthoughts.wordpress.com/2007/08/29/a-call-to-expose-seo-liers/

https://irthoughts.wordpress.com/2007/07/19/seos-and-still-their-lsi-misconceptions/

https://irthoughts.wordpress.com/2007/07/09/a-call-to-seos-claiming-to-sell-lsi/

https://irthoughts.wordpress.com/2007/06/06/lsi-blog-posts-and-seos/

https://irthoughts.wordpress.com/2007/06/02/when-seos-are-caught-in-lies/

https://irthoughts.wordpress.com/2007/05/11/zoom-in-this-theme-the-lsi-myth/

https://irthoughts.wordpress.com/2007/05/06/seos-blogging-lsi-non-sense/

https://irthoughts.wordpress.com/2007/05/03/two-seo-blogonomies/

https://irthoughts.wordpress.com/2007/05/03/there-is-no-such-thing-as-lsi-friendly-documents/

https://irthoughts.wordpress.com/2007/05/03/latest-seo-incoherences-lsi/

Global Term Weights based on Entropies

16 Wednesday Jan 2008

Posted by egarcia in Latent Semantic Indexing, Vector Space Models, Web Mining Course

≈ Leave a comment

A grad student taking the Web Mining, Search Engines, and Business Intelligence course asked me to clarify global weights G defined as entropies.

Global weights based on entropies are frequently combined with local weights and normalization factors into overall term weights. These are then used to populate a term-document matrix. The matrix can be used with term vector models to rank documents, or decomposed with SVD (LSI) and used for the same purpose.

The following equations define the global entropy weight G of term i in a collection of N documents; here I use a collection of just 3 documents (N = 3) and provide two extreme cases. With tf_ij the frequency of term i in document j, and gf_i = Σ_j tf_ij its total frequency in the collection:

p_ij = tf_ij / gf_i

G_i = 1 + Σ_j [ p_ij log(p_ij) ] / log(N)

Evidently,

G = 0 if the term is equally mentioned in all documents of the collection.
G = 1 if the term is present in just one document.

Any other combination of frequencies yields G values between 0 and 1. Thus, the model gives higher weights to terms concentrated in a small number of documents, while lowering the weights of terms spread evenly across the collection.

Note that, by convention, p log p is taken as 0 when p = 0; when p = 1, p log p = 0 exactly, so in both cases the term contributes nothing to the sum.
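The two extreme cases are easy to verify numerically. A sketch, assuming the entropy formulation above (the frequency vectors are invented):

```python
import math

def entropy_weight(freqs):
    """Global entropy weight G for one term, given its frequency
    in each of the N documents of the collection."""
    N = len(freqs)
    gf = sum(freqs)          # gf_i: total frequency of the term
    s = 0.0
    for tf in freqs:
        p = tf / gf          # p_ij = tf_ij / gf_i
        if p > 0:            # convention: p log p = 0 when p = 0
            s += p * math.log(p)
    return 1.0 + s / math.log(N)

# Extreme case 1: term spread equally over all N = 3 documents.
print(entropy_weight([4, 4, 4]))   # ~0.0 (up to floating-point rounding)
# Extreme case 2: term present in a single document.
print(entropy_weight([12, 0, 0]))  # 1.0
```

Any intermediate distribution, e.g. entropy_weight([8, 3, 1]), lands strictly between 0 and 1.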

Web Mining Week 7

14 Monday Jan 2008

Posted by egarcia in Latent Semantic Indexing, SEO Myths, Web Mining Course

≈ Leave a comment

Week 7 Agenda

Review of Association and Scalar Clusters
Review of Vector Space Models
LSI & SVD: Demystifying LSI SEO Myths (OJOBuscador Congress, Madrid; PDF Presentation)
LSI & Keyword Research (PDF Presentation)
SVD Noise Filtering: Principal Component Analysis (PCA)

Required Reading Material

Tutorial Series
This is part one of a five-part tutorial series:
http://www.miislita.com/information-retrieval-tutorial/svd-lsi-tutorial-1-understanding.html

Fast Tracks
These are quick tutorials, with to-the-point calculations:
http://www.miislita.com/information-retrieval-tutorial/singular-value-decomposition-fast-track-tutorial.pdf
http://www.miislita.com/information-retrieval-tutorial/latent-semantic-indexing-fast-track-tutorial.pdf
http://www.miislita.com/information-retrieval-tutorial/lsi-keyword-research-fast-track-tutorial.pdf

Blog Posts
These are IR blog posts designed to fight back against misinformation promoted by unethical SEOs and spammers:
https://irthoughts.wordpress.com/2007/07/09/a-call-to-seos-claiming-to-sell-lsi/
https://irthoughts.wordpress.com/page/1/?s=lsi
https://irthoughts.wordpress.com/page/2/?s=lsi

Blog Category
This is a blog category pointing to a collage of posts that demystify SEO nonsense about LSI. Some are about topics that overlap with LSI:
https://irthoughts.wordpress.com/category/latent-semantic-indexing/   

Web Mining and Search Engines Architecture Courses

11 Friday Jan 2008

Posted by egarcia in Web Mining Course

≈ Leave a comment

Winter: back to school.

Here is the schedule for the Web Mining, Search Engines, and Business Intelligence graduate course for the coming weeks.

Jan 14 – LSI and SVD: A hands-on approach. Covers SEO LSI Myths

Jan 21 – Intelligence Searching: Ethical hacking and penetration testing with search engines

Jan 28 – Spam Intelligence: Ethical Spamming, spamdexing, and Adversarial IR strategies

Feb 4 – On-Topic Analysis and Co-Occurrence Theory

Feb 11 – TBA

Next Spring I will be teaching the advanced graduate course Search Engines Architecture. 

This is a hands-on course in which students will spend most of their time in the Software Testing Lab. We will build crawlers, dbas, parsers, search interfaces, etc. Students doing, or interested in working on, projects or theses with me are encouraged to take the course.
