This is a great tool from Google
I will try to see how the feed flattener can benefit from it.
The figure was generated with our Chaos Game Explorer tool, using the algorithm described at
and as presented in Barnsley’s books (Fractals Everywhere, 1988; The Desktop Fractal Design Handbook, 1989).
The game was played N = 100,000 times by randomly placing a point within an n-gon (a polygon with n vertices), using different combinations of number of vertices (n) and scale ratios (r), and coloring the emerging patterns in white. Some combinations produce patterns somewhat resembling ancient calendars, medallions, rings, and the like from different ancient cultures.
For the above figure, I used n = 12 and r = 0.30.
Running the algorithm with the pixels coded in different colors reveals that the patterns are just the result of partially overlapping the same n-gon across many scales of observation. Did ancient cultures know about this technique?
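The game described above can be sketched in a few lines of Python. This is a minimal version, not the Chaos Game Explorer itself: it places a random starting point, then repeatedly picks a random vertex of a regular n-gon and moves a fraction r of the way toward it, recording each visited point.

```python
import math
import random

def chaos_game(n=12, r=0.30, iterations=100_000):
    """Play the chaos game inside a regular n-gon.

    Start from a random interior point; at each step pick a random
    vertex and move a fraction r of the way toward it. Returns the
    list of visited points, which trace out the emergent pattern.
    """
    # Vertices of a regular n-gon on the unit circle.
    vertices = [(math.cos(2 * math.pi * k / n),
                 math.sin(2 * math.pi * k / n)) for k in range(n)]
    x, y = random.uniform(-0.5, 0.5), random.uniform(-0.5, 0.5)
    points = []
    for _ in range(iterations):
        vx, vy = random.choice(vertices)
        # Move fraction r of the distance toward the chosen vertex.
        x, y = x + r * (vx - x), y + r * (vy - y)
        points.append((x, y))
    return points
```

With n = 3 and r = 0.5 this reproduces Barnsley’s classic Sierpinski triangle; plotting the returned points for n = 12 and r = 0.30 gives the kind of figure shown above.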
Just for fun, you may want to try other values, then run searches in Google Images for ancient calendars, medallions, rings, etc., and compare results. Share your images and let me know if you find something interesting. I’m documenting results.
Our newest build: The Programming Cheat Sheets Miner.
Or, if you wish, recrawl the miner search results and build your own curated collection of cheat sheets.
Two of our tools, Web Feed Flattener and Feed URLs Extractor, were updated and now accept files with the .xml extension so we changed their names to indicate this. These tools are available at
These updates take the tools to a whole new level. Now you can flatten the tree structure of files like sitemap.xml and similar documents and extract their URLs. Just submit a target web address and you are good to go.
I know there are tools out there that can scrape .xml files to extract specific pieces of data like URLs, but I found them too cumbersome. A major drawback of those design alternatives is that one must frequently know in advance how the document tree was constructed, with all of its tags and nuances, before coding a tool. To top it off, if the author of the file changes or edits tags, the tool probably won’t work as expected.
Our approach is different and very flexible. The key here is flattening the document tree structure embedded in XML files without even having to know how it was designed or edited. Document tree flattening will unveil this information before you can say: “Give me some soup!”
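The flattening idea can be illustrated with a short sketch using Python’s standard xml.etree.ElementTree. This is not our tool’s implementation, just a minimal illustration of the principle: walk the whole tree, emit a flat list of (path, text) rows, and then filter those rows without any prior knowledge of the schema. (Real sitemaps carry an XML namespace, so tag names would appear with a namespace prefix; the toy sample below omits it for clarity.)

```python
import xml.etree.ElementTree as ET

def flatten(xml_text):
    """Flatten an XML document tree into (path, text) rows,
    with no prior knowledge of its tags or nesting."""
    root = ET.fromstring(xml_text)
    rows = []
    def walk(node, path):
        here = f"{path}/{node.tag}"
        rows.append((here, (node.text or "").strip()))
        for child in node:
            walk(child, here)
    walk(root, "")
    return rows

# Example: extract URLs from a sitemap-like file by filtering the flat rows.
sample = """<urlset>
  <url><loc>http://example.com/a</loc></url>
  <url><loc>http://example.com/b</loc></url>
</urlset>"""
urls = [text for path, text in flatten(sample) if path.endswith("/loc")]
```

Note that even if the author renames or rearranges the enclosing tags, the same filter over the flattened rows still finds the URLs, which is the flexibility argued for above.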
Of course, we assume that the document tree has no orphan or broken tags (and, better yet, passes validation), which is something to be expected from trusted sources. If it is not valid, well, there are ways of fixing it or ignoring the offenders.
With the proposed technique, we can mine all sorts of .xml files and build customized tools on top of the flattened results, like derivative tools for mining sitemaps, inventories, raw data, recipes, and more. There is no need to know anything in advance about the document tree, to resort to additional scripting technologies or software, or to reinvent the wheel.
Right now we can mine sitemaps all over the Web, including sitemaps hosted at Google, W3C, company sites, etc., and then recrawl the output to grow a microindex. See the “Suggested Exercises” sections of the tools for interesting examples. This is a value-added approach for our ongoing Maps2Miners project.
Considering that there are government agencies and organizations offering data in .xml format for developers to mine, flattening .xml files and building on top of them is one of those “ah-ha!” ideas.
The URL Cleaner (http://www.minerazzi.com/tools/url-cleaner/muc.php) is our most recent tool.
It cleans URLs collected from search engine result pages and websites, including Google, Bing, Yahoo, Yandex, Wikipedia, and others.
Press Ctrl + A to select the entire page text, right-click the selected text, and copy all available URLs at once. Regardless of how URLs are collected (with or without web scrapers), the end result might be a list of dirty, ugly records with obscure attribute-value pairs appended by the search engine. Sometimes the list of URLs includes entries with:
What is computed?
The tool is compatible with individual sites and might work with other search engines as well. Whenever possible, we are open to adding support for other search engines suggested by users.
Since these features are enabled by default, if a run produces no results it means that either all URLs are fully contaminated or there are no URLs to edit.
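One common cleaning step can be sketched with Python’s urllib.parse. This is only an illustration of the idea, not the URL Cleaner’s actual rules: strip query strings and fragments (where most of the appended attribute-value pairs live), keep only http/https entries, and drop duplicates.

```python
from urllib.parse import urlsplit, urlunsplit

def clean_urls(urls):
    """Strip query strings and fragments from a list of URLs,
    keep only http/https entries, and remove duplicates."""
    seen, cleaned = set(), []
    for u in urls:
        parts = urlsplit(u.strip())
        if parts.scheme not in ("http", "https"):
            continue  # skip non-web schemes
        # Rebuild the URL without its query string or fragment.
        bare = urlunsplit((parts.scheme, parts.netloc, parts.path, "", ""))
        if bare not in seen:
            seen.add(bare)
            cleaned.append(bare)
    return cleaned
```

For instance, "http://example.com/page?utm_source=x#top" and "http://example.com/page" both collapse to the same clean record, so only one copy survives.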
The URL Query Parser is our most recent tool for mining URLs. It is available at
What is a URL query?
A URL query is the trailing text after the question mark (?) found in a URL. It consists of attribute-value pairs delimited by ampersands (&). These are also called name-value, key-value, or field-value pairs.
What this tool does
This tool parses URL queries and extracts their name-value pairs.
The tool helps users identify and filter URL queries from a collection or build collections consisting exclusively of URL queries.
With minor modifications, the tool can be converted into a massive URL cleaner. We are currently building another tool that does precisely this. That way, we may be able to clean up URLs found in Google and Bing search result pages and safely use them in data mining studies.
What is computed
The latter is the default mode. Since results in this mode are alphabetically sorted, users can easily identify the most common or popular name-value pairs.
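The parsing step itself maps cleanly onto Python’s standard library, as this sketch shows (a simplified stand-in for the tool, not its actual code): take the text after the "?", split it on "&" into name-value pairs, and optionally sort the pairs so the common ones stand out.

```python
from urllib.parse import urlsplit, parse_qsl

def parse_url_query(url):
    """Return the name-value pairs found in a URL's query string."""
    query = urlsplit(url).query  # the text after the "?"
    # keep_blank_values=True preserves names that carry no value
    return parse_qsl(query, keep_blank_values=True)

pairs = parse_url_query("http://example.com/s?q=chaos+game&lang=en&debug=")
# Sorting the pairs makes the most common names easy to spot.
pairs_sorted = sorted(pairs)
```

Note that parse_qsl also percent-decodes the pairs, so "chaos+game" comes back as "chaos game".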
Implications to Web Security
This tool can be used by those interested in mining URL queries or conducting studies relevant to Web Security. Why? Please keep reading.
URL queries are used to transmit small pieces of data in the form of name-value pairs. The transmission can be of three types: (a) between web pages, (b) between a web page and a database, or (c) between databases. Real-world applications include access to web services, social profiling, and cloud computing, among others (Kantarcioglu, 2013).
In addition, URL queries are frequently used as vehicles for transmitting session parameters, form data, tracking mechanisms, user names, email addresses, and other data considered sensitive by users.
In a 2014 study, West & Aviv, from Verisign and the US Naval Academy, analyzed over 892 million user-submitted URLs containing 1.3 billion name-value pairs. They found over a quarter-billion plain text pairs involving referral tracking, with more than 10 million pairs potentially revealing some form of demographic, identity-based, or geographical information. Extreme cases involved the facilitation of password authentication credentials, email addresses, and user names (West & Aviv, 2014).
Thus, the development of tools designed for mining URL queries is relevant to Web Security.
Fractals Miner: Fractal Patterns and Growth Phenomena – Theory, Experiments, & more. Available now at
Research the fractal geometry literature. Use the images tool below a result to view beautiful patterns, or recrawl search results to build your own curated collection.
Note: Image below was created with Manglar, an experimental tool under development.
Some developers build form-based graphical user interfaces (GUIs) that give users the illusion of mapping the value of an input field to all other fields. Typical examples are unit conversion tools and other types of converters used in science and business oriented sites. This is frequently done by coding M fields M times in the background, with most fields hidden or dynamically coded. These M x M fields are then conditionally processed.
As M increases, this strategy becomes very inefficient from both the coding and processing standpoints. Modifying these types of GUIs can be messy. For instance, displaying a simple unit conversion tool with five conversion units requires coding 25 fields. Adding one more conversion unit requires coding 6 x 6 = 36 fields. Insane!
To overcome all those drawbacks, we have developed what we call a one-to-many fields mapping algorithm, or O2M. The algorithm is quite simple and works as follows. Given a form with M unique text fields, using any one of them as an input field instructs the algorithm to treat the remaining ones as output fields. It does not matter which field is used or where the data comes from (i.e., a user or a database). Its value will be mapped to the remaining M – 1 fields. As a whole, an O2M GUI behaves as a many-to-many (M2M) solution. To grasp the concept, try one of our O2M tools at
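The logic behind O2M can be sketched for the unit-converter case. The units and factors below are illustrative stand-ins, not the actual O2M tools’ data; the point is that whichever field receives input is converted to a base unit once and then mapped to the other M – 1 fields, so M fields need only M factors instead of M x M conditional cases.

```python
# Illustrative length units with factors to a base unit (meters).
FACTORS = {"m": 1.0, "km": 1000.0, "cm": 0.01, "ft": 0.3048, "in": 0.0254}

def o2m_map(input_field, value):
    """One-to-many mapping: whichever field is used as input
    drives all the others.

    Convert the input value to the base unit, then from the base
    unit to every remaining field. Adding a new unit means adding
    one factor, not recoding an (M+1) x (M+1) grid of fields.
    """
    base = value * FACTORS[input_field]
    return {field: base / factor
            for field, factor in FACTORS.items()
            if field != input_field}

# Any field can act as the input; the rest become outputs.
outputs = o2m_map("km", 2)  # 2 km mapped to m, cm, ft, in
```

Since every field goes through the same two-step conversion, the GUI behaves as a many-to-many solution while the code stays linear in M.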
Do you see the algorithm behind this image? Hint: It corresponds to Manglar, an upcoming tool I’m building for simulating the growth of patterns. I’m still testing it.
Manglar. Coming Soon.
We have recently launched the Bifurcation Diagrams Explorer. This is a tool for examining the behavior of low dimensional nonlinear dynamical systems.
Well, what does that have to do with information retrieval (IR)?
If you are an IR person working at the intersection with nonlinear dynamics, you probably already know that bifurcation diagrams are relevant to:
So the implications for social media, search, and data mining are there, if you can grasp the relevant research out there.
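For readers new to the topic, the canonical example of a low-dimensional nonlinear system with a bifurcation diagram is the logistic map; the sketch below (an illustration, not the Explorer’s code) computes the set of points an orbit settles on for a given parameter r, which is exactly what gets plotted against r in such a diagram.

```python
def logistic_attractor(r, x0=0.5, transient=500, keep=100):
    """Long-run behavior of the logistic map x -> r*x*(1-x).

    Iterate past the transient, then collect the distinct points the
    orbit settles on; plotting these against a sweep of r values
    produces the familiar bifurcation diagram.
    """
    x = x0
    for _ in range(transient):
        x = r * x * (1 - x)
    points = set()
    for _ in range(keep):
        x = r * x * (1 - x)
        points.add(round(x, 6))  # round so a cycle collapses to few points
    return sorted(points)

# Below r = 3 the map settles on a single fixed point;
# just past the first bifurcation near r = 3, a 2-cycle appears.
fixed = logistic_attractor(2.9)
cycle = logistic_attractor(3.2)
```

Sweeping r from about 2.5 to 4.0 and plotting each attractor set shows the period-doubling cascade into chaos that the Bifurcation Diagrams Explorer visualizes.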
I wonder how long it will take for pseudo-scientific marketers/SEOs to prey on that, as they tried in the past with LSI/LSA, LDA, vector theory, and a few other IR topics.