This is the second part of a previous post.
Two of our tools, the Web Feed Flattener and the Feed URLs Extractor, were updated and now accept files with the .xml extension, so we renamed them to indicate this. These tools are available at
These updates take the tools to a whole new level. Now you can flatten the tree structure of files like sitemap.xml and similar documents and extract their URLs. Just submit a target web address and you are good to go.
I know there are tools out there that can scrape .xml files to extract specific pieces of data like URLs, but we found them too cumbersome. A major drawback of those alternatives is that one must frequently know in advance how the document tree was constructed, with all of its tags and nuances, before coding a tool. To top it off, if the author of the file changes or edits the tags, the tool probably won't work as expected.
Our approach is different and very flexible. The key here is flattening the document tree structure embedded in XML files without having to know how it was designed or edited. Document tree flattening will unveil this information before you can say: "Give me some soup!"
Of course, we assume that the document tree has no orphan or broken tags (and, better yet, passes validation), which is to be expected from trusted sources. If it is not valid, well, there are ways of fixing it or ignoring the offenders.
With the proposed technique, we can mine all sorts of .xml files and build customized tools on top of the flattened results, like derivative tools for mining sitemaps, inventories, raw data, recipes, etc. There is no need to know anything in advance about the document tree, resort to additional scripting technologies or software, or reinvent the wheel.
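To make the idea concrete, here is a minimal Python sketch (ours, for illustration only; not the tools' actual code) that flattens any well-formed XML document into (path, text) pairs and pulls out URLs without knowing the tag layout in advance:

```python
# Tag-agnostic XML flattening: walk the whole document tree, emit
# (path, text) pairs, then keep whatever looks like a URL.
import re
import xml.etree.ElementTree as ET

def flatten(elem, path=""):
    """Yield (path, text) pairs for every node in the document tree."""
    path = f"{path}/{elem.tag}"
    if elem.text and elem.text.strip():
        yield path, elem.text.strip()
    for child in elem:
        yield from flatten(child, path)

def extract_urls(xml_string):
    """Return every http(s) URL found anywhere in the flattened tree."""
    root = ET.fromstring(xml_string)
    return [text for _, text in flatten(root) if re.match(r"https?://", text)]

# A sitemap-like snippet; any other tag names would work equally well.
sample = """<urlset>
  <url><loc>http://www.minerazzi.com/</loc></url>
  <url><loc>http://www.minerazzi.com/tools/</loc></url>
</urlset>"""
print(extract_urls(sample))
```

Because the walk is tag-agnostic, renaming or restructuring the tags does not break the extraction, which is the whole point of the flattening approach.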
Right now we can mine sitemaps all over the Web, including sitemaps hosted at Google, W3C, company sites, etc., and then recrawl the output to grow a microindex. See the "Suggested Exercises" sections of the tools for interesting examples. This is a value-added approach for our ongoing Maps2Miners project.
Considering that there are government agencies and organizations providing data in .xml format for developers to mine, flattening .xml files and building on top of them is one of those "ah-ha!" ideas.
The URL Cleaner (http://www.minerazzi.com/tools/url-cleaner/muc.php) is our most recent tool.
It cleans URLs extracted from search engine result pages and websites, including Google, Bing, Yahoo, Yandex, Wikipedia, and others.
- The Problem
Sometimes collection curators and content developers use web scrapers (Wikipedia, 2018a) to extract URLs from websites and search result pages. If a web scraper is not available, or the target search engine reacts against the scraping (Wikipedia, 2018b), URL extraction is still possible by installing a browser add-on like Copy Selected Links or a similar plugin. Once installed, users can right-click selected text and copy the URL of any links it contains. To copy all links from a page, they just need to press Ctrl + A to select the entire page text, right-click the selected text, and copy all available URLs at once.
Regardless of how URLs are collected (with or without web scrapers), the end result might be a list of dirty, ugly records with obscure attribute-value pairs appended by the search engine. Sometimes the list of URLs includes entries with:
- URLs pointing to social networks. These URLs are often viewed by collection curators as "plastic contamination" in search results supposed to be "organic". Typical examples are results from Google and similar search engines.
- URLs about self-promotion. The same search engine might include URLs pointing to unrequested content like its own products, services, partners/ads, links to additional content, etc. Typical examples are results from Google and URLs extracted from Wikipedia webpages.
- URLs with special characters. For instance, those defining queries (?), fragment identifiers (#), and hash-bangs (#!), among others (Wikipedia, 2018c; 2018d).
- URLs with some characters encoded.
- 6-22-2018 Update: URLs obfuscated by shortening services, e.g., bit.ly, goo.gl, is.gd, t.co, and many more. Regardless of their merits, shortened URLs can open the door to all sorts of problems (Wikipedia, 2018e). These are frequently viewed by collection curators as unnecessary noise.
- The Solution
Wouldn't it be nice to have a tool that by default generates a list of clean, sorted, and deduplicated URLs, with options for selectively including or excluding some of the above contaminants? This is precisely what our Minerazzi URL Cleaner (MUC) does.
- Unlike other URL cleaners, MUC cleans multiple URLs at once from search engines and websites, and can be used free of charge. Before proceeding any further, let's explain what MUC is and is not. The tool is a data cleaner and a lightweight version of our popular Editor and Curator tool. It is not a web scraper, URL validator, or URL shortener resolver, but it can be used to clean results from these.
- In the next section, we describe some uses for MUC, its features and limitations.
- Supported Searches
MUC was designed to edit search results from the following:
- Google, Bing, Yahoo, Yandex, and DuckDuckGo
- 100searchengines, HotBot, Ask, and textise.net
- Google Scholar, and Wikipedia
The tool is compatible with individual sites and might be so with other search engines. Whenever possible, we are open to adding support for other search engines as suggested by users.
- Editing Features
The tool implements the following edits by default.
- Social networks
URLs pointing to LinkedIn, Facebook, Twitter, Myspace, Instagram, Pinterest, Snapchat, YouTube, Vimeo, and Tumblr are removed.
- Self-promotion
URLs about the supported search engines pointing to their own products, services, partners/ads, or any additional content are removed.
- Special characters
Sections of a URL that start with ? # [ ] @ ! $ & ‘ ( ) * , ; = are removed. Trailing forward slashes (/) are also removed.
- Special strings
- Encoded characters
URL %-encoded characters are replaced by their unencoded versions.
- Shorteners (6-22-2018 Update)
URLs obfuscated by shortening services (nearly 600 of these services) are removed.
- One or more of the above edits can be disabled by toggling the corresponding checkboxes.
Since these features are enabled by default, if a run produces no results it means that either all URLs are fully contaminated or there are no URLs to edit.
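For readers who like to peek under the hood, here is a simplified Python sketch of these default edits (an illustration only, not MUC's code; the domain lists are tiny stand-ins for the real ones):

```python
# Simplified take on the default edits: drop social-network and shortener
# URLs, cut sections starting at special characters, decode %-escapes,
# strip trailing slashes, then deduplicate and sort.
from urllib.parse import unquote, urlparse

SOCIAL = {"linkedin.com", "facebook.com", "twitter.com", "instagram.com"}
SHORTENERS = {"bit.ly", "goo.gl", "is.gd", "t.co"}  # MUC knows ~600 of these

def clean(urls):
    out = set()
    for url in urls:
        host = urlparse(url).netloc.lower()
        if host.startswith("www."):
            host = host[4:]
        if host in SOCIAL or host in SHORTENERS:
            continue  # contaminated entry: remove it
        for ch in "?#[]@!$&'()*,;=":
            url = url.split(ch, 1)[0]  # cut the section the character starts
        out.add(unquote(url).rstrip("/"))  # decode %-escapes, drop trailing /
    return sorted(out)

print(clean(["http://bit.ly/abc",
             "http://example.com/page?utm_source=x",
             "http://example.com/page%20two/"]))
```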
- First time users
We recommend that first-time users install Copy Selected Links, or a similar add-on, before proceeding any further. Then do a search in Google and, with the add-on installed, clean the URLs with MUC, first selectively and then at full blast.
- Tool limitations
Up to 5,000 URLs can be submitted at once. We arbitrarily imposed this limit to (a) provide fast responses, (b) minimize browser crashes, and (c) minimize abuses.
- Last but not least, the tool might fail to remove non-English, obfuscated, or encrypted characters.
If you are a chemist, biodesigner, or a researcher in another field, you may eventually need to fit a paired data set to a polynomial regression model. You could use software to do that, or build your own solution. This tutorial is aimed at those interested in the latter. Access it now at
Three different methods for implementing polynomial regression are described. Teachers and students might benefit from the tutorial since the calculations can be done with spreadsheet software like Excel, by writing a computer program, or with a programmable calculator.
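As a taste of the computer-program route, here is a short Python sketch (not necessarily one of the tutorial's three methods) that fits a degree-n polynomial by linear least squares:

```python
# Fit y ~ c0 + c1*x + ... + cn*x^n by least squares on the Vandermonde
# design matrix.
import numpy as np

def polyfit(x, y, degree):
    X = np.vander(x, degree + 1, increasing=True)  # design matrix
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 9.2, 19.1, 33.0])  # roughly y = 2x^2 + 1
print(polyfit(x, y, 2))                    # close to [1.0, 0.0, 2.0]
```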
Do you see the algorithm behind this image? Hint: It corresponds to Manglar, an upcoming tool I’m building for simulating the growth of patterns. I’m still testing it.
Manglar. Coming Soon.
The p-Values Calculator is a new Minerazzi tool that is available now at
Submitting a Student’s t value and degrees of freedom returns a p-value. This is a great tool for Student’s t hypothesis testing.
The tool works by numerically approximating the CDF (Cumulative Distribution Function), which is the integral of the PDF (Probability Density Function) of the Student's t distribution. The theory behind these calculations, along with valuable references, is given in the tool's page.
The reverse process, computing t from a p-value, is possible by inverting the CDF to compute the Quantile Function (QF), also known as the inverse CDF. Our (soon to be released) t-p Transformations tool computes both the CDF (t-to-p) and the QF (p-to-t).
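For the curious, here is a rough Python sketch of both directions (an illustration, not the tools' implementation): the two-tailed p-value by numerically integrating the t PDF with the trapezoid rule, and the quantile by bisection on that CDF:

```python
import math

def t_pdf(x, df):
    """Student's t probability density function, df degrees of freedom."""
    c = math.gamma((df + 1) / 2) / (math.sqrt(df * math.pi) * math.gamma(df / 2))
    return c * (1 + x * x / df) ** (-(df + 1) / 2)

def t_to_p(t, df, steps=20000):
    """Two-tailed p-value: integrate the PDF over |x| >= t (trapezoid rule)."""
    a, b = abs(t), abs(t) + 50  # for very small df the tail is fat: widen b
    h = (b - a) / steps
    tail = sum(t_pdf(a + i * h, df) for i in range(1, steps)) * h
    tail += (t_pdf(a, df) + t_pdf(b, df)) * h / 2
    return 2 * tail

def p_to_t(p, df):
    """Quantile function (inverse CDF) by bisection on t_to_p."""
    lo, hi = 0.0, 100.0
    while hi - lo > 1e-7:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if t_to_p(mid, df) > p else (lo, mid)
    return lo

print(t_to_p(2.0, 10))   # about 0.0734
print(p_to_t(0.05, 10))  # about 2.228
```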
Recently released tools relevant to physiology and chemistry, with some interesting exercises:
Body Mass Index (BMI)
Corpulence Index (CI)
Cell Electrode Potentials
Standard Electrode Potentials
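The quantities behind these tools follow well-known formulas, so a quick Python sketch (ours, not the tools' code) doubles as a sanity check: BMI = kg/m^2, the Corpulence (Ponderal) Index = kg/m^3, and non-standard cell potentials via the Nernst equation, E = E0 - (RT/nF) ln Q:

```python
import math

def bmi(weight_kg, height_m):
    return weight_kg / height_m ** 2           # Body Mass Index, kg/m^2

def corpulence_index(weight_kg, height_m):
    return weight_kg / height_m ** 3           # Ponderal index, kg/m^3

def cell_potential(e0_volts, n_electrons, q_reaction, temp_k=298.15):
    """Nernst equation: E = E0 - (RT/nF) * ln(Q)."""
    R, F = 8.314462, 96485.332  # gas constant J/(mol K); Faraday constant C/mol
    return e0_volts - (R * temp_k / (n_electrons * F)) * math.log(q_reaction)

print(bmi(70, 1.75))                  # ~22.9
print(corpulence_index(70, 1.75))     # ~13.1
print(cell_potential(1.10, 2, 0.01))  # Daniell cell with Q = 0.01: ~1.16 V
```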
The Domain Extractor is a new Minerazzi tool, available now at
The tool extracts domains and subdomains from up to 10,000 URLs at once. Larger sets are resized to conform to this limit. This is done to avoid browser crashes.
From the input set, the Domain Extractor returns a set consisting of domains and subdomains. The results are deduplicated and sorted in alphabetical order.
The tool comes in handy when one wants to extract chunks of up to 10,000 domains from databases or other sources.
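A minimal Python sketch of the same idea (for illustration; not the tool's code) shows how little is needed for a basic version:

```python
# Pull the host out of each URL, deduplicate, and sort alphabetically.
from urllib.parse import urlparse

def extract_domains(urls, limit=10000):
    """Return sorted, deduplicated hosts from at most `limit` URLs."""
    hosts = {urlparse(u).netloc.lower() for u in urls[:limit]}
    return sorted(h for h in hosts if h)

print(extract_domains([
    "http://www.minerazzi.com/tools/url-cleaner/muc.php",
    "https://blog.example.com/post/1",
    "https://blog.example.com/post/2",
]))  # ['blog.example.com', 'www.minerazzi.com']
```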
The tool can be conveniently used in combination with other of our tools, like
The FQU Bot
Simple and light, but a powerful toy/tool: the Domain Extractor can be used as part of a crawling strategy. Once domains and subdomains are extracted, the chunks of URLs can be sent to a queue for crawlers to revisit them.
Another application consists of querying a search engine, extracting URLs from its results page, and then processing them through the tool.
There might be other applications, but the above can give you an idea of how handy the tool can be.
We have added a new algorithm to the MUST tool, available at
The tool now automatically detects bogus HTTP status code responses. These response codes are frequently, though not always, designed to game crawlers and automated header request tools; i.e., to make them believe that a resource is not accessible.
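MUST's detection algorithm is not spelled out here, so treat the following Python sketch only as one plausible heuristic along these lines: compare the status code of a bare HEAD request with the code obtained when the body is actually fetched, since a 4xx/5xx HEAD paired with a 200 GET suggests the server is gaming header-request tools:

```python
# Plausible bogus-status heuristic (illustration only, not MUST's algorithm).
import urllib.error
import urllib.request

def looks_bogus(url):
    def status(method):
        req = urllib.request.Request(url, method=method,
                                     headers={"User-Agent": "Mozilla/5.0"})
        try:
            with urllib.request.urlopen(req, timeout=10) as resp:
                return resp.status
        except urllib.error.HTTPError as e:
            return e.code
    head, get = status("HEAD"), status("GET")
    return head >= 400 and get == 200, (head, get)

# Requires network access.
print(looks_bogus("http://www.minerazzi.com/"))
```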
For instance, test the following with our tool, as given:
Quantum Computing is a new miner, available now at
Find resources relevant to quantum computing, searches, retrieval, and information assurance.
Access everything from introductory to advanced research papers and how-to articles. This 2017, move beyond classic IR and computing and forward to new research paradigms like quantum information retrieval, quantum searches, quantputers, and their implications for encryption and information security.
During the last 20 years, quantum computing has matured and is now in the fast lane.
We already have quantum computers, quantum programming languages, and quantum pagerank algorithms. We even have quantum hackers and crackers.
So university computer science departments may want to start embracing quantum-oriented research projects and related technologies. The same goes for private companies and marketing research firms.
So the challenge for 2017 and the upcoming years is…
“To bit, or not to bit, that is the qubit.”