Peter Scholze and Perfectoid Spaces: A math genius among us.



I’m reading with great interest biographical notes on, and the work of, Peter Scholze, who at the age of 30 is one of the youngest Fields Medal laureates. He has already won most of the top awards in Mathematics.

He is currently a director of the Max Planck Institute for Mathematics and holds a Hausdorff Chair at the University of Bonn. Super impressive!

Scholze’s key innovation is a class of fractal structures he calls perfectoid spaces (introduced in his 2011 PhD thesis), which has far-reaching ramifications in the field of Arithmetic Geometry.

To help others learn about his awesome research work, the following links were indexed in the Math Bios miner.

A miner in a class of its own, focused on perfectoid spaces, is more than meritorious, I believe. Don’t you think so?

PS. Here is an introductory note by Jared Weinstein on perfectoid spaces:

And here is a discussion:

I decided to go ahead and build the perfectoid spaces miner.




Cosine Similarity Tutorial (citations)

Cosine similarity is one of those basic resemblance measures with many practical applications, and it is relevant to many research problems.

However, its meaning in the context of uncorrelated and orthogonal variables, as well as its connection with the non-additive nature of correlation coefficients, is often overlooked.
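For readers new to the measure, here is a minimal sketch in Python (illustrative only, not the tutorial’s code): cosine similarity is the dot product of two vectors divided by the product of their norms, and it is exactly zero for orthogonal vectors.

```python
import math

def cosine_similarity(x, y):
    """Cosine of the angle between vectors x and y."""
    dot = sum(a * b for a, b in zip(x, y))
    norm_x = math.sqrt(sum(a * a for a in x))
    norm_y = math.sqrt(sum(b * b for b in y))
    return dot / (norm_x * norm_y)

# Orthogonal vectors score exactly zero...
print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))   # orthogonal -> 0.0
# ...while parallel vectors score (approximately) one.
print(cosine_similarity([1.0, 2.0, 3.0], [2.0, 4.0, 6.0]))  # parallel -> ~1.0
```

Note that a high cosine similarity does not by itself imply correlation: correlation is the cosine of *mean-centered* vectors, which is one way the subtleties mentioned above arise.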

Happy to see the research papers below are still citing Minerazzi’s Cosine Similarity Tutorial, which was revamped a few years ago.

Continuous Real-Time Vehicle Driver Authentication Using Convolutional Neural Network Based Face Recognition.


Factors Contributing to Elevated Concentrations of Mercury and PCBs in Fish in the Inland Lakes of Michigan’s Upper Peninsula and Lake Superior.

Plagiarism Detection Tool for AMHARIC Text.

Extract reordering rules of sentence structure using neuro-fuzzy machine learning system.

Verification of upper Citarum River discharge prediction using climate forecast system version 2 (CFSv2) output.

Building Machine Learning System with Deep Neural Network for Text Processing.

Deep neural based name entity recognizer and classifier for English language.

Machine translation using deep learning: An overview.

An Analytical Method for Probabilistic Modeling of the Steady-State Behavior of Secondary Residential System.

Big data analytic untuk pembuatan rekomendasi koleksi film personal menggunakan Mlib. Apache Spark.

Use of Data Warehousing to Analyze Customer Complaint Data of CFPB of USA.

On Fractals and Ancient Art Work Part 2



The fractals below resemble ancient artwork and were generated with the Chaos Game Explorer tool by playing the game 100,000 times using a polygon with n vertices and a scaling ratio r.

Shown below are:

(a) a calendar-like pattern with faces watching you (n = 13, r = 0.30)
(b) a mandala-like pattern (n = 20, r = 0.30)
(c) a collar-like (or plate-like) pattern (n = 40, r = 0.20).


Pixels were color-coded in white. Multi-coloring the pixels reveals that these are simply the result of partially overlapping a given pattern across many scales.

Again: Did ancient cultures know about this way of generating artwork? Feel free to try the tool with other parameter values, compare results by searching Google Images for the artwork a pattern resembles, and share results.

Let me know if you find something interesting. I’m documenting results.

This is the second part of a previous post.

Beating a dead horse, again.



Happy to see that Bruce J. Ladewski’s PhD thesis, Expanding a Path Analytic Model of Quality Management to Include the Management of Safety, at

cited the Self-Weighting Model Tutorial Part 1

and stated what we all know: that correlation coefficients are not additive.
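A quick illustration of why this matters: the arithmetic mean of correlation coefficients is not a meaningful “average correlation.” The standard workaround is to average in Fisher z-space and transform back. The sketch below uses made-up numbers purely for illustration.

```python
import math

def fisher_z(r):
    """Fisher's z-transform of a correlation coefficient (arctanh)."""
    return 0.5 * math.log((1 + r) / (1 - r))

def inverse_fisher_z(z):
    """Back-transform from z-space to a correlation coefficient."""
    return math.tanh(z)

# Two hypothetical correlation coefficients.
rs = [0.10, 0.90]

# Naive arithmetic mean -- NOT a valid way to pool correlations.
naive_mean = sum(rs) / len(rs)            # 0.5

# Proper pooling: average in z-space, then back-transform.
z_mean = sum(fisher_z(r) for r in rs) / len(rs)
proper_mean = inverse_fisher_z(z_mean)    # noticeably larger than 0.5

print(naive_mean, proper_mean)
```

The two estimates disagree, which is exactly what non-additivity means in practice: you cannot add (or average) r values directly and expect a meaningful result.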

I don’t understand why some search marketers still believe the contrary. Stay away from dumb analytics from dumb SEOs, their myths, and nonsense.

Well, what can I say? Beating a dead horse… again.

On Fractals and Ancient Art Work



The figure was generated with our Chaos Game Explorer tool, using the algorithm described at

and as presented in Barnsley’s books (Fractals Everywhere, 1988; The Desktop Fractal Design Handbook, 1989).

The game was played N = 100,000 times by randomly placing a point within an n-gon (polygon with n vertices), using different combinations of vertices (n) and scaling ratios (r), and by coloring in white the emerging patterns. Some combinations produce patterns somewhat resembling ancient calendars, medallions, rings,… from different ancient cultures.

For the above figure, I used n = 12 and r = 0.30.

Running the algorithm and coding the pixels in different colors reveals that the patterns are just the result of partially overlapping the same n-gon across many scales of observation. Did ancient cultures know about this technique?
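For readers who want to experiment outside the tool, here is a minimal sketch of the chaos game in Python. It assumes the convention that each step moves the current point a fraction r of the way toward a randomly chosen vertex; the Chaos Game Explorer may use a slightly different convention for its scaling ratio.

```python
import math
import random

def chaos_game(n=12, r=0.30, iterations=100_000, seed=42):
    """Play the chaos game inside a regular n-gon.

    Each iteration moves the current point a fraction r of the way
    toward a randomly chosen vertex; the visited points trace out
    the emerging pattern.
    """
    random.seed(seed)
    # Vertices of a regular n-gon on the unit circle.
    vertices = [(math.cos(2 * math.pi * k / n),
                 math.sin(2 * math.pi * k / n)) for k in range(n)]
    x, y = 0.0, 0.0  # start anywhere inside the polygon
    points = []
    for _ in range(iterations):
        vx, vy = random.choice(vertices)
        x += r * (vx - x)
        y += r * (vy - y)
        points.append((x, y))
    return points

# Parameters matching the figure above (fewer iterations for speed).
pts = chaos_game(n=12, r=0.30, iterations=10_000)
```

Plotting the returned points (e.g., with matplotlib) reproduces the kind of pattern shown in the figure; varying n and r yields the calendar-, mandala-, and collar-like patterns discussed above.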

Just for fun, you may want to try other values, then run searches in Google Images for ancient calendars, medallions, rings, etc., and compare results. Share your images and let me know if you find something interesting. I’m documenting results.



ChemBios Miner



ChemBios is our newest miner (

Find biographies of famous chemists from ancient to modern times, including all Chemistry Nobel Prize Laureates.

Build your own curated collection of chemist bios by recrawling this miner’s search result links. The miner also lets you build a collection driven by Wikipedia’s vast repository by recrawling links from said online encyclopedia.

Chemometrics Miner



Chemometrics, our newest miner.


Find tools, techniques, and tutorials for extracting information from chemical systems.

Recrawl search results and build your own curated collection on chemometrics, cheminformatics, and chemical data mining.

Access news relevant to chemometrics.


Programming Cheat Sheets Miner



Our newest build: The Programming Cheat Sheets Miner.

Easily access hard-to-find cheat sheets, guidelines, and shortcuts for all kinds of programming languages, including Python, PHP, JavaScript, Java, Julia, and many more. Code less.

Or, if you wish, recrawl the miner’s search results and build your own curated collection of cheat sheets.

Document tree flattening as an exploration technique for data mining .xml files (sitemaps, feeds, inventories, raw data, etc)



Two of our tools, Web Feed Flattener and Feed URLs Extractor, were updated and now accept files with the .xml extension, so we changed their names to indicate this. These tools are available at

These updates take the tools to a whole new level. Now you can flatten the tree structure of files like sitemaps.xml and similar files and extract URLs. Just submit a target web address and you are good to go.

I know there are tools out there that can scrape .xml files to extract specific pieces of data like URLs, but I found them too cumbersome. A major drawback of said design alternatives is that one frequently must know in advance how the document tree was constructed, with all of its tags and nuances, before coding a tool. To top it off, if the author of the file changes or edits tags, the tool probably won’t work as expected.

Our approach is different and very flexible. The key here is the flattening of the document tree structure embedded in XML files without even having to know how it was designed or edited. Document tree flattening will unveil this information before you can say: “Give me some soup!”
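As an illustration of the idea (a minimal sketch, not the Minerazzi implementation), the snippet below flattens an arbitrary XML document tree with Python’s standard library and pulls URLs out of a sitemap-style file without knowing its schema in advance:

```python
import xml.etree.ElementTree as ET

def flatten(xml_text):
    """Flatten an XML document tree into (path, text) rows,
    with no prior knowledge of its tags or structure."""
    root = ET.fromstring(xml_text)
    rows = []

    def walk(elem, path):
        tag = elem.tag.split('}')[-1]  # drop the namespace, if any
        here = f"{path}/{tag}"
        if elem.text and elem.text.strip():
            rows.append((here, elem.text.strip()))
        for child in elem:
            walk(child, here)

    walk(root, "")
    return rows

# A tiny sitemap-like document (example.com URLs are placeholders).
sitemap = """<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>http://example.com/a</loc><lastmod>2018-01-01</lastmod></url>
  <url><loc>http://example.com/b</loc></url>
</urlset>"""

rows = flatten(sitemap)
# Once flattened, extracting URLs is a simple filter on the paths.
urls = [text for path, text in rows if path.endswith("/loc")]
print(urls)  # ['http://example.com/a', 'http://example.com/b']
```

Because the walk visits every element regardless of its tag, the same code keeps working if the file’s author renames, adds, or rearranges tags, which is the flexibility described above.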

Of course, we assume that the document tree has no orphan or broken tags (and, better, passes validation), which is something to be expected from trusted sources. If it is not valid, well, there are ways of fixing it or ignoring the offenders.

With the proposed technique, we can mine all sorts of .xml files and build customized tools on top of the flattened results, like derivative tools for mining sitemaps, inventories, raw data, recipes, etc. No need to know anything in advance about the document tree, resort to additional scripting technologies or software, or reinvent the wheel.

Right now we can mine sitemaps all over the Web, including sitemaps hosted at Google, W3C, company sites, etc, and then recrawl the output to grow a microindex. See “Suggested Exercises” sections of the tools for interesting examples. This is a value-added approach for our Maps2Miners ongoing project.

Considering that there are government agencies and organizations providing data in .xml format for developers to mine, flattening .xml files and building on top of these is one of those “ah-ha!” ideas.


URL Cleaner: Clean URLs from search results and websites



The URL Cleaner ( is our most recent tool.

Clean URLs from search engine result pages and websites, including Google, Bing, Yahoo, Yandex, Wikipedia, and others.


  • The Problem
    Sometimes collection curators and content developers use web scrapers (Wikipedia, 2018a) to extract URLs from websites and search result pages. If a web scraper is not available or the target search engine reacts against the scraping (Wikipedia, 2018b), URL extraction can still be possible by installing a browser add-on like Copy Selected Links or a similar plugin. Once installed, users can right-click selected text and copy the URL of any links it contains. To copy all links from a page, they just need to press Ctrl + A to select the entire page text, right-click the selected text, and copy all available URLs at once.

    Regardless of how URLs are collected (with or without web scrapers), the end result might be a list of dirty, ugly records with obscure attribute-value pairs appended by the search engine. Sometimes the list of URLs includes entries with:

      • URLs pointing to social networks. These URLs are often viewed by collection curators as “plastic contamination” in search results supposed to be “organic”. Typical examples are results from Google and similar search engines.
      • URLs about self-promotion. The same search engine might include URLs pointing to unrequested content like its own products, services, partners/ads, links to additional content, etc. Typical examples are results from Google and URLs extracted from Wikipedia webpages.
      • URLs with special characters. For instance, those defining queries (?), fragment identifiers (#), and hash-bangs (#!), among others (Wikipedia, 2018c; 2018d).
      • URLs with some characters encoded.
      • URLs containing mailto:, javascript:, or data:
      • 6-22-2018 Update: URLs obfuscated by shortening services. Regardless of their merits, shortened URLs can open the door to all sorts of problems (Wikipedia, 2018e). These are frequently viewed by collection curators as unnecessary noise.
  • The Solution
    Wouldn’t it be nice to have a tool that lets users generate, by default, a list of clean, sorted, and deduplicated URLs, with options to selectively include/exclude some of the above contaminants? This is precisely what our Minerazzi URL Cleaner (MUC) does.
  • Unlike other URL cleaners, MUC cleans multiple URLs at once from search engines and websites, and can be used free of charge. Before proceeding any further, let’s explain what MUC is and is not. The tool is a data cleaner and a lightweight version of our popular Editor and Curator tool. It is not a web scraper, URL validator, or URL shortener resolver, but it can be used to clean results from these.
  • In the next section, we describe some uses for MUC, its features and limitations.

What is computed?

  • Searches Support
    MUC was designed to edit search results from the following.

    • Google, Bing, Yahoo, Yandex, and DuckDuckGo
    • 100searchengines, HotBot, Ask, and
    • Google Scholar, and Wikipedia

    The tool is compatible with individual sites and might be compatible with other search engines. Whenever possible, we are open to adding support for other search engines as suggested by users.

  • Editing Features
    The tool implements the following edits by default.

    • Social networks
      URLs pointing to Linkedin, Facebook, Twitter, Myspace, Instagram, Pinterest, Snapchat, Youtube, Vimeo, and Tumblr are removed.
    • Self-promotions
      URLs about the supported search engines and pointing to their products, services, and partners/ads, or any additional content are removed.
    • Special characters
      Sections of a URL that start with ? # [ ] @ ! $ & ‘ ( ) * , ; = are removed. Trailing forward slashes (/) are also removed.
    • Special strings
      URLs with mailto:, javascript:, data: are removed.
    • Encoded characters
      URL %-encoded characters are replaced by their unencoded versions.
    • Shorteners (6-22-2018 Update)
      URLs obfuscated by shortening services (nearly 600 of these services) are removed.
    • One or more of the above edits can be disabled by properly checking the corresponding checkboxes.

    Since these features are enabled by default, if a run produces no results it means that either all URLs are fully contaminated or there are no URLs to edit.

  • First time users
    We recommend that first-time users install the Copy Selected Links add-on, or a similar one, before proceeding any further. Then do a search in Google and, with the add-on installed, clean the copied URLs with MUC, first selectively and then at full blast.
  • Tool limitations
    Up to 5,000 URLs can be submitted at once. We arbitrarily imposed this limit to (a) provide fast responses, (b) minimize browser crashes, and (c) minimize abuses.
  • Last but not least, the tool might fail to remove non-English, obfuscated, or encrypted characters.
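To make the default edits concrete, here is a rough sketch in Python of the kind of cleaning described above. The domain list and rules are illustrative only; the real MUC tool supports far more domains, shorteners, and edge cases.

```python
from urllib.parse import unquote, urlsplit

# Illustrative subset of the social networks filtered by default.
SOCIAL = {"linkedin.com", "facebook.com", "twitter.com", "myspace.com",
          "instagram.com", "pinterest.com", "snapchat.com",
          "youtube.com", "vimeo.com", "tumblr.com"}
BAD_SCHEMES = ("mailto:", "javascript:", "data:")

def clean_urls(urls):
    """Return a clean, sorted, deduplicated list of URLs."""
    kept = set()
    for url in urls:
        url = url.strip()
        # Drop special strings (mailto:, javascript:, data:).
        if not url or url.lower().startswith(BAD_SCHEMES):
            continue
        # Drop URLs pointing to social networks.
        host = urlsplit(url).netloc.lower()
        if any(host == d or host.endswith("." + d) for d in SOCIAL):
            continue
        # Drop query strings / fragments, decode %-escapes,
        # and strip trailing slashes.
        url = unquote(url.split("?")[0].split("#")[0].rstrip("/"))
        kept.add(url)
    return sorted(kept)

dirty = [
    "http://example.com/page?utm_source=feed",
    "http://example.com/page#section",
    "https://twitter.com/somebody",
    "mailto:someone@example.com",
    "http://example.com/a%20file",
]
print(clean_urls(dirty))
# ['http://example.com/a file', 'http://example.com/page']
```

Note how the two example.com/page variants collapse into a single clean entry once the query string and fragment are removed, which is the deduplication behavior described above.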