On Fractals and Ancient Artwork


The figure was generated with our Chaos Game Explorer tool, using the algorithm described at

http://www.minerazzi.com/tools/chaos-game/chaos-game.php

and as presented in Barnsley’s books (Fractals Everywhere, 1988; The Desktop Fractal Design Handbook, 1989).

The game was played N = 100,000 times by randomly placing a point within an n-gon (a polygon with n vertices), using different combinations of vertex counts (n) and scale ratios (r), and coloring the emerging patterns in white. Some combinations produce patterns resembling ancient calendars, medallions, rings, and the like from different ancient cultures.

For the above figure, I used n = 12 and r = 0.30.

Running the algorithm with the pixels coded in different colors reveals that the patterns are just the result of partially overlapping the same n-gon across many scales of observation. Did ancient cultures know about this technique?
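If you want to experiment outside the tool, below is a minimal sketch of the chaos game in Python. This is an assumed reconstruction of the rule described above, not the Chaos Game Explorer’s actual code; it takes r as the fraction of the distance moved toward a randomly chosen vertex of the n-gon.

```python
import math
import random
import matplotlib.pyplot as plt

def chaos_game(n=12, r=0.30, iterations=100_000):
    # Vertices of a regular n-gon placed on the unit circle
    vertices = [(math.cos(2 * math.pi * k / n), math.sin(2 * math.pi * k / n))
                for k in range(n)]
    x, y = random.uniform(-1, 1), random.uniform(-1, 1)  # random starting point
    xs, ys = [], []
    for _ in range(iterations):
        vx, vy = random.choice(vertices)  # pick a vertex at random
        x += r * (vx - x)                 # move a fraction r of the way
        y += r * (vy - y)                 # toward the chosen vertex
        xs.append(x)
        ys.append(y)
    return xs, ys

# The parameters used for the figure above: n = 12, r = 0.30
xs, ys = chaos_game(n=12, r=0.30)
plt.figure(figsize=(6, 6), facecolor="black")
plt.scatter(xs, ys, s=0.1, color="white")  # white patterns, as in the post
plt.axis("off")
plt.show()
```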

Just for fun, you may want to try other values, then run searches in Google Images for ancient calendars, medallions, rings, etc., and compare results. Share your images and let me know if you find something interesting. I’m documenting results.

 

 


ChemBios Miner


ChemBios is our newest miner (http://www.minerazzi.com/chembios).

Find biographies of famous chemists from ancient to modern times, including all Chemistry Nobel Prize Laureates.

Build your own curated collection of chemist bios by recrawling this miner’s search result links. The miner also lets you build a collection driven by Wikipedia’s vast repository by recrawling links from that online encyclopedia.

Chemometrics Miner


Chemometrics is our newest miner.

 

Find tools, techniques, and tutorials for extracting information from chemical systems.

Recrawl search results and build your own curated collection on chemometrics, cheminformatics, and chemical data mining.

Access news relevant to chemometrics.

 

Programming Cheat Sheets Miner


Our newest build: The Programming Cheat Sheets Miner.

http://www.minerazzi.com/cheatsheets/

Easily access hard-to-find cheat sheets, guidelines, and shortcuts for all kinds of programming languages, including Python, PHP, JavaScript, Java, Julia, and many more. Code less.

Or, if you wish, recrawl the miner search results and build your own curated collection of cheat sheets.

Document tree flattening as an exploration technique for data mining .xml files (sitemaps, feeds, inventories, raw data, etc.)


Two of our tools, the Web Feed Flattener and the Feed URLs Extractor, have been updated and now accept files with the .xml extension, so we changed their names to reflect this. These tools are available at

http://www.minerazzi.com/tools/flattener/feed-flattener.php

http://www.minerazzi.com/tools/feed-urls/extractor.php

These updates take the tools to a whole new level. Now you can flatten the tree structure of sitemaps.xml and similar files and extract their URLs. Just submit a target web address and you are good to go.

I know there are tools out there that can scrape .xml files to extract specific pieces of data like URLs, but I found them too cumbersome. A major drawback of those design alternatives is that one must frequently know in advance how the document tree was constructed, with all of its tags and nuances, before coding a tool. To top it off, if the author of the file changes or edits tags, the tool probably won’t work as expected.

Our approach is different and very flexible. The key here is the flattening of the document tree structure embedded in XML files without even having to know how it was designed or edited. Document tree flattening will unveil this information before you can say: “Give me some soup!”

Of course, we assume that the document tree has no orphan or broken tags (and, better yet, passes validation), which is something to be expected from trusted sources. If it is not valid, well, there are ways of fixing it or ignoring the offenders.

With the proposed technique, we can mine all sorts of .xml files and build customized tools on top of the flattened results, like derivative tools for mining sitemaps, inventories, raw data, recipes, and so on. There is no need to know anything in advance about the document tree, resort to additional scripting technologies or software, or reinvent the wheel.
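To make the idea concrete, here is a minimal sketch of document tree flattening in Python, assuming a depth-first walk over every node with the standard library’s ElementTree. This is not our tools’ actual code, and the sitemap address below is a hypothetical placeholder.

```python
import urllib.request
import xml.etree.ElementTree as ET

def flatten_xml(url):
    """Flatten an XML document tree into (tag, text) pairs,
    with no prior knowledge of how the tree was designed."""
    with urllib.request.urlopen(url) as response:
        root = ET.fromstring(response.read())
    flat = []
    for element in root.iter():            # depth-first walk over every node
        tag = element.tag.split("}")[-1]   # strip any namespace prefix
        text = (element.text or "").strip()
        if text:
            flat.append((tag, text))
    return flat

# Example: pull every URL out of a sitemap without knowing its schema
# (hypothetical placeholder address)
pairs = flatten_xml("https://www.example.com/sitemap.xml")
urls = [text for tag, text in pairs if text.startswith("http")]
print(urls[:10])
```

Since the walk visits every element regardless of tag names or nesting, the same sketch keeps working even if the file’s author later edits or renames tags.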

Right now we can mine sitemaps all over the Web, including sitemaps hosted at Google, W3C, company sites, and elsewhere, and then recrawl the output to grow a microindex. See the “Suggested Exercises” sections of the tools for interesting examples. This is a value-added approach for our ongoing Maps2Miners project.

Considering that there are government agencies and organizations providing data in .xml format for developers to mine, flattening .xml files and building on top of them is one of those “ah-ha!” ideas.

 

URL Cleaner: Clean URLs from search results and websites


The URL Cleaner (http://www.minerazzi.com/tools/url-cleaner/muc.php) is our most recent tool.

Clean URLs from search engine result pages and websites, including Google, Bing, Yahoo, Yandex, Wikipedia, and others.

Introduction

  • The Problem
    Sometimes collection curators and content developers use web scrapers (Wikipedia, 2018a) to extract URLs from websites and search result pages. If a web scraper is not available, or the target search engine reacts against the scraping (Wikipedia, 2018b), URL extraction is still possible by installing a browser add-on like Copy Selected Links or a similar plugin. Once installed, users can right-click selected text and copy the URL of any links it contains. To copy all links from a page, they just need to press Ctrl + A to select the entire page text, right-click the selection, and copy all available URLs at once.

    Regardless of how URLs are collected (with or without web scrapers), the end result might be a list of dirty, ugly records with obscure attribute-value pairs appended by the search engine. Sometimes the list of URLs includes entries with:

      • URLs pointing to social networks. These URLs are often viewed by collection curators as “plastic contamination” in search results supposed to be “organic”. Typical examples are results from Google and similar search engines.
      • URLs about self-promotion. The same search engine might include URLs pointing to unrequested content like its own products, services, partners/ads, links to additional content, etc. Typical examples are results from Google and URLs extracted from Wikipedia webpages.
      • URLs with special characters. For instance, those defining queries (?), fragment identifiers (#), and hash-bangs (#!), among others (Wikipedia, 2018c; 2018d).
      • URLs with some characters encoded.
      • URLs containing mailto:, javascript:, or data:
      • 6-22-2018 Update: URLs obfuscated by shortening services; e.g., bit.ly, goo.gl, is.gd, t.co, and many more. Regardless of their merits, shortened URLs can open the door to all sorts of problems (Wikipedia, 2018e). These are frequently viewed by collection curators as unnecessary noise.
  • The Solution
    Wouldn’t it be nice to have a tool that lets users generate by default a list of clean, sorted, and deduplicated URLs, with options to selectively include/exclude some of the above contaminants? This is precisely what our Minerazzi URL Cleaner (MUC) does.
  • Unlike other URL cleaners, MUC cleans multiple URLs at once from search engines and websites, and can be used free of charge. Before proceeding any further, let’s explain what MUC is and is not. The tool is a data cleaner and a lightweight version of our popular Editor and Curator tool. It is not a web scraper, URL validator, or URL shortener resolver, but it can be used to clean results from these.
  • In the next section, we describe some uses for MUC, its features and limitations.

What is computed?

  • Searches Support
    MUC was designed to edit search results from the following:

    • Google, Bing, Yahoo, Yandex, and DuckDuckGo
    • 100searchengines, HotBot, Ask, and textise.net
    • Google Scholar, and Wikipedia

    The tool is compatible with individual sites and might be compatible with other search engines. Whenever possible, we are open to adding support for other search engines as suggested by users.

  • Editing Features
    The tool implements the following edits by default; a minimal sketch of these edits appears after this list.

    • Social networks
      URLs pointing to LinkedIn, Facebook, Twitter, Myspace, Instagram, Pinterest, Snapchat, YouTube, Vimeo, and Tumblr are removed.
    • Self-promotions
      URLs about the supported search engines and pointing to their products, services, and partners/ads, or any additional content are removed.
    • Special characters
      Sections of a URL that start with ? # [ ] @ ! $ & ' ( ) * , ; = are removed. Trailing forward slashes (/) are also removed.
    • Special strings
      URLs with mailto:, javascript:, or data: are removed.
    • Encoded characters
      URL %-encoded characters are replaced by their unencoded versions.
    • Shorteners (6-22-2018 Update)
      URLs obfuscated by shortening services (nearly 600 of these services) are removed.
    • One or more of the above edits can be disabled by checking the corresponding checkboxes.

    Since these features are enabled by default, if a run produces no results it means that either all URLs are fully contaminated or there are no URLs to edit.

  • First time users
    We recommend that first-time users install the Copy Selected Links add-on, or a similar one, before proceeding any further. Then do a search in Google and, with the add-on installed, clean the URLs with MUC, first selectively and then at full blast.
  • Tool limitations
    Up to 5,000 URLs can be submitted at once. We arbitrarily imposed this limit to (a) provide fast responses, (b) minimize browser crashes, and (c) minimize abuses.
  • Last but not least, the tool might fail to remove non-English, obfuscated, or encrypted characters.
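As a rough illustration of the default edits listed above, here is a minimal sketch in Python. The block lists are hypothetical and abridged (the actual tool covers the supported search engines, social networks, and nearly 600 shortening services), and the function name is ours, not MUC’s.

```python
from urllib.parse import unquote, urlsplit

# Hypothetical, abridged block lists for illustration only
SOCIAL = {"linkedin.com", "facebook.com", "twitter.com", "instagram.com"}
SHORTENERS = {"bit.ly", "goo.gl", "is.gd", "t.co"}
BAD_SCHEMES = ("mailto:", "javascript:", "data:")

def clean_urls(urls):
    cleaned = set()
    for url in urls:
        url = url.strip()
        if not url or url.lower().startswith(BAD_SCHEMES):
            continue  # special strings: mailto:, javascript:, data:
        host = urlsplit(url).netloc.lower().removeprefix("www.")
        if host in SOCIAL or host in SHORTENERS:
            continue  # social networks and shortening services
        url = url.split("?")[0].split("#")[0]  # drop query and fragment
        url = unquote(url)                     # decode %-encoded characters
        cleaned.add(url.rstrip("/"))           # drop trailing slash; set dedupes
    return sorted(cleaned)                     # sorted, deduplicated output

print(clean_urls([
    "https://www.example.com/page?utm_source=google#top",
    "https://bit.ly/abc123",
    "mailto:someone@example.com",
]))
```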

 

The URL Query Parser

The URL Query Parser is our most recent tool for mining URLs. It is available at

http://www.minerazzi.com/tools/url-query/parser.php

What is a URL query?

A URL query is the trailing text after the question mark (?) found in a URL. It consists of attribute-value pairs delimited by ampersands (&). These are also called name-value, key-value, or field-value pairs.

What this tool does

This tool parses URL queries and extracts their name-value pairs.

The tool helps users identify and filter URL queries from a collection or build collections consisting exclusively of URL queries.

With minor modifications, the tool can be converted into a massive URL cleaner. We are currently building another tool that does precisely this. In this way, we may be able to clean up URLs found in Google and Bing search result pages and safely use them in data mining studies.

What is computed

  • Up to 5,000 URLs can be parsed. If no query is found in a URL, that record is ignored.
  • We have arbitrarily imposed the 5,000 limit for several reasons: to (a) provide fast responses, (b) minimize browser crashes, and (c) minimize abuses.
  • Users can opt between two query result modes:
    • individual results (useful for comparing individual URL queries).
    • combined results (useful for comparing specific name-value pairs).

    The latter is the default mode. Since results in this mode are alphabetically sorted, users can easily identify the most common or popular name-value pairs. A minimal sketch of both modes appears below.
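For readers who want to replicate the two result modes, here is a minimal sketch in Python using the standard library’s urllib.parse. The function name is hypothetical, and this is not the tool’s actual code.

```python
from urllib.parse import urlsplit, parse_qsl

def parse_queries(urls, combined=True):
    """Extract name-value pairs from URL queries.
    combined=True pools and alphabetically sorts pairs across all URLs
    (the default mode); combined=False returns per-URL results."""
    results = {}
    for url in urls:
        query = urlsplit(url).query
        if not query:
            continue  # URLs without a query are ignored
        results[url] = parse_qsl(query)
    if combined:
        return sorted(pair for pairs in results.values() for pair in pairs)
    return results

urls = [
    "https://www.example.com/search?q=mining&lang=en",
    "https://www.example.com/item?id=42&q=mining",
]
for name, value in parse_queries(urls):
    print(f"{name} = {value}")
```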

Implications for Web Security

This tool can be used by those interested in mining URL queries or conducting studies relevant to Web Security. Why? Please keep reading.

URL queries are used to transmit small pieces of data in the form of name-value pairs. The transmission can be of three types: (a) between web pages, (b) between a web page and a database, or (c) between databases. Real-world applications include access to web services, social profiling, and cloud computing, among others (Kantarcioglu, 2013).

In addition, URL queries are frequently used as vehicles for transmitting session parameters, form data, tracking mechanisms, user names, email addresses, and other data considered sensitive by users.

In a 2014 study, West & Aviv, from Verisign and the US Naval Academy, analyzed over 892 million user-submitted URLs containing 1.3 billion name-value pairs. They found over a quarter-billion plain text pairs involving referral tracking, with more than 10 million pairs potentially revealing some form of demographic, identity-based, or geographical information. Extreme cases involved the facilitation of password authentication credentials, email addresses, and user names (West & Aviv, 2014).

Thus, the development of tools designed for mining URL queries is relevant to Web Security.

Suggested Exercises

  • Do a search in several search engines or public databases. Collect a set of URL queries and submit this set to our tool. Compare results.
  • Analyze a set of URL queries obtained from a public forum, social networks, or groups (e.g. Google Groups).
  • For this exercise, you need to install a browser add-on that facilitates the collection of URLs. In Firefox, for instance, you can install an add-on that lets you select multiple links and copy their URLs. Do a search in Google or a similar search engine and, with said add-on, collect the search result URLs. Submit these URLs to our tool. Compare results. This is a nice way of grabbing intelligence from URL queries relevant to specific search terms. In addition, since Google lets you do advanced field-specific searches (e.g., inurl, intitle, etc.), this is a nice way of mining URL queries driven by advanced searches.

References

Document Linearization Tools


Document Linearization (DL) is aimed at creating a pseudo document from which words are extracted; e.g., for vocabulary construction/indexing.

DL involves the removal of markup tags, punctuation, stopwords, and uppercase characters (lowercasing), and sometimes stemming (reduction of words to common roots) and code/decoration filtering (removal of code and style lines). The end result can be used as part of analyses aimed at exploiting semantics and user intent.

Frequently, a pseudo document is represented as a stream of tokens or lowercased terms without any punctuation in it.
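As a rough illustration, here is a minimal sketch of DL in Python: it strips markup tags, filters code/decoration (script and style blocks), lowercases, drops punctuation, and removes stopwords. The stopword list is a hypothetical short sample, stemming is omitted, and this is not our extractors’ actual code.

```python
import re
from html.parser import HTMLParser

# Hypothetical short stopword list for illustration only
STOPWORDS = {"a", "an", "and", "the", "of", "to", "in", "is", "it"}

class TextExtractor(HTMLParser):
    """Strip markup tags, skipping code/decoration (script and style blocks)."""
    def __init__(self):
        super().__init__()
        self.chunks, self.skip = [], False
    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self.skip = True
    def handle_endtag(self, tag):
        if tag in ("script", "style"):
            self.skip = False
    def handle_data(self, data):
        if not self.skip:
            self.chunks.append(data)

def linearize(html):
    parser = TextExtractor()
    parser.feed(html)
    text = " ".join(parser.chunks).lower()            # remove upper cases
    tokens = re.findall(r"[a-z]+", text)              # drop punctuation
    return [t for t in tokens if t not in STOPWORDS]  # remove stopwords

print(linearize("<p>The Chaos Game is <b>fun</b>!</p>"))
# -> ['chaos', 'game', 'fun']
```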

In addition to information retrieval specialists, end users can benefit from DL. For instance, collection curators can use DL to identify keywords representative of a refined collection. Search and digital marketers can use DL to find relevant keywords to be placed in ads and optimized content.

We added three tools to all of our Minerazzi miners (productivity search engines) to help users do DL on the fly for a single search result: the Plain Text Extractor, the Tokens Extractor, and the Words Extractor. These are found in the “Crawlers & Extractors” section displayed under each search result of a miner.

To access them, just do a search in any miner at http://www.minerazzi.com and, under a search result, click the file icons labeled “text”, “tokens”, or “words”. For instance, try our most recent RSS/Atom Feeds miner at

http://www.minerazzi.com/feeds

Support for PDF files & graphical analysis will be available in the near future.

 

RSS/Atom Feeds Miner


Having problems finding research focused news feeds? Try our RSS/Atom Feeds miner (http://www.minerazzi.com/feeds).

Use it to find search engines, directories, and sites listing topic-specific news feeds. Then recrawl the miner search results and build your own list or collection of news feeds. The miner supports the popular RSS and Atom formats, among others.

Use the results from this miner in combination with other tools we developed for data mining URLs, or, if you prefer, submit your own page listing news feeds and improve the web visibility of your news.

This miner comes in handy, particularly now that Google News has disabled its RSS subscription features, breaking tons of feeds and, in the process, affecting their presence on the web (https://www.seroundtable.com/google-news-rss-feed-gone-25795.html).

Theoretical Physics Miner


The Theoretical Physics Miner, available at

http://minerazzi.com/theoretical-physics/

is our most recent search solution.

Use its recrawling capabilities under a given search result to start building your own curated collection.

Use its news section at

http://www.minerazzi.com/theoretical-physics/spp.php

to access all arXiv and MIT news feeds relevant to theoretical and experimental physics.

The figure below is for illustration purposes. It was generated through affine transformations that include reflection operations within an n-gon. Any resemblance to a black hole at its center is pure coincidence.