Our Minerazzi miners (http://www.minerazzi.com) can help improve productivity in many different scenarios. Here are just a few to think about:
Those behind an enterprise intranet or shopping cart often need to catalog items, articles, and short pieces of data (like customer and client records). Once in a web template format, each of these can be turned into a data mining record and automatically classified. This is simpler than doing the classification by hand, one item at a time.
Librarians, teachers, students, and researchers often spend large amounts of time curating collections, discovering relevant web documents, and classifying them. Our platform helps them do this straight from the search result pages.
Employee or client records in a format compatible with the miners can be easily classified, and pieces of data can be extracted and mined. Examples of such pieces: phone numbers, email addresses, and keywords.
Internet marketers, web designers, coders, and developers in general can build curated collections of phone numbers, email addresses, scripts, CSS rules, color palettes, etc., to suit their needs (e.g., launching marketing campaigns, designing creatives, building apps).
Government administrators or webmasters with many resources dispersed over dissimilar databases can build a centralized collection from which those resources can be linked and mined.
There are more scenarios to think about.
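To illustrate the kind of extraction mentioned above (phone numbers, email addresses), here is a minimal sketch in Python. The sample record, the regular expressions, and the variable names are illustrative assumptions, not Minerazzi's actual patterns; production miners would need more robust, locale-aware expressions.

```python
import re

# Hypothetical sample record text (not actual Minerazzi data).
record = "Contact: Jane Doe, jane.doe@example.com, phone 787-555-0142."

# Simple patterns for illustration only.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_RE = re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b")

emails = EMAIL_RE.findall(record)
phones = PHONE_RE.findall(record)

print(emails)  # ['jane.doe@example.com']
print(phones)  # ['787-555-0142']
```

Once pieces like these are extracted from each record, they can be indexed, counted, or fed into whatever classification step the collection requires.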
Each miner comes with dozens of extraction tools. Each one also comes with a News center powered by SPP, our tool for mining the pulse of RSS news across social networks. So far our RSS feeds focus on technology, but soon we will be adding miner-specific RSS news, as we have done with the CRAN and R-Blogs miners as well as with the miislita (http://www.miislita.com) miner.
The platform section at http://www.minerazzi.com/tools also features dozens of multidisciplinary tools for students, teachers, and researchers (e.g., The Hydrocarbons Parser, The Data Set Editor) that improve productivity.
PS. I forgot to mention that maintenance (i.e., removing broken links from curated collections) can be a productivity-draining problem.
Our tool automatically removes broken links in a session-based manner; i.e., if a link breaks and a user encounters it, it will be removed in the next user session or when he/she refreshes the results page. Similarly, once a user does a crawl, the crawled link is either indexed or reindexed, allowing collections to be self-indexable!
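The session-based idea can be sketched as follows. This is a minimal Python illustration, not Minerazzi's actual implementation: the `http_ok` checker, the in-memory dictionary standing in for the collection, and all names are assumptions for demonstration purposes.

```python
import urllib.request
import urllib.error

def http_ok(url, timeout=5):
    """Return True if the URL responds with a non-error status."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status < 400
    except (urllib.error.URLError, OSError):
        return False

def prune_broken_links(collection, is_alive=http_ok):
    """Return the collection minus links whose liveness check fails.

    Run when a results page is refreshed (i.e., at the next session),
    so broken links disappear from the results the user sees.
    """
    return {url: meta for url, meta in collection.items() if is_alive(url)}

# Demo with a stub checker so no network access is needed.
links = {
    "https://example.com/good": {"title": "Still there"},
    "https://example.com/gone": {"title": "Broken"},
}
alive = prune_broken_links(links, is_alive=lambda u: "good" in u)
print(sorted(alive))  # ['https://example.com/good']
```

A similar hook run after each user-triggered crawl, adding or refreshing the crawled link's entry, would give the "indexed or reindexed" behavior described above.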