The problem with federated search (also known as parallel, broadcast, or meta search) is that it is implemented under the assumption that the more sources are searched at once, the better for the user. This is not necessarily the case. Easy search results do not equate to the right results. Submitting a query on a given topic to dissimilar databases, with dissimilar content or about dissimilar topics, often produces off-topic or irrelevant results. Indeed, easy searching does not necessarily translate into a relevant experience.

There is also the question of how to rank the merged results. Two strategies are frequently used: (a) data appending and (b) data fusion.

Some of the early federated search engines for the Web used (a) and soon came to be called meta search engines. These tools simply returned a long list of results by appending the top N ranked results from each database in tandem. Obviously, this strategy failed to identify the top M relevant results within that huge list, and it was soon phased out in favor of (b).
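The appending strategy can be sketched in a few lines. This is a minimal illustration, not any particular engine's implementation; the database names and result identifiers are hypothetical.

```python
def append_results(ranked_lists, n):
    """Strategy (a): concatenate the top-n results of each database,
    one database after another, with no re-ranking across sources."""
    merged = []
    for results in ranked_lists:
        merged.extend(results[:n])
    return merged

# Hypothetical ranked results from two databases
db_a = ["a1", "a2", "a3", "a4"]
db_b = ["b1", "b2", "b3"]

print(append_results([db_a, db_b], n=2))  # ['a1', 'a2', 'b1', 'b2']
```

Note that result `b1`, the best match from the second database, is buried below every appended result from the first, regardless of its actual relevance to the query.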

In (b), arithmetic or weighted averages of the scores of the top N results from the different databases are computed. The problem with this approach is that it is very subjective. To compute an arithmetic or weighted relevance score, who decides how much weight should be assigned to a given ranked result from a given database? No matter which weighting criteria are used, the end result is still a subjective score, and one that does not necessarily improve the end-user search experience. Just the opposite.
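A weighted-average fusion scheme might look like the sketch below. The document identifiers, scores, and weights are all made up for illustration; the point is that the `weights` dictionary encodes exactly the subjective judgment the text describes.

```python
def fuse_scores(result_scores, weights):
    """Strategy (b): weighted average of per-database relevance scores.

    result_scores: {doc_id: {db_name: score}}
    weights:       {db_name: weight} -- chosen subjectively by whoever
                   configures the federated search, not by the user.
    Returns a list of (doc_id, fused_score), highest score first.
    """
    fused = {}
    for doc, per_db in result_scores.items():
        total_w = sum(weights[db] for db in per_db)
        fused[doc] = sum(weights[db] * s for db, s in per_db.items()) / total_w
    return sorted(fused.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical scores for two documents found in two databases
scores = {
    "doc1": {"dbA": 0.9, "dbB": 0.4},
    "doc2": {"dbA": 0.5, "dbB": 0.8},
}
weights = {"dbA": 2.0, "dbB": 1.0}  # arbitrary: why should dbA count double?

print(fuse_scores(scores, weights))
```

Doubling dbA's weight is enough to rank doc1 above doc2; with equal weights the order could flip. The final ranking hinges on a parameter the end user never sees.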

This leads to what I call “The Search Paradox”: Information gateways as information roadblocks.

To learn more about this, visit this old link: