Comparison with Query Similarity Based Source Selection

Our first set of experiments compares the precision of TSR(0.9) with the query-similarity-based measures, i.e. XXXX and Google Base, discussed above. The results are illustrated in Figure 3(a). Note that the improvement in precision for TSR is significant: precision improves by approximately 85% over all competitors, including Google Base. This considerable improvement is not surprising in light of prior research comparing agreement-based source selection with query-based measures [10].

A per-topic-class analysis of the test queries, Figure 3(b), reveals that TSR(0.9) significantly outperforms the relevance-based source selection models for all topic classes.

As a note on the seemingly low precision values: these are the mean relevance of the top-5 results. Many of the queries used have fewer than five possible relevant answers (e.g. a book title query may have only the paperback and hardcover editions of the book as relevant answers). But since we always count the top-5 results, the mean precision is bound to be low. For example, if a method returns one relevant answer in the top-5 for every query, the top-5 precision will be only 20%. We get better values because some queries have more than one relevant result in the top-5 (e.g. the Blu-Ray and DVD editions of a movie).
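To make the arithmetic behind these precision values concrete, here is a minimal sketch (not from the original paper; the function name and relevance flags are illustrative) of a top-5 precision computation that always divides by five, which is why queries with few relevant answers cap the achievable score:

```python
def precision_at_k(relevant_flags, k=5):
    """Mean relevance of the top-k results, counting all k slots.

    relevant_flags: booleans for the ranked results of one query.
    Queries with fewer than k relevant answers are still divided
    by k, which bounds the achievable precision below 1.0.
    """
    top_k = relevant_flags[:k]
    return sum(top_k) / k

# A query with a single relevant answer in the top-5 (e.g. a book
# title with only one matching edition) caps out at 1/5 = 20%.
print(precision_at_k([True, False, False, False, False]))  # 0.2

# A movie title with both Blu-Ray and DVD editions relevant can
# reach 2/5 = 40%.
print(precision_at_k([True, True, False, False, False]))   # 0.4
```

Averaging this quantity over all test queries yields the mean top-5 precision reported in Figure 3, which explains why the absolute values look low even for a well-performing method.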
