INEX 2010 Ad Hoc Track


Participating organizations will compare the retrieval effectiveness of their Focused Retrieval systems with those of other groups. In doing so, they will contribute to the construction of the test collection, which will provide participants with a means for future comparative experiments.

Test collection

The ad hoc track will use the INEX 2009 Wikipedia collection (without images). Each participating group will be asked to create a set of candidate topics, representative of a range of real user needs. Both Content Only (CO) and Content And Structure (CAS) variants of the information need are requested.


The main retrieval task performed at INEX is ad hoc retrieval from XML documents. The ad hoc track continues the investigations of previous years into the effect of structure in the query and in the documents. This year, INEX also continues its comparison of element retrieval with passage retrieval.

Element Retrieval

In the Content Only (CO) sub-task, queries only contain content-related conditions and not structure-related conditions. It is the task of the search engine to identify relevant elements of a suitable size.

A user may decide to add structural hints to their query to reduce the number of elements returned by a CO query. Such a CAS query forms the basis of the Content Only + Structure (CO+S) sub-task. It is the task of the search engine to identify relevant elements, interpreting the additional structural constraints either strictly or vaguely.
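As an illustrative sketch (the topic titles here are invented, and the exact query syntax used at INEX is defined in the track guidelines), a CO topic and a CAS variant of the same need might look like the strings below, with a toy check showing the difference between a strict and a vague interpretation of the structural constraint:

```python
# Hypothetical CO and CAS variants of one information need (not official
# INEX topics). The CAS query is written in a NEXI-like path syntax.
co_topic = "olive oil health benefits"
cas_topic = '//article[about(., "olive oil")]//sec[about(., "health benefits")]'

def matches_strictly(element_path: str, target_tag: str = "sec") -> bool:
    """Strict interpretation: the returned element must be of the type
    named in the structural constraint (here, a section)."""
    return element_path.split("/")[-1] == target_tag

# A vague interpretation treats the constraint only as a hint: any
# element type may be returned if its content is relevant.
print(matches_strictly("article/body/sec"))  # True
print(matches_strictly("article/body/p"))    # False
```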

Passage Retrieval

It is not yet clear that an element is the best answer to return to a user. In the passage retrieval sub-task, it is the task of the search engine to identify relevant passages of text that satisfy the user's information need. The queries are the same as those used in the element retrieval tasks, so it will also be possible to compare element retrieval algorithms with passage retrieval algorithms. Both CO and CO+S sub-tasks will be run.
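One way to see why the two sub-tasks are directly comparable is that both element and passage results can be reduced to character ranges in a document. A minimal sketch, under that assumed representation (the identifiers and offsets below are invented):

```python
# Sketch: both an element result and a passage result can be expressed
# as a character-offset span in a document, so the same topics and
# measures can score both kinds of run.
from typing import NamedTuple

class Result(NamedTuple):
    doc_id: str
    start: int  # character offset where the span begins
    end: int    # character offset just past the span's end

# An element result covers exactly the text of one element (e.g. a <sec>).
element_result = Result("wiki-12345", start=200, end=950)
# A passage result need not align with element boundaries.
passage_result = Result("wiki-12345", start=310, end=780)

def length(r: Result) -> int:
    return r.end - r.start

print(length(element_result), length(passage_result))  # 750 470
```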

Use Cases

Two different use cases have been used at INEX. In focused retrieval, the user prefers a single element that is relevant to the query, even though it may contain some non-specific content (returned elements must not overlap). In in-context retrieval, the user is interested in elements within highly relevant articles: they want to see which parts of the document best satisfy their information need.
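The non-overlap constraint of focused retrieval can be enforced by filtering a ranked list so that no kept element is an ancestor or descendant of a higher-ranked kept element. A minimal sketch, assuming elements are identified by XPath-like paths (the paths below are invented):

```python
# Sketch: enforcing the focused-retrieval rule that returned elements
# must not overlap. One element overlaps another when one path is an
# ancestor (prefix) of the other.
def is_ancestor_or_self(a: str, b: str) -> bool:
    return b == a or b.startswith(a + "/")

def remove_overlap(ranked_paths):
    """Keep each element only if it neither contains nor is contained
    in a higher-ranked element that was already kept."""
    kept = []
    for p in ranked_paths:
        if not any(is_ancestor_or_self(p, k) or is_ancestor_or_self(k, p)
                   for k in kept):
            kept.append(p)
    return kept

run = ["article/body/sec[1]", "article/body", "article/body/sec[2]/p[1]"]
# "article/body" is dropped: it contains the higher-ranked sec[1].
print(remove_overlap(run))  # ['article/body/sec[1]', 'article/body/sec[2]/p[1]']
```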

Relevance assessments

Relevance assessment will be conducted by the participating groups using the INEX assessment system. Each participating organization will judge about three topics; where possible, these are topics the group originally submitted. Assessment takes one person about two days per topic.


The evaluation of the retrieval effectiveness of the XML retrieval engines used by the participants will be based on the constructed INEX test collection and uniform scoring techniques. Since its launch in 2002, INEX has been challenged by the issue of how to measure an XML information access system's effectiveness, and several metrics have been used, including XCG, HiXEval and PRUM. Investigation into scoring methods is ongoing; however, in order to compare the results of the different sub-tasks, it is planned to use, if possible, a single set of measures for all tasks. Since this set must allow passage and element retrieval to be compared, a set of measures based on the highlighting of relevant content is likely to be adopted as the official metric.
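A highlighting-based measure can be sketched as follows: assessors highlight the relevant character ranges in each document, and a run is scored by how much highlighted text it retrieves. This is a simplified illustration in the spirit of such measures (the official INEX formulas may differ, and the spans below are invented):

```python
# Sketch of a highlighting-based precision/recall computation. Spans are
# half-open (start, end) character ranges within one document, assumed
# non-overlapping within each set.
def total_chars(spans):
    return sum(end - start for start, end in spans)

def overlap(spans_a, spans_b):
    """Number of characters covered by both span sets."""
    return sum(max(0, min(e1, e2) - max(s1, s2))
               for s1, e1 in spans_a for s2, e2 in spans_b)

retrieved = [(0, 100), (250, 400)]  # text returned by the system
relevant = [(50, 300)]              # text highlighted by the assessor

rel_ret = overlap(retrieved, relevant)      # relevant characters retrieved
precision = rel_ret / total_chars(retrieved)
recall = rel_ret / total_chars(relevant)
print(rel_ret, round(precision, 2), round(recall, 2))  # 100 0.4 0.4
```

Because the same character-level scoring applies whether the retrieved spans come from elements or from arbitrary passages, it supports the cross-sub-task comparison described above.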