Last Updated: 3 July 2025
TREC RAGTIME is a TREC shared task to study and benchmark report generation from news (both English and multi-lingual).
Key features of the track are its focus on multi-faceted reports (going beyond factoid QA), and a citation-based evaluation (providing supporting evidence of claims made in the report).
It also benchmarks Cross-Language Information Retrieval (CLIR) and Multilingual Information Retrieval (MLIR) as supporting subtasks.
Languages: English, Arabic, Chinese, and Russian
Dry run topics are released! You must register at TREC in order to download the file and submit your run: Dry run Topics or HERE.
The search service is up! Please also register at TREC to gain access to the endpoint. Short API documentation for our search service is available; a usage sketch follows these announcements.
We have released the topics for the main submission! You can find them at TREC or HERE. Log in with the passphrase you received when you signed up at TREC. (Hint: X_______v)
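As a rough illustration, the snippet below shows how a registered participant might query the search service from Python. The endpoint URL, authentication scheme, parameter names, and response fields are assumptions for illustration only; the official API documentation is authoritative.

```python
import requests

SEARCH_ENDPOINT = "https://example.org/ragtime/search"  # hypothetical URL, not the real endpoint
API_KEY = "YOUR_TREC_API_KEY"  # hypothetical credential; use whatever the real service issues

def search(query, lang="rus", limit=20):
    """Return up to `limit` hits for `query` from the `lang` sub-collection."""
    response = requests.get(
        SEARCH_ENDPOINT,
        params={"q": query, "lang": lang, "limit": limit},  # parameter names are assumptions
        headers={"Authorization": f"Bearer {API_KEY}"},
        timeout=30,
    )
    response.raise_for_status()
    return response.json().get("results", [])  # response schema is an assumption

if __name__ == "__main__":
    for doc in search("impact of sanctions on energy exports"):
        print(doc.get("docid"), doc.get("title"))
```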
1. Multilingual Report Generation
Given a set of documents in multiple languages and a “report request” that describes the information need, generate a report in English that summarizes the information (an illustrative sketch of a cited report appears after the task descriptions).
2. Monolingual English Report Generation
This task is similar to Multilingual Report Generation, but only English documents are used.
3. Retrieval (Cross-Language and Multi-lingual)
Given a set of documents in multiple languages and a “report request” that describes the information need, retrieve documents that help satisfy this information need.
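Tasks 1 and 2 ask for reports in which claims are backed by citations to collection documents, which is what the citation-based evaluation checks. Purely as an illustrative sketch, and not the official submission format, a generated report could be structured as follows; the field names, topic ID, and document IDs are assumptions.

```python
# Illustrative only: field names, topic ID, and document IDs below are
# assumptions, not the official RAGTIME submission format.
report = {
    "request_id": "example-topic-001",
    "sentences": [
        {
            "text": "Sanctions reduced crude oil exports in early 2023.",
            "citations": ["doc_rus_1187", "doc_ara_0412"],  # supporting documents
        },
        {
            "text": "Several producers redirected shipments to Asian markets.",
            "citations": ["doc_zho_0099"],
        },
    ],
}

# Each sentence carries the IDs of the documents that support it, which is
# what a citation-based evaluation of the report can verify.
```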
TREC NeuCLIR (Neural CLIR) was the predecessor of TREC RAGTIME. In 2024, NeuCLIR included a pilot study of report generation. You can find more about NeuCLIR at the TREC NeuCLIR website.
In alphabetical order: