Conversational Question Answering (QA) is one of the core applications for retrieval-based chatbots. In conversational QA, the task is to answer a series of context-dependent questions, as they occur in natural human-to-human conversations.
The challenge is based on the QReCC dataset introduced at NAACL’21. The dataset contains 14K conversations with 81K question-answer pairs and is publicly available. The document collection with the pre-processed passages should be downloaded from Zenodo.
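Each QReCC turn depends on its conversation history, so a system typically needs to gather the earlier turns before answering. A minimal sketch of working with such records is below; the exact field names (`Conversation_no`, `Turn_no`, `Question`, `Answer`) are assumptions modeled on the public dataset release, so verify them against the downloaded files.

```python
# Sketch of handling QReCC-style conversation records. Field names are
# assumptions based on the public dataset release, not guaranteed here.
record = {
    "Conversation_no": 1,
    "Turn_no": 2,
    "Question": "When was it released?",
    "Answer": "The dataset was introduced at NAACL 2021.",
}

def conversation_context(records, conversation_no, turn_no):
    """Collect the earlier turns of the same conversation, in order."""
    history = [r for r in records
               if r["Conversation_no"] == conversation_no
               and r["Turn_no"] < turn_no]
    return sorted(history, key=lambda r: r["Turn_no"])

# The second turn of conversation 1 has no preceding turns in this toy list.
history = conversation_context([record], 1, 2)
```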
The challenge is hosted on TIRA. Participants can upload their submission as a single JSON file. Alternatively, participants can upload their code and run the evaluation on the VMs provided by the platform to ensure reproducibility of the results. See our GitHub repository for further instructions and a sample code used for the baseline submission.
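A single-file JSON submission might be produced as sketched below. The field names (`Model_answer`, `Model_passages`) are assumptions, not the official schema; the GitHub repository's instructions are authoritative.

```python
import json

# Hypothetical run format: one entry per conversation turn, carrying the
# generated answer and the retrieved passages with relevance scores.
# Field names are assumptions -- consult the challenge's GitHub
# repository for the authoritative schema.
run = [
    {
        "Conversation_no": 1,
        "Turn_no": 1,
        "Model_answer": "The dataset was introduced at NAACL 2021.",
        "Model_passages": {"passage-id-1": 0.92, "passage-id-2": 0.41},
    }
]

with open("run.json", "w") as f:
    json.dump(run, f, indent=2)
```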
The evaluation is performed on the test split of the QReCC dataset. We use the ground truth annotations in the initial phase, and will update them with alternative answer spans and passages by pooling and crowdsourcing the relevance judgements over the results submitted by the challenge participants (similar to the TREC evaluation setup).
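Generated answers in setups like this are commonly compared against ground-truth answer spans with token-overlap F1. The sketch below illustrates the idea only; it is not the official evaluation script.

```python
from collections import Counter

def token_f1(prediction, reference):
    """Token-overlap F1 between a predicted and a reference answer.

    A simplified illustration of span-overlap scoring, not the
    challenge's official metric implementation.
    """
    pred_tokens = prediction.lower().split()
    ref_tokens = reference.lower().split()
    common = Counter(pred_tokens) & Counter(ref_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)

# All 4 predicted tokens match; 4 of 6 reference tokens are covered.
score = token_f1("introduced at NAACL 2021",
                 "QReCC was introduced at NAACL 2021")  # 0.8
```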
- Submission deadline: July 3, 2022 (extended to July 8, 2022)
- Results announcement: July 10, 2022
- Workshop presentations: July 15, 2022
You need access to TIRA to be able to upload your submissions. Request access to TIRA by filling out this short form.