Conversational Question Answering (QA) is one of the core applications for retrieval-based chatbots. In conversational QA, the task is to answer a series of contextually dependent questions as they occur in natural human-to-human conversations. For example, "Who wrote Hamlet?" may be followed by "When was he born?", where "he" must be resolved from the preceding turns.
The challenge is based on the QReCC dataset introduced at NAACL'21. The dataset contains 14K conversations with 81K question-answer pairs and is publicly available. The collection of pre-processed passages used for retrieval should be downloaded from Zenodo.
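To make the task concrete, the sketch below shows what a single conversational turn might look like and one common way to use the conversation context: concatenating the preceding turns with the current question before retrieval. The field names and the helper are illustrative assumptions, not the dataset's actual schema.

```python
# Hypothetical QReCC-style turn; actual field names in the released
# JSON files may differ from this sketch.
example_turn = {
    "Conversation_no": 1,
    "Turn_no": 2,
    "Question": "When was it founded?",  # ambiguous without context
    "Context": [  # preceding questions and answers in the conversation
        "What is NAACL?",
        "NAACL is a natural language processing conference.",
    ],
}

def flatten_question(turn):
    """Prepend the conversation history to the current question,
    a simple baseline for making the question self-contained."""
    return " ".join(turn["Context"] + [turn["Question"]])
```

A stronger alternative to simple concatenation is question rewriting, i.e., generating a self-contained reformulation of the question before passing it to the retriever.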
The challenge is hosted on TIRA. Participants are encouraged to upload their code and run the evaluation on the VMs provided by the platform to ensure reproducibility of the results. Alternatively, the submission can be uploaded as a single JSON file. See our GitHub repository for further instructions and the sample code used for the baseline submission.
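For the file-based route, a submission could be assembled along the following lines. This is only a hedged sketch: the exact schema expected by TIRA is specified in the challenge's GitHub repository, and the field names below are assumptions for illustration.

```python
import json

# Hypothetical submission layout: one record per answered turn.
# Field names are illustrative; consult the challenge repository
# for the authoritative format.
run = [
    {"Conversation_no": 1, "Turn_no": 1,
     "Model_answer": "QReCC was introduced at NAACL'21."},
    {"Conversation_no": 1, "Turn_no": 2,
     "Model_answer": "It contains 14K conversations."},
]

# Serialize the run into the single JSON file that is uploaded.
with open("run.json", "w") as f:
    json.dump(run, f, indent=2)
```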
The evaluation is performed on the test split of the QReCC dataset. We use the ground-truth annotations in the initial phase and will later extend them with alternative answer spans and passages by pooling the participants' submissions and crowdsourcing relevance judgements over the pooled results (similar to the TREC evaluation setup).
- Submission deadline: September 8, 2021
- Results announcement: September 30, 2021
- Workshop presentations: October 8, 2021