
LongEval: Longitudinal Evaluation of Model Performance at CLEF 2024

Rabab Alkhalifa, Hsuvas Borkakoty, Romain Deveaud, Alaa El-Ebshihy, Luis Espinosa-Anke, Tobias Fink, David Iommi, Gabriela Gonzalez-Saez, Petra Galuščáková, Lorraine Goeuriot, Maria Liakata, Xiaomo Liu, Harish Tayyar Madabushi, Pablo Medina-Alias, Philippe Mulhem, Florina Piroi, Martin Popel, Christophe Servan, Arkaitz Zubiaga

ECIR. 2024.

This paper introduces the planned second LongEval Lab, part of the CLEF 2024 conference. The lab's two tasks give researchers test data for studying the temporal persistence of effectiveness in both information retrieval and text classification, motivated by the observation that model performance degrades as the test data becomes temporally distant from the training data. LongEval distinguishes itself from traditional IR and classification tasks by emphasizing the evaluation of models designed to mitigate performance drops over time on evolving data. The second LongEval edition will further engage the IR and NLP communities in addressing the crucial challenge of temporal persistence in models, exploring the factors that enable or hinder it, and identifying potential solutions along with their limitations.
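To illustrate the kind of measurement the lab is concerned with, the following is a minimal sketch, not taken from the paper: it assumes a simple relative-drop metric that compares a model's score on a test split contemporaneous with training against a temporally distant split. The function name and example scores are hypothetical.

def relative_performance_drop(score_within: float, score_distant: float) -> float:
    """Relative drop in effectiveness between a test split drawn from the
    training period and one drawn from a later period.

    A positive value means the model lost effectiveness over time;
    zero means perfect temporal persistence.
    """
    if score_within == 0:
        raise ValueError("within-period score must be non-zero")
    return (score_within - score_distant) / score_within


# Hypothetical example: nDCG falls from 0.42 on the within-period split
# to 0.35 on a split collected several months later.
drop = relative_performance_drop(0.42, 0.35)
print(f"Relative performance drop: {drop:.1%}")  # -> 16.7%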
@inproceedings{alkhalifa2024longeval,
  title={LongEval: Longitudinal Evaluation of Model Performance at CLEF 2024},
  author={Alkhalifa, Rabab and Borkakoty, Hsuvas and Deveaud, Romain and El-Ebshihy, Alaa and Espinosa-Anke, Luis and Fink, Tobias and Iommi, David and Gonzalez-Saez, Gabriela and Galu{\v{s}}{\v{c}}{\'a}kov{\'a}, Petra and Goeuriot, Lorraine and others},
  booktitle={European Conference on Information Retrieval},
  year={2024},
  organization={Springer}
}