
Exploiting Class Labels to Boost Performance on Embedding-based Text Classification

Arkaitz Zubiaga

CIKM. 2020.

Text classification is one of the most common tasks in processing textual data, facilitating, among other applications, research on large-scale datasets. Embeddings of different kinds have recently become the de facto standard features for text classification. These embeddings can capture word meanings inferred from occurrences in large external collections. However, because they are built from external collections, they are unaware of the distributional characteristics of words in the classification dataset at hand, most importantly the distribution of words across classes in the training data. To make the most of these embeddings as features and to boost the performance of classifiers using them, we introduce a weighting scheme, Term Frequency-Category Ratio (TF-CR), which assigns higher weights to high-frequency, category-exclusive words when computing word embeddings. Our experiments on eight datasets show the effectiveness of TF-CR, yielding improved performance over the well-known weighting schemes TF-IDF and KLD, as well as over the absence of a weighting scheme, in most cases.
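To illustrate the idea, here is a minimal sketch of a TF-CR-style weighting, under the assumption that the score for a word in a class is the product of its term frequency within that class and its category ratio (the fraction of the word's total occurrences that fall in that class), so words that are both frequent and exclusive to a class score highest. The function name `tf_cr_weights` and the whitespace tokenization are illustrative choices, not details from the paper.

```python
from collections import Counter

def tf_cr_weights(docs, labels):
    """Sketch of a TF-CR-style weight for each (word, class) pair.

    Assumed scoring: TF(w, c) * CR(w, c), where
      TF(w, c) = count of w in class c / total tokens in class c
      CR(w, c) = count of w in class c / count of w across all classes
    """
    class_counts = {}          # per-class word counts
    total_counts = Counter()   # corpus-wide word counts
    for doc, label in zip(docs, labels):
        tokens = doc.split()   # illustrative tokenizer; any tokenizer works
        class_counts.setdefault(label, Counter()).update(tokens)
        total_counts.update(tokens)

    weights = {}
    for label, counts in class_counts.items():
        n_tokens = sum(counts.values())
        for word, n in counts.items():
            tf = n / n_tokens                 # frequent in the class
            cr = n / total_counts[word]       # exclusive to the class
            weights[(word, label)] = tf * cr
    return weights

# Toy example: "good" occurs in both classes, so its weight is discounted
# relative to class-exclusive words of the same in-class frequency.
w = tf_cr_weights(["good great good", "bad good"], ["pos", "neg"])
```

These weights could then be used, for instance, to compute a weighted average of per-word embeddings when building a document representation, in place of a plain unweighted average.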
@inproceedings{zubiaga2020exploiting,
  title={Exploiting class labels to boost performance on embedding-based text classification},
  author={Zubiaga, Arkaitz},
  booktitle={Proceedings of the 29th ACM international conference on information \& knowledge management},
  year={2020}
}