Natural Language Processing in the Era of Large Language Models

Arkaitz Zubiaga

Frontiers in Artificial Intelligence. 2024.

Since their inception in the 1980s, language models (LMs) have been around for more than four decades as a means for statistically modeling the properties observed in natural language (Rosenfeld, 2000). Given a collection of texts as input, a language model computes statistical properties of language from those texts, such as frequencies and probabilities of words and their surrounding context, which can then be used for different purposes including natural language understanding (NLU), generation (NLG), reasoning (NLR) and, more broadly, processing (NLP) (Dong et al., 2019). Such a statistical approach to modeling natural language has sparked debate for decades between those who argue that language can be modeled through the observation and probabilistic representation of patterns, and those who argue that such an approach is rudimentary and that proper understanding of language needs grounding in linguistic theories (Mitchell and Krakauer, 2023).
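As a minimal illustration of the statistical view described above, the sketch below estimates bigram probabilities P(w_i | w_{i-1}) from a toy corpus via maximum likelihood. It is not taken from the paper: the function name, tokenisation scheme, and example sentences are illustrative assumptions only.

from collections import Counter, defaultdict

def train_bigram_lm(corpus):
    """Estimate bigram probabilities P(curr | prev) from tokenised sentences (illustrative sketch)."""
    bigram_counts = defaultdict(Counter)
    for sentence in corpus:
        # Pad each sentence with start/end markers so boundary words are modeled too.
        tokens = ["<s>"] + sentence + ["</s>"]
        for prev, curr in zip(tokens, tokens[1:]):
            bigram_counts[prev][curr] += 1
    # Maximum-likelihood estimate: count(prev, curr) / count(prev, *)
    return {
        prev: {curr: count / sum(following.values())
               for curr, count in following.items()}
        for prev, following in bigram_counts.items()
    }

# Hypothetical toy corpus, already tokenised.
corpus = [
    ["language", "models", "assign", "probabilities", "to", "text"],
    ["language", "models", "predict", "the", "next", "word"],
]
lm = train_bigram_lm(corpus)
print(lm["language"])  # {'models': 1.0}
print(lm["models"])    # {'assign': 0.5, 'predict': 0.5}

Probabilities like these can then be chained to score or generate word sequences, which is the same underlying idea that, at vastly larger scale and with neural parameterisations, powers today's large language models.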
@article{zubiaga6natural,
  title={Natural Language Processing in the Era of Large Language Models},
  author={Zubiaga, Arkaitz},
  journal={Frontiers in Artificial Intelligence},
  volume={6},
  pages={1350306},
  year={2024},
  publisher={Frontiers}
}