
Ethical and Technical Challenges of AI in Tackling Hate Speech

Diogo Cortiz, Arkaitz Zubiaga

International Review of Information Ethics. 2021.

In this paper, we discuss some of the ethical and technical challenges of using Artificial Intelligence for online content moderation. As a case study, we use an AI model developed to detect hate speech on social networks, a concept for which the scientific literature offers varying definitions and no consensus. We argue that while AI can play a central role in dealing with information overload on social media, a poorly conducted project risks violating freedom of expression. We present ethical and technical challenges across the entire pipeline of an AI project, from data collection to model evaluation, that hinder the large-scale use of hate speech detection algorithms. Finally, we argue that AI can assist with the detection of hate speech on social media, provided that the final judgment about the content is made through a process with human involvement.
@article{cortiz2021ethical,
  title={Ethical and technical challenges of AI in tackling hate speech},
  author={Cortiz, Diogo and Zubiaga, Arkaitz},
  journal={The International Review of Information Ethics},
  year={2021}
}