Abigail Oppong


2025

AfriHate: A Multilingual Collection of Hate Speech and Abusive Language Datasets for African Languages
Shamsuddeen Hassan Muhammad | Idris Abdulmumin | Abinew Ali Ayele | David Ifeoluwa Adelani | Ibrahim Said Ahmad | Saminu Mohammad Aliyu | Paul Röttger | Abigail Oppong | Andiswa Bukula | Chiamaka Ijeoma Chukwuneke | Ebrahim Chekol Jibril | Elyas Abdi Ismail | Esubalew Alemneh | Hagos Tesfahun Gebremichael | Lukman Jibril Aliyu | Meriem Beloucif | Oumaima Hourrane | Rooweither Mabuya | Salomey Osei | Samuel Rutunda | Tadesse Destaw Belay | Tadesse Kebede Guge | Tesfa Tegegne Asfaw | Lilian Diana Awuor Wanzare | Nelson Odhiambo Onyango | Seid Muhie Yimam | Nedjma Ousidhoum
Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)

Hate speech and abusive language are global phenomena that require socio-cultural background knowledge to be understood, identified, and moderated. However, in many regions of the Global South, there have been several documented occurrences of (1) the absence of moderation and (2) censorship caused by reliance on out-of-context keyword spotting. Further, high-profile individuals have frequently been at the center of the moderation process, while large and targeted hate speech campaigns against minorities have been overlooked. These limitations are mainly due to the lack of high-quality data in the local languages and the failure to include local communities in the collection, annotation, and moderation processes. To address this issue, we present AfriHate: a multilingual collection of hate speech and abusive language datasets in 15 African languages. Each instance in AfriHate is a tweet annotated by native speakers familiar with the regional culture. We report the challenges related to the construction of the datasets and present various classification baseline results with and without using LLMs. We find that model performance highly depends on the language and that multilingual models can help boost performance in low-resource settings.
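As an illustration of the kind of baseline the abstract mentions, the following is a minimal sketch, assuming a Hugging Face multilingual encoder (Davlan/afro-xlmr-base is an illustrative choice, not necessarily the one used in the paper) fine-tuned as a three-way tweet classifier; the example texts and the neutral/abusive/hate label set are placeholders, not AfriHate data.

    # A minimal sketch, not the authors' released code: fine-tuning a
    # multilingual encoder as a three-way tweet classifier in the spirit
    # of the baselines described above. Model name, labels, and data are
    # illustrative assumptions.
    from datasets import Dataset
    from transformers import (AutoModelForSequenceClassification,
                              AutoTokenizer, Trainer, TrainingArguments)

    MODEL = "Davlan/afro-xlmr-base"          # assumed multilingual encoder
    LABELS = ["neutral", "abusive", "hate"]  # assumed label set

    tokenizer = AutoTokenizer.from_pretrained(MODEL)
    model = AutoModelForSequenceClassification.from_pretrained(
        MODEL, num_labels=len(LABELS))

    # Placeholder examples; real AfriHate instances are tweets annotated
    # by native speakers familiar with the regional culture.
    train = Dataset.from_dict({
        "text": ["placeholder tweet A", "placeholder tweet B"],
        "label": [0, 2],
    }).map(lambda b: tokenizer(b["text"], truncation=True, padding=True,
                               max_length=128),
           batched=True)

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="afrihate-baseline",
                               num_train_epochs=3,
                               per_device_train_batch_size=16),
        train_dataset=train,
    )
    trainer.train()

Fine-tuned multilingual encoders of this kind are one way to realize the boost the abstract reports for low-resource settings; zero- or few-shot prompting of an LLM would be the "with LLMs" counterpart.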

2022

AfroLM: A Self-Active Learning-based Multilingual Pretrained Language Model for 23 African Languages
Bonaventure F. P. Dossou | Atnafu Lambebo Tonja | Oreen Yousuf | Salomey Osei | Abigail Oppong | Iyanuoluwa Shode | Oluwabusayo Olufunke Awoyomi | Chris Emezue
Proceedings of the Third Workshop on Simple and Efficient Natural Language Processing (SustaiNLP)

In recent years, multilingual pre-trained language models have gained prominence due to their remarkable performance on numerous downstream Natural Language Processing (NLP) tasks. However, pre-training these large multilingual language models requires large amounts of training data, which are not available for most African languages. Active learning is a semi-supervised learning approach in which a model iteratively identifies the most informative samples to train on, in order to achieve better optimization and performance on downstream tasks. It also offers a practical and effective answer to real-world data scarcity. Despite these benefits, active learning has received little attention in NLP, and especially in the pretraining of multilingual language models. In this paper, we present AfroLM, a multilingual language model pretrained from scratch on 23 African languages (the largest effort to date) using our novel self-active learning framework. Although pretrained on a dataset significantly (14x) smaller than those of existing baselines, AfroLM outperforms many multilingual pretrained language models (AfriBERTa, XLM-R-base, mBERT) on various downstream NLP tasks (NER, text classification, and sentiment analysis). Additional out-of-domain sentiment analysis experiments show that AfroLM generalizes well across domains. We release the source code and the datasets used in our framework at https://github.com/bonaventuredossou/MLM_AL.
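The self-active learning idea lends itself to a short sketch. Below is a generic loop, assuming per-example masked-LM loss as the acquisition score for picking the most informative pool samples; AfroLM's actual scoring and scheduling are in the linked repository and may differ.

    # A generic self-active learning loop for masked-LM pretraining.
    # Assumption: per-example MLM loss is the acquisition score; model
    # name, pool texts, and hyperparameters are placeholders.
    import torch
    from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                              DataCollatorForLanguageModeling)

    MODEL = "xlm-roberta-base"  # illustrative encoder, not AfroLM itself
    tokenizer = AutoTokenizer.from_pretrained(MODEL)
    model = AutoModelForMaskedLM.from_pretrained(MODEL)
    model.train()
    collator = DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15)
    optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

    pool = ["sentence one", "sentence two", "sentence three"]  # unlabeled pool

    def mlm_loss(text):
        # Score one pool sample by its (randomly masked) MLM loss.
        enc = tokenizer(text, return_tensors="pt", truncation=True,
                        max_length=128)
        batch = collator([{k: v[0] for k, v in enc.items()}])
        with torch.no_grad():
            return model(**batch).loss.item()

    for _ in range(3):  # a few active-learning rounds
        # 1) Score the pool: higher MLM loss ~ more informative sample.
        ranked = sorted(pool, key=mlm_loss, reverse=True)
        selected = ranked[:2]  # 2) keep the top-k samples
        # 3) One gradient step on the selected samples.
        enc = tokenizer(selected, return_tensors="pt", padding=True,
                        truncation=True, max_length=128)
        batch = collator([{k: v[i] for k, v in enc.items()}
                          for i in range(len(selected))])
        model(**batch).loss.backward()
        optimizer.step()
        optimizer.zero_grad()

In a real run, selected samples would be removed from or down-weighted in the pool and scoring would be batched; the loop above only shows the score-select-train cycle.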