Multi-Task Learning with Sentiment, Emotion, and Target Detection to Recognize Hate Speech and Offensive Language
Citation
Plaza-Del-Arco, F. M., Halat, S., Padó, S., & Klinger, R. (2021). Multi-task learning with sentiment, emotion, and target detection to recognize hate speech and offensive language. Paper presented at the CEUR Workshop Proceedings, 3159, 297-318.
Abstract
The recognition of hate speech and offensive language (HOF) is commonly formulated as a classification task which asks models to decide if a text contains HOF. This task is challenging because of the large variety of explicit and implicit ways to verbally attack a target person or group. In this paper, we investigate whether HOF detection can profit from taking into account the relationships between HOF and similar concepts: (a) HOF is related to sentiment analysis because hate speech is typically a negative statement and expresses a negative opinion; (b) it is related to emotion analysis, as expressed hate points to the author experiencing (or pretending to experience) anger while the addressees experience (or are intended to experience) fear. (c) Finally, one constitutive element of HOF is the (explicit or implicit) mention of a targeted person or group. On this basis, we hypothesize that HOF detection improves when modeled jointly with these concepts in a multi-task learning setup. We base our experiments on existing data sets for each of these concepts (sentiment, emotion, target of HOF) and evaluate our models as a participant (as team IMS-SINAI) in the HASOC FIRE 2021 English Subtask 1A, “Identifying Hate, offensive and profane content from the post”. Based on model-selection experiments in which we consider multiple available resources and submissions to the shared task, we find that the combination of the CrowdFlower emotion corpus, the SemEval 2016 Sentiment Corpus, and the OffensEval 2019 target detection data leads to an F1 of .7947 in a multi-head multi-task learning model based on BERT, compared to .7895 for a plain BERT model. On the HASOC 2019 test data, this result is more substantial, with an increase of 2pp in F1 (from 0.78 to 0.80) and a considerable increase in recall. Across both data sets (2019, 2021), the recall is particularly increased for the HOF class (6pp for the 2019 data and 3pp for the 2021 data), showing that MTL with emotion, sentiment, and target identification is an appropriate approach for early warning systems that might be deployed on social media platforms.
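The abstract describes a multi-head multi-task architecture: a shared BERT encoder with a separate classification head per task (HOF, sentiment, emotion, target). The sketch below illustrates this general setup; the head sizes, task names, dropout, and the use of the `transformers` library are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal sketch of a multi-head multi-task BERT model: one shared encoder,
# one linear classification head per task. Label counts per task are assumed
# for illustration only.
import torch.nn as nn
from transformers import BertModel


class MultiTaskBert(nn.Module):
    def __init__(self, model_name="bert-base-uncased", task_num_labels=None, dropout=0.1):
        super().__init__()
        # Assumed task/label configuration, e.g. binary HOF, 3-way sentiment, etc.
        task_num_labels = task_num_labels or {
            "hof": 2, "sentiment": 3, "emotion": 6, "target": 3
        }
        self.encoder = BertModel.from_pretrained(model_name)
        hidden = self.encoder.config.hidden_size
        self.dropout = nn.Dropout(dropout)
        # One classification head per auxiliary task on top of the shared encoder.
        self.heads = nn.ModuleDict(
            {task: nn.Linear(hidden, n) for task, n in task_num_labels.items()}
        )

    def forward(self, input_ids, attention_mask, task):
        # Each batch comes from a single task's data set; only that head is used,
        # but the gradients update the shared encoder, so all tasks influence it.
        outputs = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        pooled = self.dropout(outputs.pooler_output)
        return self.heads[task](pooled)
```

A typical training loop under this setup alternates batches drawn from the HOF, sentiment, emotion, and target corpora and applies a cross-entropy loss on the corresponding head, so that the auxiliary tasks regularize the shared representation used for HOF detection.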