Publication Type

Conference Proceeding Article

Version

publishedVersion

Publication Date

12-2022

Abstract

Hateful meme classification is a challenging multimodal task that requires complex reasoning and contextual background knowledge. Ideally, we could leverage an explicit external knowledge base to supplement contextual and cultural information in hateful memes. However, there is no known explicit external knowledge base that could provide such hate speech contextual information. To address this gap, we propose PromptHate, a simple yet effective prompt-based model that prompts pre-trained language models (PLMs) for hateful meme classification. Specifically, we construct simple prompts and provide a few in-context examples to exploit the implicit knowledge in the pre-trained RoBERTa language model for hateful meme classification. We conduct extensive experiments on two publicly available hateful and offensive meme datasets. Our experimental results show that PromptHate is able to achieve a high AUC of 90.96, outperforming state-of-the-art baselines on the hateful meme classification task. We also perform fine-grained analyses and case studies on various prompt settings and demonstrate the effectiveness of the prompts on hateful meme classification.
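To illustrate the general idea described in the abstract, below is a minimal sketch of prompt-based classification with a masked RoBERTa PLM. The prompt template ("It was <mask>."), the label words "good"/"bad", the use of an image caption string, and the roberta-large checkpoint are illustrative assumptions, not the paper's exact prompts, verbalizers, or in-context example selection.

```python
# Hedged sketch: prompting a masked LM for hateful meme classification.
# ASSUMPTIONS: the template, label words ("good"/"bad"), and caption input
# are placeholders for illustration; PromptHate's actual design may differ.
import torch
from transformers import RobertaTokenizer, RobertaForMaskedLM

tokenizer = RobertaTokenizer.from_pretrained("roberta-large")
model = RobertaForMaskedLM.from_pretrained("roberta-large")
model.eval()

LABEL_WORDS = {"benign": " good", "hateful": " bad"}  # assumed verbalizers

def score_meme(meme_text: str, image_caption: str, demonstrations: str = "") -> dict:
    """Return label-word probabilities at the <mask> position of the prompt."""
    # Optional in-context examples (a few labelled memes in the same format)
    # are simply prepended to the prompt as plain text.
    prompt = f"{demonstrations}{image_caption} {meme_text} It was {tokenizer.mask_token}."
    inputs = tokenizer(prompt, return_tensors="pt", truncation=True)
    mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
    with torch.no_grad():
        logits = model(**inputs).logits[0, mask_pos[0]]
    probs = logits.softmax(dim=-1)
    return {
        label: probs[tokenizer.convert_tokens_to_ids(tokenizer.tokenize(word))[0]].item()
        for label, word in LABEL_WORDS.items()
    }

# Example usage: the class whose label word gets higher probability is predicted.
scores = score_meme("meme caption text here", "a photo of a crowd at a rally")
print(max(scores, key=scores.get), scores)
```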

Discipline

Artificial Intelligence and Robotics | Databases and Information Systems | Graphics and Human Computer Interfaces

Research Areas

Data Science and Engineering; Information Systems and Management; Intelligent Systems and Optimization

Publication

Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing

City or Country

Abu Dhabi, UAE
