Leveraging profanity for insincere content detection: A neural network approach
Abstract
Community-driven social media sites are rich sources of knowledge and entertainment, but at the same time they are vulnerable to flames or toxic content that can be harmful to the users of these platforms as well as to society. It is therefore crucial to identify and remove such content to ensure a better and safer online experience. Manually eliminating flames is tedious, and hence many research works focus on machine learning or deep learning models for automated detection. In this paper, we primarily focus on detecting insincere content using neural network-based learning methods. We also integrate profanity features, as psychology research has found profanity to be correlated with honesty. We tested our model on a questions dataset from a community question answering (CQA) platform to detect insincere content. Our integrated neural network model achieved a high F1-score of 94.01%, outperforming standard machine learning algorithms.
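To make the feature-integration idea concrete, the sketch below illustrates one plausible way to combine hand-crafted profanity features with a learned text representation before classification. It is an illustrative assumption rather than the authors' exact architecture; the vocabulary size, sequence length, layer widths, and the specific profanity features (e.g., profane-word count or presence flag) are placeholders.

```python
# Minimal sketch: concatenating profanity features with a BiLSTM text encoder.
# All sizes and feature choices are illustrative assumptions, not the paper's
# reported configuration.
import tensorflow as tf
from tensorflow.keras import layers, Model

VOCAB_SIZE = 50_000      # assumed vocabulary size
MAX_LEN = 100            # assumed maximum question length in tokens
N_PROFANITY_FEATS = 4    # e.g., profane-word count, ratio, presence flag, severity

# Text branch: token ids -> embeddings -> BiLSTM summary vector
text_in = layers.Input(shape=(MAX_LEN,), name="tokens")
x = layers.Embedding(VOCAB_SIZE, 128)(text_in)
x = layers.Bidirectional(layers.LSTM(64))(x)

# Profanity branch: dense projection of hand-crafted features
prof_in = layers.Input(shape=(N_PROFANITY_FEATS,), name="profanity_feats")
p = layers.Dense(16, activation="relu")(prof_in)

# Integrate both views and classify insincere vs. sincere
h = layers.Concatenate()([x, p])
h = layers.Dense(64, activation="relu")(h)
out = layers.Dense(1, activation="sigmoid", name="insincere")(h)

model = Model(inputs=[text_in, prof_in], outputs=out)
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.Precision(), tf.keras.metrics.Recall()])
```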