Enhanced BERT-BiLSTM Model with Attention Mechanism for Robust Offensive Language Detection
Authors: Pranjali Pachpor, Nitesh Gupta, Anurag Shrivastava
DOI: 10.87349/JBUPT/281031
Page No: 1-5
Abstract
Offensive language detection is a vital challenge in natural language processing (NLP), particularly on online platforms where harmful content spreads rapidly. This paper introduces a robust hybrid deep learning model that combines BERT-based contextual embeddings with a Bidirectional LSTM (BiLSTM) and an attention mechanism to capture both semantic depth and sequential dependencies. The proposed framework emphasizes key offensive triggers within text, enabling precise detection. Experimental results demonstrate superior performance, with an accuracy of 95.36% and a score of 94.87%, outperforming baseline BERT and existing models. The approach generalizes well across diverse datasets, handles imbalanced data efficiently, and provides a scalable solution for automated offensive language moderation, offering a significant step toward safer and more responsible online communication.
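The architecture outlined in the abstract (contextual embeddings feeding a BiLSTM whose token-level states are pooled by an attention layer before classification) can be sketched roughly as below. This is a minimal illustration, not the authors' implementation: a plain `nn.Embedding` stands in for the pretrained BERT encoder so the sketch runs self-contained, and all layer sizes and the single-linear attention scorer are assumptions.

```python
import torch
import torch.nn as nn

class BertBiLSTMAttention(nn.Module):
    """Sketch of a BERT-BiLSTM-attention classifier for offensive language.

    In a full implementation the embedding layer would be replaced by a
    pretrained BERT encoder (e.g. from the `transformers` library); here a
    trainable nn.Embedding is used as a stand-in so the example needs no
    downloads. Dimensions and the attention form are illustrative choices.
    """

    def __init__(self, vocab_size=30522, embed_dim=768,
                 hidden_dim=128, num_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)  # placeholder for BERT
        self.bilstm = nn.LSTM(embed_dim, hidden_dim,
                              batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden_dim, 1)   # per-token attention score
        self.classifier = nn.Linear(2 * hidden_dim, num_classes)

    def forward(self, token_ids):
        x = self.embed(token_ids)                  # (B, T, E) token embeddings
        h, _ = self.bilstm(x)                      # (B, T, 2H) bidirectional states
        scores = self.attn(h).squeeze(-1)          # (B, T) unnormalized scores
        weights = torch.softmax(scores, dim=-1)    # attention over tokens,
                                                   # highlighting trigger words
        context = (weights.unsqueeze(-1) * h).sum(dim=1)  # (B, 2H) weighted sum
        return self.classifier(context)            # (B, num_classes) logits

model = BertBiLSTMAttention()
logits = model(torch.randint(0, 30522, (2, 16)))   # batch of 2 texts, 16 tokens
print(logits.shape)                                # torch.Size([2, 2])
```

The attention pooling is what lets the model weight individual offensive tokens more heavily than a plain final-state LSTM readout would, which matches the paper's emphasis on "key offensive triggers."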