IMPROVED SENTIMENT ANALYSIS USING A CUSTOMIZED DISTILBERT NLP CONFIGURATION
Henry Gao, Student, Bur Oak Secondary School
ABSTRACT
Social media and other communication platforms have become extremely popular in the last few years. User comments posted on such portals contain valuable information about users’ attitudes. With the advent of Natural Language Processing (NLP) algorithms, it is now possible to analyze the polarity of such comments at scale. Sentiment analysis models rely on machine learning, which can be divided into two kinds: supervised and unsupervised learning. Supervised approaches train models on pre-labeled datasets and evaluate them by predicting labels on a held-out subset of the data. Unsupervised sentiment analysis tools utilize pre-trained models or lexicons to score or cluster unlabeled data. Recent research indicates that Bidirectional Encoder Representations from Transformers (BERT) is effective for sentiment analysis tasks. Using a restaurant review dataset labeled with positive and negative sentiments, this paper presents results from a comparative study of NLP techniques for sentiment analysis. The advantage of a fine-tuned distilBERT model is demonstrated through a systematic experimental approach. Sentiment polarity coupled with adjustable neural network configurations makes distilBERT-based models more sensitive to sentiment. The fine-tuned distilBERT model achieves an accuracy of 92.4%, outperforming both a conventional LSTM (Long Short-Term Memory) approach at 78% and VADER (Valence Aware Dictionary and sEntiment Reasoner) at 72.3%.
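The unsupervised, lexicon-based approach that VADER exemplifies can be sketched in a few lines of plain Python. This is a deliberately simplified illustration, not VADER itself: the tiny `LEXICON` and its valence values are hypothetical, and real VADER additionally handles punctuation emphasis, capitalization, degree modifiers, and more.

```python
import math

# Hypothetical mini-lexicon: each word carries a fixed valence score.
# (Real VADER ships a lexicon of thousands of rated entries.)
LEXICON = {"good": 1.9, "great": 3.1, "delicious": 2.9,
           "bad": -2.5, "terrible": -2.1, "bland": -1.8}
NEGATIONS = {"not", "never", "no"}

def polarity(text: str) -> float:
    """Sum word valences, flip on a preceding negation, squash to [-1, 1]."""
    score = 0.0
    tokens = text.lower().split()
    for i, tok in enumerate(tokens):
        val = LEXICON.get(tok.strip(".,!?"))
        if val is None:
            continue  # out-of-lexicon words contribute nothing
        if i > 0 and tokens[i - 1] in NEGATIONS:
            val = -val  # crude negation handling
        score += val
    # Normalize to [-1, 1], similar in spirit to VADER's compound score.
    return score / math.sqrt(score * score + 15)
```

Because no training labels are needed, such a scorer runs on unlabeled review text directly, which is exactly why lexicon methods serve as a convenient unsupervised baseline against supervised models like the fine-tuned distilBERT studied here.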
KEYWORDS
Deep Learning, Natural Language Processing, BERT, distilBERT
Full Text: https://airccse.com/adeij/papers/3221adeij06.pdf
Volume Link: https://airccse.com/adeij/vol3.html