Insights from User Opinions for Informed Decision-Making

Comp 4531 Deep Learning: Midterm presentation


Agenda

Introduce the problem

Describe inputs and outputs

Some EDA

Data cleaning

Discuss results of non-neural models

Discuss results of the neural network model

Sentiment Classification

Text classification based on user feedback, reviews, sentiments, etc. on a topic, product, or experience

Non-neural classification models are not sensitive to the order of words.

The type of encoding affects model performance.

An LSTM inherently captures dependencies and patterns in word order.

The ordering of words can affect the sentiment.

How might such a trained model enable organizations to make informed decisions and better enhance user experience?

What is the tone of language used in an airline review versus, say, a movie review or a product review?

What about mentions of proper nouns or hyperlinks in review text: how do these change the sentiment?

Problem

Purpose

Guiding Research Question

Inputs/Outputs

The data source for this project was a large movie review data set from IMDB.

More information about the data and its source: https://ai.stanford.edu/~amaas/data/sentiment/

The data set contains 50,000 review texts of varying length, each labeled with a polarity of positive or negative.

Predict the polarity of new, unseen review text.

Input

Output

Variable Type | Range | Encoding | Example
Input variable: text (input sentence) | Variable length | Tokenization and padding | "The movie was fantastic!"
Output variable: sentiment label | negative or positive | One-hot or integer encoding | positive: 1, negative: 0
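As a small illustration of the two label encodings in the table, here is a NumPy-only sketch (the project may have used either form):

```python
import numpy as np

labels = ["positive", "negative", "positive"]

# Integer encoding: positive -> 1, negative -> 0
int_enc = np.array([1 if lab == "positive" else 0 for lab in labels])
print(int_enc)    # [1 0 1]

# One-hot encoding: [1, 0] = negative, [0, 1] = positive
one_hot = np.eye(2, dtype=int)[int_enc]
print(one_hot)    # [[0 1] [1 0] [0 1]]
```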
[Figure: distribution of text length before and after removing stop words]

Data Cleaning: Reduces feature size

First checked for nulls in the data; luckily this data set didn't have any: 50,000 non-null records.

Filtered out stop words like "the", "a", "that", etc.

Removed hyperlinks, line-break tags, and numbers from the text

Lemmatized words to their root forms

Converted all words to lowercase

Drop nulls

Stop words

Extra information

Lemmatization

Normalize case
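A minimal sketch of this cleaning pipeline, assuming NLTK for stop words and lemmatization (the file and column names here are hypothetical):

```python
import re
import pandas as pd
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer
# May require: nltk.download("stopwords"); nltk.download("wordnet")

lemmatizer = WordNetLemmatizer()
stop_words = set(stopwords.words("english"))

def clean_review(text: str) -> str:
    text = text.lower()                          # normalize case
    text = re.sub(r"<br\s*/?>", " ", text)       # strip line-break tags
    text = re.sub(r"https?://\S+", " ", text)    # strip hyperlinks
    text = re.sub(r"\d+", " ", text)             # strip numbers
    tokens = re.findall(r"[a-z']+", text)        # keep word-like tokens
    tokens = [lemmatizer.lemmatize(t) for t in tokens if t not in stop_words]
    return " ".join(tokens)

df = pd.read_csv("imdb_reviews.csv")             # hypothetical file name
df = df.dropna(subset=["review"])                # drop nulls (none in this data set)
df["clean"] = df["review"].map(clean_review)
```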

Non-neural network methods

Gaussian Naive Bayes

Multinomial Naive Bayes

Decision Tree

Enumerated each word using the basic Tokenizer, which maps words to integers based on their frequency of occurrence.

Restricted the tokenizer's vocabulary size to 10,000.

Any word not in the top 10,000 was encoded with a constant token representing out-of-vocabulary text.

The target variable was the sentiment label.

All three models came up short, with accuracy scores of around 50 percent.
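A sketch of this baseline setup, assuming the Keras Tokenizer and scikit-learn classifiers; train_texts, test_texts, y_train, and y_test are assumed to hold the cleaned reviews and integer labels, and the padding length of 80 is an assumption carried over from the LSTM setup described later:

```python
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
from sklearn.naive_bayes import GaussianNB, MultinomialNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Map words to integers by frequency; anything outside the top
# 10,000 words becomes the single out-of-vocabulary token.
tokenizer = Tokenizer(num_words=10000, oov_token="<OOV>")
tokenizer.fit_on_texts(train_texts)

X_train = pad_sequences(tokenizer.texts_to_sequences(train_texts), maxlen=80)
X_test = pad_sequences(tokenizer.texts_to_sequences(test_texts), maxlen=80)

for clf in (GaussianNB(), MultinomialNB(), DecisionTreeClassifier()):
    clf.fit(X_train, y_train)
    print(type(clf).__name__, accuracy_score(y_test, clf.predict(X_test)))
```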

Improving on baseline models

Gaussian Naive Bayes

Multinomial Naive Bayes

Decision Tree

By changing the vectorizer from the frequency-based tokenizer to TF-IDF, the baseline models showed better results.

The higher the TF-IDF score, the more important the term is to the document relative to the corpus.
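For reference, the standard formulation (scikit-learn's TfidfVectorizer uses a smoothed variant):

\[
\mathrm{tfidf}(t, d) = \mathrm{tf}(t, d) \times \log\frac{N}{\mathrm{df}(t)}
\]

where tf(t, d) is the count of term t in document d, N is the number of documents in the corpus, and df(t) is the number of documents containing t.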

This made a huge difference in performance for all three models, suggesting that the type of vectorization affects model performance.

Multinomial NB stood out at a whopping 86 percent accuracy according to the classification report.

Gaussian NB came in second with a score of 78%.

The Decision Tree reached approximately 73% accuracy.
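A sketch of the TF-IDF variant, again with scikit-learn (variable names are assumptions as before; note that GaussianNB requires dense input, hence the .toarray() calls):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import GaussianNB, MultinomialNB
from sklearn.metrics import classification_report

vectorizer = TfidfVectorizer(max_features=10000)
X_train = vectorizer.fit_transform(train_texts)   # sparse TF-IDF matrix
X_test = vectorizer.transform(test_texts)

# MultinomialNB accepts sparse input directly.
nb = MultinomialNB().fit(X_train, y_train)
print(classification_report(y_test, nb.predict(X_test)))

# GaussianNB needs dense arrays.
gnb = GaussianNB().fit(X_train.toarray(), y_train)
print(classification_report(y_test, gnb.predict(X_test.toarray())))
```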

Neural Network

LSTM model

Sequential model incorporating an Embedding layer, two LSTM layers, and an output layer

Used a sentence size of 80; longer sentences were truncated and shorter ones were zero-padded.

The Embedding layer transformed input word indices into dense 50-dimensional vectors.

Two LSTM layers with 64 and 32 units, respectively, for capturing sequential patterns in the data.

Incorporated dropout (20%) in both LSTM layers for regularization, preventing overfitting.
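A minimal Keras sketch consistent with this architecture (the single sigmoid output unit, optimizer, and loss are assumptions not stated on the slide):

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, LSTM, Dense

model = Sequential([
    Embedding(input_dim=10000, output_dim=50, input_length=80),  # 50-d dense word vectors
    LSTM(64, return_sequences=True, dropout=0.2),  # first LSTM layer, 64 units
    LSTM(32, dropout=0.2),                         # second LSTM layer, 32 units
    Dense(1, activation="sigmoid"),                # output: probability of "positive"
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```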

Neural Network

Performance

Training accuracy of 88%; 84% accuracy on test data.

Predictions on custom test data

["Worst movie I have seen or will ever watch", "great comedy go see it with a friend", "Do not watch that movie, it is horrible", "Its the best movie made in its genre"]

Predictions: ["negative", "positive", "negative", "positive"]
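A sketch of how these predictions could be produced, reusing the tokenizer and model from the sketches above (the 0.5 decision threshold is an assumption):

```python
from tensorflow.keras.preprocessing.sequence import pad_sequences

samples = [
    "Worst movie I have seen or will ever watch",
    "great comedy go see it with a friend",
    "Do not watch that movie, it is horrible",
    "Its the best movie made in its genre",
]
seqs = pad_sequences(tokenizer.texts_to_sequences(samples), maxlen=80)
probs = model.predict(seqs).ravel()
print(["positive" if p > 0.5 else "negative" for p in probs])
```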

Limitations of model

Performance

Scalability

Languages

Some potential limitations, or guiding research questions, that can be envisioned for the LSTM model (or any deep learning model, for that matter):

Longer training time: is it worth the effort from a practical point of view?

How scalable is this model in real-time scenarios?

How does the model train on other languages? No matter the language, vocabulary size and sentence size matter.

Can it recognize sarcasm or other satirical comments?

Can the injection of proper nouns be recognized by LSTM models as a potential covariate?

