Data Science has grown rapidly over the last few years, and one of its fastest-growing subfields is Natural Language Processing.
In this article, we will first build a brief intuition about NLP and then implement one of its common use cases, text classification, in Python.
What is Natural Language Processing?
NLP, or Natural Language Processing, is the study of extracting meaningful information from raw textual data. Because data is generated from such a variety of sources, the majority of it is unclean and comes in the form of natural language. This unstructured data carries a lot of hidden information which, when analyzed, can help a business grow in new dimensions.
An e-commerce website's entire business depends on its customer base. To ensure customers get the maximum benefit, it should analyze its log data and extract customers' search patterns; doing so helps the company stay ahead of its competitors in the market.
Natural Language Processing is one such method, and Python has several libraries, such as NLTK, spaCy, and CoreNLP, for dealing with textual data. There are also various pre-trained models that can be used for specific NLP tasks, but those are beyond the scope of this article.
Text Classification in Python
One of the applications of Natural Language Processing is text classification. It is the process by which raw text is sorted into categories such as good/bad, positive/negative, or spam/not spam. Even a news article can be assigned to various categories with this method.
In this article, we will classify messages as spam or not spam (ham) using Python. The dataset contains a total of 5,574 labeled messages, and our task is to separate the spam messages from the ham. Below are the code snippets and descriptions of each block used to build the text classification model.
- The first step for any Data Science problem is importing the necessary libraries.
Apart from traditional libraries like Pandas and NumPy, we have also imported LSTM (Long Short-Term Memory), a type of Recurrent Neural Network used in Deep Learning. It is one of the most popular techniques in Deep Learning and is used across a variety of applications such as speech recognition and time-series analysis. We will use an LSTM network architecture to classify messages as spam or ham.
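The original import snippet is not shown in the article; a plausible import block for the steps that follow might look like this (the exact set of libraries is an assumption):

```python
# Illustrative imports for an LSTM-based spam classifier.
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import train_test_split

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, LSTM, Dense
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
```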
- The read_csv method of pandas loads the dataset, and the head() method lets us look at the first five rows of our data.
- The columns Unnamed: 2, Unnamed: 3, and Unnamed: 4 have no influence on our model's output, so we drop them before further processing.
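The loading and column-dropping steps can be sketched as follows. To keep the example self-contained, a small inline sample stands in for the real CSV file (whose name and encoding are not given in the article); the column names mirror the dataset described above:

```python
import io
import pandas as pd

# Inline sample standing in for the SMS spam CSV (the real dataset
# has 5,574 rows with the same column layout).
csv_data = io.StringIO(
    "v1,v2,Unnamed: 2,Unnamed: 3,Unnamed: 4\n"
    "ham,Go until jurong point,,,\n"
    "spam,Free entry in 2 a wkly comp,,,\n"
)
df = pd.read_csv(csv_data)
print(df.head())  # inspect the first five rows

# The three unnamed columns carry no signal, so drop them.
df = df.drop(columns=["Unnamed: 2", "Unnamed: 3", "Unnamed: 4"])
print(df.columns.tolist())  # ['v1', 'v2']
```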
- Now we are left with labeled data of two columns: one holding the ‘spam’/‘ham’ label, the other the message text. Let’s visualize the dataset to see how many spam and ham messages it contains, using the countplot function of the seaborn module in Python. Seaborn is built on top of Matplotlib but offers a wider range of styling options.
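A minimal version of the countplot step might look like this (the label column name `v1` and the small stand-in DataFrame are assumptions; in the article the full dataset is plotted):

```python
import pandas as pd
import seaborn as sns
import matplotlib
matplotlib.use("Agg")  # render off-screen, no display needed
import matplotlib.pyplot as plt

# Stand-in labels roughly matching the dataset's ham/spam imbalance.
df = pd.DataFrame({"v1": ["ham"] * 5 + ["spam"]})

ax = sns.countplot(x="v1", data=df)  # one bar per label
ax.set(xlabel="label", ylabel="count")
plt.savefig("label_counts.png")
```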
The countplot chart –
- As expected, there are more ham messages, almost five times as many as spam. In the next step, we create vectors of our features and the target variable. We do this because a machine cannot interpret textual data directly; it must be converted into numbers. The sklearn module of Python provides a LabelEncoder class, which maps each category to an integer label.
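The label-encoding step can be sketched like so; LabelEncoder assigns integer codes in alphabetical order of the classes, so ‘ham’ becomes 0 and ‘spam’ becomes 1:

```python
from sklearn.preprocessing import LabelEncoder

labels = ["ham", "spam", "ham", "ham", "spam"]

encoder = LabelEncoder()
y = encoder.fit_transform(labels)  # 'ham' -> 0, 'spam' -> 1
print(y)                  # [0 1 0 0 1]
print(encoder.classes_)   # ['ham' 'spam']
```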
- The model learns from our training set and is evaluated on the test data. We use 85% of the initial data for training and hold out the remaining 15% for testing.
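The 85/15 split can be done with scikit-learn's train_test_split; the placeholder messages and the `random_state` value are illustrative:

```python
from sklearn.model_selection import train_test_split

# Placeholder data standing in for the encoded messages and labels.
X = [f"message {i}" for i in range(100)]
y = [i % 2 for i in range(100)]

# 85% train / 15% test, matching the split used in the article.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.15, random_state=42
)
print(len(X_train), len(X_test))  # 85 15
```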
- Data pre-processing is the most time-consuming yet important part of a Machine Learning project. Common pre-processing techniques in text analysis include tokenization, normalization, and so on.
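One common way to tokenize and pad text for an LSTM is with Keras utilities; the vocabulary size and sequence length below are assumed hyper-parameters, not values taken from the article:

```python
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences

texts = ["free entry win a prize now", "see you at lunch today"]

max_words = 1000  # vocabulary size (assumed hyper-parameter)
max_len = 10      # fixed sequence length (assumed hyper-parameter)

tok = Tokenizer(num_words=max_words)
tok.fit_on_texts(texts)                       # build the vocabulary
sequences = tok.texts_to_sequences(texts)     # words -> integer ids
padded = pad_sequences(sequences, maxlen=max_len)  # pad to max_len
print(padded.shape)  # (2, 10)
```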
- Once the data is pre-processed, it is fed to the model for training. We define a Recurrent Neural Network built around the LSTM architecture.
- The model is compiled with binary_crossentropy as the loss function and accuracy as the evaluation metric.
- The model is fit on the training set.
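The three steps above (define, compile, fit) could be sketched as follows; the layer sizes are illustrative choices, not the article's exact architecture, and a tiny synthetic batch stands in for the padded training sequences:

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, LSTM, Dense

max_words, max_len = 1000, 10  # must match the tokenizer settings

# A small recurrent architecture (layer sizes are assumptions).
model = Sequential([
    Embedding(max_words, 32),        # integer ids -> dense vectors
    LSTM(64),                        # recurrent layer over the sequence
    Dense(1, activation="sigmoid"),  # binary output: spam vs ham
])
model.compile(loss="binary_crossentropy", optimizer="adam",
              metrics=["accuracy"])

# Tiny synthetic batch standing in for the padded training data.
X_train = np.random.randint(0, max_words, size=(32, max_len))
y_train = np.random.randint(0, 2, size=(32,))
history = model.fit(X_train, y_train, epochs=1, batch_size=8, verbose=0)
```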
- Because of its accuracy on the validation set, this becomes our final model, which we then evaluate on the test data.
- The loss and the accuracy of the test data.
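The evaluation step can be sketched like this; synthetic held-out data stands in for the 15% test split, and the reported numbers will of course differ from the article's:

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, LSTM, Dense

max_words, max_len = 1000, 10

# Same illustrative architecture as in the training sketch.
model = Sequential([
    Embedding(max_words, 32),
    LSTM(64),
    Dense(1, activation="sigmoid"),
])
model.compile(loss="binary_crossentropy", optimizer="adam",
              metrics=["accuracy"])

# Synthetic held-out data; in the article this is the 15% test split.
X_test = np.random.randint(0, max_words, size=(16, max_len))
y_test = np.random.randint(0, 2, size=(16,))
loss, accuracy = model.evaluate(X_test, y_test, verbose=0)
print(f"test loss: {loss:.4f}  test accuracy: {accuracy:.4f}")
```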
There are several text classification algorithms; in this article, we used an LSTM network in Python to separate spam messages from ham.
Understanding and manipulating raw data is gradually becoming part of every organization's work. Thus it is necessary to know the nitty-gritty of Natural Language Processing and apply its fundamentals to use cases such as the one shown in this blog.