Before tokenizing text, it is important to understand the NLTK package and its usage in Python. The concept of tokenization also needs to be understood. Let us begin by understanding usage of NLTK and its significance.
NLTK stands for Natural Language Toolkit, one of the most widely used NLP libraries in Python. NLP (Natural Language Processing) is a field concerned with manipulating and analysing text or speech with software. It identifies patterns based on the context in which statements appear.
NLTK is a Python package that helps in dealing with textual data. It bundles several text-processing libraries for classification, stemming, tokenization, tagging, parsing and semantic reasoning.
We know that machines ultimately represent any data as 1s and 0s. When a statement is provided as input, each word in the sentence is typically converted into a numeric word vector, often derived from the surrounding words.
At the time of writing, NLTK supports Python versions 2.7 and 3.5 through 3.7. It can be installed by typing the following command in the command line:
pip install nltk
To check whether the ‘nltk’ module has been installed successfully, go to your IDE and type the following line:
import nltk
If this line gets executed without any errors, it means the ‘nltk’ package was installed successfully.
Terminologies associated with NLP
Tokenization
Tokenization is the process of splitting text into a list of smaller units, and these units are known as ‘tokens’. There are different ways of tokenizing data. Some of them are discussed below:
Sentence tokenization
This is the process of splitting the sentences of a paragraph into separate statements. Let us look at how this works in Python. The ‘sent_tokenize’ function is used to split text into sentences. Internally it uses a ‘PunktSentenceTokenizer’ instance from the ‘nltk.tokenize.punkt’ module. This tokenizer has been pre-trained on a large corpus, and hence knows how to determine where a sentence begins and ends, distinguishing sentence-ending punctuation from other uses of the same characters.
from nltk.tokenize import sent_tokenize
text = "Hello everyone. Welcome to NLP and the NLTK module introduction"
sent_tokenize(text)
Output:
['Hello everyone.', 'Welcome to NLP and the NLTK module introduction']
Word tokenization
This refers to splitting a sentence into its individual words.
from nltk.tokenize import word_tokenize
text = "Hello everyone. Welcome to NLP and the NLTK module introduction"
word_tokenize(text)
Output:
['Hello', 'everyone', '.', 'Welcome', 'to', 'NLP', 'and', 'the', 'NLTK', 'module', 'introduction']
In this post, we looked at the significance of NLTK and NLP, and at how sentences and words can be tokenized in Python.