Tokenize text using NLTK in Python

Before tokenizing text, it is important to understand the NLTK package and how it is used in Python, as well as the concept of tokenization itself. Let us begin with NLTK and its significance.

What is NLTK? 

NLTK stands for Natural Language Toolkit and is considered one of the most powerful NLP libraries. NLP (Natural Language Processing) is a set of techniques for manipulating and working with text or speech using software. It draws patterns based on the context in which statements are presented.

NLTK is a Python package that helps in working with text data. It bundles multiple text-processing libraries for classification, stemming, tokenization, tagging, parsing and semantic reasoning.

We know that machines convert any data provided to them into 1s and 0s. When a statement is given as input, every word in the sentence can be converted into a word vector based on its surrounding words.

Installation of NLTK 

At the time of writing, NLTK supports Python versions 2.7, 3.5, 3.6 and 3.7. It can be installed by typing the following command on the command line:

pip install nltk 

To check whether the ‘nltk’ module has been installed successfully, go to your IDE and type the following line:

import nltk 

If this line executes without any errors, the ‘nltk’ package was installed successfully.
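
The sentence and word tokenizers used later in this post rely on NLTK's pre-trained 'punkt' models, which are downloaded separately from the package itself. As a minimal sketch (the resource name is 'punkt' in most releases; very recent NLTK versions may additionally require 'punkt_tab'):

import nltk
# One-time download of the pre-trained Punkt tokenizer models
# used by sent_tokenize and word_tokenize
nltk.download('punkt')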

Terminologies associated with NLP 

  • Corpus: The dataset or body of text on which NLP tasks are performed. The plural of ‘corpus’ is ‘corpora’. (A short example of loading a corpus follows this list.)
  • Lexicon: A list of stems and affixes that holds information about the words of the language being used.
  • Token: The result of tokenization; a string of contiguous characters (or integers) treated as a single unit.
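
To make ‘corpus’ and ‘token’ concrete, here is a minimal sketch that loads one of the sample corpora bundled with NLTK (the ‘gutenberg’ corpus is used here purely for illustration and must be downloaded first):

import nltk
from nltk.corpus import gutenberg

nltk.download('gutenberg')                     # one-time download of the sample corpus
print(gutenberg.fileids()[:3])                 # a few texts available in this corpus
print(gutenberg.words('austen-emma.txt')[:8])  # the first few tokens of one text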

What is tokenization? 

It is the process of splitting text into a list of smaller units, and these units are known as ‘tokens’. There are different ways of tokenizing data. Some of them are discussed below:

Sentence tokenization 

This is the process of splitting a paragraph into its separate sentences. Let us look at how this works in Python. The ‘sent_tokenize’ function is used to split text into sentences. It uses a ‘PunktSentenceTokenizer’ instance from the ‘nltk.tokenize.punkt’ module. This tokenizer has been pre-trained on a large body of text, so it knows how to determine where a sentence begins and ends, and can distinguish sentence-ending punctuation from, for example, abbreviations.

from nltk.tokenize import sent_tokenize

text = "Hello everyone. Welcome to NLP and the NLTK module introduction"
# Split the text into a list of sentences
print(sent_tokenize(text))

Output: 

['Hello everyone.', 'Welcome to NLP and the NLTK module introduction']
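
Under the hood, ‘sent_tokenize’ uses a pre-trained English ‘PunktSentenceTokenizer’. If you want to work with that class directly, a minimal sketch looks like this (an instance created without training data falls back on Punkt's default punctuation rules, so results on harder text may differ from the pre-trained model):

from nltk.tokenize.punkt import PunktSentenceTokenizer

text = "Hello everyone. Welcome to NLP and the NLTK module introduction"
tokenizer = PunktSentenceTokenizer()   # untrained instance with default rules
print(tokenizer.tokenize(text))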

Word tokenization 

This refers to splitting a sentence into its individual words.

from nltk.tokenize import word_tokenize

text = "Hello everyone. Welcome to NLP and the NLTK module introduction"
# Split the text into individual word and punctuation tokens
print(word_tokenize(text))

Output: 

['Hello', 'everyone', '.', 'Welcome', 'to', 'NLP', 'and', 'the', 'NLTK', 'module', 'introduction']
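
As noted earlier, there are other ways of tokenizing data. NLTK also ships regular-expression based tokenizers; the sketch below uses ‘wordpunct_tokenize’ and ‘RegexpTokenizer’ (the pattern passed to RegexpTokenizer here is just one illustrative choice):

from nltk.tokenize import wordpunct_tokenize, RegexpTokenizer

text = "Hello everyone. Welcome to NLP and the NLTK module introduction"

# Split into runs of word characters and runs of punctuation
print(wordpunct_tokenize(text))

# Keep only alphanumeric "words", dropping punctuation entirely
tokenizer = RegexpTokenizer(r"\w+")
print(tokenizer.tokenize(text))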

Conclusion 

In this post, we looked at the significance of NLTK and NLP, and at how sentences and words can be tokenized in Python.
