Word tokenization is the process of splitting a large sample of text into words. This is a requirement in natural language processing tasks where each word needs to be captured and subjected to further analysis, such as classifying and counting occurrences for a particular sentiment. The Natural Language Toolkit (NLTK) is a library used to achieve this. Install NLTK before proceeding with the Python program for word tokenization.
conda install -c anaconda nltk
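Note that NLTK's tokenizers rely on pretrained Punkt models, which are downloaded separately from the library itself. A minimal one-time setup sketch is shown below (on some recent NLTK releases the resource is named punkt_tab instead of punkt):

import nltk

# Download the Punkt models used by word_tokenize and sent_tokenize.
nltk.download('punkt')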
Next, we use the word_tokenize method to split the paragraph into individual words.
import nltk

# Split the paragraph into individual word tokens.
word_data = "it originated from the idea that there are readers who prefer learning new skills from the comforts of their drawing rooms"
nltk_tokens = nltk.word_tokenize(word_data)
print(nltk_tokens)
When we execute the above code, it produces the following result:
['it', 'originated', 'from', 'the', 'idea', 'that', 'there', 'are', 'readers', 'who', 'prefer', 'learning', 'new', 'skills', 'from', 'the', 'comforts', 'of', 'their', 'drawing', 'rooms']
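The introduction mentioned counting words as a typical follow-up analysis. As an illustration (not part of the original program), the tokens can be fed into Python's collections.Counter; nltk_tokens is the list produced above:

from collections import Counter

# Count occurrences of each token; 'from' and 'the' each appear twice.
word_counts = Counter(nltk_tokens)
print(word_counts.most_common(3))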
Tokenizing Sentences
We can also tokenize the sentences in a paragraph, just as we tokenized the words. We use the sent_tokenize method to achieve this. Below is an example.
import nltk

# Split the paragraph into individual sentences.
sentence_data = "sun rises in the east. sun sets in the west."
nltk_tokens = nltk.sent_tokenize(sentence_data)
print(nltk_tokens)
When we execute the above code, it produces the following result:
['sun rises in the east.', 'sun sets in the west.']
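The two tokenizers compose naturally: split a paragraph into sentences first, then split each sentence into words. A minimal sketch reusing sentence_data from the example above; note that word_tokenize treats the trailing period as its own token:

import nltk

sentence_data = "sun rises in the east. sun sets in the west."
for sentence in nltk.sent_tokenize(sentence_data):
    # Tokenize each sentence independently into words.
    print(nltk.word_tokenize(sentence))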