In this project, we will work with the inaugural corpus from NLTK in Python. We will be looking at the following inaugural addresses of Presidents of the United States of America:
1. President Franklin D. Roosevelt in 1941
2. President John F. Kennedy in 1961
3. President Richard Nixon in 1973
Find the number of characters, words, and sentences in each of the three documents.
Remove all the stopwords from all three speeches.
Which word occurs most frequently in each president's inaugural address? Mention the
top three words (after removing the stopwords).
Plot the word cloud for each of the speeches (after removing the stopwords).
Code snippet to extract the three speeches:

import nltk
nltk.download('inaugural')
from nltk.corpus import inaugural
inaugural.fileids()
inaugural.raw('1941-Roosevelt.txt')
inaugural.raw('1961-Kennedy.txt')
inaugural.raw('1973-Nixon.txt')
Introduction:
NLTK provides everything from splitting paragraphs into sentences and sentences into
words, to identifying parts of speech and highlighting themes, and can even help a
machine understand what a text is about.
Q1. Find the number of characters, words, and sentences in each of the three documents.
Answer: We import the nltk library and use inaugural.fileids() to list the available speeches.
After importing the text files, we first count the total number of characters in each file
separately. Below is the code to count the characters in each file, along with its output.

# Number of characters in each file
# Number of words in each text file
Below, we count the total number of words in each file separately. We use split() to break
the text into words on whitespace, and len() to count the resulting words.
Output: