Unique No:
COS4861 Assignment 3 (2025)
Working Towards Encoding Systems in NLP
Due: 10 September 2025
Question 1 — Theory (12)
1.1 What is a corpus, and how does it differ from other data types? (2)
A corpus is a carefully compiled collection of natural language material, which may include written texts or transcribed spoken language. Its defining characteristic is that it preserves the linguistic structure of the data: tokens, sentences, documents, genres, and metadata such as authorship, date, and register. This makes it distinct from ordinary datasets such as spreadsheets or sensor outputs, which typically store numerical or categorical values without any linguistic structure. A corpus therefore allows researchers to analyse language-specific features such as word frequencies, syntactic patterns, and semantic usage. In this assignment, the dataset used is a small English corpus on smoothing algorithms.
1.2 Technical term for splitting a corpus into paragraphs/sentences/words (1)
The process of dividing text into smaller linguistic units is known as tokenization (splitting into words) and sentence segmentation (marking sentence boundaries). These are crucial preprocessing steps in natural language processing (NLP).
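To make the two operations concrete, here is a minimal sketch using NLTK's sent_tokenize and word_tokenize (the example sentence is invented, and the Punkt sentence model must be downloaded first; depending on the NLTK version the resource is named "punkt" or "punkt_tab"):

# Sentence segmentation followed by word tokenization with NLTK.
import nltk
from nltk.tokenize import sent_tokenize, word_tokenize

nltk.download("punkt", quiet=True)  # sentence-boundary model used by sent_tokenize

text = "Smoothing avoids zero probabilities. Kneser-Ney is widely used."
sentences = sent_tokenize(text)                 # ['Smoothing avoids zero probabilities.', 'Kneser-Ney is widely used.']
tokens = [word_tokenize(s) for s in sentences]  # [['Smoothing', 'avoids', 'zero', 'probabilities', '.'], ...]
print(sentences)
print(tokens)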
1.3 Define N-grams and give peer-reviewed references (2)
An N-gram is a contiguous sequence of N linguistic items (words, characters, or subwords) drawn from a given text. N-gram language models compute conditional probabilities of the form:

P(w_i | w_{i-N+1}^{i-1})

where the likelihood of a word is estimated given the preceding sequence. Early
foundational research, such as Brown et al. (1992), introduced class-based N-gram
models, while Chen & Goodman (1999) provided a systematic evaluation of smoothing
methods, highlighting the critical role of N-grams in statistical language modelling.
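To make the definition concrete, the short sketch below (plain Python; the example sentence and the ngrams helper are invented for illustration, and NLTK offers a comparable nltk.util.ngrams utility) extracts bigrams from a token sequence and counts them:

from collections import Counter

def ngrams(tokens, n):
    """Return all contiguous n-grams in a token sequence as tuples."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

tokens = "the cat sat on the mat".split()
bigrams = ngrams(tokens, 2)   # [('the', 'cat'), ('cat', 'sat'), ('sat', 'on'), ...]
print(Counter(bigrams))       # counts later used to estimate P(w_i | w_{i-1})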
1.4 Data sparseness in N-gram models; what is smoothing? Name two algorithms (7)
Language data is inherently sparse: the number of possible word sequences is vast, so many valid N-grams will never occur in any finite training set. Under Maximum Likelihood Estimation (MLE), these unseen N-grams receive a probability of zero, while frequent N-grams dominate the distribution. This is the data sparsity problem, and it reduces the reliability of predictions.
Smoothing is a strategy that redistributes probability mass to unseen or rare N-grams, thereby avoiding zero-probability estimates and improving generalisation.
Two widely used smoothing techniques are:
- Katz Back-off: discounts the counts of observed N-grams and, when higher-order contexts are missing, "backs off" to lower-order N-gram estimates (a simplified sketch of this idea follows below).
- Modified Kneser–Ney: combines absolute discounting with continuation probabilities, making it one of the most robust and widely adopted smoothing methods.
An additional well-known method is Good–Turing discounting, which re-estimates the probabilities of rare events to better account for unseen sequences.
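As a rough illustration of the back-off idea referenced above (not the full Katz algorithm, which combines Good–Turing discounting with a properly normalised back-off weight), the following pure-Python sketch subtracts a fixed absolute discount from seen bigram counts and falls back to unigram relative frequencies for unseen bigrams; the toy corpus, the backoff_prob name, and the discount of 0.75 are all invented for illustration:

from collections import Counter

tokens = "the cat sat on the mat the cat ate".split()  # toy training data

unigram_counts = Counter(tokens)
bigram_counts = Counter(zip(tokens, tokens[1:]))
total_tokens = len(tokens)
DISCOUNT = 0.75  # fixed absolute discount subtracted from every seen bigram count

def backoff_prob(prev, word):
    """Discounted bigram estimate if the bigram was seen; otherwise the reserved
    probability mass is spread over words in proportion to their unigram frequency."""
    history_count = unigram_counts[prev]
    if history_count == 0:                        # unseen history: back off entirely
        return unigram_counts[word] / total_tokens
    pair_count = bigram_counts[(prev, word)]
    if pair_count > 0:                            # seen bigram: discounted estimate
        return (pair_count - DISCOUNT) / history_count
    seen_continuations = sum(1 for (h, _) in bigram_counts if h == prev)
    reserved_mass = DISCOUNT * seen_continuations / history_count
    return reserved_mass * unigram_counts[word] / total_tokens

print(backoff_prob("the", "cat"))  # seen bigram: (2 - 0.75) / 3
print(backoff_prob("the", "ate"))  # unseen bigram: small but non-zero

Unlike real Katz back-off, this sketch does not renormalise the lower-order distribution, so the estimates are only indicative of how probability mass is reserved for unseen events.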
Question 2 — Applications & Code Concepts (13)
2.1 How MLE causes data sparseness issues in unsmoothed N-grams (3)
In MLE, the probability of a word given a context is calculated as:

P(w_i | h) = C(h, w_i) / C(h)

where C(h, w_i) is the count of the history-word pair and C(h) is the count of the history.
If a valid word combination does not appear in the training corpus (C(h, w_i) = 0), its
probability is set to zero. Because natural language has a long-tail distribution with
many rare events, this results in frequent zero probabilities, reducing predictive
accuracy and inflating perplexity.
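The sketch below makes the point concrete (plain Python; the toy sentences are invented): an unsmoothed bigram MLE model assigns zero probability to any bigram absent from training, even when the word sequence is perfectly plausible.

from collections import Counter

train_tokens = "the model saw the data and the model learned".split()  # toy corpus

unigrams = Counter(train_tokens)
bigrams = Counter(zip(train_tokens, train_tokens[1:]))

def mle_bigram_prob(prev, word):
    """Unsmoothed MLE: C(h, w) / C(h); zero for any bigram unseen in training."""
    if unigrams[prev] == 0:
        return 0.0
    return bigrams[(prev, word)] / unigrams[prev]

print(mle_bigram_prob("the", "model"))   # seen in training: 2/3
print(mle_bigram_prob("the", "corpus"))  # plausible but unseen: 0.0 (the sparsity problem)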