COS4861
Assignment 3 (2025)
Unique No:
Working Towards Encoding Systems in NLP
Due: 10 September 2025

Question 1 — Theory (12)

1.1 What is a corpus, and how does it differ from other data types? (2)

A corpus is a carefully compiled collection of natural language material, which may include written texts or transcribed spoken language. Its defining characteristic is that it preserves the linguistic structure of the data: tokens, sentences, documents, genres, and metadata such as authorship, date, and register. This makes it distinct from ordinary datasets such as spreadsheets or sensor outputs, which consist mainly of numerical or categorical records with no linguistic structure. A corpus therefore allows researchers to analyse language-specific features such as word frequencies, syntactic patterns, and semantic usage. In this assignment, the dataset used is a small English corpus on smoothing algorithms.
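
To make the distinction concrete, here is a minimal, hypothetical sketch (the documents and metadata are invented for illustration): a corpus keeps text together with its metadata, which supports linguistic analysis such as word-frequency counting rather than purely numeric summaries.

```python
from collections import Counter

# Hypothetical toy corpus: raw text plus the metadata a corpus typically preserves.
corpus = [
    {"text": "Smoothing assigns probability mass to unseen n-grams.",
     "author": "unknown", "year": 2025, "register": "lecture notes"},
    {"text": "Under MLE, unseen n-grams receive zero probability.",
     "author": "unknown", "year": 2025, "register": "lecture notes"},
]

# Unlike a purely numerical dataset, a corpus supports linguistic analysis,
# e.g. word-frequency counts over its tokens.
tokens = [w.strip(".,").lower() for doc in corpus for w in doc["text"].split()]
print(Counter(tokens).most_common(5))
```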

1.2 Technical term for splitting a corpus into paragraphs/sentences/words (1)

The process of dividing text into smaller linguistic units is known as tokenization (splitting into words) and sentence segmentation (marking sentence boundaries). These are crucial preprocessing steps in natural language processing (NLP).
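
A minimal sketch of both steps, using simple regular expressions rather than any particular NLP toolkit (the sample text is invented):

```python
import re

text = "Smoothing helps. It redistributes probability mass to unseen n-grams."

# Sentence segmentation: split on sentence-final punctuation followed by whitespace.
sentences = re.split(r"(?<=[.!?])\s+", text)

# Tokenization: split each sentence into word tokens.
tokens = [re.findall(r"\w+(?:-\w+)*", s.lower()) for s in sentences]

print(sentences)  # ['Smoothing helps.', 'It redistributes probability mass to unseen n-grams.']
print(tokens)     # [['smoothing', 'helps'], ['it', 'redistributes', ...]]
```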

1.3 Define N-grams and give peer-reviewed references (2)

An N-gram is a contiguous sequence of N linguistic items (such as words, characters, or subwords) drawn from a given text. N-gram language models compute conditional probabilities of the form:

$$P(w_i \mid w_{i-N+1}^{i-1})$$

where the likelihood of a word is estimated given the preceding sequence. Early
foundational research, such as Brown et al. (1992), introduced class-based N-gram
models, while Chen & Goodman (1999) provided a systematic evaluation of smoothing
methods, highlighting the critical role of N-grams in statistical language modelling.
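
As an illustrative sketch (the token list is invented), extracting the N-grams that such a model conditions on is a simple sliding-window operation:

```python
def ngrams(tokens, n):
    """Return the contiguous n-grams (as tuples) in a token sequence."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

tokens = ["smoothing", "helps", "n-gram", "language", "models"]
print(ngrams(tokens, 2))  # bigrams: ('smoothing', 'helps'), ('helps', 'n-gram'), ...
print(ngrams(tokens, 3))  # trigrams: ('smoothing', 'helps', 'n-gram'), ...
```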

1.4 Data sparseness in N-gram models; what is smoothing? Name two algorithms (7)

Language data is inherently sparse because the number of possible word sequences is vast, and many valid N-grams will not occur in a training set. Under Maximum Likelihood Estimation (MLE), unseen N-grams are assigned a probability of zero, while frequent N-grams dominate the distribution. This leads to the data sparsity problem and reduces the reliability of predictions.

Smoothing is a strategy that redistributes probability mass to unseen or rare N-grams,
thereby avoiding zero-probability estimates and improving generalisation.

Two widely used smoothing techniques are:

- Katz back-off: discounts the counts of observed N-grams and, when higher-order contexts are missing, “backs off” to lower-order N-gram estimates.
- Modified Kneser–Ney: combines absolute discounting with continuation probabilities, making it one of the most robust and widely adopted smoothing methods.

An additional well-known method is Good–Turing discounting, which re-estimates
probabilities of rare events to better account for unseen sequences.
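
Neither Katz back-off nor modified Kneser–Ney is reproduced here, but the simpler add-one (Laplace) smoothing sketch below, on an invented toy corpus, illustrates the shared principle: probability mass is shifted from observed to unseen N-grams so that no estimate is exactly zero.

```python
from collections import Counter

tokens = ["the", "cat", "sat", "on", "the", "mat"]  # invented toy corpus
bigrams = Counter(zip(tokens, tokens[1:]))
history_counts = Counter(tokens[:-1])
V = len(set(tokens))  # vocabulary size

def p_laplace(history, word):
    # Add-one smoothing: every bigram count is inflated by 1, so unseen
    # history-word pairs receive a small but non-zero probability.
    return (bigrams[(history, word)] + 1) / (history_counts[history] + V)

print(p_laplace("the", "cat"))  # seen bigram
print(p_laplace("cat", "the"))  # unseen bigram: still non-zero
```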

Question 2 — Applications & Code Concepts (13)

2.1 How MLE causes data sparseness issues in unsmoothed N-grams (3)

In MLE, the probability of a word given a context is calculated as:

$$\hat{P}(w_i \mid h) = \frac{C(h, w_i)}{C(h)}$$

where C(h, w_i) is the count of the history–word pair and C(h) is the count of the history.
If a valid word combination does not appear in the training corpus (C(h, w_i) = 0), its
probability is set to zero. Because natural language has a long-tail distribution with
many rare events, this results in frequent zero probabilities, reducing predictive
accuracy and inflating perplexity.
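
A short hypothetical sketch of this effect (the toy training corpus is invented): the MLE estimate for a seen bigram is reasonable, but any valid combination missing from training gets exactly zero.

```python
from collections import Counter

tokens = ["the", "cat", "sat", "on", "the", "mat"]  # invented training corpus
bigram_counts = Counter(zip(tokens, tokens[1:]))    # C(h, w_i)
history_counts = Counter(tokens[:-1])               # C(h)

def p_mle(history, word):
    c_h = history_counts[history]
    return bigram_counts[(history, word)] / c_h if c_h else 0.0

print(p_mle("the", "cat"))  # 0.5 -- "the cat" occurs once, "the" occurs twice as a history
print(p_mle("the", "dog"))  # 0.0 -- a perfectly valid bigram that never appeared in training
```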
