Exam (elaborations)

COS4861_Assignment_3_EXPERTLY_DETAILED_ANSWERS_DUE_10_September

Pages: 27
Grade: A+
Uploaded on: 29-08-2025
Written in: 2025/2026

COS4861_Assignment_3_EXPERTLY_DETAILED_ANSWERS_DUE_10_September 2025. 100% solved answers. Stop starting from scratch. Download your copy today and get a head start.


Contains: Questions and answers


Preview of the content

COS4861 Assignment 3 (2025)
Unique No:
Working Towards Encoding Systems in NLP
Due: 10 September 2025

Question 1 — Theory (12)

1.1 What is a corpus, and how does it differ from other data types? (2) A corpus
refers to a carefully compiled collection of natural language material, which may include
written texts or transcribed spoken language. Its defining characteristic is that it
preserves the linguistic structure of the data—tokens, sentences, documents, genres,
and metadata such as authorship, date, and register. This makes it distinct from
ordinary datasets like spreadsheets or sensor outputs, which are primarily numerical. A
corpus allows researchers to analyse language-specific features such as word
frequencies, syntactic patterns, and semantic usage. In this assignment, the dataset
used is a small English corpus on smoothing algorithms.

1.2 Technical term for splitting a corpus into paragraphs/sentences/words (1) The
process of dividing text into smaller linguistic units is known as tokenization (splitting
into words) and sentence segmentation (marking sentence boundaries). These are
crucial preprocessing steps in natural language processing (NLP).
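As a minimal sketch of these two preprocessing steps (a naive regex approach, not tied to any particular NLP toolkit):

```python
import re

def segment_sentences(text):
    # Naive rule: a sentence ends at ., ! or ? followed by whitespace.
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

def tokenize(sentence):
    # Naive word tokenization: runs of word characters, or single
    # punctuation marks, lowercased for counting.
    return re.findall(r"\w+|[^\w\s]", sentence.lower())

text = "Smoothing helps N-gram models. Unseen events get nonzero mass!"
sentences = segment_sentences(text)
tokens = [tokenize(s) for s in sentences]
```

Real tokenizers handle abbreviations, hyphenation, and clitics far more carefully; this sketch only shows where sentence segmentation and tokenization sit in the pipeline.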

1.3 Define N-grams and give peer-reviewed references (2) An N-gram is a
contiguous sequence of N linguistic items—such as words, characters, or subwords—
drawn from a given text. N-gram language models compute conditional probabilities of
the form:

P(w_i | w_{i−N+1}, ..., w_{i−1})

where the likelihood of a word is estimated given the preceding sequence. Early
foundational research, such as Brown et al. (1992), introduced class-based N-gram
models, while Chen & Goodman (1999) provided a systematic evaluation of smoothing
methods, highlighting the critical role of N-grams in statistical language modelling.
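For concreteness, extracting the contiguous sequences themselves takes only a few lines of Python (a generic helper, not drawn from the cited works):

```python
def ngrams(tokens, n):
    # Slide a window of length n over the token list and collect
    # each contiguous sequence as a tuple.
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

tokens = ["the", "cat", "sat", "on", "the", "mat"]
bigrams = ngrams(tokens, 2)    # 5 bigrams
trigrams = ngrams(tokens, 3)   # 4 trigrams
```

A sequence of M tokens yields M − N + 1 N-grams, which is why higher-order models need much more data to observe each context reliably.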

1.4 Data sparseness in N-gram models; what is smoothing? Name two algorithms
(7) Language data is inherently sparse because the number of possible word
sequences is vast, and many valid N-grams will not occur in a training set. Using
Maximum Likelihood Estimation (MLE), unseen N-grams are assigned a probability of
zero, while frequent N-grams dominate the distribution. This leads to the data sparsity
problem, reducing the reliability of predictions.

Smoothing is a strategy that redistributes probability mass to unseen or rare N-grams,
thereby avoiding zero-probability estimates and improving generalisation.

Two widely used smoothing techniques are:

- Katz Back-off: Discounts the counts of observed N-grams and, when higher-order contexts are missing, "backs off" to lower-order N-gram estimates.
- Modified Kneser–Ney: Combines absolute discounting with continuation probabilities, making it one of the most robust and widely adopted smoothing methods.

An additional well-known method is Good–Turing discounting, which re-estimates
probabilities of rare events to better account for unseen sequences.
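The simplest concrete illustration of redistributing probability mass is add-one (Laplace) smoothing; it is far cruder than the methods named above, but it shows the principle in a few lines. The toy corpus and vocabulary below are illustrative assumptions, not the assignment's dataset:

```python
from collections import Counter

def laplace_bigram_prob(bigram_counts, unigram_counts, vocab_size, h, w):
    # Add one pseudo-count to every possible bigram (h, w), so no
    # estimate is ever exactly zero; the denominator grows by V to
    # keep the distribution normalised.
    return (bigram_counts[(h, w)] + 1) / (unigram_counts[h] + vocab_size)

tokens = ["the", "cat", "sat", "on", "the", "mat"]
unigram_counts = Counter(tokens)
bigram_counts = Counter(zip(tokens, tokens[1:]))
vocab_size = len(unigram_counts)  # 5 word types

p_seen = laplace_bigram_prob(bigram_counts, unigram_counts, vocab_size, "the", "cat")
p_unseen = laplace_bigram_prob(bigram_counts, unigram_counts, vocab_size, "the", "sat")
```

The unseen bigram ("the", "sat") now receives 1/7 instead of 0, at the cost of slightly deflating the seen bigram's estimate from 1/2 to 2/7; Katz back-off and Kneser–Ney redistribute mass far less bluntly.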

Question 2 — Applications & Code Concepts (13)

2.1 How MLE causes data sparseness issues in unsmoothed N-grams (3) In MLE,
the probability of a word given a context is calculated as:

P̂(w_i | h) = C(h, w_i) / C(h)

where C(h, w_i) is the count of the history–word pair and C(h) is the count of the history.
If a valid word combination does not appear in the training corpus (C(h, w_i) = 0), its
probability is set to zero. Because natural language has a long-tail distribution with
many rare events, this results in frequent zero probabilities, reducing predictive
accuracy and inflating perplexity.
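The zero-probability effect is easy to demonstrate on a toy corpus (a sketch; the corpus and counts are illustrative, not taken from the assignment's dataset):

```python
from collections import Counter

tokens = ["the", "cat", "sat", "on", "the", "mat"]
unigram_counts = Counter(tokens)
bigram_counts = Counter(zip(tokens, tokens[1:]))

def mle_bigram_prob(h, w):
    # Unsmoothed MLE: C(h, w) / C(h); zero for any pair never observed
    # in training, however plausible it is.
    return bigram_counts[(h, w)] / unigram_counts[h]

p_seen = mle_bigram_prob("the", "cat")    # C(the, cat) = 1, C(the) = 2
p_unseen = mle_bigram_prob("the", "sat")  # never observed together
```

Because any test sentence containing one zero-probability bigram gets probability zero overall, perplexity becomes infinite, which is exactly the failure mode smoothing addresses.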

Seller: AcademicAnchor, University of South Africa (Unisa)