COS4861_Assignment_3_EXPERTLY_DETAILED_ANSWERS_DUE_10_September


COS4861
Assignment 3 (2025)
Working Towards Encoding Systems in NLP
Unique No:
Due: 10 September 2025

Question 1 — Theory (12)

1.1 What is a corpus, and how does it differ from other data types? (2)

A corpus is a carefully compiled collection of natural language material, which may include written texts or transcribed spoken language. Its defining characteristic is that it preserves the linguistic structure of the data: tokens, sentences, documents, genres, and metadata such as authorship, date, and register. This makes it distinct from ordinary datasets such as spreadsheets or sensor outputs, which are primarily numerical. A corpus allows researchers to analyse language-specific features such as word frequencies, syntactic patterns, and semantic usage. In this assignment, the dataset used is a small English corpus on smoothing algorithms.
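To make the contrast with purely numerical datasets concrete, here is a minimal Python sketch of how a small corpus with document-level metadata might be represented. The field names and example texts are illustrative assumptions, not a standard schema.

```python
# A tiny corpus: each entry keeps the raw text plus metadata,
# unlike a purely numerical dataset (e.g. a table of sensor readings).
corpus = [
    {
        "text": "Smoothing redistributes probability mass to unseen N-grams.",
        "author": "unknown",
        "date": "2025",
        "register": "academic",
    },
    {
        "text": "Katz back-off falls back to lower-order estimates.",
        "author": "unknown",
        "date": "2025",
        "register": "academic",
    },
]

# Linguistic structure (here: a crude token count) is recoverable
# from the text itself, which numerical datasets do not preserve.
for doc in corpus:
    print(doc["register"], len(doc["text"].split()), "tokens")
```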

1.2 Technical term for splitting a corpus into paragraphs/sentences/words (1)

The process of dividing text into smaller linguistic units is known as tokenization (splitting into words) and sentence segmentation (marking sentence boundaries). These are crucial preprocessing steps in natural language processing (NLP).
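As a brief illustration, here is a minimal sketch of both steps using plain regular expressions rather than a full NLP library; the splitting rules are deliberately naive assumptions (real segmenters handle abbreviations, numbers, and so on).

```python
import re

text = "Smoothing matters. Unseen N-grams get zero probability! Smoothing fixes this."

# Sentence segmentation: split after terminal punctuation followed by whitespace.
sentences = re.split(r"(?<=[.!?])\s+", text.strip())

# Tokenization: extract lowercase alphabetic tokens from each sentence.
tokens = [re.findall(r"[a-z]+", s.lower()) for s in sentences]

print(sentences)   # ['Smoothing matters.', 'Unseen N-grams get zero probability!', ...]
print(tokens[0])   # ['smoothing', 'matters']
```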

1.3 Define N-grams and give peer-reviewed references (2)

An N-gram is a contiguous sequence of N linguistic items (words, characters, or subwords) drawn from a given text. N-gram language models compute conditional probabilities of the form:

$$P\left(w_i \mid w_{i-N+1}^{i-1}\right)$$

where the likelihood of a word is estimated given the preceding sequence. Early foundational research, such as Brown et al. (1992), introduced class-based N-gram models, while Chen & Goodman (1999) provided a systematic evaluation of smoothing methods, highlighting the critical role of N-grams in statistical language modelling.
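To make the definition concrete, here is a minimal Python sketch that extracts and counts N-grams from a token sequence; the helper name and example sentence are assumptions for illustration only.

```python
from collections import Counter

def ngrams(tokens, n):
    """Return all contiguous n-grams (as tuples) in a token sequence."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

tokens = "the cat sat on the mat".split()

bigrams = ngrams(tokens, 2)
counts = Counter(bigrams)

print(bigrams[:3])             # [('the', 'cat'), ('cat', 'sat'), ('sat', 'on')]
print(counts[('the', 'cat')])  # 1
```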

1.4 Data sparseness in N-gram models; what is smoothing? Name two algorithms (7)

Language data is inherently sparse because the number of possible word sequences is vast, and many valid N-grams will not occur in a training set. Under Maximum Likelihood Estimation (MLE), unseen N-grams are assigned a probability of zero, while frequent N-grams dominate the distribution. This leads to the data sparsity problem, reducing the reliability of predictions.

Smoothing is a strategy that redistributes probability mass to unseen or rare N-grams, thereby avoiding zero-probability estimates and improving generalisation.

Two widely used smoothing techniques are:

- Katz back-off: discounts the counts of observed N-grams and, when higher-order contexts are missing, "backs off" to lower-order N-gram estimates.
- Modified Kneser–Ney: combines absolute discounting with continuation probabilities, making it one of the most robust and widely adopted smoothing methods.

An additional well-known method is Good–Turing discounting, which re-estimates the probabilities of rare events to better account for unseen sequences.
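The algorithms named above are too involved to sketch here, but the core idea of redistributing probability mass can be shown with the much simpler add-one (Laplace) smoother. This is a deliberately simplified stand-in, not an implementation of Katz back-off or Kneser–Ney, and the toy corpus is an assumption.

```python
from collections import Counter

tokens = "the cat sat on the mat the cat ran".split()
bigrams = Counter(zip(tokens, tokens[1:]))
unigrams = Counter(tokens)
V = len(unigrams)  # vocabulary size

def p_mle(w, h):
    """Unsmoothed MLE bigram probability P(w | h)."""
    return bigrams[(h, w)] / unigrams[h]

def p_laplace(w, h):
    """Add-one smoothed bigram probability: every count is inflated by 1."""
    return (bigrams[(h, w)] + 1) / (unigrams[h] + V)

# The unseen bigram ('cat', 'mat') gets zero probability under MLE...
print(p_mle("mat", "cat"))      # 0.0
# ...but a small non-zero probability once mass is redistributed.
print(p_laplace("mat", "cat"))  # 1 / (2 + 6) = 0.125
```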

Question 2 — Applications & Code Concepts (13)

2.1 How MLE causes data sparseness issues in unsmoothed N-grams (3)

In MLE, the probability of a word given a context is calculated as:

$$\hat{P}(w_i \mid h) = \frac{C(h, w_i)}{C(h)}$$

where $C(h, w_i)$ is the count of the history–word pair and $C(h)$ is the count of the history. If a valid word combination does not appear in the training corpus ($C(h, w_i) = 0$), its probability is set to zero. Because natural language has a long-tail distribution with many rare events, this results in frequent zero probabilities, reducing predictive accuracy and inflating perplexity.
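To see how a single unseen bigram breaks evaluation, here is a minimal sketch computing bigram perplexity under unsmoothed MLE; the toy training and test sentences are assumptions, chosen so the test set contains exactly one unseen bigram.

```python
import math
from collections import Counter

train = "the cat sat on the mat".split()
test = "the cat sat on the rug".split()  # 'the rug' never occurs in training

bigrams = Counter(zip(train, train[1:]))
unigrams = Counter(train)

def p_mle(h, w):
    """Unsmoothed MLE bigram probability; 0.0 for unseen histories or pairs."""
    return bigrams[(h, w)] / unigrams[h] if unigrams[h] else 0.0

# Perplexity = exp(-average log-probability); one zero term drives it to infinity.
log_prob = 0.0
for h, w in zip(test, test[1:]):
    p = p_mle(h, w)
    log_prob += math.log(p) if p > 0 else float("-inf")

n = len(test) - 1
perplexity = math.exp(-log_prob / n)
print(perplexity)  # inf: a single unseen bigram makes the model useless for scoring
```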