What is the role of temperature in the decoding process of a Large Language Model (LLM)? -
Answer✅
To adjust the sharpness of the probability distribution over the vocabulary when selecting the next word
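For illustration, a minimal sketch (plain NumPy, toy logits; the function name and values are made up) of how dividing the logits by the temperature sharpens or flattens the next-token distribution:
```python
import numpy as np

def softmax_with_temperature(logits, temperature=1.0):
    """Scale logits by 1/temperature before the softmax.

    Lower temperature -> sharper (more peaked) distribution;
    higher temperature -> flatter (more random) distribution.
    """
    scaled = np.array(logits, dtype=float) / temperature
    scaled -= scaled.max()                 # subtract max for numerical stability
    probs = np.exp(scaled)
    return probs / probs.sum()

logits = [2.0, 1.0, 0.5]                                  # toy next-token scores
print(softmax_with_temperature(logits, temperature=0.5))  # sharper distribution
print(softmax_with_temperature(logits, temperature=2.0))  # flatter distribution
```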
Quiz_________________?
Which statement accurately reflects the differences between these approaches in terms of
the number of parameters modified and the type of data used? -
Answer✅
Fine-tuning modifies all parameters using labeled, task-specific data, whereas Parameter-Efficient Fine-Tuning updates a small number of new parameters, also with labeled, task-specific data.
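A schematic sketch of the difference, with a single linear layer standing in for the pretrained model and a small new module standing in for the PEFT parameters (PyTorch assumed; not a real LoRA/adapter implementation):
```python
import torch.nn as nn

base_model = nn.Linear(768, 768)          # toy stand-in for a pretrained LLM layer

# Full fine-tuning: every pretrained weight stays trainable.
full_ft_params = [p for p in base_model.parameters() if p.requires_grad]

# PEFT-style: freeze the base model, train only a small set of new parameters.
for p in base_model.parameters():
    p.requires_grad = False
adapter = nn.Linear(768, 8)               # small new trainable module (illustrative)
peft_params = [p for p in adapter.parameters() if p.requires_grad]

print(sum(p.numel() for p in full_ft_params))   # all base weights
print(sum(p.numel() for p in peft_params))      # only the new adapter weights
```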
Quiz_________________?
What is prompt engineering in the context of Large Language Models (LLMs)? -
Answer✅
Iteratively refining the ask to elicit a desired response
Quiz_________________?
What does the term "hallucination" refer to in the context of Large Language Models (LLMs)? -
Answer✅
The phenomenon where the model generates factually incorrect information or unrelated
content as if it were true
Quiz_________________?
What does in-context learning in Large Language Models involve? -
Answer✅
Conditioning the model with task-specific instructions or demonstrations
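A hypothetical prompt showing what such conditioning looks like; the task, wording, and expected continuation are illustrative only:
```python
# In-context learning: the task is specified entirely through an instruction
# plus demonstrations in the prompt, with no weight updates to the model.
prompt = (
    "Classify the sentiment of each review as Positive or Negative.\n"
    "Review: The battery died within a week. Sentiment: Negative\n"
    "Review: Crisp screen and great sound. Sentiment: Positive\n"
    "Review: Shipping was slow and the box arrived damaged. Sentiment:"
)
# The model is expected to continue with " Negative" based only on the
# instruction and demonstrations embedded in the prompt.
print(prompt)
```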
Quiz_________________?
What happens if a period (.) is used as a stop sequence in text generation? -
Answer✅
The model stops generating text after it reaches the end of the first sentence, even if the
token limit is much higher.
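A toy decoding loop (all names invented) showing why a "." stop sequence ends generation at the first sentence regardless of the token limit:
```python
def generate_with_stop(generate_token, stop_sequence=".", max_tokens=100):
    """Toy decoding loop: stop as soon as the stop sequence appears, even if
    max_tokens has not been reached. `generate_token` is a stand-in for the
    model's next-token sampler."""
    output = ""
    for _ in range(max_tokens):
        token = generate_token(output)
        output += token
        if stop_sequence in output:
            break                          # first sentence ends generation
    return output

# Toy "model" that emits a fixed sequence of tokens.
stream = iter(["The", " sky", " is", " blue", ".", " It", " is", " vast", "."])
print(generate_with_stop(lambda _: next(stream)))  # -> "The sky is blue."
```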
Quiz_________________?
What is the main advantage of using few-shot prompting to customize a Large Language Model (LLM)? -
Answer✅
It provides examples in the prompt to guide the LLM to better performance with no training
cost.
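A small sketch of assembling a few-shot prompt from worked examples; the examples and query are invented, and no model call or training step is involved, which is exactly why the customization cost is only prompt tokens:
```python
# Few-shot prompting: worked examples are placed in the prompt itself,
# so the model is guided without any gradient update or fine-tuning run.
examples = [
    ("Translate to French: cheese", "fromage"),
    ("Translate to French: bread", "pain"),
]
query = "Translate to French: apple"

few_shot_prompt = "\n".join(f"{q}\n{a}" for q, a in examples) + f"\n{query}\n"
print(few_shot_prompt)
```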
Quiz_________________?
What is the purpose of frequency penalties in language model outputs? -
Answer✅
To penalize tokens that have already appeared, based on the number of times they have
been used
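One plausible way a frequency penalty could be applied, sketched in plain NumPy with a toy vocabulary; the exact formula varies between implementations:
```python
from collections import Counter
import numpy as np

def apply_frequency_penalty(logits, generated_token_ids, penalty=0.5):
    """Subtract penalty * count from the logit of every token that has already
    appeared, so repeated tokens become progressively less likely."""
    counts = Counter(generated_token_ids)
    penalized = np.array(logits, dtype=float)
    for token_id, count in counts.items():
        penalized[token_id] -= penalty * count
    return penalized

logits = [1.2, 0.8, 0.3, 0.1]          # toy vocabulary of 4 tokens
history = [0, 0, 2]                    # token 0 used twice, token 2 once
print(apply_frequency_penalty(logits, history))
# token 0's logit drops by 1.0, token 2's by 0.5; unused tokens are unchanged
```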
Quiz_________________?