IBM watsonx Generative AI Engineer - Associate
1.In the context of IBM Watsonx and generative AI models, you are tasked with
designing a model that needs to classify customer support tickets into different
categories. You decide to experiment with both zero-shot and few-shot prompting
techniques.
Which of the following best explains the key difference between zero-shot and few-
shot prompting?
A. Zero-shot prompting does not use any examples in the input prompt, while few-
shot prompting includes a few examples to guide the model.
B. Zero-shot prompting provides the model with a few example tasks to help it
understand the problem, while few-shot prompting provides no examples at all.
C. In zero-shot prompting, the model learns from a large number of examples during
the inference stage, while in few-shot prompting, only a single example is used.
D. Few-shot prompting is used only for training the model, while zero-shot prompting
is used only for inference tasks.
Answer: A
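The contrast in option A is easiest to see as literal prompt strings. A minimal sketch in plain Python (the ticket text and category names are invented for illustration):

```python
# Zero-shot: the task is described, but no worked examples are given.
zero_shot = (
    "Classify the following support ticket as Billing, Technical, or Account.\n"
    "Ticket: I was charged twice this month.\n"
    "Category:"
)

# Few-shot: the same task, preceded by a few labeled examples that guide the model.
few_shot = (
    "Classify each support ticket as Billing, Technical, or Account.\n"
    "Ticket: My app crashes on startup. -> Technical\n"
    "Ticket: Please reset my password. -> Account\n"
    "Ticket: I was charged twice this month. ->"
)

# The only structural difference is the presence of labeled examples in the prompt.
print("labeled markers in zero-shot prompt:", zero_shot.count("->"))
print("labeled markers in few-shot prompt:", few_shot.count("->"))
```

Neither variant changes the model itself; the difference lives entirely in the input text.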
2.In prompt engineering, prompt variables are used to make your prompts more
dynamic and reusable.
Which of the following statements best describes a key benefit of using prompt
variables in IBM Watsonx Generative AI?
A. Prompt variables eliminate the need to change model parameters every time you
generate a new response.
B. Prompt variables automatically improve the accuracy of responses by reducing
model variance.
C. Prompt variables ensure that the AI's response format will always be consistent,
regardless of the input data.
D. Prompt variables allow a single prompt template to handle multiple data points or
scenarios by inserting different values.
Answer: D
3.You are working on a project where the AI model needs to generate personalized
customer support responses based on various input fields like customer name, issue
type, and product details. To make the system scalable and flexible, you decide to
use prompt variables in your implementation.
Which of the following statements accurately describe the benefits of using prompt
variables in this scenario? (Select two)
A. Prompt variables improve the model's performance by optimizing its internal
architecture, reducing
computation time for each request.
B. Prompt variables reduce redundancy by allowing dynamic inputs to be injected into
a single prompt template, improving scalability.
C. Using prompt variables allows the model to dynamically adjust its output based on
context, without requiring multiple task-specific prompts.
D. Prompt variables eliminate the need for fine-tuning the model on specific tasks
since they allow on-the-fly customization of responses.
E. Prompt variables require a complete re-training of the model whenever a new
variable is introduced, which can be time-consuming.
Answer: B,C
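Options B and C can be sketched with an ordinary string template standing in for a prompt template; watsonx Prompt Lab exposes a similar `{variable}` placeholder mechanism. The field names and ticket data below are illustrative only:

```python
# One template with several prompt variables covers every customer (option B);
# the injected context steers each response without task-specific prompts (option C).
template = (
    "You are a support assistant. Write a short reply to {customer_name}, "
    "who reports a {issue_type} issue with {product}."
)

tickets = [
    {"customer_name": "Ana", "issue_type": "billing", "product": "Cloud Pak"},
    {"customer_name": "Ravi", "issue_type": "login", "product": "watsonx.ai"},
]

# The same template serves any number of data points.
prompts = [template.format(**t) for t in tickets]
for p in prompts:
    print(p)
```

Note what this does not do: it never touches the model's weights or architecture, which is why options A, D, and E are wrong.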
4.You are tasked with designing an AI prompt to extract specific data from
unstructured text. You decide to use either a zero-shot or a few-shot prompting
technique with an IBM Watsonx model.
Which of the following statements best describes the key difference between zero-
shot and few-shot prompting?
A. Zero-shot prompting provides the model with examples, while few-shot prompting
does not.
B. Zero-shot prompting requires no examples in the prompt, while few-shot prompting
provides the model with one or more examples to clarify the task.
C. Few-shot prompting is used when the model is trained on supervised learning,
while zero-shot prompting works only with unsupervised models.
D. Zero-shot prompting requires retraining the model with additional data, while few-
shot prompting uses a pre-trained model without retraining.
Answer: B
5.You are building a chatbot using a generative AI model for a medical advice
platform. During testing, you notice that the model occasionally generates medical
information that contradicts established guidelines. This is an example of a model
hallucination.
Which prompt engineering technique would best mitigate the risk of hallucination in
this scenario?
A. Implementing zero-shot learning techniques
B. Providing a list of credible sources in the prompt
C. Using more open-ended prompts
D. Increasing the model's temperature parameter
Answer: B
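Option B works by grounding: the prompt carries trusted reference text and instructs the model to answer only from it. A minimal sketch, where the guideline excerpt and wording are invented for illustration and are not real medical guidance:

```python
# Illustrative guideline snippet; in practice this would come from a vetted,
# credible source rather than being hard-coded.
guidelines = (
    "Guideline excerpt (illustrative): a fever above the stated threshold "
    "lasting more than 48 hours warrants medical review."
)

question = "When should an adult see a doctor about a fever?"

# Constrain the model to the supplied source to reduce hallucination.
prompt = (
    "Answer the question using ONLY the reference text below. "
    "If the answer is not in the reference, say you do not know.\n\n"
    f"Reference: {guidelines}\n\n"
    f"Question: {question}\nAnswer:"
)
print(prompt)
```

By contrast, raising temperature (D) or loosening the prompt (C) makes ungrounded output more likely, not less.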
6.Your team has developed an AI model that generates automated legal documents
based on user inputs. The client, a large law firm, wants to deploy this model but has
stringent security, compliance, and auditability requirements due to the sensitive
nature of the data.
What is the most appropriate deployment strategy to meet these specific
requirements?
A. Deploy the model on a hybrid cloud, with inference done on the client’s on-
premise servers and training done in the public cloud.
B. Deploy the model on a public cloud with built-in encryption and use APIs to
connect to the client’s private data.
C. Deploy the model using a serverless architecture to minimize operational overhead
and maintain compliance.
D. Use a private cloud with role-based access controls (RBAC) and ensure model
activity is logged for auditing purposes.
Answer: D
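A thin sketch of what option D implies at the application layer: every inference call passes a role check and leaves an audit record. The role names and policy here are illustrative; a real deployment would use the platform's IAM and logging services rather than in-process structures.

```python
from datetime import datetime, timezone

# Illustrative RBAC policy and in-memory audit trail.
ALLOWED_ROLES = {"paralegal", "attorney"}
audit_log = []

def generate_document(user, role, doc_type):
    """Run an inference call only for permitted roles, logging every attempt."""
    allowed = role in ALLOWED_ROLES
    audit_log.append({
        "time": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "role": role,
        "doc_type": doc_type,
        "allowed": allowed,
    })
    if not allowed:
        raise PermissionError(f"role {role!r} may not generate documents")
    return f"<draft {doc_type} for {user}>"  # stand-in for the model call

print(generate_document("ana", "attorney", "NDA"))
print(len(audit_log), "audit entries")
```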
7.Your team is responsible for deploying a generative AI system that will interact with
customers through automated chatbots. To improve the quality and consistency of
responses across different queries and customer profiles, the team has developed
several prompt templates. These templates aim to standardize input to the model,
ensuring that outputs are aligned with business objectives. However, the team is
debating whether using these prompt templates will provide tangible benefits in the
deployment.
What is the primary benefit of deploying prompt templates in this AI system?
A. Reducing the overall inference time by streamlining the input-output process for the
model, ensuring faster responses.
B. Improving the scalability of the system by allowing the model to handle more
diverse inputs without requiring additional fine-tuning.
C. Enhancing the model’s ability to generalize across unseen data by training it
specifically on the variations included in the prompt template.
D. Enabling more predictable and consistent outputs across different inputs, aligning
the model's responses more closely with the business goals.
Answer: D
8.You have applied a set of prompt tuning parameters to a language model and
collected the following statistics: ROUGE-L score, BLEU score, and memory
utilization.
Based on these metrics, how would you prioritize further optimizations to balance the
model’s performance in terms of output relevance and resource efficiency?
A. Maximize BLEU score and reduce memory utilization
B. Reduce memory utilization and maintain BLEU and ROUGE-L scores
C. Focus on improving the ROUGE-L score while increasing memory utilization
D. Increase memory utilization to reduce BLEU and ROUGE-L scores
Answer: B
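For reference, ROUGE-L measures output relevance via the longest common subsequence (LCS) of tokens shared between candidate and reference text. A minimal F1 computation:

```python
def lcs_len(a, b):
    """Length of the longest common subsequence of two sequences."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            dp[i + 1][j + 1] = dp[i][j] + 1 if x == y else max(dp[i][j + 1], dp[i + 1][j])
    return dp[-1][-1]

def rouge_l_f1(candidate, reference):
    """ROUGE-L F1 over whitespace tokens: harmonic mean of LCS precision and recall."""
    c, r = candidate.split(), reference.split()
    lcs = lcs_len(c, r)
    if lcs == 0:
        return 0.0
    precision, recall = lcs / len(c), lcs / len(r)
    return 2 * precision * recall / (precision + recall)

print(rouge_l_f1("the cat sat on the mat", "the cat is on the mat"))
```

BLEU is computed differently (n-gram precision with a brevity penalty), which is why the answer treats the two scores as complementary views of relevance while memory utilization is the separate resource axis.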