Exam : AIF-C01
Title : AWS Certified AI Practitioner
1.An AI practitioner trained a custom model on Amazon Bedrock by using a training dataset that contains
confidential data. The AI practitioner wants to ensure that the custom model does not generate inference
responses based on confidential data.
How should the AI practitioner prevent responses based on confidential data?
A. Delete the custom model. Remove the confidential data from the training dataset. Retrain the custom
model.
B. Mask the confidential data in the inference responses by using dynamic data masking.
C. Encrypt the confidential data in the inference responses by using Amazon SageMaker.
D. Encrypt the confidential data in the custom model by using AWS Key Management Service (AWS
KMS).
Answer: A
Explanation:
When a model is trained on a dataset containing confidential or sensitive data, the model may
inadvertently learn patterns from this data, which could then be reflected in its inference responses. To
ensure that a model does not generate responses based on confidential data, the most effective
approach is to remove the confidential data from the training dataset and then retrain the model.
Explanation of Each Option:
Option A (Correct): "Delete the custom model. Remove the confidential data from the training dataset. Retrain the custom model." This option is correct because it directly addresses the core issue: the model has been trained on confidential data. The only way to ensure that the model does not produce inferences based on this data is to remove the confidential information from the training dataset and then retrain the model from scratch. Deleting the model, scrubbing the dataset, and retraining ensures that no confidential data is learned or retained by the model. This approach follows the best practices recommended by AWS for handling sensitive data when using machine learning services such as Amazon Bedrock.
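The "remove, then retrain" step above hinges on reliably filtering confidential records out of the training dataset before the new customization job runs. A minimal sketch of that filtering step is below; the record schema (`prompt`/`completion` fields) and the list of confidential markers are illustrative assumptions, and a real project would define them per its own data classification policy.

```python
def scrub_training_data(records, confidential_markers=("ssn", "account_number")):
    """Drop any record that mentions a confidential field.

    The marker names here are hypothetical examples; the point is that
    tainted records are excluded entirely rather than masked, so the
    retrained model never sees them.
    """
    clean = []
    for record in records:
        text = (record.get("prompt", "") + " " + record.get("completion", "")).lower()
        if any(marker in text for marker in confidential_markers):
            continue  # exclude the record entirely
        clean.append(record)
    return clean

# Example: the second record references an account number and is removed.
dataset = [
    {"prompt": "Summarize the quarterly report.", "completion": "Revenue grew 8%."},
    {"prompt": "What is the customer's account_number?", "completion": "12345"},
]
print(len(scrub_training_data(dataset)))  # 1
```

After scrubbing, the cleaned dataset would be uploaded and a fresh customization job started (for example via the Bedrock `CreateModelCustomizationJob` API), replacing the deleted model.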
Option B: "Mask the confidential data in the inference responses by using dynamic data masking." This option is incorrect because dynamic data masking is typically used to mask or obfuscate sensitive data in a database. It does not address the core problem of the model being trained on confidential data. Masking data in inference responses does not prevent the model from using confidential data it learned during training.
Option C: "Encrypt the confidential data in the inference responses by using Amazon SageMaker." This option is incorrect because encrypting the inference responses does not prevent the model from generating outputs based on confidential data. Encryption only secures the data at rest or in transit but does not affect the model's underlying knowledge or training process.
Option D: "Encrypt the confidential data in the custom model by using AWS Key Management Service (AWS KMS)." This option is incorrect as well because encrypting the data within the model does not prevent the model from generating responses based on the confidential data it learned during training. AWS KMS can encrypt data, but it does not modify the learning that the model has already performed.
AWS AI Practitioner Reference:
Data Handling Best Practices in AWS Machine Learning: AWS advises practitioners to
carefully handle training data, especially when it involves sensitive or confidential information. This
includes preprocessing steps like data anonymization or removal of sensitive data before using it to train
machine learning models.
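As a companion to removal, the anonymization preprocessing mentioned above can be sketched as simple redaction of personally identifiable values before data reaches a training job. The regex patterns below are toy assumptions for illustration only; a production pipeline would more likely rely on a dedicated PII-detection service (such as Amazon Comprehend) than on ad-hoc expressions.

```python
import re

# Illustrative patterns only; real PII detection is far broader than this.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def anonymize(text):
    """Replace each detected PII value with a bracketed type label."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(anonymize("Contact jane@example.com, SSN 123-45-6789."))
# Contact [EMAIL], SSN [SSN].
```

Unlike masking at inference time (Option B), this transformation happens before training, so the model never observes the underlying confidential values.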
Amazon Bedrock and Model Training Security: Amazon Bedrock provides foundational models and
customization capabilities, but any training involving sensitive data should follow best practices, such as