Question #191 Topic 1
A company collects temperature, humidity, and atmospheric pressure data in cities across multiple continents. The average volume of data collected per site each day is 500 GB. Each site has a high-speed internet connection. The company's weather forecasting applications are based in a single Region and analyze the data daily.
What is the FASTEST way to aggregate data from all of these global sites?
A. Enable Amazon S3 Transfer Acceleration on the destination bucket. Use multipart uploads to
directly upload site data to the destination bucket.
B. Upload site data to an Amazon S3 bucket in the closest AWS Region. Use S3 cross-Region
replication to copy objects to the destination bucket.
C. Schedule AWS Snowball jobs daily to transfer data to the closest AWS Region. Use S3 cross-
Region replication to copy objects to the destination bucket.
D. Upload the data to an Amazon EC2 instance in the closest R - (ANSWER)A
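For reference, option A can be sketched with boto3. This is a minimal illustration only; the bucket name (weather-aggregation-bucket) and file names are hypothetical, and a real upload would batch each site's daily 500 GB:

import boto3
from botocore.config import Config

s3 = boto3.client("s3")

# One-time setup: enable Transfer Acceleration on the destination bucket.
s3.put_bucket_accelerate_configuration(
    Bucket="weather-aggregation-bucket",          # hypothetical bucket name
    AccelerateConfiguration={"Status": "Enabled"},
)

# Upload through the accelerated endpoint; upload_file switches to multipart
# uploads automatically for large objects.
s3_accel = boto3.client("s3", config=Config(s3={"use_accelerate_endpoint": True}))
s3_accel.upload_file(
    "site_data_2024-06-01.tar.gz",                # hypothetical local file
    "weather-aggregation-bucket",
    "site-eu-01/2024-06-01.tar.gz",
)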
Question #192 Topic 1
A company has a custom application running on an Amazon EC2 instance that:
• Reads a large amount of data from Amazon S3
• Performs a multi-stage analysis
• Writes the results to Amazon DynamoDB
The application writes a significant number of large, temporary files during the multi-stage analysis. The process performance depends on the temporary storage performance.
What would be the fastest storage option for holding the temporary files?
A. Multiple Amazon S3 buckets with Transfer Acceleration for storage.
B. Multiple Amazon EBS drives with Provisioned IOPS and EBS optimization.
C. Multiple Amazon EFS volumes using the Network File System version 4.1 (NFSv4.1)
protocol.
D. Multiple instance store volumes with software RAID 0. - (ANSWER)D
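For option D, the instance store volumes would typically be striped into a single RAID 0 array and mounted as scratch space. A rough sketch driving mdadm from Python (device names are hypothetical and vary by instance type; instance store data is lost on stop or terminate, which is acceptable for temporary files):

import os
import subprocess

# Hypothetical NVMe instance store devices; check lsblk on the actual instance.
devices = ["/dev/nvme1n1", "/dev/nvme2n1"]

# Stripe the volumes into one RAID 0 array for maximum throughput.
subprocess.run(
    ["mdadm", "--create", "/dev/md0", "--level=0",
     f"--raid-devices={len(devices)}", *devices],
    check=True,
)

# Format and mount the array as the scratch area for the temporary files.
subprocess.run(["mkfs.xfs", "/dev/md0"], check=True)
os.makedirs("/scratch", exist_ok=True)
subprocess.run(["mount", "/dev/md0", "/scratch"], check=True)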
Question #193 Topic 1
A leasing company generates and emails PDF statements every month for all its customers. Each statement is about 400 KB in size. Customers can download their statements from the website for up to 30 days from when the statements were generated. At the end of their 3-year lease, the customers are emailed a ZIP file that contains all the statements.
What is the MOST cost-effective storage solution for this situation?
A. Store the statements using the Amazon S3 Standard storage class. Create a lifecycle policy to
move the statements to Amazon S3 Glacier storage after 1 day.
B. Store the statements using the Amazon S3 Glacier storage class. Create a lifecycle policy to
move the statements to Amazon S3 Glacier Deep Archive storage after 30 days.
C. Store the statements using the Amazon S3 Standard storage class. Create a lifecycle policy to
move the statements to Amazon S3 One Zone-Infrequent Access (S - (ANSWER)D
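Whichever storage classes the chosen option combines, the mechanism behind it is an S3 lifecycle configuration. A minimal boto3 sketch, assuming for illustration a transition to S3 Glacier once the 30-day download window has passed (bucket name and prefix are hypothetical):

import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="customer-statements",                 # hypothetical bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-after-download-window",
                "Status": "Enabled",
                "Filter": {"Prefix": "statements/"},  # hypothetical prefix
                # Assumed transition: move to Glacier after the 30-day window.
                "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
            }
        ]
    },
)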
Question #194 Topic 1
A company recently released a new type of internet-connected sensor. The company is expecting to sell thousands of sensors, which are designed to stream high volumes of data each second to a central location. A solutions architect must design a solution that ingests and stores data so that engineering teams can analyze it in near-real time with millisecond responsiveness.
Which solution should the solutions architect recommend?
A. Use an Amazon SQS queue to ingest the data. Consume the data with an AWS Lambda
function, which then stores the data in Amazon Redshift.
B. Use an Amazon SQS queue to ingest the data. Consume the data with an AWS Lambda
function, which then stores the data in Amazon DynamoDB.
C. Use Amazon Kinesis Data Streams to ingest the data. Consume the data with an AWS Lambda
function, which then stores the data in Amazon Redshift.
D. Use Amazon Kinesis Data Streams to ingest the d - (ANSWER)D
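For option D, the Lambda consumer receives batches of Kinesis records and writes them to DynamoDB, which serves reads with single-digit-millisecond latency. A minimal handler sketch (the table name sensor-readings and the record fields are hypothetical):

import base64
import json

import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("sensor-readings")         # hypothetical table name

def handler(event, context):
    # Kinesis delivers each record payload base64-encoded.
    with table.batch_writer() as batch:
        for record in event["Records"]:
            payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
            batch.put_item(Item={
                "sensor_id": payload["sensor_id"],    # partition key (assumed)
                "timestamp": payload["timestamp"],    # sort key, ISO-8601 string (assumed)
                "reading": str(payload["reading"]),   # stored as string to avoid float issues
            })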
Question #195 Topic 1
A website runs a web application that receives a burst of traffic each day at noon. The users upload new pictures and content daily, but have been complaining of timeouts. The architecture uses Amazon EC2 Auto Scaling groups, and the custom application consistently takes 1 minute to initiate upon boot-up before responding to user requests.
How should a solutions architect redesign the architecture to better respond to changing traffic?
A. Configure a Network Load Balancer with a slow start configuration.
B. Configure AWS ElastiCache for Redis to offload direct requests to the servers.
C. Configure an Auto Scaling step scaling policy with an instance warmup condition.
D. Configure Amazon CloudFront to use an Application Load Balancer as the origin. -
(ANSWER)C
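For option C, the warmup is attached to the scaling policy so that newly launched instances are not counted against the metric until the application's one-minute startup has completed. A boto3 sketch (the Auto Scaling group name and step thresholds are hypothetical):

import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-app-asg",           # hypothetical ASG name
    PolicyName="noon-burst-step-scaling",
    PolicyType="StepScaling",
    AdjustmentType="ChangeInCapacity",
    EstimatedInstanceWarmup=60,                   # match the 1-minute application startup
    StepAdjustments=[
        # Offsets are relative to the CloudWatch alarm threshold; the returned
        # policy ARN would be attached to that alarm as its action.
        {"MetricIntervalLowerBound": 0, "MetricIntervalUpperBound": 20, "ScalingAdjustment": 1},
        {"MetricIntervalLowerBound": 20, "ScalingAdjustment": 3},
    ],
)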
Question #457 Topic 1
A company is migrating a large, mission-critical database to AWS. A solutions architect has decided to use an Amazon RDS for MySQL Multi-AZ DB instance that is deployed with 80,000 Provisioned IOPS for storage. The solutions architect is using AWS Database Migration Service (AWS DMS) to perform the data migration. The migration is taking longer than expected, and the company wants to speed up the process. The company's network team has ruled out bandwidth as a limiting factor.
Which actions should the solutions architect take to speed up the migration? (Choose two.)
A. Disable Multi-AZ on the target DB instance.
B. Create a new DMS instance that has a larger instance size.
C. Turn off logging on the target DB instance until the initial load is complete.
D. Restart the DMS task on a new DMS instance with transfer acceleration enabled.
E. Change the storage type on the target DB instance to Amazon - (ANSWER)A,C
Why A and C? Turn off backups and transaction logging: when migrating to an Amazon RDS database, it's a good idea to turn off backups and Multi-AZ on the target until you're ready to cut over. Similarly, when migrating to systems other than Amazon RDS, turning off any logging on the target until after cutover is usually a good idea.
https://docs.aws.amazon.com/dms/latest/userguide/CHAP_BestPractices.html#CHAP_BestPractices.Performance
Why not B? The question establishes that "a solutions architect has decided to use an Amazon RDS for MySQL Multi-AZ DB instance that is deployed with 80,000 Provisioned IOPS for storage." Checking the AWS DMS replication instance types available from this link: https://docs.aws.amazon.com/dms/latest/userguide/CHAP_ReplicationInstance.Types.html#CHAP_ReplicationInstance.Types.Deciding we can take a dms.r5.24xlarge, which can give us a maximum of 80,000 IOPS based on this other link:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-optimized.html
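As a concrete illustration of A and C, both changes can be applied to the target instance before the full load and reverted before cutover. A boto3 sketch (the DB instance identifier is hypothetical; on RDS for MySQL, setting the backup retention period to 0 also turns off binary logging):

import boto3

rds = boto3.client("rds")

# Temporarily disable Multi-AZ (A) and automated backups/binary logging (C)
# on the target while the DMS full load runs; re-enable before cutover.
rds.modify_db_instance(
    DBInstanceIdentifier="target-mysql-db",       # hypothetical identifier
    MultiAZ=False,
    BackupRetentionPeriod=0,
    ApplyImmediately=True,
)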
Question #456 Topic 1
A company has deployed a multiplayer game for mobile devices. The game requires live location tracking of players based on latitude and longitude. The data store for the game must support rapid updates and retrieval of locations.
The game uses an Amazon RDS for PostgreSQL DB instance with read replicas to store the location data. During peak usage periods, the database is unable to maintain the performance that is needed for reading and writing updates. The game's user base is increasing rapidly.
What should a solutions architect do to improve the performance of the data tier?
A. Take a snapshot of the existing DB instance. Restore the snapshot with Multi-AZ enabled.
B. Migrate from Amazon RDS to Amazon Elasticsearch Service (Amazon ES) with Kibana.
C. Deploy Amazon DynamoDB Accelerator (DAX) in front of the existing DB instance. Modify
the game to use DAX.
D. Deploy an Amazon ElastiCache for Red - (ANSWER)D
The answer is D: deploy an Amazon ElastiCache for Redis cluster in front of the existing DB instance and modify the game to use Redis. Keywords: "The game requires live location tracking of players based on latitude and longitude" and "must support rapid updates and retrieval of locations."
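To make the Redis fit concrete: Redis has built-in geospatial commands (GEOADD, GEOSEARCH), so location updates and radius lookups are served from memory. A sketch issuing the raw commands through redis-py (the endpoint, key, and player IDs are hypothetical; GEOSEARCH requires Redis 6.2 or later):

import redis

# Hypothetical ElastiCache for Redis endpoint.
r = redis.Redis(host="game-redis.example.cache.amazonaws.com", port=6379)

# Update a player's location: GEOADD key longitude latitude member.
r.execute_command("GEOADD", "player:locations", -122.4194, 37.7749, "player:123")

# Find players within 5 km of a point, closest first.
nearby = r.execute_command(
    "GEOSEARCH", "player:locations",
    "FROMLONLAT", -122.4194, 37.7749,
    "BYRADIUS", 5, "km", "ASC",
)
print(nearby)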
Question #455 Topic 1
A company with a single AWS account runs its internet-facing containerized web application on an Amazon Elastic Kubernetes Service (Amazon EKS) cluster. The EKS cluster is placed in a private subnet of a VPC. System administrators access the EKS cluster through a bastion host on a public subnet.
A new corporate security policy requires the company to avoid the use of bastion hosts. The company also must not allow internet connectivity to the EKS cluster.
Which solution meets these requirements MOST cost-effectively?
A. Set up an AWS Direct Connect connection.
B. Create a transit gateway.
C. Establish a VPN connection.
D. Use AWS Storage Gateway. - (ANSWER) C (a VPN connection; option B, a transit gateway, could also provide the connectivity but is costlier)
https://docs.aws.amazon.com/eks/latest/userguide/cluster-endpoint.html
Accessing a private-only API server: if you have disabled public access for your cluster's Kubernetes API server endpoint, you can only access the API server from within your VPC or a connected network. Here are a few possible ways to access the Kubernetes API server endpoint: Connected network - Connect
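Whichever connectivity option is chosen (a VPN in option C), it is paired with making the cluster's API endpoint private-only. A boto3 sketch of the EKS side (the cluster name is hypothetical; the VPN itself, for example an AWS Client VPN endpoint or a Site-to-Site VPN, would be set up separately):

import boto3

eks = boto3.client("eks")

# Disable the public API endpoint and keep only the private one, so the
# Kubernetes API is reachable solely from within the VPC or a connected network.
eks.update_cluster_config(
    name="web-app-cluster",                       # hypothetical cluster name
    resourcesVpcConfig={
        "endpointPublicAccess": False,
        "endpointPrivateAccess": True,
    },
)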