Latest AWS-Solution-Architect-Associate PDF and Dumps: AWS-Solution-Architect-Associate Exam Questions and Answers (2021)

Prepare for the Amazon AWS Certified Solutions Architect - Associate (SAA-C02) exam with these AWS-Solution-Architect-Associate exam dumps, based on actual AWS-Solution-Architect-Associate exam questions and answers. Includes a premium AWS-Solution-Architect-Associate dumps PDF, an SAA-C02 practice test engine, and verified questions and answers (2021).

Content preview

Amazon
AWS-Solution-Architect-Associate Exam
Amazon AWS Certified Solutions Architect - Associate (SAA-C02) Exam


Questions & Answers (Demo Version)

Buy Full Product Here:
https://itexamquestions.com/product/aws-solution-architect-associate-exam-questions/

Version: 16.0

Question: 1

A 3-tier e-commerce web application is currently deployed on-premises and will be migrated to AWS
for greater scalability and elasticity. The web server tier currently shares read-only data using a
network distributed file system. The app server tier uses a clustering mechanism for discovery and
shared session state that depends on IP multicast. The database tier uses shared-storage clustering
to provide database failover capability, and uses several read slaves for scaling. Data on all servers
and the distributed file system directory is backed up weekly to off-site tapes.
Which AWS storage and database architecture meets the requirements of the application?

A. Web servers: store read-only data in S3, and copy from S3 to root volume at boot time. App
servers: share state using a combination of DynamoDB and IP unicast. Database: use RDS with
multi-AZ deployment and one or more read replicas. Backup: web servers, app servers, and
database backed up weekly to Glacier using snapshots.
B. Web servers: store read-only data in an EC2 NFS server, mount to each web server at boot time.
App servers: share state using a combination of DynamoDB and IP multicast. Database: use RDS
with multi-AZ deployment and one or more Read Replicas. Backup: web and app servers backed up
weekly via AMIs, database backed up via DB snapshots.
C. Web servers: store read-only data in S3, and copy from S3 to root volume at boot time. App
servers: share state using a combination of DynamoDB and IP unicast. Database: use RDS with
multi-AZ deployment and one or more Read Replicas. Backup: web and app servers backed up
weekly via AMIs, database backed up via DB snapshots.
D. Web servers: store read-only data in S3, and copy from S3 to root volume at boot time. App
servers: share state using a combination of DynamoDB and IP unicast. Database: use RDS with
multi-AZ deployment. Backup: web and app servers backed up weekly via AMIs, database backed
up via DB snapshots.

Answer: C

Explanation:
Amazon RDS Multi-AZ deployments provide enhanced availability and durability for Database (DB)
Instances, making them a natural fit for production database workloads. When you provision a Multi-
AZ DB Instance, Amazon RDS automatically creates a primary DB Instance and synchronously
replicates the data to a standby instance in a different Availability Zone (AZ). Each AZ runs on its own
physically distinct, independent infrastructure, and is engineered to be highly reliable. In case of an
infrastructure failure (for example, instance hardware failure, storage failure, or network disruption),
Amazon RDS performs an automatic failover to the standby, so that you can resume database
operations as soon as the failover is complete. Since the endpoint for your DB Instance remains the
same after a failover, your application can resume database operation without the need for manual
administrative intervention.
Benefits
Enhanced Durability

Multi-AZ deployments for the MySQL, Oracle, and PostgreSQL engines utilize synchronous physical
replication to keep data on the standby up-to-date with the primary. Multi-AZ deployments for the
SQL Server engine use synchronous logical replication to achieve the same result, employing SQL
Server-native Mirroring technology. Both approaches safeguard your data in the event of a DB
Instance failure or loss of an Availability Zone.
If a storage volume on your primary fails in a Multi-AZ deployment, Amazon RDS automatically
initiates a failover to the up-to-date standby. Compare this to a Single-AZ deployment: in case of a
Single-AZ database failure, a user-initiated point-in-time-restore operation will be required. This
operation can take several hours to complete, and any data updates that occurred after the latest
restorable time (typically within the last five minutes) will not be available.
Amazon Aurora employs a highly durable, SSD-backed virtualized storage layer purpose-built for
database workloads. Amazon Aurora automatically replicates your volume six ways, across three
Availability Zones. Amazon Aurora storage is fault-tolerant, transparently handling the loss of up to
two copies of data without affecting database write availability and up to three copies without
affecting read availability. Amazon Aurora storage is also self-healing. Data blocks and disks are
continuously scanned for errors and replaced automatically.
Increased Availability
You also benefit from enhanced database availability when running Multi-AZ deployments. If an
Availability Zone failure or DB Instance failure occurs, your availability impact is limited to the time
automatic failover takes to complete: typically under one minute for Amazon Aurora and one to two
minutes for other database engines (see the RDS FAQ for details).
The availability benefits of Multi-AZ deployments also extend to planned maintenance and backups.
In the case of system upgrades like OS patching or DB Instance scaling, these operations are applied
first on the standby, prior to the automatic failover. As a result, your availability impact is, again, only
the time required for automatic failover to complete.
Unlike Single-AZ deployments, I/O activity is not suspended on your primary during backup for Multi-
AZ deployments for the MySQL, Oracle, and PostgreSQL engines, because the backup is taken from
the standby. However, note that you may still experience elevated latencies for a few minutes during
backups for Multi-AZ deployments.
On instance failure in Amazon Aurora deployments, Amazon RDS uses RDS Multi-AZ technology to
automate failover to one of up to 15 Amazon Aurora Replicas you have created in any of three
Availability Zones. If no Amazon Aurora Replicas have been provisioned, in the case of a failure,
Amazon RDS will attempt to create a new Amazon Aurora DB instance for you automatically.
No Administrative Intervention
DB Instance failover is fully automatic and requires no administrative intervention. Amazon RDS
monitors the health of your primary and standbys, and initiates a failover automatically in response
to a variety of failure conditions.
Failover conditions
Amazon RDS detects and automatically recovers from the most common failure scenarios for Multi-
AZ deployments so that you can resume database operations as quickly as possible without
administrative intervention. Amazon RDS automatically performs a failover in the event of any of the
following:
Loss of availability in primary Availability Zone
Loss of network connectivity to primary
Compute unit failure on primary
Storage failure on primary
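
For illustration, the standby promotion can also be exercised on demand with a forced-failover reboot. A minimal boto3 sketch, assuming a Multi-AZ instance whose identifier is the placeholder "mydb":

import boto3

rds = boto3.client("rds")

# ForceFailover=True is only valid for Multi-AZ deployments: it reboots
# the primary and promotes the synchronous standby in the other AZ.
rds.reboot_db_instance(
    DBInstanceIdentifier="mydb",  # placeholder identifier
    ForceFailover=True,
)
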
Note: When operations such as DB Instance scaling or system upgrades like OS patching are initiated
for Multi-AZ deployments, for enhanced availability, they are applied first on the standby prior to an
automatic failover. As a result, your availability impact is limited only to the time required for
automatic failover to complete. Note that Amazon RDS Multi-AZ deployments do not fail over
automatically in response to database operations such as long-running queries, deadlocks, or
database corruption errors.
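
As a minimal boto3 sketch of the database tier in option C (every identifier, size, and credential below is an illustrative placeholder, not a value from the question), a Multi-AZ primary plus a read replica could be provisioned like this:

import boto3

rds = boto3.client("rds", region_name="us-east-1")  # region is illustrative

# Primary instance: MultiAZ=True makes RDS create a synchronous standby
# in a second Availability Zone and fail over to it automatically.
rds.create_db_instance(
    DBInstanceIdentifier="ecommerce-db",   # placeholder
    Engine="mysql",
    DBInstanceClass="db.m5.large",         # placeholder
    AllocatedStorage=100,                  # GiB, placeholder
    MasterUsername="admin",
    MasterUserPassword="REPLACE_ME",       # use a secrets manager in practice
    MultiAZ=True,
)

# Read replica: asynchronous replication used to scale read traffic, the
# managed counterpart of the question's "read slaves".
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="ecommerce-db-replica-1",
    SourceDBInstanceIdentifier="ecommerce-db",
)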

Question: 2

Your customer wishes to deploy an enterprise application to AWS which will consist of several web
servers, several application servers, and a small (50GB) Oracle database. Information is stored both
in the database and the file systems of the various servers. The backup system must support
database recovery, whole server and whole disk restores, and individual file restores with a
recovery time of no more than two hours. They have chosen to use RDS Oracle as the database.
Which backup architecture will meet these requirements?

A. Backup RDS using automated daily DB backups. Backup the EC2 instances using AMIs, and
supplement with file-level backups to S3 using traditional enterprise backup software to provide
file-level restore.
B. Backup RDS using a Multi-AZ deployment. Backup the EC2 instances using AMIs, and supplement
by copying file system data to S3 to provide file-level restore.
C. Backup RDS using automated daily DB backups. Backup the EC2 instances using EBS snapshots,
and supplement with file-level backups to Amazon Glacier using traditional enterprise backup
software to provide file-level restore.
D. Backup RDS database to S3 using Oracle RMAN. Backup the EC2 instances using AMIs, and
supplement with EBS snapshots for individual volume restore.

Answer: A

Explanation:
Point-In-Time Recovery
In addition to the daily automated backup, Amazon RDS archives database change logs. This enables
you to recover your database to any point in time during the backup retention period, up to the last
five minutes of database usage.
Amazon RDS stores multiple copies of your data, but for Single-AZ DB instances these copies are
stored in a single availability zone. If for any reason a Single-AZ DB instance becomes unusable, you
can use point-in-time recovery to launch a new DB instance with the latest restorable data. For more
information on working with point-in-time recovery, go to Restoring a DB Instance to a Specified
Time.
Note
Multi-AZ deployments store copies of your data in different Availability Zones for greater levels of
data durability. For more information on Multi-AZ deployments, see High Availability (Multi-AZ).
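
A minimal point-in-time-recovery sketch with boto3 (both instance identifiers are placeholders): RDS launches a new instance from the stored backups and archived change logs, here restored to the latest restorable time.

import boto3

rds = boto3.client("rds")

# Restore a new DB instance from the source's automated backups and
# archived change logs, up to the latest restorable time (typically
# within the last five minutes).
rds.restore_db_instance_to_point_in_time(
    SourceDBInstanceIdentifier="appdb",           # placeholder
    TargetDBInstanceIdentifier="appdb-restored",  # placeholder
    UseLatestRestorableTime=True,
)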

Question: 3

Your company has HQ in Tokyo and branch offices all over the world, and is using logistics software
with a multi-regional deployment on AWS in Japan, Europe, and the USA. The logistics software has
a 3-tier architecture and currently uses MySQL 5.6 for data persistence. Each region has deployed
its own database.

In the HQ region you run an hourly batch process reading data from every region to compute cross-
regional reports that are sent by email to all offices. This batch process must be completed as fast
as possible to quickly optimize logistics. How do you build the database architecture in order to
meet the requirements?

A. For each regional deployment, use RDS MySQL with a master in the region and a read replica in
the HQ region
B. For each regional deployment, use MySQL on EC2 with a master in the region and send hourly EBS
snapshots to the HQ region
C. For each regional deployment, use RDS MySQL with a master in the region and send hourly RDS
snapshots to the HQ region
D. For each regional deployment, use MySQL on EC2 with a master in the region and use S3 to copy
data files hourly to the HQ region
E. Use Direct Connect to connect all regional MySQL deployments to the HQ region and reduce
network latency for the batch process

Answer: A
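
For illustration, a boto3 sketch of option A (region names, identifiers, and the account number are placeholders): a cross-region read replica is created by calling the API in the destination (HQ) region and passing the source instance's full ARN.

import boto3

# Call the API in the HQ region; a source instance in another region
# must be referenced by its full ARN rather than its bare identifier.
hq_rds = boto3.client("rds", region_name="ap-northeast-1")  # Tokyo HQ

hq_rds.create_db_instance_read_replica(
    DBInstanceIdentifier="logistics-eu-replica",  # placeholder
    SourceDBInstanceIdentifier="arn:aws:rds:eu-west-1:123456789012:db:logistics-eu",  # placeholder ARN
)

The hourly batch process in HQ then queries the local replicas instead of reaching across regions.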

Question: 4

A customer has a 10 Gbps AWS Direct Connect connection to an AWS region where they have a web
application hosted on Amazon Elastic Compute Cloud (EC2). The application has dependencies on
an on-premises mainframe database that uses a BASE (Basically Available, Soft state, Eventual
consistency) rather than an ACID (Atomicity, Consistency, Isolation, Durability) consistency model.
The application is exhibiting undesirable behavior because the database is not able to handle the
volume of writes. How can you reduce the load on your on-premises database resources in the
most cost-effective way?

A. Use an Amazon Elastic Map Reduce (EMR) S3DistCp as a synchronization mechanism between the
on-premises database and a Hadoop cluster on AWS.
B. Modify the application to write to an Amazon SQS queue and develop a worker process to flush
the queue to the on-premises database.
C. Modify the application to use DynamoDB to feed an EMR cluster which uses a map function to
write to the on-premises database.
D. Provision an RDS read-replica database on AWS to handle the writes and synchronize the two
databases using Data Pipeline.

Answer: A

Explanation:
Reference:
https://aws.amazon.com/blogs/aws/category/amazon-elastic-map-reduce/
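
As a rough sketch of how option A's S3DistCp synchronization might be submitted to an existing EMR cluster with boto3 (the cluster ID, bucket, and paths are all placeholders):

import boto3

emr = boto3.client("emr")

# Add an s3-dist-cp step that copies database extracts from S3 into the
# cluster's HDFS for processing.
emr.add_job_flow_steps(
    JobFlowId="j-PLACEHOLDER",  # ID of an existing cluster (placeholder)
    Steps=[{
        "Name": "sync-db-extracts",
        "ActionOnFailure": "CONTINUE",
        "HadoopJarStep": {
            "Jar": "command-runner.jar",
            "Args": [
                "s3-dist-cp",
                "--src", "s3://example-extracts/db/",  # placeholder bucket
                "--dest", "hdfs:///data/db/",
            ],
        },
    }],
)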

Question: 5

Company B is launching a new game app for mobile devices. Users will log into the game using their
existing social media account to streamline data capture. Company B would like to directly save
player data and scoring information from the mobile app to a DynamoDB table named Score Data.
When a user saves their game, the progress data will be stored to the Game State S3 bucket. What
is the best approach for storing data to DynamoDB and S3?

A. Use an EC2 Instance that is launched with an EC2 role providing access to the Score Data
DynamoDB table and the GameState S3 bucket that communicates with the mobile app via web
services.
B. Use temporary security credentials that assume a role providing access to the Score Data
DynamoDB table and the Game State S3 bucket using web identity federation.
C. Use Login with Amazon allowing users to sign in with an Amazon account providing the mobile
app with access to the Score Data DynamoDB table and the Game State S3 bucket.
D. Use an IAM user with access credentials assigned a role providing access to the Score Data
DynamoDB table and the Game State S3 bucket for distribution with the mobile app.

Answer: B

Explanation:
Web Identity Federation
Imagine that you are creating a mobile app that accesses AWS resources, such as a game that runs on
a mobile device and stores player and score information using Amazon S3 and DynamoDB.
When you write such an app, you'll make requests to AWS services that must be signed with an AWS
access key. However, we strongly recommend that you do not embed or distribute long-term AWS
credentials with apps that a user downloads to a device, even in an encrypted store. Instead, build
your app so that it requests temporary AWS security credentials dynamically when needed using web
identity federation. The supplied temporary credentials map to an AWS role that has only the
permissions needed to perform the tasks required by the mobile app.
With web identity federation, you don't need to create custom sign-in code or manage your own
user identities. Instead, users of your app can sign in using a well-known identity provider (IdP),
such as Login with Amazon, Facebook, Google, or any other OpenID Connect (OIDC)-compatible IdP,
receive an authentication token, and then exchange that token for temporary security credentials in
AWS that map to an IAM role with permissions to use the resources in your AWS account. Using an
IdP helps you keep your AWS account secure, because you don't have to embed and distribute long-
term security credentials with your application.
For most scenarios, we recommend that you use Amazon Cognito because it acts as an identity
broker and does much of the federation work for you. For details, see the following section, Using
Amazon Cognito for Mobile Apps.
If you don't use Amazon Cognito, then you must write code that interacts with a web IdP (Login with
Amazon, Facebook, Google, or any other OIDC-compatible IdP) and then calls the
AssumeRoleWithWebIdentity API to trade the authentication token you get from those IdPs for AWS
temporary security credentials. If you have already used this approach for existing apps, you can
continue to use it.
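
A minimal boto3 sketch of that non-Cognito path (the role ARN, session name, and token are placeholders):

import boto3

sts = boto3.client("sts")

# Placeholder for the token returned by the IdP's sign-in flow.
idp_token = "TOKEN_FROM_IDP"

# Exchange the IdP token for temporary AWS credentials; this call does
# not require long-term AWS credentials of its own.
resp = sts.assume_role_with_web_identity(
    RoleArn="arn:aws:iam::123456789012:role/MobileAppRole",  # placeholder
    RoleSessionName="mobile-app-session",
    WebIdentityToken=idp_token,
)
creds = resp["Credentials"]  # AccessKeyId, SecretAccessKey, SessionToken
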
Using Amazon Cognito for Mobile Apps
The preferred way to use web identity federation is to use Amazon Cognito. For example, Adele the
developer is building a game for a mobile device where user data such as scores and profiles is stored
in Amazon S3 and Amazon DynamoDB. Adele could also store this data locally on the device and use
Amazon Cognito to keep it synchronized across devices. She knows that for security and maintenance
reasons, long-term AWS security credentials should not be distributed with the game. She also
knows that the game might have a large number of users. For all of these reasons, she does not want
to create new user identities in IAM for each player. Instead, she builds the game so that users can
sign in using an identity that they've already established with a well-known identity provider, such as
Login with Amazon, Facebook, Google, or any OpenID Connect (OIDC)-compatible identity provider.
Her game can take advantage of the authentication mechanism from one of these providers to
validate the user's identity.
To enable the mobile app to access her AWS resources, Adele first registers for a developer ID with
her chosen IdPs. She also configures the application with each of these providers. In her AWS
account that contains the Amazon S3 bucket and DynamoDB table for the game, Adele uses Amazon
Cognito to create IAM roles that precisely define permissions that the game needs. If she is using an
OIDC IdP, she also creates an IAM OIDC identity provider entity to establish trust between her AWS
account and the IdP.
In the app's code, Adele calls the sign-in interface for the IdP that she configured previously. The IdP
handles all the details of letting the user sign in, and the app gets an OAuth access token or OIDC ID
token from the provider. Adele's app can trade this authentication information for a set of temporary
security credentials that consist of an AWS access key ID, a secret access key, and a session token.
The app can then use these credentials to access web services offered by AWS. The app is limited to
the permissions that are defined in the role that it assumes.
The following figure shows a simplified flow for how this might work, using Login with Amazon as the
IdP. For Step 2, the app can also use Facebook, Google, or any OIDC-compatible identity provider, but
that's not shown here.
Sample workflow using Amazon Cognito to federate users for a mobile application




1. A customer starts your app on a mobile device. The app asks the user to sign in.
2. The app uses Login with Amazon resources to accept the user's credentials.
3. The app uses Cognito APIs to exchange the Login with Amazon ID token for a Cognito token.
4. The app requests temporary security credentials from AWS STS, passing the Cognito token.
5. The temporary security credentials can be used by the app to access any AWS resources required
by the app to operate. The role associated with the temporary security credentials and its assigned
policies determines what can be accessed.
Use the following process to configure your app to use Amazon Cognito to authenticate users and
give your app access to AWS resources. For specific steps to accomplish this scenario, consult the
documentation for Amazon Cognito.

(Optional) Sign up as a developer with Login with Amazon, Facebook, Google, or any other OpenID
Connect (OIDC)–compatible identity provider and configure one or more apps with the provider. This
step is optional because Amazon Cognito also supports unauthenticated (guest) access for your
users.
Go to Amazon Cognito in the AWS Management Console. Use the Amazon Cognito wizard to create
an identity pool, which is a container that Amazon Cognito uses to keep end user identities organized
for your apps. You can share identity pools between apps. When you set up an identity pool, Amazon
Cognito creates one or two IAM roles (one for authenticated identities, and one for unauthenticated
"guest" identities) that define permissions for Amazon Cognito users.
Download and integrate the AWS SDK for iOS or the AWS SDK for Android with your app, and import
the files required to use Amazon Cognito.
Create an instance of the Amazon Cognito credentials provider, passing the identity pool ID, your
AWS account number, and the Amazon Resource Name (ARN) of the roles that you associated with
the identity pool. The Amazon Cognito wizard in the AWS Management Console provides sample
code to help you get started.
When your app accesses an AWS resource, pass the credentials provider instance to the client object,
which passes temporary security credentials to the client. The permissions for the credentials are
based on the role or roles that you defined earlier.
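
A hedged boto3 sketch of steps 3 and 4 above (the identity pool ID and IdP token are placeholders; a real mobile app would use the AWS SDK for iOS or Android, with boto3 shown here only for brevity):

import boto3

cognito = boto3.client("cognito-identity", region_name="us-east-1")  # illustrative region

logins = {"www.amazon.com": "TOKEN_FROM_LWA"}  # placeholder Login with Amazon token

# Step 3: exchange the IdP token for a Cognito identity.
identity = cognito.get_id(
    IdentityPoolId="us-east-1:00000000-0000-0000-0000-000000000000",  # placeholder
    Logins=logins,
)

# Step 4: obtain temporary credentials scoped to the pool's IAM role.
creds = cognito.get_credentials_for_identity(
    IdentityId=identity["IdentityId"],
    Logins=logins,
)["Credentials"]

# The app can now call AWS services with these short-lived credentials.
dynamodb = boto3.client(
    "dynamodb",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretKey"],
    aws_session_token=creds["SessionToken"],
)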

Question: 6

Your company plans to host a large donation website on Amazon Web Services (AWS). You
anticipate a large and undetermined amount of traffic that will create many database writes. To be
certain that you do not drop any writes to a database hosted on AWS, which service should you use?

A. Amazon RDS with provisioned IOPS up to the anticipated peak write throughput.
B. Amazon Simple Queue Service (SQS) for capturing the writes and draining the queue to write to
the database.
C. Amazon ElastiCache to store the writes until the writes are committed to the database.
D. Amazon DynamoDB with provisioned write throughput up to the anticipated peak write
throughput.

Answer: B

Explanation:
Amazon Simple Queue Service (Amazon SQS) offers a reliable, highly scalable hosted queue for
storing messages as they travel between computers. By using Amazon SQS, developers can simply
move data between distributed application components performing different tasks, without losing
messages or requiring each component to be always available. Amazon SQS makes it easy to build a
distributed, decoupled application, working in close conjunction with the Amazon Elastic Compute
Cloud (Amazon EC2) and the other AWS infrastructure web services.
What can I do with Amazon SQS?
Amazon SQS is a web service that gives you access to a message queue that can be used to store
messages while waiting for a computer to process them. This allows you to quickly build message
queuing applications that can be run on any computer on the internet. Since Amazon SQS is highly
scalable and you only pay for what you use, you can start small and grow your application as you
wish, with no compromise on performance or reliability. This lets you focus on building sophisticated
message-based applications, without worrying about how the messages are stored and managed.

You can use Amazon SQS with software applications in various ways. For example, you can:
Integrate Amazon SQS with other AWS infrastructure web services to make applications more
reliable and flexible.
Use Amazon SQS to create a queue of work where each message is a task that needs to be
completed by a process. One or many computers can read tasks from the queue and perform them.
Build a microservices architecture, using queues to connect your microservices.
Keep notifications of significant events in a business process in an Amazon SQS queue. Each event
can have a corresponding message in a queue, and applications that need to be aware of the event
can read and process the messages.
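
A minimal sketch of this buffering pattern with boto3 (the queue URL and record fields are placeholders, and write_to_database is a hypothetical helper): the front end enqueues each write, and a worker drains the queue into the database at a rate the database can sustain.

import json
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/donations"  # placeholder

# Producer (web tier): enqueue the write instead of hitting the DB directly.
sqs.send_message(
    QueueUrl=QUEUE_URL,
    MessageBody=json.dumps({"donor": "alice", "amount": 25}),  # placeholder record
)

# Worker: long-poll the queue and commit each record to the database.
while True:
    resp = sqs.receive_message(
        QueueUrl=QUEUE_URL, MaxNumberOfMessages=10, WaitTimeSeconds=20
    )
    for msg in resp.get("Messages", []):
        record = json.loads(msg["Body"])
        # write_to_database(record)  # hypothetical DB-insert helper
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])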

Question: 7

You have launched an EC2 instance with four (4) 500 GB EBS Provisioned IOPS volumes attached.
The EC2 instance is EBS-Optimized and supports 500 Mbps throughput between EC2 and EBS. The
four EBS volumes are configured as a single RAID 0 device, and each Provisioned IOPS volume is
provisioned with 4,000 IOPS (4,000 16KB reads or writes), for a total of 16,000 random IOPS on the
instance. The EC2 instance initially delivers the expected 16,000 IOPS random read and write
performance. Sometime later, in order to increase the total random I/O performance of the
instance, you add an additional two 500 GB EBS Provisioned IOPS volumes to the RAID. Each volume
is provisioned to 4,000 IOPS like the original four, for a total of 24,000 IOPS on the EC2 instance.
Monitoring shows that the EC2 instance CPU utilization increased from 50% to 70%, but the total
random IOPS measured at the instance level does not increase at all.
What is the problem and a valid solution?

A. Larger storage volumes support higher Provisioned IOPS rates: increase the provisioned volume
storage of each of the 6 EBS volumes to 1TB.
B. The EBS-Optimized throughput limits the total IOPS that can be utilized; use an EBS-Optimized
instance that provides larger throughput.
C. Small block sizes cause performance degradation, limiting the I/O throughput; configure the
instance device driver and file system to use 64KB blocks to increase throughput.
D. RAID 0 only scales linearly to about 4 devices; use RAID 0 with 4 EBS Provisioned IOPS volumes,
but increase each Provisioned IOPS EBS volume to 6,000 IOPS.
E. The standard EBS instance root volume limits the total IOPS rate; change the instance root
volume to also be a 500GB 4,000 Provisioned IOPS volume.

Answer: E

Question: 8

You have recently joined a startup company building sensors to measure street noise and air quality
in urban areas. The company has been running a pilot deployment of around 100 sensors for 3
months. Each sensor uploads 1KB of sensor data every minute to a backend hosted on AWS.
During the pilot, you measured a peak of 10 IOPS on the database, and you stored an average of
3GB of sensor data per month in the database.
The current deployment consists of a load-balanced, auto-scaled ingestion layer using EC2 instances
and a PostgreSQL RDS database with 500GB standard storage.
The pilot is considered a success and your CEO has managed to get the attention of some potential