AWS Certified Developer Associate 2: Multiple Choice Questions And Correct Answers with Rationale, 100% Verified
A Lambda function has been developed with the default settings and is using N. The function makes calls to a DynamoDB table. The code was first tested on an EC2 instance in the same language and took 300 seconds to execute. When the Lambda function is executed, it does not add the required rows to the DynamoDB table. What needs to be changed to ensure that the Lambda function works as desired?
A. Ensure that the underlying programming language is changed to Python
B. Change the timeout for the function
C. Change the memory assigned to the function to 1 GB
D. Assign an IAM user to the Lambda function
Answer - B
Explanation:
If the Lambda function was created with the default settings, it has the default timeout of 3 seconds. Since the function takes 300 seconds to execute on an EC2 instance, this value needs to be changed.
Option A is incorrect since the programming language is not the issue
Option C is incorrect since there is no mention of the amount of memory required in the question
Option D is incorrect since IAM roles, not IAM users, should be assigned to Lambda functions
For more information on configuring Lambda functions, please refer to the below URL

You need to set up a RESTful API service in AWS that would be serviced via the following URL. Which of the following combinations of services can be used for the development and hosting of the RESTful service? Choose 2 answers from the options below.
A. AWS Lambda and AWS API Gateway
B. AWS S3 and CloudFront
C. AWS EC2 and AWS Elastic Load Balancer
D. AWS SQS and CloudFront
Answer - A and C
Explanation:
AWS Lambda can be used to host the code, and API Gateway can be used to expose the APIs that point to AWS Lambda. Alternatively, you can create your own API service, host it on an EC2 instance, and then use the AWS Application Load Balancer to do path-based routing.
Option B is incorrect since AWS S3 is normally used to host static content
Option D is incorrect since AWS SQS is a queuing service
For more information on an example with RESTful APIs, please refer to the below URL

You are developing a mobile-based application that needs to make use of an authentication service. There is a set of video files which need to be accessed via unauthenticated identities. How can you BEST achieve this using AWS?
A. Create an IAM user with public access
B. Create an IAM group with public access
C. Use Amazon Cognito with unauthenticated identities enabled
D. Use AWS STS with SAML
Answer - C
Explanation:
This is also mentioned in the AWS documentation:
Using Identity Pools (Federated Identities)
Amazon Cognito identity pools provide temporary AWS credentials for users who are guests (unauthenticated) and for users who have been authenticated and received a token. An identity pool is a store of user identity data specific to your account.
To create a new identity pool in the console:
1. Sign in to the Amazon Cognito console, choose Manage Federated Identities, and then choose Create new identity pool.
2. Type a name for your identity pool.
3. To enable unauthenticated identities, select Enable access to unauthenticated identities from the Unauthenticated identities collapsible section.
4. If desired, configure an authentication provider in the Authentication providers section.
Options A and B are incorrect since it is not the right approach to use IAM users or groups for access from mobile-based applications
Option D is incorrect since SAML is used for federated access.
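The guest-access flow above can be sketched in a few lines. This is a minimal sketch, not the full Cognito API: the helpers only assemble the two requests a guest client would make (GetId with no Logins map, then GetCredentialsForIdentity), and the pool ID shown in the usage comment is a placeholder.

```python
# Sketch of the guest (unauthenticated) credential flow against a Cognito
# identity pool. With boto3, these dicts would be passed to the
# cognito-identity client's get_id and get_credentials_for_identity calls.

def guest_identity_request(identity_pool_id):
    # A GetId request for a guest carries no Logins map -- that is what
    # marks the caller as an unauthenticated identity.
    return {"IdentityPoolId": identity_pool_id}

def guest_credentials_request(identity_id):
    # GetCredentialsForIdentity, again with no Logins map, returns temporary
    # scoped-down AWS credentials tied to the pool's unauthenticated role.
    return {"IdentityId": identity_id}

# Usage with boto3 (not executed here; pool ID is a placeholder):
#   client = boto3.client("cognito-identity", region_name="us-east-1")
#   identity = client.get_id(**guest_identity_request("us-east-1:example-pool-id"))
#   creds = client.get_credentials_for_identity(
#       **guest_credentials_request(identity["IdentityId"]))
```

The absence of a Logins map in both requests is the whole trick: it is what distinguishes a guest from an authenticated user who would present a provider token.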
For more information on identity pools, please refer to the below URL

You're a developer at a company that needs to deploy an application using Elastic Beanstalk. There is a requirement to place a config file for the environment. In which of the following locations should this config file be placed to ensure it is part of the Elastic Beanstalk environment?
A. In the application root folder
B. In the config folder
C. In the packages folder
D. In the .ebextensions folder
Answer - D
Explanation:
The AWS documentation mentions the following:
Elastic Beanstalk supports two methods of saving configuration option settings. Configuration files in YAML or JSON format can be included in your application's source code in a directory named .ebextensions and deployed as part of your application source bundle. You create and manage configuration files locally.
All other options are incorrect because the AWS documentation specifically mentions that you need to place custom configuration files in the .ebextensions folder

An application needs to make use of a messaging system. The messages need to be processed in the order they are received, and no duplicates should be allowed. Which of the following would you use for this purpose?
A. Enable FIFO on an existing SQS Standard queue
B. Add the .fifo extension to the Standard SQS queue
C. Consider using SNS
D. Use FIFO SQS queues
Answer - D
Explanation:
This is also mentioned in the AWS documentation:
FIFO (First-In-First-Out) queues are designed to enhance messaging between applications when the order of operations and events is critical, or where duplicates can't be tolerated, for example:
· Ensure that user-entered commands are executed in the right order.
· Display the correct product price by sending price modifications in the right order.
· Prevent a student from enrolling in a course before registering for an account.
Options A and B are incorrect since an existing Standard queue cannot be converted to a FIFO queue; a FIFO queue must be created as such.
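A FIFO queue is created as FIFO, not converted, and its name must carry the .fifo suffix at creation time. A minimal sketch of the request parameters follows; the queue and group names are hypothetical, and with boto3 the dicts would go to sqs.create_queue and sqs.send_message.

```python
# Sketch: request parameters for an SQS FIFO queue. Names are hypothetical;
# with boto3 you would pass these dicts to create_queue / send_message.

def fifo_queue_params(name):
    # FIFO queue names must end in ".fifo"; an existing Standard queue
    # cannot be switched to FIFO after creation.
    if not name.endswith(".fifo"):
        name += ".fifo"
    return {
        "QueueName": name,
        "Attributes": {
            "FifoQueue": "true",
            # Content-based deduplication drops messages whose body was
            # already seen within the deduplication interval.
            "ContentBasedDeduplication": "true",
        },
    }

def fifo_message_params(queue_url, body, group_id):
    # Messages sharing a MessageGroupId are delivered strictly in order.
    return {"QueueUrl": queue_url, "MessageBody": body, "MessageGroupId": group_id}

# Usage with boto3 (not executed here):
#   sqs = boto3.client("sqs")
#   queue = sqs.create_queue(**fifo_queue_params("orders"))
#   sqs.send_message(**fifo_message_params(queue["QueueUrl"], "order-1", "customer-42"))
```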
Option C is incorrect since SNS is a notification service and not a queuing service
For more information on SQS FIFO queues, please refer to the below URL

Which of the following is the right sequence of hooks that get called in AWS CodeDeploy?
A. ApplicationStop -> BeforeInstall -> AfterInstall -> ApplicationStart
B. BeforeInstall -> AfterInstall -> ApplicationStop -> ApplicationStart
C. BeforeInstall -> AfterInstall -> ValidateService -> ApplicationStart
D. BeforeInstall -> ApplicationStop -> ValidateService -> ApplicationStart
Answer - A
Explanation:
This is also mentioned in the AWS documentation. Because of the order of events given in the AWS documentation, all other options are invalid.
For more information on the hooks order, please refer to the below URL
#reference-appspec-file-structure-hooks-run-order

As a developer, you have created a Lambda function that is used to work with a bucket in Amazon S3. The Lambda function is not working as expected. You need to debug the issue and understand the underlying problem. How can you accomplish this?
A. Use AWS CloudWatch metrics
B. Put logging statements in your code
C. Set the Lambda function debugging level to verbose
D. Use AWS CloudTrail logs
Answer - B
Explanation:
This is also mentioned in the AWS documentation:
You can insert logging statements into your code to help you validate that your code is working as expected. Lambda automatically integrates with Amazon CloudWatch Logs and pushes all logs from your code to a CloudWatch Logs group associated with a Lambda function (/aws/lambda/<function name>).
Option A is incorrect since the metrics will only give the rate at which the function is executing, but will not help debug the actual error
Option C is incorrect since there is no such option
Option D is incorrect since CloudTrail is only used for monitoring API calls
For more information on monitoring functions, please refer to the below URL

You are developing a function that will be hosted in AWS Lambda.
The function will be developed in .NET. There are a number of external libraries that are needed for the code to run. Which of the following is the best practice when it comes to working with external dependencies for AWS Lambda?
A. Make sure that the dependencies are put in the root folder
B. Selectively include only the libraries that are required
C. Make sure the libraries are installed at the beginning of the function
D. Place the entire SDK dependencies in Amazon S3
Answer - B
Explanation:
This is also mentioned in the AWS documentation:
Minimize your deployment package size to its runtime necessities. This will reduce the amount of time that it takes for your deployment package to be downloaded and unpacked ahead of invocation. For functions authored in Java or .NET Core, avoid uploading the entire AWS SDK library as part of your deployment package. Instead, selectively depend on the modules which pick up the components of the SDK you need.
Option A is incorrect since dependencies don't need to be in the root folder
Option C is incorrect since libraries can be loaded at runtime and don't need to be installed beforehand
Option D is incorrect since using the entire SDK set is not advisable
For more information on best practices for AWS Lambda, please refer to the below URL

Your team has a CodeCommit repository in your account. You need to give a set of developers in another account access to your CodeCommit repository. Which of the following is the most effective way to grant access?
A. Create IAM users for each developer and provide access to the repository
B. Create an IAM group, add the IAM users, and then provide access to the repository
C. Create a cross-account role, give the role the required privileges, and provide the role ARN to the developers
D. Enable public access for the repository
Answer - C
Explanation:
This is also mentioned in the AWS documentation:
Configure Cross-Account Access to an AWS CodeCommit Repository
You can configure access to AWS CodeCommit repositories for IAM users and groups in another AWS account. This is often referred to as cross-account access. This section provides examples and step-by-step instructions for configuring cross-account access for a repository named MySharedDemoRepo in the US East (Ohio) Region in an AWS account (referred to as AccountA) to IAM users who belong to an IAM group named DevelopersWithCrossAccountRepositoryAccess in another AWS account (referred to as AccountB).
All other options are incorrect because they are not recommended practices for granting access
For more information on an example of cross-account role access, please refer to the below URL

You have a Lambda function that is invoked asynchronously. You need a way to check and debug issues if the function fails. How could you accomplish this?
A. Use AWS CloudWatch metrics
B. Assign a dead letter queue
C. Configure SNS notifications
D. Use AWS CloudTrail logs
Answer - B
Explanation:
This is also mentioned in the AWS documentation:
Any Lambda function invoked asynchronously is retried twice before the event is discarded. If the retries fail and you're unsure why, use Dead Letter Queues (DLQ) to direct unprocessed events to an Amazon SQS queue or an Amazon SNS topic.
Option A is incorrect since the metrics will only give the rate at which the function is executing, but will not help debug the actual error
Option C is incorrect since this will only provide notifications but not the actual events which failed
Option D is incorrect since CloudTrail is only used for monitoring API calls
For more information on dead letter queues with AWS Lambda, please refer to the below URL

You are planning to use AWS Kinesis streams for an application being developed for a company. The company policy mandates that all data is encrypted at rest.
How can you accomplish this in the easiest way possible for Kinesis streams?
A. Use the SDK for Kinesis to encrypt the data before it is stored at rest
B. Enable server-side encryption for Kinesis streams
C. Enable client-side encryption for Kinesis streams
D. Use the AWS CLI to encrypt the data
Answer - B
Explanation:
The easiest way is to use the built-in server-side encryption that is available with Kinesis streams. The AWS documentation mentions the following:
Server-side encryption is a feature in Amazon Kinesis Data Streams that automatically encrypts data before it's at rest by using an AWS KMS customer master key (CMK) you specify. Data is encrypted before it's written to the Kinesis stream storage layer, and decrypted after it's retrieved from storage. As a result, your data is encrypted at rest within the Kinesis Data Streams service. This allows you to meet strict regulatory requirements and enhance the security of your data.
Options A and C are invalid since this would involve too much effort for encrypting and decrypting the streams
Option D is invalid since this is the same as encrypting the data before it reaches the stream
For more information on server-side encryption with streams, please refer to the below URL

You are developing an application that is going to make use of Amazon Kinesis. Due to the high throughput, you decide to have multiple shards for the stream. Which of the following is TRUE when it comes to processing data across multiple shards?
A. You cannot guarantee the order of data across multiple shards; it is possible only within a shard
B. Ordering of data is possible across all shards in a stream
C. Ordering of data is not possible at all in Kinesis streams
D. You need to use Kinesis Firehose to guarantee the order of data
Answer - A
Explanation:
Kinesis Data Streams lets you order records and read and replay records in the same order to many Kinesis Data Streams applications.
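The server-side encryption described in the previous question is enabled with a single control-plane call. A minimal sketch of the request parameters follows; the stream name is a placeholder, and "alias/aws/kinesis" is the AWS-managed KMS key for Kinesis. With boto3, the dict would go to kinesis.start_stream_encryption.

```python
# Sketch: parameters for enabling Kinesis server-side encryption.
# The stream name is a placeholder.

def sse_params(stream_name, key_id="alias/aws/kinesis"):
    return {
        "StreamName": stream_name,
        "EncryptionType": "KMS",  # records are encrypted at rest with the KMS key
        "KeyId": key_id,          # AWS-managed key, or a customer CMK's ID/ARN/alias
    }

# Usage with boto3 (not executed here):
#   kinesis = boto3.client("kinesis")
#   kinesis.start_stream_encryption(**sse_params("example-stream"))
```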
To enable write ordering, Kinesis Data Streams expects you to call the PutRecord API to write serially to a shard while using the SequenceNumberForOrdering parameter. Setting this parameter guarantees strictly increasing sequence numbers for puts from the same client and to the same partition key.
Option A is correct since the ordering of records cannot be guaranteed across multiple shards.
Options B, C and D are incorrect because Kinesis Data Streams can order records only within a single shard. Each data record has a sequence number that is unique within its shard. Kinesis Data Streams assigns the sequence number after you write to the stream with PutRecord or PutRecords.
For more information, please refer to the below URL

A company is planning on developing an application that is going to make use of a DynamoDB table. The structure of the table is given below:

Attribute Name      | Type   | Description
ProductID           | Number | ID of the product
ReviewID            | Number | Automatically generated GUID
Product Name        | String | Name of the product
Product Description | String | Description of the product

Which of the following should be chosen as the partition key to ensure the MOST effective distribution of keys?
A. ProductID
B. ReviewID
C. Product Name
D. Product Description
Answer - B
Explanation:
The most effective choice is the ReviewID, since it is a uniquely generated GUID for each record.
Option A is partially correct. It can be used as the partition key, but the question asks for the MOST effective distribution of keys, and that would be the ReviewID
Options C and D are incorrect since it would not be a best practice to keep these as partition keys
For more information on DynamoDB, please refer to the below URL

Your company is planning on using the Simple Storage Service to host objects that will be accessed by users. There is a speculation that there would be roughly 6000 GET requests per second.
Which of the following could be used to ensure optimal performance? Choose 2 answers from the options given below.
A. Use a CloudFront distribution in front of the S3 bucket
B. Use hash key prefixes for the object keys
C. Enable versioning for the objects
D. Enable Cross-Region Replication for the bucket
Answer - A and B
Explanation:
The AWS documentation mentions the following on optimal performance for S3:
Amazon S3 automatically scales to high request rates. For example, your application can achieve at least 3,500 PUT/POST/DELETE and 5,500 GET requests per second per prefix in a bucket. There are no limits to the number of prefixes in a bucket. It is simple to increase your read or write performance exponentially. For example, if you create 10 prefixes in an Amazon S3 bucket to parallelize reads, you could scale your read performance to 55,000 read requests per second.
You can also use CloudFront to serve the objects to users and cache them at the edge locations, so that the requests on the bucket are reduced.
Option C is only used to prevent accidental deletion of objects
Option D is only used for disaster recovery scenarios
For more information on performance improvement, please refer to the below URL

Your company currently stores its objects in S3. The current request rate is around 6000 GET requests per second. There is now a mandate for objects to be encrypted at rest, so you enable encryption using KMS. There are now performance issues being encountered. What could be the main reason behind this?
A. Amazon S3 will now throttle the requests since they are being encrypted using KMS
B. You need to also enable versioning to ensure optimal performance
C. You are now exceeding the throttle limits for KMS API calls
D.
You need to also enable CORS to ensure optimal performance
Answer - C
Explanation:
This is also mentioned in the AWS documentation:
You can make API requests directly or by using an integrated AWS service that makes API requests to AWS KMS on your behalf. The limit applies to both kinds of requests. For example, you might store data in Amazon S3 using server-side encryption with AWS KMS (SSE-KMS). Each time you upload or download an S3 object that's encrypted with SSE-KMS, Amazon S3 makes a GenerateDataKey (for uploads) or Decrypt (for downloads) request to AWS KMS on your behalf. These requests count toward your limit, so AWS KMS throttles the requests if you exceed a combined total of 1200 uploads or downloads per second of S3 objects encrypted with SSE-KMS.
Option A is invalid since S3 will not throttle requests just because encryption is enabled
Options B and D are invalid since these will not help increase performance
For more information on KMS limits, please refer to the below URL

Your company is planning on using the Simple Storage Service to host objects that will be accessed by users. There is a speculation that there would be roughly 6000 GET requests per second. Which of the following is the right way to use object keys for optimal performance?
A. exampleawsbucket/-00-00/
B. exampleawsbucket/sample/
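The hash-prefix technique mentioned above can be sketched in a few lines. This is an illustrative sketch only: the key names are made up, and the idea is simply that a short pseudo-random prefix derived from the key spreads objects across many S3 key prefixes, each of which gets its own request-rate allowance.

```python
import hashlib

# Sketch: derive a short hash prefix for S3 object keys so that heavy
# request traffic spreads across multiple key prefixes. Key names here
# are illustrative only.

def hash_prefixed_key(key, prefix_len=4):
    # The first few hex characters of the MD5 of the key act as a
    # pseudo-random prefix, e.g. "a1b2/2023-11-01/photo1.jpg".
    prefix = hashlib.md5(key.encode("utf-8")).hexdigest()[:prefix_len]
    return f"{prefix}/{key}"
```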
School, course and subject
- Institution: AWS Certified Developer Associate
- Degree: AWS Certified Developer Associate
Document information
- Uploaded on: February 4, 2024
- Number of pages: 43
- Written in: 2023/2024
- Type: Exam
- Contains: Questions and answers