DP-300 2023 Exam Questions with Complete Solutions
You need to design a data retention solution for the Twitter feed data records. The solution must meet the customer sentiment analytics requirements. Which Azure Storage functionality should you include in the solution?
A. time-based retention
B. change feed
C. lifecycle management
D. soft delete
Answer: C. lifecycle management

You need to implement the surrogate key for the retail store table. The solution must meet the sales transaction dataset requirements. What should you create?
A. a table that has a FOREIGN KEY constraint
B. a table that has an IDENTITY property
C. a user-defined SEQUENCE object
D. a system-versioned temporal table
Answer: B. a table that has an IDENTITY property

You have 20 Azure SQL databases provisioned by using the vCore purchasing model. You plan to create an Azure SQL Database elastic pool and add the 20 databases. Which three metrics should you use to size the elastic pool to meet the demands of your workload? Each correct answer presents part of the solution. NOTE: Each correct selection is worth one point.
A. total size of all the databases
B. geo-replication support
C. number of concurrently peaking databases * peak CPU utilization per database
D. maximum number of concurrent sessions for all the databases
E. total number of databases * average CPU utilization per database
Answer: A, C, and E (total size of all the databases; number of concurrently peaking databases * peak CPU utilization per database; total number of databases * average CPU utilization per database)

You have a Microsoft SQL Server 2019 database named DB1 that uses the following database-level and instance-level features:
- Clustered columnstore indexes
- Automatic tuning
- Change tracking
- PolyBase
You plan to migrate DB1 to an Azure SQL database. Which feature should be removed or replaced before DB1 can be migrated?
A. Clustered columnstore indexes
B. PolyBase
C. Change tracking
D. Automatic tuning
Answer: B. PolyBase

You have a Microsoft SQL Server 2019 instance in an on-premises datacenter. The instance contains a 4-TB database named DB1. You plan to migrate DB1 to an Azure SQL Database managed instance. What should you use to minimize downtime and data loss during the migration?
A. distributed availability groups
B. database mirroring
C. Always On Availability Group
D. Azure Database Migration Service
Answer: D. Azure Database Migration Service

You are designing a streaming data solution that will ingest variable volumes of data. You need to ensure that you can change the partition count after creation. Which service should you use to ingest the data?
A. Azure Event Hubs Standard
B. Azure Stream Analytics
C. Azure Data Factory
D. Azure Event Hubs Dedicated
Answer: D. Azure Event Hubs Dedicated

You have an Azure Synapse Analytics Apache Spark pool named Pool1. You plan to load JSON files from an Azure Data Lake Storage Gen2 container into the tables in Pool1. The structure and data types vary by file. You need to load the files into the tables. The solution must maintain the source data types. What should you do?
A. Load the data by using PySpark.
B. Load the data by using the OPENROWSET Transact-SQL command in an Azure Synapse Analytics serverless SQL pool.
C. Use a Get Metadata activity in Azure Data Factory.
D. Use a Conditional Split transformation in an Azure Synapse data flow.
Answer: B. Load the data by using the OPENROWSET Transact-SQL command in an Azure Synapse Analytics serverless SQL pool.

You are designing a date dimension table in an Azure Synapse Analytics dedicated SQL pool. The date dimension table will be used by all the fact tables. Which distribution type should you recommend to minimize data movement?
A. HASH
B. REPLICATE
C. ROUND_ROBIN
Answer: B. REPLICATE
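A replicated table keeps a full copy of the data on every compute node, so joins from any fact table require no data movement. As a minimal T-SQL sketch of the pattern (the table and column names are illustrative, not from the question):

-- Dedicated SQL pool: small dimension table copied to every compute node
CREATE TABLE dbo.DimDate
(
    DateKey      INT  NOT NULL,
    CalendarDate DATE NOT NULL
)
WITH
(
    DISTRIBUTION = REPLICATE,
    CLUSTERED COLUMNSTORE INDEX
);

REPLICATE suits small, frequently joined dimensions; large or frequently updated tables are better served by HASH distribution.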
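Similarly, for the surrogate-key question earlier in this set, the IDENTITY property generates the key values automatically during the load. A minimal sketch, assuming a dedicated SQL pool and hypothetical table and column names:

-- IDENTITY generates the surrogate key at load time.
-- In a dedicated SQL pool the values are unique but not
-- guaranteed to be sequential.
CREATE TABLE dbo.DimRetailStore
(
    StoreKey  INT IDENTITY(1, 1) NOT NULL,
    StoreName VARCHAR(50)        NOT NULL
)
WITH
(
    DISTRIBUTION = ROUND_ROBIN,
    CLUSTERED COLUMNSTORE INDEX
);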
You have an Azure Synapse Analytics workspace named WS1 that contains an Apache Spark pool named Pool1. You plan to create a database named DB1 in Pool1. You need to ensure that when tables are created in DB1, the tables are available automatically as external tables to the built-in serverless SQL pool. Which format should you use for the tables in DB1?
A. JSON
B. CSV
C. Parquet
D. ORC
Answer: C. Parquet

You are designing an anomaly detection solution for streaming data from an Azure IoT hub. The solution must meet the following requirements:
- Send the output to Azure Synapse.
- Identify spikes and dips in time series data.
- Minimize development and configuration effort.
What should you include in the solution?
A. Azure SQL Database
B. Azure Databricks
C. Azure Stream Analytics
Answer: C. Azure Stream Analytics

You are creating a new notebook in Azure Databricks that will support R as the primary language but will also support Scala and SQL. Which switch should you use to switch between languages?
A. [<language>]
B. %<language>
C. [<language>]
D. @<language>
Answer: B. %<language>

You plan to build a structured streaming solution in Azure Databricks. The solution will count new events in five-minute intervals and report only events that arrive during the interval. The output will be sent to a Delta Lake table. Which output mode should you use?
A. complete
B. append
C. update
Answer: B. append

You have a SQL pool in Azure Synapse that contains a table named dbo.Customers. The table contains a column named Email. You need to prevent nonadministrative users from seeing the full email addresses in the Email column. The users must see the values in a masked format instead. What should you do?
A. From the Azure portal, set a mask on the Email column.
B. From the Azure portal, set a sensitivity classification of Confidential for the Email column.
C. From Microsoft SQL Server Management Studio, set an email mask on the Email column.
D. From Microsoft SQL Server Management Studio, grant the SELECT permission to the users for all the columns in the dbo.Customers table except Email.
Answer: A. From the Azure portal, set a mask on the Email column.
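Setting the mask in the portal configures dynamic data masking on the column. The same mask can be expressed in T-SQL with the built-in email() masking function; a minimal sketch (dbo.Customers and Email come from the question; the GRANT line is a hypothetical example):

-- Apply the built-in email mask: nonadministrative users then see
-- values in the form aXXX@XXXX.com instead of the full address.
ALTER TABLE dbo.Customers
ALTER COLUMN Email ADD MASKED WITH (FUNCTION = 'email()');

-- Users who must see the real values can be granted UNMASK:
-- GRANT UNMASK TO AnalystUser;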
You have an Azure Databricks workspace named workspace1 in the Standard pricing tier. Workspace1 contains an all-purpose cluster named cluster1. You need to reduce the time it takes for cluster1 to start and scale up. The solution must minimize costs. What should you do first?
A. Upgrade workspace1 to the Premium pricing tier.
B. Configure a global init script for workspace1.
C. Create a pool in workspace1.
D. Create a cluster policy in workspace1.
Answer: C. Create a pool in workspace1.

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You have an Azure Synapse Analytics dedicated SQL pool that contains a table named Table1. You have files that are ingested and loaded into an Azure Data Lake Storage Gen2 container named container1. You plan to insert data from the files into Table1 and transform the data. Each row of data in the files will produce one row in the serving layer of Table1. You need to ensure that when the source data files are loaded to container1, the DateTime is stored as an additional column in Table1.
Solution: In an Azure Synapse Analytics pipeline, you use a Get Metadata activity that retrieves the DateTime of the files. Does this meet the goal?
A. Yes
B. No
Answer: B. No

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You have an Azure Synapse Analytics dedicated SQL pool that contains a table named Table1. You have files that are ingested and loaded into an Azure Data Lake Storage Gen2 container named container1. You plan to insert data from the files into Table1 and transform the data. Each row of data in the files will produce one row in the serving layer of Table1. You need to ensure that when the source data files are loaded to container1, the DateTime is stored as an additional column in Table1.
Solution: You use an Azure Synapse Analytics serverless SQL pool to create an external table that has an additional DateTime column. Does this meet the goal?
A. Yes
B. No
Answer: A. Yes

Note: This question is part of a series of questions that present the same scenario. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You have an Azure Synapse Analytics dedicated SQL pool that contains a table named Table1. You have files that are ingested and loaded into an Azure Data Lake Storage Gen2 container named container1. You plan to insert data from the files into Table1 and transform the data. Each row of data in the files will produce one row in the serving layer of Table1. You need to ensure that when the source data files are loaded to container1, the DateTime is stored as an additional column in Table1.
Solution: You use a dedicated SQL pool to create an external table that has an additional DateTime column. Does this meet the goal?
A. Yes
B. No
Answer: B. No

You plan to deploy an app that includes an Azure SQL database and an Azure web app. The app has the following requirements:
- The web app must be hosted on an Azure virtual network.
- The Azure SQL database must be assigned a private IP address.
- The Azure SQL database must allow connections only from the virtual network.
You need to recommend a solution that meets the requirements. What should you include in the recommendation?
A. Azure Private Link
B. a network security group (NSG)
C. a database-level firewall
D. a server-level firewall
Answer: A. Azure Private Link

You are planning a solution that will use Azure SQL Database. Usage of the solution will peak from October 1 to January 1 each year. During peak usage, the database will require the following:
- 24 cores
- 500 GB of storage
- 124 GB of memory
- More than 50,000 IOPS
During periods of off-peak usage, the service tier of Azure SQL Database will be set to Standard. Which service tier should you use during peak usage?
A. Business Critical
B. Premium
C. Hyperscale
Answer: A. Business Critical