AZ-104 Practice Test with Complete Solutions

Q1. Your company is planning to use Azure Container Instances to deploy simple cloud applications. You are tasked with determining if multi-container groups hosting multiple container instances meet your solution requirements. You need to identify features and requirements for multi-container groups, with each group hosting an application container, a logging container, and a monitoring container. For each of the following statements, select Yes if the statement is true. Otherwise, select No.
Statements:
1- Multi-container groups support Linux containers only.
2- You can deploy a multi-container group from a Resource Manager template or a YAML file.
3- Container groups can scale up to create additional container instances as necessary.
- ANSWER: 1-Yes. 2-Yes. 3-No.
Explanation:
1- Yes. Multi-container groups support Linux containers only. This is a current restriction for multi-container groups. Windows containers are limited to Azure Container Instances deployments that host a single container instance only.
2- Yes. You can deploy a multi-container group from a Resource Manager template or a YAML file. A Resource Manager template is recommended when you need to deploy additional Azure resources along with the container instances, and it is the preferred method for deploying multi-container groups.
3- No. Container groups and container instances do not support scaling. If additional container groups or container instances are needed, they must be explicitly created.

Q2. You create a FileStorage premium storage account and create a premium tier Azure file share. You plan to mount the file share directly on-premises using the Server Message Block (SMB) 3.0 protocol. You need to ensure that your network is configured to support mounting an Azure file share on-premises. You want to minimize the administrator effort necessary to accomplish this. What should you do?
A-Create an ExpressRoute circuit.
B-Install and configure Azure File Sync.
C-Configure TCP port 445 as open in your on-premises internet firewall.
D-Configure TCP port 443 as open in your on-premises internet firewall.
- ANSWER: C
Explanation: You should configure TCP port 445 as open in your on-premises internet firewall. This is the only requirement for mounting an Azure file share as an on-premises SMB file share on your on-premises network. You should not configure TCP port 443 as open in your on-premises internet firewall. This would be a requirement if you were configuring Azure File Sync and not using ExpressRoute. You should not install and configure Azure File Sync. This is not a requirement for mounting a file share on-premises. You would use Azure File Sync if you wanted to cache several Azure file shares on-premises or in cloud VMs. You should not create an ExpressRoute circuit. An ExpressRoute circuit provides a private connection between your on-premises network and the Microsoft cloud. By using ExpressRoute you do not need to configure the on-premises firewall, but this solution requires more administrative effort to implement and maintain.
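Note (illustrative): for the Q2 scenario you can verify from an on-premises machine that outbound TCP 445 is reachable before trying to mount the share. This is a minimal sketch; the storage account name below is hypothetical.
# TcpTestSucceeded should report True when the on-premises firewall allows outbound TCP 445
Test-NetConnection -ComputerName "mystorageacct.file.core.windows.net" -Port 445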
Q3. You deploy a line of business (LOB) application. All resources that are part of the LOB application are deployed in a single resource group. The resources were added in different phases. You need to export the current configuration of the LOB application resources to an Azure Resource Manager (ARM) template. You will later use this template for deploying the LOB application infrastructure in different environments for testing or development purposes. For each of the following statements, select Yes if the statement is true. Otherwise, select No.
Statements:
1- You need to export the ARM template from the latest deployment.
2- Each deployment contains only the resources that have been added in that deployment.
3- The parameters file contains the values used during the deployment.
4- The template contains the needed scripts for deploying the template.
- ANSWER: 1-No. 2-Yes. 3-Yes. 4-No.
Explanation:
1- No. You do not need to export the ARM template from the latest deployment. In this scenario, the LOB application was deployed in several phases. The latest deployment will export only the latest resources added to the application. If you want to export an ARM template with all the resources needed for the LOB application, you need to export the ARM template from the resource group.
2- Yes. Each deployment contains only the resources that have been added in that deployment. When you export an ARM template from a deployment, the template contains only the resources created during that deployment.
3- Yes. The parameters file contains the values used during the deployment. The parameters file is a JSON file that stores all the parameters used in the ARM template. You can use this file to reuse the template in different deployments, changing only the values in the parameters file. If you use this file with templates created from resource groups, you need to make significant edits to the template before you can effectively use the parameters file.
4- No. The template does not contain the scripts needed for deploying the template. When you download an ARM template from a deployment or a resource group, the downloaded package contains only the ARM template and the parameters file. Azure CLI and PowerShell deployment scripts are available from the Azure docs linked in the export template pane.
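Note (illustrative): exporting from the resource group (rather than from a single deployment), as the Q3 explanation recommends, can be done with Export-AzResourceGroup. This is a minimal sketch; the resource group name is hypothetical because the scenario does not name it.
# Exports one ARM template covering every resource currently in the group
Export-AzResourceGroup -ResourceGroupName "lob-rg" -Path ".\lob-template.json" -IncludeParameterDefaultValue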
Q4. You use taxonomic tags to logically organize resources and to make billing reporting easier. You use Azure PowerShell to append an additional tag on a storage account named corpstorage99. The code is as follows:
$r = Get-AzResource -ResourceName "corpstorage99" -ResourceGroupName "prodrg"
Set-AzResource -Tag @{ Dept = "IT" } -ResourceId $r.ResourceId -Force
The code returns unexpected results. You need to append the additional tag as quickly as possible. What should you do?
A-Refactor the code by using the Azure Command-Line Interface (CLI).
B-Deploy the tag by using an Azure Resource Manager template.
C-Edit the script to call the Add() method after getting the resource to append the new tag.
D-Assign the Enforce tag and its value Azure Policy to the resource group.
- ANSWER: C
Explanation: You should edit the script to call the Add() method after getting the resource to append the new tag, as shown in the second line of this refactored Azure PowerShell code:
$r = Get-AzResource -ResourceName "corpstorage99" -ResourceGroupName "prodrg"
$r.Tags.Add("Dept", "IT")
Set-AzResource -Tag $r.Tags -ResourceId $r.ResourceId -Force
Unless you call the Add() method, the Set-AzResource cmdlet will overwrite any existing taxonomic tags on the resource. The Add() method preserves existing tags and adds one or more tags to the resource tag list. You should not deploy the tag by using an Azure Resource Manager template. Doing so is unnecessary in this case because the Azure PowerShell is mostly complete as-is. Furthermore, you must find the solution as quickly as possible. You should not assign the Enforce tag and its value Azure Policy to the resource group. Azure Policy is a governance feature that helps businesses enforce compliance in resource creation. In this case, that solution involves too much administrative overhead to be a viable option. Moreover, the scenario makes no mention of the need for governance policy in specific terms. You should not refactor the code by using the Azure Command-Line Interface (CLI). Either Azure PowerShell or Azure CLI can be used to implement this solution. It makes no sense to change the scripting language, since you have already completed most of the code in PowerShell.

Q5. You manage an ASP.NET Core application that runs in an Azure App Service named app1. The app connects to a storage account named storage1 that uses an access key stored in an app setting. Both app1 and storage1 are provisioned in a resource group named rg1. For security reasons, you need to regenerate the storage1 access keys without interrupting the connection with app1. How should you complete the command? To answer, select the appropriate options from the drop-down menus.
key=$(az storage account keys list --resource-group rg1 --account-name storage1 (1))
az webapp config appsettings set --resource-group rg1 --name app1 --settings STORAGE_ACCOUNT_KEY=$key
az storage account keys renew --resource-group rg1 --account-name storage1 (2)
key=$(az storage account keys list --resource-group rg1 --account-name storage1 (3))
az webapp config appsettings set --resource-group rg1 --name app1 --settings STORAGE_ACCOUNT_KEY=$key
az storage account keys renew --resource-group rg1 --account-name storage1 (4)
Choose the correct options for (1) (2) (3) (4):
A- --key primary
B- --key secondary
C- --query [0].value
D- --query [1].value
- ANSWER: (1) D (2) A (3) C (4) B
Explanation: To retrieve the primary key, use --query [0].value. To retrieve the secondary key, use --query [1].value. To renew a key, use --key primary or --key secondary. Switching app1 to the secondary key first lets you renew the primary key without interrupting the connection, and the process is then repeated the other way around for the secondary key.

Q6. You have a virtual machine (VM) named VM1 in the West Europe region. VM1 has a network interface named NIC1. NIC1 is attached to a VNet named VNet1. VM1 has one managed disk (OS disk). You need to move VM1 to VNet2. VNet2 is located in the West Europe region. Which two actions should you perform? Each correct answer presents part of the solution.
A-Delete VM1.
B-Create VNet peering between VNet1 and VNet2.
C-Create a new VM using the existing disk from VM1.
D-Deallocate VM1.
- ANSWER: A, C
Explanation: You should delete VM1. This is necessary because the VNet of a VM cannot be changed. When deleting the VM, the associated disk will not be deleted. You should then create a new VM using the existing disk from VM1. You should use the same settings, but the VM should be connected to VNet2. You should not deallocate VM1. This only shuts down the VM and releases the compute resources; the VM must be deleted. To redeploy the VM into the new virtual network, the easiest approach is to delete the VM, but not any disks attached to it, and then re-create the VM using the original disks in the target virtual network. (See: Virtual networks and virtual machines in Azure | Microsoft Docs.)
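Note (illustrative): for the Q6 scenario, the delete-and-recreate flow could look roughly like the sketch below. The resource group name, VM size, OS disk name, and new NIC name are hypothetical; only VM1, VNet2, and West Europe come from the question.
# Delete the VM; its managed OS disk is kept
Remove-AzVM -ResourceGroupName "rg1" -Name "VM1" -Force
# Build a new VM configuration that attaches the existing OS disk
$disk = Get-AzDisk -ResourceGroupName "rg1" -DiskName "VM1-osdisk"
$vm = New-AzVMConfig -VMName "VM1" -VMSize "Standard_D2s_v3"
$vm = Set-AzVMOSDisk -VM $vm -ManagedDiskId $disk.Id -CreateOption Attach -Windows
# Create a NIC in a subnet of VNet2 and attach it to the new VM
$vnet2 = Get-AzVirtualNetwork -ResourceGroupName "rg1" -Name "VNet2"
$nic = New-AzNetworkInterface -ResourceGroupName "rg1" -Name "NIC2" -Location "westeurope" -SubnetId $vnet2.Subnets[0].Id
$vm = Add-AzVMNetworkInterface -VM $vm -Id $nic.Id
New-AzVM -ResourceGroupName "rg1" -Location "westeurope" -VM $vm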
Q7. Your on-premises datacenter has a mixture of servers running Windows Server 2012 R2 Datacenter edition and Windows Server 2016 Datacenter edition. You need to configure the Azure File Sync service between the Azure Files service and the servers in the datacenter. Which two activities must you complete to ensure that the service will operate successfully on your servers? Each correct answer presents part of the solution.
A-Ensure that the PowerShell version deployed to the servers is at minimum version 5.1.
B-Ensure that Active Directory Federation Services (ADFS) is deployed to all servers.
C-Disable Internet Explorer Enhanced Security for Admins and Users.
D-Ensure that for file server clusters, Azure Active Directory Connect is deployed to at least one server in the cluster.
E-Disable Internet Explorer Enhanced Security for Admins only.
- ANSWER: A, C
Explanation: To enable Azure File Sync, you must disable Internet Explorer Enhanced Security for all admin and user accounts. Azure File Sync requires a minimum PowerShell version of 5.1. Windows Server 2016 includes that version by default, but it may have to be installed on Windows Server 2012 R2 servers. Active Directory Federation Services (ADFS) and Azure Active Directory Connect do not need to be installed on the file servers in the environment. Azure Active Directory Connect is used to synchronize on-premises identities to Azure Active Directory (Azure AD) and so is needed in the overall environment, but not on the file servers.

Q8. You are configuring storage for an Azure Kubernetes Service (AKS) cluster. You want to create a custom StorageClass. You use the kubectl command to apply the following YAML file:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: managed-disk-forapp
provisioner: kubernetes.io/azure-disk
reclaimPolicy: Retain
parameters:
  storageaccounttype: Standard_LRS
  kind: Managed
You need to determine the impact of using this storage class when configuring persistent volumes. For each of the following statements, select Yes if the statement is true. Otherwise, select No.
Statements:
1- Managed disks use Azure Premium storage.
2- When the pod claiming a disk is deleted, the underlying Azure Disk is maintained.
3- A configured Managed Disk can be shared by multiple pods.
- ANSWER: 1-No. 2-Yes. 3-No.
Explanation:
1- No. Managed disks configured using this storage class use Standard storage rather than Premium storage. This is specified by the storageaccounttype parameter. For Premium storage, this line would read: storageaccounttype: Premium_LRS
2- Yes. When the pod claiming a disk is deleted, the underlying Azure Disk is maintained, retaining its data, and it can be reused. This is because the reclaimPolicy is specified as Retain.
3- No. A configured Managed Disk cannot be shared by multiple pods. Azure Disk storage cannot be shared between multiple pods or nodes. You must use Azure Files to support shared data access instead.
Q9. You administer an Azure environment at Company1. You are requested to restrict access for the administrator Admin1 to a portion of Azure Active Directory (Azure AD). You create the administrative unit AdminUnit1 and configure it as shown in the exhibits Administrative unit users, Administrative unit groups, and Administrative unit admin. The configuration of the security group Group1 is shown in the exhibit Security group. You need to identify the Azure AD objects that can be administered by Admin1. Which Azure AD objects should you identify?
Exhibit:
• User Administrator "Admin1" is assigned responsibility for the scope of administrative unit "AdminUnit1".
• Security group "Group1" has 3 direct members: User1, User2, User3.
• Administrative unit "AdminUnit1" has users User1 and User2 added directly.
• Administrative unit "AdminUnit1" has group Group1 added directly.
A-User1 and User2 only
B-User1, User2, and Group1 only
C-Group1 only
D-User1, User2, and User3 only
- ANSWER: B
Explanation: Admin1 can administer User1, User2, and Group1 only. With Azure administrative units, you can restrict access to a portion of Azure AD. In this way, it is possible to restrict Admin1's administrative access to the user and group objects that Admin1 is responsible for. Administrative units can only contain users and groups. Adding a security group to an administrative unit does not allow the administrative unit administrator to manage properties for individual members of that group. To allow the administrative unit administrator to manage individual members of the group, each group member must be added directly as a user to the administrative unit. In this scenario, Group1 is added to AdminUnit1, and its members User1 and User2 are also added directly as users. Therefore, only these Azure AD objects can be administered by Admin1. Admin1 cannot administer User1, User2, and User3 only. Although Admin1 can modify properties of User1 and User2, User3 is out of the administrative scope of AdminUnit1 and, as such, out of the administrative scope of Admin1. To allow Admin1 to modify User3, this user must be added directly as a user of AdminUnit1. Admin1 cannot administer Group1 only. Although properties of Group1 can be modified by Admin1, it is not the only Azure AD object that can be modified by Admin1 in this scenario. Admin1 cannot administer User1 and User2 only. Although the properties of User1 and User2 can be modified by Admin1, they are not the only Azure AD objects that can be modified by Admin1 in this scenario.

Q10. You need to create an Azure Availability Set in Central US named AS1. You are planning to deploy eight virtual machines (VMs) to AS1 to run an IIS web application. You need to configure AS1. You have the following requirements:
• During planned maintenance of the VM hosts, at least six VMs must be available at all times. The VMs must be restarted in groups of two.
• The VMs must be physically separated from each other as much as possible.
How should you configure the Availability Set? To answer, select the appropriate options from the drop-down menus.
Fault domains: ? Update domains: ?
- ANSWER: Fault domains: 3, Update domains: 4
Explanation: You should set fault domains to 3. This is the maximum number of fault domains in the Central US region. VMs in the same fault domain share hardware like power sources and physical network switches. VMs in a different fault domain are physically separated. By setting the fault domains to the maximum value, the VMs are physically separated as much as possible. You should set update domains to 4. The VMs will be divided among these four update domains, so each update domain will contain two VMs. Azure performs planned maintenance of the hypervisors for one update domain at a time. In this case, two VMs will be restarted at the same time.
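Note (illustrative): creating the availability set from Q10 with those values could look like this sketch; the resource group name is hypothetical, and -Sku Aligned is an assumption based on the VMs using managed disks.
# 3 fault domains and 4 update domains, per the Q10 answer
New-AzAvailabilitySet -ResourceGroupName "web-rg" -Name "AS1" -Location "centralus" `
    -PlatformFaultDomainCount 3 -PlatformUpdateDomainCount 4 -Sku Aligned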
Q11. Your company plans to release a new web application. This application is deployed by using an App Service in Azure and will be available to users of the domain. You have already purchased the domain name. You configure the Azure DNS zone and delegate it to Azure DNS. You need to ensure that the web application can be accessed by using the domain name. You decide to use PowerShell to accomplish this task. How should you complete the command? To answer, select the appropriate options from the drop-down menus.
New-AzDnsRecordSet -Name (1) -RecordType (2) `
    -ZoneName "" -ResourceGroupName "APP-RG" -Ttl 600 `
    -DnsRecords (New-AzDnsRecordConfig -IPv4Address "<IP address>")
New-AzDnsRecordSet -ZoneName "" -ResourceGroupName "APP-RG" `
    -Name (3) -RecordType (4) -Ttl 600 `
    -DnsRecords (New-AzDnsRecordConfig -Value "")
Choose the correct options:
(1) A-"", B-"", C-"@"
(2) A-"A", B-"AAAA", C-"CNAME", D-"TXT"
(3) A-"", B-"", C-"@"
(4) A-"A", B-"AAAA", C-"CNAME", D-"TXT"
- ANSWER: (1) C-"@" (2) A-"A" (3) C-"@" (4) D-"TXT"
Explanation: You need to create an A record that points to the IP address of the App Service that hosts the web application. Because you need your application to be accessed by using the domain name, you need to use the special name "@" that represents the root of the domain. You need to use an A record type because the public IP address of the App Service is an IPv4 address. You need to create an additional TXT record. This record is needed by the App Service to verify the custom domain name for the App Service. Because you want your application to be accessed by the domain name, you again need to use the special name "@" that represents the root of the domain. You should not use the other values offered for the Name parameter. You need to configure a DNS record for the root of the domain; if you use any of these values, you will get a DNS record for a name under the domain rather than for the root. You should not use an AAAA record type. This record type is used for IPv6 addresses, and you need to create a record for an IPv4 address. You should not use a CNAME record type. In the first step, you used an IPv4 address; a CNAME record cannot contain an IPv4 address as its value. This record type only allows fully qualified domain names. Also, you need to create a DNS record to verify the App Service custom domain, and you are required to use a TXT record for this verification, not a CNAME.
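Note (illustrative): with the Q11 answers filled in, the completed commands look like this sketch. The zone name and TXT verification value are left as placeholders because they are not given in the source.
# A record at the zone apex ("@") pointing to the App Service public IPv4 address
New-AzDnsRecordSet -Name "@" -RecordType A -ZoneName "<zone name>" -ResourceGroupName "APP-RG" -Ttl 600 `
    -DnsRecords (New-AzDnsRecordConfig -IPv4Address "<IP address>")
# TXT record at the apex that App Service uses to verify the custom domain
New-AzDnsRecordSet -Name "@" -RecordType TXT -ZoneName "<zone name>" -ResourceGroupName "APP-RG" -Ttl 600 `
    -DnsRecords (New-AzDnsRecordConfig -Value "<verification value>")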
Q12. Your company has an Azure subscription with one virtual network (VNet) named VNet1. VNet1 includes the subnets and virtual machines (VMs) shown in the Subnets exhibit. You create and associate the network security groups (NSGs) shown in the Security Groups exhibit. You need to determine how the security rules in the NSGs are processed. For each of the following statements, select Yes if the statement is true. Otherwise, select No.
Exhibit (subnet - connected virtual machines):
Subnet1 - VM1, VM2, VM3
Subnet2 - VM4, VM5
Subnet3 - VM6, VM7
Exhibit (network security group - associated with):
NSG1 - Subnet1
NSG2 - VM1
NSG3 - Subnet3
Statements:
1- For incoming traffic to VM1, NSG1 applies before NSG2.
2- For traffic between VM1 and VM2, only NSG2 applies.
3- For traffic between VM6 and VM7, NSG3 applies.
- ANSWER: 1-Yes. 2-No. 3-Yes.
Explanation:
1- Yes. For incoming traffic to VM1, NSG1 applies before NSG2. Incoming traffic rules in NSG1 are processed before the incoming traffic rules in NSG2 because NSG1 is associated at the subnet level. For inbound traffic, the rules are processed in the NSG associated with the subnet first, and then the rules in an NSG associated with the network interface. For outgoing traffic, the rules in NSGs associated with the VM network interface are processed before the NSGs associated at the subnet level.
2- No. For traffic between VM1 and VM2, both NSG1 and NSG2 apply. NSG2 applies to any traffic into or out of VM1. NSG1 applies to any traffic between the VMs in Subnet1.
3- Yes. For traffic between VM6 and VM7, NSG3 applies. NSG rules apply to traffic into or out of a subnet and between VMs in a subnet.

Q13. You host a line-of-business (LOB) web application in a virtual network (VNet) in Azure. A site-to-site virtual private network (S2S VPN) connection links your on-premises environment with the Azure VNet. You plan to use a network security group (NSG) to restrict inbound traffic into the VNet to the following IPv4 address ranges:
• 192.168.2.0/24
• 192.168.4.0/24
• 192.168.8.0/24
Your solution must meet the following technical requirements:
• Limit rule scope only to the three IPv4 address ranges.
• Minimize the number of NSG rules.
• Minimize future administrative maintenance efforts.
What should you do?
A-Define three NSG rules (one per IPv4 address range).
B-Pass the IPv4 address range 192.168.0.0/22 into the NSG rule.
C-Define an NSG rule that includes the VirtualNetwork service tag.
D-Pass the three IPv4 address ranges into the NSG rule as a comma-separated list.
- ANSWER: D
Explanation: You should pass the three IPv4 address ranges into the NSG rule as a comma-separated list. NSGs in Azure allow you to specify individual IP addresses or address ranges either individually or as a comma-separated list. This reduces the number of NSG rules you would otherwise have to create to meet your use case.
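Note (illustrative): one inbound rule covering all three ranges from Q13 might be defined as in the sketch below; the rule name, priority, protocol, ports, and destination are assumptions, since the question only fixes the source ranges.
# A single inbound rule whose source covers all three on-premises ranges
$ranges = @("192.168.2.0/24", "192.168.4.0/24", "192.168.8.0/24")
$rule = New-AzNetworkSecurityRuleConfig -Name "allow-onprem-ranges" -Access Allow -Direction Inbound `
    -Priority 100 -Protocol "*" -SourceAddressPrefix $ranges -SourcePortRange "*" `
    -DestinationAddressPrefix "VirtualNetwork" -DestinationPortRange "443"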
Q14. Your company has an Azure subscription that hosts virtual machines (VMs) that run the company applications. Your company also has three branch offices named BO-01, BO-02, and BO-03. The company headquarters, called HQ, are distributed between the United States and Europe. All resources hosted in Azure are connected to the same virtual network named VNet01. You configure a site-to-site (S2S) VPN between each office and VNet01. Some users from different offices report connectivity issues with the applications in VNet01. All users in the offices use Windows 10. You need to troubleshoot these connectivity issues. You decide to use the Network Performance Monitor (NPM). For each of the following statements, select Yes if the statement is true. Otherwise, select No.
Statements:
1- You need to deploy the NPM agent on all the office computers.
2- You need to use the ICMP protocol.
3- You need to create custom monitoring rules.
4- You need to allow communication through TCP port 8084 in the company firewalls.
5- The results from the ICMP protocol are more accurate than the results from the TCP protocol.
- ANSWER: 1-No. 2-Yes. 3-Yes. 4-No. 5-No.
Explanation:
1- No. You should not deploy the NPM agent on all the office computers. When you are troubleshooting network connectivity, you do not need to deploy the agent on each client. You only need to install the agent on at least one computer in each subnet that you need to analyze. It is also important to properly define the network and subnets in the network security group (NSG) in Azure.
2- Yes. You should use the ICMP protocol. The NPM can use two different protocols, ICMP and TCP, to troubleshoot subnet connectivity. In this situation, you need to use ICMP because the client Windows computers do not support raw TCP sockets.
3- Yes. You need to create custom monitoring rules. By default, the NPM learns to automatically set network thresholds and alerts from the traffic between your agents and your network. These alerts are based on synthetic transactions from all agents to all agents. In a small network, these default monitoring rules can generate an acceptable amount of information, but for medium to large networks, like the one in this scenario, this can be unmanageable. You should disable the default monitoring rules and create custom rules that fit your specific troubleshooting needs.
4- No. You should not allow communication through TCP port 8084 in the company firewalls. In this scenario, you are using the ICMP protocol to measure agent traffic quality. You do not need to open TCP port 8084 because you are not using the TCP protocol to troubleshoot connectivity issues.
5- No. The results from the ICMP protocol are less accurate than the results from the TCP protocol. The NPM uses ICMP ECHO packets for the synthetic transactions that it uses to troubleshoot agent connectivity. Routers and switches tend to assign lower priority to ICMP ECHO packets in favor of other TCP/UDP traffic. If your network is under heavy load, routers and switches can discard or delay the delivery of ICMP ECHO packets. This may lead to less accurate results.

Q15. You configure the zone in Azure DNS. You have an A record set named app that points to an App Service that hosts a web application. You need to make this application available by using the domain name. This new domain name needs to point to the public IP address of the App Service. You need to ensure that the DNS record for this new domain name is updated or deleted automatically in case the target DNS record is modified or deleted. Which type of record set should you create?
A-A record set
B-A alias record set
C-CNAME record set
D-CNAME alias record set
- ANSWER: B
Explanation: You should create an A alias record set. An A alias record set is a special type of record set that allows you to create an alternative name for a record set in your domain zone or for resources in your subscription. This is different from a CNAME record type because the alias record set will be updated or deleted if the target record set is modified or deleted. You can only create an A alias record set that points to A, AAAA, or CNAME record types in an Azure DNS zone. You should not use a CNAME alias record set. The custom domain name for your web application is represented by an A record set, and a CNAME alias record set can only point to another CNAME record set. Moreover, the value returned by a CNAME alias record set is a domain name, and you are required to create a DNS record that returns an IPv4 address. This means that you need an A alias record set. You should not use an A record set. This record set type will not be automatically updated or deleted if the target record is modified or deleted. You should not use a CNAME record set. This record set type will not be automatically updated or modified if the target record is modified or deleted. You are also required to create a DNS record that returns an IPv4 address, which again means that you need an A alias record set.

Q16. You plan to configure block blob object replication between storage accounts in two different regions. You need to ensure that Azure Storage features are configured to support object storage replication. You want to minimize the configuration changes that you make. How should you configure Azure Storage features? To answer, select the configuration settings from the drop-down menus.
Change feed: ? Blob versioning: ?
A-Source account only
B-Destination account only
C-Both the source and destination account
- ANSWER: Change feed: A-Source account only. Blob versioning: C-Both the source and destination account.
Explanation: You should configure the change feed feature for the source account only. The change feed provides transaction log support for changes made to blobs and blob metadata in your source storage account. As an ordered, guaranteed, durable, immutable, read-only log of changes, the change feed enables robust block blob object replication from the source to the destination storage account. You should enable blob versioning in both the source and destination storage accounts. Blob versioning is necessary to automatically maintain previous versions of blob objects. This provides a path to restore an earlier version of a blob to recover your data if it is erroneously modified or deleted. Lack of support for blob versioning in accounts that have a hierarchical namespace is a reason that block blob object replication is not supported for Azure Data Lake Storage Gen2.
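Note (illustrative): the Q16 prerequisites can be enabled with Update-AzStorageBlobServiceProperty. This is a sketch under the assumption that both accounts live in a resource group named rg1 and are called srcaccount and dstaccount (hypothetical names).
# Source account: change feed plus blob versioning
Update-AzStorageBlobServiceProperty -ResourceGroupName "rg1" -StorageAccountName "srcaccount" -EnableChangeFeed $true -IsVersioningEnabled $true
# Destination account: blob versioning only
Update-AzStorageBlobServiceProperty -ResourceGroupName "rg1" -StorageAccountName "dstaccount" -IsVersioningEnabled $true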
Q17. You plan to deploy 90 application gateways to Azure. These gateways will provide public access to 90 different web applications hosted on virtual machines in Azure. You need to recommend a monitoring solution that can provide a consolidated view into the types of requests being made over the internet to the application gateways. The solution must meet the following requirements:
• Be able to query all requests being blocked by the Web Application Firewall (WAF) gateway.
• Support analysis of metrics collected for the number of requests being made over a period of up to one month.
• Provide insights into the performance of the application gateways in serving various pages to users.
What should you recommend for the monitoring solution?
A-Azure Log Analytics
B-Azure Sentinel
C-Azure Application Insights
D-Azure Monitor
- ANSWER: D
Explanation: You should recommend using Azure Monitor. Azure Monitor is a comprehensive solution for collecting, analyzing, and acting on telemetry from your cloud environments. Azure resources like application gateways that provide diagnostic logging feed various metrics and logs to storage accounts that the Azure Monitor tools can help you query, visualize, and build alerts on to take actions mitigating any outages or security events. You should not recommend Application Insights. Application Insights provides an application performance monitoring (APM) framework that can generate telemetry data, but it will not provide a comprehensive view of WAF logs, and alerting across the gateways is not built into Application Insights for this scenario. You should not recommend Azure Sentinel. Azure Sentinel primarily focuses on Security Information and Event Management (SIEM) and Security Orchestration Automated Response (SOAR) functionality. This tool can be used to collect threat data, investigate, and respond to threats using artificial intelligence algorithms. You should not recommend Azure Log Analytics. Log Analytics provides a platform to run queries against a large amount of logs collected from various sources. You can write Kusto queries to slice data and run analytics to determine patterns and frequencies of events occurring on resources configured to feed information to Log Analytics. You cannot build alerts or carry out performance analysis using this tool alone.
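Note (illustrative): once the gateways send diagnostics to a Log Analytics workspace, blocked WAF requests can be queried through Azure Monitor Logs. This sketch assumes diagnostic logs land in the AzureDiagnostics table and that the workspace ID is known; the field names shown are typical for Application Gateway WAF logs but should be verified against your own data.
# Count blocked WAF requests per URI over the last 30 days
$query = "AzureDiagnostics | where Category == 'ApplicationGatewayFirewallLog' and action_s == 'Blocked' | summarize BlockedRequests = count() by requestUri_s"
Invoke-AzOperationalInsightsQuery -WorkspaceId "<workspace guid>" -Query $query -Timespan (New-TimeSpan -Days 30)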
Q18. You have a Windows Server 2019 file server deployed in your on-premises infrastructure. You want to deploy a file server hybrid solution. You decide to use Azure File Sync. For each of the following statements, select Yes if the statement is true. Otherwise, select No.
Statements:
1- You can use cloud tiering with server endpoints on the system volume.
2- The data tiering free space policy applies to each server endpoint individually.
3- For tiered files, the media file type will be partially downloaded as needed.
4- The free space policy takes precedence over any other policy.
5- You can sync files in a mount point inside a server endpoint.
- ANSWER: 1-No. 2-No. 3-Yes. 4-Yes. 5-No.
Explanation:
1- No. You cannot use cloud tiering with server endpoints on the system volume. You can create endpoints on the system volume, but those files will not be tiered. This means that all files in the server endpoint will be synced with the configured cloud endpoint.
2- No. The data tiering free space policy does not apply to each server endpoint individually. You can configure a policy for each server endpoint individually, but the most restrictive free space policy applies to the entire volume. This means that if you configure two server endpoints in the same volume with two distinct policies, for example 20% and 40%, the 40% free space policy will be applied. The free space tiering policy forces the sync system to start tiering, or moving data to the cloud, when the free space limit is reached. When the sync system tiers a file, it creates a pointer in the file system, and the actual data is moved to Azure. You can still list the tiered file, but the real data is no longer stored on your local disk.
3- Yes. For tiered files, a media file will be partially downloaded as needed. When you try to access a tiered file, it is automatically downloaded in full, transparently. The exception is for those file types that can be read even if the data has not been completely downloaded, like media files or zip files.
4- Yes. The free space policy takes precedence over any other policy. You can configure date and free space policies on the same server endpoint, but the free space policy will always have precedence over the date policy. This means that if you configure a 60-day date policy and a free space policy for the same server endpoint, and the volume reaches the free space threshold, the sync system will tier the files that have been unmodified for the longest time (the coolest files), even if they were modified fewer than 60 days ago.
5- No. You cannot sync files in a mount point inside a server endpoint. You can use a mount point as a server endpoint, but you cannot have mount points inside a server endpoint. In that case, all files in the server endpoint will be synced except those files stored inside each mount point in the endpoint.

Q19. Your company's Azure environment consists of two virtual networks (VNets) with the following topology:
• prod-vnet: 9 virtual machines (VMs)
• dev-vnet: 9 virtual machines (VMs)
The VMs in prod-vnet should run continuously. The VMs in dev-vnet are used only between 7:00 A.M. and 7:00 P.M. local time. You need to automate the shutdown and startup of the dev-vnet VMs to reduce the organization's monthly Azure costs. Which Azure feature should you use?
A-Azure Auto-shutdown
B-Azure Change Tracking
C-Azure Automation Desired State Configuration (DSC)
D-Azure Automation runbook
- ANSWER: D
Explanation: You should create an Azure Automation runbook. Azure Automation is a management solution that allows you to publish PowerShell or Python scripts in Azure and optionally schedule Azure to run them automatically. In this case, the best practice is to write a PowerShell workflow script that automates VM startup and shutdown, and then bind the script to two Azure Automation schedules: one to describe shutdown time, and the other to describe startup time. You should not use Azure Automation Desired State Configuration (DSC). DSC is a PowerShell feature that prevents configuration drift on your Azure and/or on-premises servers. For example, you could deploy a DSC configuration that prevents server services from stopping. You should not use Azure Auto-shutdown. This feature, part of Azure DevTest Labs, allows you to schedule Azure VMs to shut down at the same time every day or night. However, this feature does not provide for automated VM startup. You should not use Azure Change Tracking. Change Tracking is an IT service management (ITSM) feature that is part of the Azure Automation service and records all configuration changes to your Azure VM resources.
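Note (illustrative): a runbook for the Q19 scenario could be as simple as the sketch below, bound to two schedules (one passing -Action Stop in the evening, one passing -Action Start in the morning). The resource group name and the use of a managed identity are assumptions not stated in the question.
param (
    [ValidateSet("Start", "Stop")]
    [string] $Action
)
# Sign in with the Automation account's system-assigned managed identity
Connect-AzAccount -Identity
# Hypothetical resource group holding the dev VMs
$vms = Get-AzVM -ResourceGroupName "dev-rg"
foreach ($vm in $vms) {
    if ($Action -eq "Stop") {
        Stop-AzVM -ResourceGroupName $vm.ResourceGroupName -Name $vm.Name -Force
    }
    else {
        Start-AzVM -ResourceGroupName $vm.ResourceGroupName -Name $vm.Name
    }
}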
Q20. You want to install SQL Server 2019 on an Azure Windows virtual machine (VM). You need to ensure that the VM has a Service Level Agreement (SLA) of 99.9 percent. Your solution must minimize costs. Which value should you choose for each configuration option? To answer, drag the appropriate value to each configuration property. A value may be used once, more than once, or not at all.
Configuration: Size - (1), OS disk type - (2), Data storage type - (3)
Choose the correct options:
(1) A-Standard_D4_v2, B-Standard_DS4_v2, C-Standard_A4_v2, D-Standard_A8_v2
(2) A-Premium SSD, B-Standard SSD, C-Standard HDD
(3) A-Premium SSD, B-Standard SSD, C-Standard HDD
- ANSWER: (1) B-Standard_DS4_v2 (2) A-Premium SSD (3) A-Premium SSD
Explanation: You should set the size to Standard_DS4_v2. For an Azure single-instance VM, an SLA of 99.9 percent connectivity will only be guaranteed if all disks are Premium SSD or Ultra Disk. Not all Azure VM sizes support premium storage. The Standard_DS4_v2 VM size supports premium storage.

Q21. Your company has an Azure subscription. You create a Recovery Services vault named RSV1. You have a virtual machine (VM) named VM1 that is deployed in the East US region. You create a backup policy for backing up VM1 to RSV1 on a recurring schedule. You are preparing to run your first backup and find that the Backup Pre-Check status displays a status of Warning. You need to determine the possible cause of this status. Which condition would result in a Warning status?
A-The most recent VM agent is not installed on VM1.
B-VM1 has a non-premium storage account.
C-VM1 cannot communicate with the Azure Backup service.
D-VM1 is an unmanaged Azure VM encrypted with BitLocker encryption keys (BEKs).
- ANSWER: A
Explanation: One possible reason for a Warning status during the Backup Pre-Check is that the most recent VM agent is not installed on VM1. A Warning status indicates that the backup process might fail. The report status provides recommended steps to ensure successful backups. A status of Critical would be reported if VM1 cannot communicate with the Azure Backup service. A Critical status indicates that the current VM configuration will result in a backup failure. A situation where VM1 has a non-premium storage account will not report a Warning status; this is a supported configuration. Having VM1 as an unmanaged Azure VM encrypted with BEKs will not result in a Warning status. Backups of managed and unmanaged VMs encrypted with BEKs are supported by Azure Backup.
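Note (illustrative): the Q20 combination (a DS-series size with Premium SSD OS and data disks) could be expressed in a VM configuration like the sketch below; the VM name, disk names, sizes, resource group, and region are hypothetical.
# DS-series size that supports premium storage
$vm = New-AzVMConfig -VMName "sql01" -VMSize "Standard_DS4_v2"
# Premium SSD OS disk
$vm = Set-AzVMOSDisk -VM $vm -Name "sql01-osdisk" -CreateOption FromImage -StorageAccountType Premium_LRS -Windows
# Premium SSD data disk for SQL Server data files
$diskConfig = New-AzDiskConfig -SkuName Premium_LRS -Location "eastus" -CreateOption Empty -DiskSizeGB 1024
$dataDisk = New-AzDisk -ResourceGroupName "sql-rg" -DiskName "sql01-data" -Disk $diskConfig
$vm = Add-AzVMDataDisk -VM $vm -Name "sql01-data" -CreateOption Attach -ManagedDiskId $dataDisk.Id -Lun 0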
Q22. You are preparing a private deployment template that will be saved to an Azure Storage account. You need to make sure that access to the template is protected by a shared access signature (SAS) token. How should you complete the command? To answer, select the appropriate parts from the drop-down menus.
$templateuri = (1) `
    -Container private `
    -Blob SAS.json `
    -Permission r `
    -ExpiryTime (Get-Date).AddHours(2.0)
New-AzResourceGroup -Name RG1 -Location "West US"
New-AzResourceGroupDeployment -ResourceGroupName RG1 -(2) $templateuri
Choose the correct options:
(1) A-New-AzStorageBlobSASToken, B-New-AzStorageShareSASToken, C-New-AzStorageContainerStoredAccessPolicy
(2) A-TemplateUri, B-TemplateParameterFile, C-TemplateFile
- ANSWER: (1) A-New-AzStorageBlobSASToken (2) A-TemplateUri
Explanation: You should use the New-AzStorageBlobSASToken cmdlet to generate the SAS token for the storage account. The SAS token is valid for a specified time interval. You can also specify the permission level; typically, this would be the read permission. You should not use the New-AzStorageShareSASToken cmdlet. It is used to generate a SAS token for an Azure file share. You should not use the New-AzStorageContainerStoredAccessPolicy cmdlet. It is used to create a stored access policy for an Azure storage container. You should use the TemplateUri parameter in the New-AzResourceGroupDeployment cmdlet because the template is stored in an external resource. You should not use the TemplateFile parameter because the template is stored in an external resource. You would use the TemplateFile parameter if you supplied the template file from a local computer. You should not use the TemplateParameterFile parameter because the template is stored in an external resource. You would use the TemplateParameterFile parameter to specify values for the template parameters.

Q23. Your company's Azure environment consists of the following resources:
• 4 virtual networks (VNets)
• 48 Windows Server and Linux virtual machines (VMs)
• 6 general-purpose storage accounts
You need to design a universal monitoring solution that enables you to query across all diagnostic and telemetry data emitted by resources. What should you do first?
A-Install the Microsoft Monitoring Agent.
B-Create a Log Analytics workspace.
C-Activate resource diagnostic settings.
D-Enable Network Watcher.
- ANSWER: B
Explanation: You should create a Log Analytics workspace. Azure Log Analytics is the central resource monitoring platform in Azure. The Log Analytics workspace is the data warehouse to which associated resources send their telemetry data. Azure Log Analytics has its own query language with which you can generate reports that stretch across all your Azure deployments and management solutions. You should not install the Microsoft Monitoring Agent (MMA) first. This agent is indeed required to associate Windows physical and virtual servers (on-premises and in Azure). However, Log Analytics automatically deploys the MMA to Azure virtual machines when you onboard them to your Log Analytics workspace. You should not enable Network Watcher. Network Watcher is a virtual network diagnostics platform. While you can link Network Watcher to Azure Log Analytics, you still need to create the Log Analytics workspace first. You should not activate resource diagnostic settings first. Before Microsoft developed Log Analytics, administrators were required to configure diagnostic settings on a per-resource level. This is no longer necessary because the Microsoft Monitoring Agent configures nodes to send their diagnostics logs to a Log Analytics workspace.
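Note (illustrative): creating the workspace from Q23 is a one-liner; the resource group, workspace name, location, and SKU below are hypothetical.
# Pay-as-you-go (PerGB2018) workspace that the VMs and other resources can then be onboarded to
New-AzOperationalInsightsWorkspace -ResourceGroupName "monitor-rg" -Name "corp-logs-workspace" -Location "eastus" -Sku "PerGB2018"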
Q24. You are asked to configure Azure DNS records for the root domain and add two records to that zone for independently hosted websites on different servers but using the same alias of "www". These servers will round-robin the DNS requests for high availability of the service. The time to live for the records must also be set to 1 hour. You need to configure Azure DNS to support the requirements. How should you complete the Azure PowerShell script? To answer, select the appropriate options from the drop-down menus.
(1) -Name "@" -RecordType A -ZoneName "" `
    -ResourceGroupName "MyResourceGroup" -Ttl (2) `
    -DnsRecords ( (3) -IPv4Address "1.2.3.4")
$aRecords = @()
$aRecords += (4) -IPv4Address "2.3.4.5"
$aRecords += (5) -IPv4Address "3.4.5.6"
(6) -Name "www" -ZoneName "" `
    -ResourceGroupName MyResourceGroup -Ttl (7) `
    -RecordType A -DnsRecords $aRecords
Choose the correct options:
(1) A-Set-AzDnsRecordConfig, B-New-AzDnsRecordConfig, C-New-AzDnsRecordSet, D-New-AzDnsZone
(2) A-1, B-60, C-3600
(3) A-Set-AzDnsRecordConfig, B-New-AzDnsRecordConfig, C-New-AzDnsRecordSet, D-New-AzDnsZone
(4) A-Set-AzDnsRecordConfig, B-New-AzDnsRecordConfig, C-New-AzDnsRecordSet, D-New-AzDnsZone
(5) A-Set-AzDnsRecordConfig, B-New-AzDnsRecordConfig, C-New-AzDnsRecordSet, D-New-AzDnsZone
(6) A-Set-AzDnsRecordConfig, B-New-AzDnsRecordConfig, C-New-AzDnsRecordSet, D-New-AzDnsZone
(7) A-1, B-60, C-3600
- ANSWER: (1) C-New-AzDnsRecordSet (2) C-3600 (3) B-New-AzDnsRecordConfig (4) B-New-AzDnsRecordConfig (5) B-New-AzDnsRecordConfig (6) C-New-AzDnsRecordSet (7) C-3600
Explanation: When configuring the root of a new DNS zone, you first have to configure the root element at the apex of the zone. This is done by using the New-AzDnsRecordSet cmdlet with the name "@". This completes the first part of the requirement. The Time To Live (TTL) should be set to 1 hour in both places. With DNS entries this is configured in seconds, so 3600 seconds is used for this value. Following the configuration of the zone apex, you need to set two records for the "www" alias. These are both created using the New-AzDnsRecordConfig cmdlet, assigning the two IP addresses as elements of an array. Finally, you call New-AzDnsRecordSet again against the same zone and declare the records in the record set as the array created earlier. You should not use New-AzDnsZone anywhere in this script, since the question already states the zone is created and needs a root record. You should not use Set-AzDnsRecordConfig anywhere in this script, since that cmdlet is used to modify an existing record set, but your aim is to create new ones.
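Note (illustrative): with the Q24 answers filled in, the completed script looks like the sketch below. The zone name is shown as a placeholder because it is elided in the source.
New-AzDnsRecordSet -Name "@" -RecordType A -ZoneName "<zone name>" `
    -ResourceGroupName "MyResourceGroup" -Ttl 3600 `
    -DnsRecords (New-AzDnsRecordConfig -IPv4Address "1.2.3.4")
$aRecords = @()
$aRecords += New-AzDnsRecordConfig -IPv4Address "2.3.4.5"
$aRecords += New-AzDnsRecordConfig -IPv4Address "3.4.5.6"
# Two A records behind the same "www" name give simple round-robin resolution
New-AzDnsRecordSet -Name "www" -ZoneName "<zone name>" `
    -ResourceGroupName "MyResourceGroup" -Ttl 3600 -RecordType A -DnsRecords $aRecords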
Q25. You deploy an application in a resource group named App-RG01 in your Azure subscription. App-RG01 contains the following components:
• Two App Services, each with an SSL certificate
• A peered virtual network (VNet)
• Redis cache deployed in the VNet
• Standard Load Balancer
You need to move all resources in App-RG01 to a new resource group named App-RG02. For each of the following statements, select Yes if the statement is true. Otherwise, select No.
Statements:
1- You need to delete the SSL certificate from each App Service before moving it to the new resource group.
2- You can move the Load Balancer only within the same subscription.
3- You need to disable the peering before moving the VNet.
4- You can move the VNet only within the same subscription.
- ANSWER: 1-Yes. 2-No. 3-Yes. 4-Yes.
Explanation:
1- You need to delete the SSL certificate from each App Service before moving it to the new resource group. You cannot move an App Service with an SSL certificate configured. If you want to do that, you need to delete the certificate, move the App Service, and then upload the certificate again.
2- You cannot move the Load Balancer within the same subscription. A Standard Load Balancer cannot be moved either within the same subscription or between subscriptions.
3- You need to disable the peering before moving the VNet. When you want to move a VNet with a peering configured, you need to disable the peering before moving the VNet. When you move a VNet, you need to move all of its dependent resources.
4- You can only move the VNet within the same subscription. When you want to move a VNet, you also need to move all of its dependent resources. In this case, you also need to move the Redis cache, which can be moved only within the same subscription.
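Note (illustrative): the movable resources in Q25 can be transferred with Move-AzResource. This sketch moves everything in App-RG01 at once and assumes the unsupported items (such as the Standard Load Balancer and the App Service certificates) have already been removed or excluded from the list.
# Collect the resources in the source group and move them to App-RG02
$resources = Get-AzResource -ResourceGroupName "App-RG01"
Move-AzResource -DestinationResourceGroupName "App-RG02" -ResourceId $resources.ResourceId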
