Question 86

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You are designing an Azure SQL Database that will use elastic pools. You plan to store data about customers in a table. Each record will include a value for CustomerID.
You need to recommend a strategy to partition data based on values in CustomerID.
Proposed Solution: Separate data into shards by using horizontal partitioning.
Does the solution meet the goal?
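
For reference, horizontal partitioning (sharding) keeps the same table schema in every database and spreads rows across databases according to a partition key such as CustomerID; in Azure SQL Database this is commonly managed with the Elastic Database client library and a shard map. The snippet below is a minimal sketch of the idea using a hypothetical range-based shard map; the shard ranges and connection strings are placeholders, not a real deployment.

```python
# Minimal illustration of range-based horizontal partitioning (sharding) on CustomerID.
# The shard map and connection strings are hypothetical placeholders.

SHARD_MAP = [
    # (low_inclusive, high_exclusive, connection_string)
    (1, 50_000, "Server=shard0.database.windows.net;Database=CustomersShard0"),
    (50_000, 100_000, "Server=shard1.database.windows.net;Database=CustomersShard1"),
]

def resolve_shard(customer_id: int) -> str:
    """Return the connection string of the shard that owns this CustomerID."""
    for low, high, conn in SHARD_MAP:
        if low <= customer_id < high:
            return conn
    raise ValueError(f"CustomerID {customer_id} is outside every shard range")

if __name__ == "__main__":
    # Customers 42 and 73210 land in different databases, but each shard
    # stores the same table schema, which is what horizontal partitioning means.
    print(resolve_shard(42))      # shard0
    print(resolve_shard(73_210))  # shard1
```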

Question 87

You manage a process that performs analysis of daily web traffic logs on an HDInsight cluster. Each of the 250 web servers generates approximately 10 megabytes (MB) of log data each day. All log data is stored in a single folder in Microsoft Azure Data Lake Storage Gen2.
You need to improve the performance of the process.
Which two changes should you make? Each correct answer presents a complete solution.
NOTE: Each correct selection is worth one point.
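
For context, 250 servers each writing roughly 10 MB per day produces thousands of small files over time, and distributed engines pay a per-file scheduling overhead, so jobs over one folder of small files slow down. The sketch below shows one hypothetical mitigation, a pre-processing step that compacts a day's small logs into fewer, larger files before the HDInsight job reads them; the paths, file extension, and target output size are assumptions, not part of the scenario.

```python
# Hypothetical compaction step: merge one day's small per-server logs into larger
# files so the downstream job reads fewer, bigger inputs. Paths are illustrative.
import glob
import os

TARGET_SIZE = 256 * 1024 * 1024  # aim for roughly 256 MB per output file

def compact_logs(input_dir: str, output_dir: str) -> None:
    os.makedirs(output_dir, exist_ok=True)
    out_index, out_size, out_file = 0, 0, None
    for path in sorted(glob.glob(os.path.join(input_dir, "*.log"))):
        # Start a new output file when none is open or the current one is full.
        if out_file is None or out_size >= TARGET_SIZE:
            if out_file:
                out_file.close()
            out_file = open(os.path.join(output_dir, f"compacted-{out_index:04d}.log"), "wb")
            out_index, out_size = out_index + 1, 0
        with open(path, "rb") as src:
            data = src.read()
            out_file.write(data)
            out_size += len(data)
    if out_file:
        out_file.close()
```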

Question 88

You are designing an application that will have an Azure virtual machine. The virtual machine will access an Azure SQL database. The database will not be accessible from the Internet.
You need to recommend a solution to provide the required level of access to the database.
What should you include in the recommendation?
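
Whichever private connectivity option the recommendation uses (for example, a virtual network service endpoint or a private endpoint on the logical SQL server), the virtual machine still connects to the server's usual fully qualified name while the traffic stays off the public internet. The sketch below is a hypothetical connectivity check run from the VM; the server name, database, credentials, and the use of pyodbc with ODBC Driver 18 are assumptions for illustration only.

```python
# Hypothetical connectivity check run from the Azure VM after private access is set up.
# Server, database, and credentials are placeholders; pyodbc and ODBC Driver 18 assumed.
import pyodbc

conn_str = (
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=contoso-sql.database.windows.net;"  # resolves to a private IP when a private endpoint is used
    "DATABASE=SalesDb;"
    "UID=app_user;PWD=<password>;"
    "Encrypt=yes;"
)

with pyodbc.connect(conn_str, timeout=10) as conn:
    row = conn.cursor().execute("SELECT @@SERVERNAME;").fetchone()
    print("Connected to:", row[0])
```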

Question 89

You need to design the disaster recovery solution for customer sales data analytics.
Which three actions should you recommend? Each correct answer presents part of the solution.
NOTE: Each correct selection is worth one point.

Question 90

You design data engineering solutions for a company.
You must integrate on-premises SQL Server data into an Azure solution that performs Extract, Transform, and Load (ETL) operations. The solution has the following requirements:
* Develop a pipeline that can integrate data and run notebooks.
* Develop notebooks to transform the data.
* Load the data into a massively parallel processing database for later analysis.
You need to recommend a solution.
What should you recommend? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
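
The requirements call for a pipeline service that can orchestrate notebook runs, a notebook environment for the transformations, and a massively parallel processing target for analysis. As one illustration of the transform-and-load step only, the sketch below is a minimal PySpark notebook cell that writes a transformed DataFrame to a dedicated SQL pool through the Databricks Azure Synapse connector; the storage paths, JDBC URL, and table name are placeholders, and connector option names can differ between runtime versions.

```python
# Hypothetical notebook cell: transform staged data and load it into an MPP database
# (a dedicated SQL pool) via the Azure Synapse connector. All names are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Aggregate raw orders into a daily per-customer total.
orders = (
    spark.read.parquet("abfss://staging@contosolake.dfs.core.windows.net/orders/")
    .withColumn("order_date", F.to_date("order_timestamp"))
    .groupBy("customer_id", "order_date")
    .agg(F.sum("amount").alias("daily_total"))
)

# Write the result to the MPP target through a temporary staging location.
(
    orders.write.format("com.databricks.spark.sqldw")
    .option("url", "jdbc:sqlserver://contoso-synapse.database.windows.net:1433;database=SalesDW")
    .option("forwardSparkAzureStorageCredentials", "true")
    .option("dbTable", "dbo.DailyCustomerSales")
    .option("tempDir", "abfss://tempdata@contosolake.dfs.core.windows.net/synapse-staging/")
    .mode("append")
    .save()
)
```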