Question 11

You are designing a dimension table for a data warehouse. The table will track the value of the dimension attributes over time and preserve the history of the data by adding new rows as the data changes.
Which type of slowly changing dimension (SCD) should you use?
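
For reference, the pattern the question describes is row versioning: when an attribute changes, the existing row is closed off and a brand-new row is added, so the full history is preserved. Below is a minimal PySpark sketch of that pattern; the column names (SurrogateKey, StartDate, EndDate, IsCurrent) and values are hypothetical and not taken from the question.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("scd-row-versioning").getOrCreate()

# Current state of the dimension: one active row per customer (hypothetical columns).
dim = spark.createDataFrame(
    [(1, 101, "Alice", "Seattle", "2023-01-01", None, True)],
    "SurrogateKey INT, CustomerId INT, Name STRING, City STRING, "
    "StartDate STRING, EndDate STRING, IsCurrent BOOLEAN",
)

# An incoming change: customer 101 has moved to Portland.
change = spark.createDataFrame(
    [(101, "Alice", "Portland")], "CustomerId INT, Name STRING, City STRING"
)

# Close off the existing row instead of overwriting it (dates are illustrative).
expired = (
    dim.join(change.select("CustomerId"), "CustomerId")
       .withColumn("EndDate", F.lit("2024-06-01"))
       .withColumn("IsCurrent", F.lit(False))
)

# Add a brand-new row that carries the changed attribute values.
new_rows = change.select(
    F.lit(2).alias("SurrogateKey"),  # next surrogate key, hard-coded for the sketch
    "CustomerId", "Name", "City",
    F.lit("2024-06-01").alias("StartDate"),
    F.lit(None).cast("string").alias("EndDate"),
    F.lit(True).alias("IsCurrent"),
)

# Rows for customers that did not change are kept as they are.
unchanged = dim.join(change.select("CustomerId"), "CustomerId", "left_anti")

# History is preserved: the old row remains, the new row becomes the current one.
unchanged.unionByName(expired).unionByName(new_rows).show()
```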
    Question 12

    You have an Azure Data Lake Storage Gen2 account that contains a JSON file for customers. The file contains two attributes named FirstName and LastName.
    You need to copy the data from the JSON file to an Azure Synapse Analytics table by using Azure Databricks. A new column must be created that concatenates the FirstName and LastName values.
    You create the following components:
    A destination table in Azure Synapse
    An Azure Blob storage container
    A service principal
    Which five actions should you perform in sequence next in a Databricks notebook? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.
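
For reference, a minimal PySpark sketch of the general flow such a notebook follows; the storage account, container, JDBC URL, and table names are placeholders, not values from the question: authenticate to Data Lake Storage with the service principal, read the JSON file, derive the concatenated column, and write to the Synapse table through the Azure Synapse connector, which stages data in the Blob storage container.

```python
from pyspark.sql import functions as F

# `spark` is the SparkSession provided by the Databricks notebook.
# Service-principal credentials for ADLS Gen2 access (placeholder values).
spark.conf.set("fs.azure.account.auth.type.<storage>.dfs.core.windows.net", "OAuth")
spark.conf.set("fs.azure.account.oauth.provider.type.<storage>.dfs.core.windows.net",
               "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider")
spark.conf.set("fs.azure.account.oauth2.client.id.<storage>.dfs.core.windows.net", "<app-id>")
spark.conf.set("fs.azure.account.oauth2.client.secret.<storage>.dfs.core.windows.net", "<secret>")
spark.conf.set("fs.azure.account.oauth2.client.endpoint.<storage>.dfs.core.windows.net",
               "https://login.microsoftonline.com/<tenant-id>/oauth2/token")

# 1. Read the JSON file from the Data Lake into a DataFrame.
customers = spark.read.json("abfss://data@<storage>.dfs.core.windows.net/customers.json")

# 2. Add the new column that concatenates FirstName and LastName.
customers = customers.withColumn("FullName", F.concat_ws(" ", "FirstName", "LastName"))

# 3. Write to the Azure Synapse table, staging through the Blob storage container.
(customers.write
    .format("com.databricks.spark.sqldw")
    .option("url", "jdbc:sqlserver://<server>.database.windows.net:1433;database=<db>")
    .option("tempDir", "wasbs://staging@<blobaccount>.blob.core.windows.net/tmp")
    .option("forwardSparkAzureStorageCredentials", "true")
    .option("dbTable", "dbo.Customers")
    .mode("append")
    .save())
```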

    Question 13

    You need to implement a Type 3 slowly changing dimension (SCD) for product category data in an Azure Synapse Analytics dedicated SQL pool.
    You have a table that was created by using the following Transact-SQL statement.

    Which two columns should you add to the table? Each correct answer presents part of the solution.
    NOTE: Each correct selection is worth one point.
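
For background, a minimal PySpark sketch of the Type 3 pattern, using hypothetical column names (CurrentProductCategory, PreviousProductCategory) rather than the question's actual table: a change overwrites the row in place and shifts the old value into a "previous" column, so no new rows are added.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("scd-type3").getOrCreate()

# Product dimension with Type 3 tracking: current and previous category on the same row.
dim = spark.createDataFrame(
    [(10, "Road Bike", "Bikes", None)],
    "ProductKey INT, ProductName STRING, "
    "CurrentProductCategory STRING, PreviousProductCategory STRING",
)

# Incoming change: the product is reclassified into a new category.
change = spark.createDataFrame([(10, "E-Bikes")], "ProductKey INT, NewCategory STRING")

# Type 3: update the row in place, shifting the old value into the 'previous' column.
updated = (
    dim.join(change, "ProductKey", "left")
       .withColumn("PreviousProductCategory",
                   F.when(F.col("NewCategory").isNotNull(), F.col("CurrentProductCategory"))
                    .otherwise(F.col("PreviousProductCategory")))
       .withColumn("CurrentProductCategory",
                   F.coalesce(F.col("NewCategory"), F.col("CurrentProductCategory")))
       .drop("NewCategory")
)
updated.show()
```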
    Question 14

    You need to collect application metrics, streaming query events, and application log messages for an Azure Databricks cluster.
    Which type of library and workspace should you implement? To answer, select the appropriate options in the answer area.
    NOTE: Each correct selection is worth one point.

    Question 15

    You have an Azure Data Lake Storage account that contains a staging zone.
    You need to design a daily process to ingest incremental data from the staging zone, transform the data by executing an R script, and then insert the transformed data into a data warehouse in Azure Synapse Analytics.
    Solution: You use an Azure Data Factory schedule trigger to execute a pipeline that executes an Azure Databricks notebook, and then inserts the data into the data warehouse.
    Does this meet the goal?