Databricks Certified Data Engineer
Professional Exam Actual Questions
An upstream system has been configured to pass the date for a given batch of data to
the Databricks Jobs API as a parameter. The notebook to be scheduled will use this
parameter to load data with the following code:
df = spark.read.format("parquet").load(f"/mnt/source/{date}")
Which code block should be used to create the date Python variable used in the above code block? - answerE.
dbutils.widgets.text("date", "null")
date = dbutils.widgets.get("date")
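A minimal sketch of the full pattern, assuming it runs inside a Databricks notebook where dbutils and spark are available and the Jobs API supplies the "date" parameter:
dbutils.widgets.text("date", "null")    # register the widget with a default value
date = dbutils.widgets.get("date")      # read the value passed by the Jobs API
df = spark.read.format("parquet").load(f"/mnt/source/{date}")    # load that day's batch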
The Databricks workspace administrator has configured interactive clusters for each of
the data engineering groups. To control costs, clusters are set to terminate after 30
minutes of inactivity. Each user should be able to execute workloads against their
assigned clusters at any time of the day. Assuming users have been added to a
workspace but not granted any permissions, which of the following describes the
minimal permissions a user would need to start and attach to an already configured
cluster? - answerD. "Can Restart" privileges on the required cluster
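As a hedged sketch, the grant itself could be applied with the Databricks Permissions API; the workspace host, token, cluster ID, and user name below are placeholders, not values from the question:
import requests

host = "https://<workspace-host>"                            # placeholder workspace URL
resp = requests.patch(
    f"{host}/api/2.0/permissions/clusters/<cluster-id>",     # placeholder cluster ID
    headers={"Authorization": "Bearer <token>"},             # placeholder token
    json={"access_control_list": [
        {"user_name": "user@example.com", "permission_level": "CAN_RESTART"}
    ]},
)
resp.raise_for_status()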
A junior member of the data engineering team is exploring the language interoperability
of Databricks notebooks. The intended outcome of the code below is to register a view
of all sales that occurred in countries on the continent of Africa that appear in the
geo_lookup table. Before executing the code, running SHOW TABLES on the current
database indicates the database contains only two tables: geo_lookup and sales.
Which statement correctly describes the outcome of executing these command cells in
order in an interactive notebook? - answerCmd 1 will succeed and Cmd 2 will fail.
countries_af will be a Python variable containing a list of strings.
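The original command cells are not reproduced in this dump; the reconstruction below is an assumption (including the country and continent column names) meant only to show why the outcome holds: the %python cell builds an ordinary Python list, while the %sql cell fails because SQL cannot reference a Python variable.
# Cmd 1 (%python): succeeds - countries_af is a plain Python list of strings.
countries_af = [row.country for row in
                spark.table("geo_lookup").filter("continent = 'AF'").select("country").collect()]

# Cmd 2 (%sql): fails - the SQL parser has no visibility into the Python variable.
# CREATE VIEW sales_af AS
#   SELECT * FROM sales WHERE country IN countries_af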
The data engineering team has configured a job to process customer requests to be
forgotten (have their data deleted). All user data that needs to be deleted is stored in
Delta Lake tables using default table settings. The team has decided to process all
deletions from the previous week as a batch job at 1am each Sunday. The total duration
of this job is less than one hour. Every Monday at 3am, a batch job executes a series of
VACUUM commands on all Delta Lake tables throughout the organization. The
compliance officer has recently learned about Delta Lake's time travel functionality.
They are concerned that this might allow continued access to deleted data. Assuming all
delete logic is correctly implemented, which statement correctly addresses this
concern? - answerBecause the default data retention threshold is 7 days, data files
containing deleted records will be retained until the VACUUM job is run 8 days later.
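A minimal sketch of the two scheduled jobs, using hypothetical table names (user_data, forget_requests) and default retention settings:
# Sunday 1am job: apply the deletes.
spark.sql("""
  DELETE FROM user_data
  WHERE user_id IN (SELECT user_id FROM forget_requests)
""")

# Monday 3am job: with the default 7-day (168-hour) retention threshold, VACUUM
# only removes data files older than that threshold, so the files still holding the
# just-deleted records remain time-travel-accessible until a later VACUUM run.
spark.sql("VACUUM user_data")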
An upstream system is emitting change data capture (CDC) logs that are being written
to a cloud object storage directory. Each record in the log indicates the change type
(insert, update, or delete) and the values for each field after the change. The source
table has a primary key identified by the field pk_id. For auditing purposes, the data
governance team wishes to maintain a full record of all values that have ever been valid
in the source system. For analytical purposes, only the most recent value for each
record needs to be recorded. The Databricks job to ingest these records occurs once
per hour, but each individual record may have changed multiple times over the course
of an hour. Which solution meets these requirements? - answerIngest all log information
into a bronze table; use MERGE INTO to insert, update, or delete the most recent entry
for each pk_id into a silver table to recreate the current table state.
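A hedged sketch of the hourly silver update; the table names (bronze_cdc, silver_current) and the change_type and change_timestamp fields are assumptions, and the silver table is assumed to share the log schema:
from pyspark.sql import functions as F, Window

# Keep only the most recent change per pk_id from the newly ingested bronze data.
w = Window.partitionBy("pk_id").orderBy(F.col("change_timestamp").desc())
latest = (spark.table("bronze_cdc")
          .withColumn("rn", F.row_number().over(w))
          .filter("rn = 1")
          .drop("rn"))
latest.createOrReplaceTempView("latest_changes")

# MERGE the deduplicated changes into the silver "current state" table.
spark.sql("""
  MERGE INTO silver_current AS t
  USING latest_changes AS s
    ON t.pk_id = s.pk_id
  WHEN MATCHED AND s.change_type = 'delete' THEN DELETE
  WHEN MATCHED THEN UPDATE SET *
  WHEN NOT MATCHED AND s.change_type != 'delete' THEN INSERT *
""")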
A table in the Lakehouse named customer_churn_params is used in churn prediction by
the machine learning team. The table contains information about customers derived
from a number of upstream sources. Currently, the data engineering team populates
this table nightly by overwriting the table with the current valid values derived from
upstream data sources. The churn prediction model used by the ML team is fairly stable
in production. The team is only interested in making predictions on records that have
changed in the past 24 hours.
Which approach would simplify the identification of these changed records? -
answerReplace the current overwrite logic with a merge statement to modify only those
records that have changed; write logic to make predictions on the changed records
identified by the change data feed.
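A sketch of how the changed records could then be pulled, assuming customer_churn_params is now maintained with MERGE and has delta.enableChangeDataFeed = true:
from datetime import datetime, timedelta

# Read only rows changed in roughly the last 24 hours from the change data feed.
start_ts = (datetime.utcnow() - timedelta(hours=24)).strftime("%Y-%m-%d %H:%M:%S")
changed = (spark.read.format("delta")
           .option("readChangeFeed", "true")
           .option("startingTimestamp", start_ts)
           .table("customer_churn_params")
           .filter("_change_type IN ('insert', 'update_postimage')"))
# The churn model then scores only `changed` rather than the full table.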
Which statement characterizes the general programming model used by Spark
Structured Streaming? - answerStructured Streaming models new data arriving in a
data stream as new rows appended to an unbounded table.
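As an illustration of that model (the events source table and the in-memory sink are assumptions), each micro-batch is treated as new rows appended to an unbounded input table and the aggregate result is updated incrementally:
# Streaming read: the "events" table is treated as an unbounded, ever-growing input.
counts = (spark.readStream
          .table("events")             # assumed streaming source table
          .groupBy("event_type")
          .count())

# Each trigger incrementally updates the aggregate as new rows arrive.
query = (counts.writeStream
         .format("memory")
         .queryName("event_counts")
         .outputMode("complete")
         .start())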
Each configuration below is identical to the extent that each cluster has 400 GB total of
RAM, 160 total cores, and only one Executor per VM. Given a job with at least one wide
transformation, which of the following cluster configurations will result in maximum
performance? - answerTotal VMs: 1; 400 GB per Executor; 160 cores per Executor
A junior data engineer seeks to leverage Delta Lake's Change Data Feed functionality
to create a Type 1 table representing all of the values that have ever been valid for all
rows in a bronze table created with the property delta.enableChangeDataFeed = true.
They plan to execute the following code as a daily job:
from pyspark.sql.functions import col

(spark.read.format("delta")
    .option("readChangeFeed", "true")
    .option("startingVersion", 0)
    .table("bronze")
    .filter(col("_change_type").isin(["update_postimage", "insert"]))
    .write