Question 221
Your company is running its first dynamic campaign, serving different offers by analyzing real-time data during the holiday season. The data scientists are collecting terabytes of data that grows rapidly every hour during the 30-day campaign. They are using Google Cloud Dataflow to preprocess the data and collect the feature (signal) data needed for the machine learning model, storing it in Google Cloud Bigtable.
The team is observing suboptimal read and write performance during the initial load of 10 TB of data.
They want to improve this performance while minimizing cost. What should they do?
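For context on the write path this question describes, here is a minimal sketch of batched Bigtable writes with the google-cloud-bigtable Python client; the project, instance, table, and column-family names are assumptions for illustration, not part of the question. Distributing row keys (for example, with a salt prefix) avoids hotspotting a single node during a bulk load, which is one well-documented cause of poor initial-load performance.

    from google.cloud import bigtable

    # Assumed project/instance/table names, for illustration only.
    client = bigtable.Client(project="my-project")
    instance = client.instance("campaign-instance")
    table = instance.table("feature-signals")

    rows = []
    for i in range(1000):
        # Salted row key: the numeric prefix spreads sequential device IDs
        # across tablets so a bulk load does not hotspot one node.
        row_key = f"{i % 8}#device-{i:06d}".encode()
        row = table.direct_row(row_key)
        row.set_cell("signals", "offer_score", b"0.87")
        rows.append(row)

    # mutate_rows sends the batch in one RPC stream and returns
    # a per-row status list; non-zero codes indicate failed mutations.
    statuses = table.mutate_rows(rows)
    errors = [s for s in statuses if s.code != 0]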
Question 222
You currently have a single on-premises Kafka cluster in a data center in the us-east region that is responsible for ingesting messages from IoT devices globally. Because large parts of the globe have poor internet connectivity, messages sometimes batch at the edge, come in all at once, and cause a spike in load on your Kafka cluster.
This is becoming difficult to manage and prohibitively expensive. What is the Google-recommended cloud native architecture for this scenario?
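As a point of reference, Google's cloud-native, globally available messaging service for this kind of bursty ingestion is Cloud Pub/Sub, which absorbs traffic spikes without cluster capacity planning. A minimal publish sketch with the google-cloud-pubsub Python client follows; the project and topic names are assumptions.

    from google.cloud import pubsub_v1

    publisher = pubsub_v1.PublisherClient()
    # Assumed project and topic, for illustration only.
    topic_path = publisher.topic_path("my-project", "iot-telemetry")

    # Attributes let downstream consumers filter or route per device.
    future = publisher.publish(
        topic_path, b'{"temp": 21.5}', device_id="sensor-0042"
    )
    print(future.result())  # server-assigned message ID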
Question 223
If you're running a performance test that depends on Cloud Bigtable, all but one of the choices below are recommended steps. Which one is NOT a recommended step to follow?
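By way of illustration, here is a hedged sketch of the measurement side of such a test in Python. The warm-up loop reflects Bigtable's documented behavior of rebalancing tablets across nodes while it observes live traffic, so numbers recorded before warm-up are misleading. All names are assumptions, and the sketch assumes rows row-000000 through row-009999 were loaded beforehand.

    import time
    from google.cloud import bigtable

    client = bigtable.Client(project="my-project")
    table = client.instance("perf-instance").table("perf-table")

    def read_once(key: bytes) -> float:
        start = time.monotonic()
        table.read_row(key)
        return time.monotonic() - start

    # Warm-up: Bigtable rebalances data across nodes as it observes
    # traffic, so drive load for several minutes before recording numbers.
    warmup_end = time.monotonic() + 5 * 60
    while time.monotonic() < warmup_end:
        read_once(b"row-000123")

    # Measurement window.
    latencies = [read_once(b"row-%06d" % i) for i in range(10_000)]
    print("p50 (s):", sorted(latencies)[len(latencies) // 2])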
Question 224
Your company's on-premises Apache Hadoop servers are approaching end-of-life, and IT has decided to migrate the cluster to Google Cloud Dataproc. A like-for-like migration of the cluster would require 50 TB of Google Persistent Disk per node. The CIO is concerned about the cost of using that much block storage. You want to minimize the storage cost of the migration. What should you do?
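For background on the trade-off this question raises: Dataproc clusters ship with the Cloud Storage connector, so jobs can read and write gs:// paths directly instead of keeping data in HDFS on persistent disks. A minimal PySpark sketch of that pattern follows; the bucket and paths are assumptions.

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("gcs-backed-job").getOrCreate()

    # The Cloud Storage connector lets the job address gs:// paths
    # directly, avoiding large persistent-disk HDFS volumes.
    # Bucket and paths are assumptions, for illustration only.
    df = spark.read.parquet("gs://my-bucket/warehouse/events/")
    df.groupBy("event_type").count().write.parquet(
        "gs://my-bucket/output/event_counts/"
    )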
Question 225
You want to migrate an on-premises Hadoop system to Cloud Dataproc. Hive is the primary tool in use, and the data format is Optimized Row Columnar (ORC). All ORC files have been successfully copied to a Cloud Storage bucket. You need to replicate some data to the cluster's local Hadoop Distributed File System (HDFS) to maximize performance. What are two ways to start using Hive in Cloud Dataproc?
(Choose two.)
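Independent of which two answers apply, one concrete pattern this question alludes to is declaring a Hive external table over the ORC files already sitting in Cloud Storage. A minimal PySpark-with-Hive sketch follows; the table name, columns, and bucket path are assumptions.

    from pyspark.sql import SparkSession

    spark = (
        SparkSession.builder
        .appName("hive-orc-on-gcs")
        .enableHiveSupport()
        .getOrCreate()
    )

    # External table over ORC files already copied to Cloud Storage;
    # table, columns, and bucket path are assumptions for illustration.
    spark.sql("""
        CREATE EXTERNAL TABLE IF NOT EXISTS sales (
            order_id BIGINT,
            amount   DOUBLE
        )
        STORED AS ORC
        LOCATION 'gs://my-bucket/hive/sales/'
    """)
    spark.sql("SELECT COUNT(*) FROM sales").show()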