Question 131

For this question, refer to the TerramEarth case study. A new architecture that writes all incoming data to BigQuery has been introduced. You notice that the data is dirty and want to ensure data quality on an automated, daily basis while managing cost.
What should you do?
Question 132

For this question, refer to the Dress4Win case study.
At Dress4Win, an operations engineer wants to create a low-cost solution to remotely archive copies of database backup files. The database files are compressed tar files stored in their current data center. How should the engineer proceed?
Question 133

The application reliability team at your company has added a debug feature to their backend service to send all server events to Google Cloud Storage for eventual analysis. The event records are at least 50 KB and at most 15 MB and are expected to peak at 3,000 events per second. You want to minimize data loss.
Which process should you implement?
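The figures in this question can be sanity-checked with a quick back-of-envelope calculation; the event sizes and peak rate below come straight from the question, and the sketch only illustrates the scale the design must handle.

```python
# Back-of-envelope peak throughput for the event-logging scenario:
# events are 50 KB to 15 MB each, peaking at 3,000 events per second.
MIN_EVENT_BYTES = 50 * 1024            # 50 KB
MAX_EVENT_BYTES = 15 * 1024 * 1024     # 15 MB
PEAK_EVENTS_PER_SEC = 3_000

# Peak ingest rate in bytes/s for the smallest and largest events.
min_throughput = MIN_EVENT_BYTES * PEAK_EVENTS_PER_SEC
max_throughput = MAX_EVENT_BYTES * PEAK_EVENTS_PER_SEC

print(f"Peak ingest: {min_throughput / 1024**2:.0f} MB/s "
      f"to {max_throughput / 1024**3:.1f} GB/s")
# → Peak ingest: 146 MB/s to 43.9 GB/s
```

Numbers like these explain why the question emphasizes minimizing data loss: at hundreds of MB/s and up, transient write failures are inevitable, so the candidate answers typically differ in how events are buffered and retried before reaching Cloud Storage.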
Question 134

For this question, refer to the TerramEarth case study. You are asked to design a new architecture for the ingestion of the data of the 200,000 vehicles that are connected to a cellular network. You want to follow Google-recommended practices.
Considering the technical requirements, which components should you use for the ingestion of the data?
Question 135

The database administration team has asked you to help them improve the performance of their new database server running on Google Compute Engine. The database is used for importing and normalizing their performance statistics and is built with MySQL running on Debian Linux. They have an n1-standard-8 virtual machine with 80 GB of SSD persistent disk.
What should they change to get better performance from this system?
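One factor worth quantifying here is that persistent disk performance on Compute Engine scales with provisioned capacity (and is separately capped by the VM's vCPU count), so a small disk can bottleneck I/O regardless of CPU or RAM. The sketch below uses per-GB rates that are assumptions drawn from Google's published SSD persistent disk figures; they vary by machine type and may change over time.

```python
# Rough I/O ceiling implied by an 80 GB SSD persistent disk.
# The per-GB rates are ASSUMED from Google's published pd-ssd figures
# (30 IOPS/GB, 0.48 MB/s per GB) and are illustrative, not authoritative.
SSD_PD_IOPS_PER_GB = 30
SSD_PD_THROUGHPUT_MBPS_PER_GB = 0.48

disk_gb = 80  # the disk size stated in the question

iops_limit = disk_gb * SSD_PD_IOPS_PER_GB
throughput_limit = disk_gb * SSD_PD_THROUGHPUT_MBPS_PER_GB

print(f"~{iops_limit} IOPS, ~{throughput_limit:.0f} MB/s")
# → ~2400 IOPS, ~38 MB/s
```

This is a sizing sketch, not an answer key, but it shows why increasing the provisioned disk size is one common lever for database I/O performance on Compute Engine.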