Question 6
Universal Containers (UC) has over 10 million accounts, with an average of 20 opportunities per account. A Sales Executive at UC needs to generate a daily report for all opportunities in a specific opportunity stage.
Which two key considerations should be made to make sure the performance of the report is not degraded due to large data volume?
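For context, a common mitigation at this volume is to keep the report filter selective (indexed fields, a bounded date range) and to stream results rather than loading them all at once. A minimal sketch using the simple-salesforce Python library; the credentials, stage value, and date window are hypothetical:

```python
from simple_salesforce import Salesforce

# Hypothetical credentials; use a secure auth flow in practice.
sf = Salesforce(username="user@example.com", password="...", security_token="...")

# StageName alone is rarely selective enough against ~200M opportunities,
# so pair it with an indexed, range-limited field (CloseDate here) and
# request only the columns the report needs.
soql = (
    "SELECT Id, Name, Amount, CloseDate "
    "FROM Opportunity "
    "WHERE StageName = 'Negotiation' "   # hypothetical stage value
    "AND CloseDate = LAST_N_DAYS:90"     # keeps the scan bounded
)

# query_all_iter streams result pages instead of holding the
# full result set in memory.
for opp in sf.query_all_iter(soql):
    print(opp["Id"], opp["Name"])
```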
Question 7
Company S was recently acquired by Company T. As part of the acquisition, all of the data in Company S's Salesforce instance (source) must be migrated into Company T's Salesforce instance (target). Company S has 6 million Case records.
An Architect has been tasked with optimizing the data load time.
What should the Architect consider to achieve this goal?
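As background, the levers that usually matter for a load like this are the Bulk API in parallel mode, large batch sizes, and temporarily deferring sharing recalculation and disabling triggers in the target org. A hedged sketch with simple-salesforce; the CSV file name and field mapping are assumptions:

```python
import csv
from simple_salesforce import Salesforce

sf = Salesforce(username="admin@target.example.com", password="...", security_token="...")

# Read the extracted Case data (file name and columns are hypothetical).
# For the full 6M rows you would chunk the file rather than load it whole.
with open("company_s_cases.csv", newline="") as f:
    records = [
        {"Subject": row["Subject"], "Status": row["Status"], "Origin": row["Origin"]}
        for row in csv.DictReader(f)
    ]

# Bulk API in parallel mode (use_serial=False) with large batches keeps the
# load fast; defer sharing rules and triggers in Setup beforehand so each
# batch does less work.
results = sf.bulk.Case.insert(records, batch_size=10000, use_serial=False)
failures = [r for r in results if not r["success"]]
print(f"{len(failures)} failed rows")
```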
Question 8
Universal Containers is planning its archiving and purging strategy for the custom objects Topic__c and Comment__c. Several options are being considered, including analytic snapshots, offsite storage, and scheduled purges.
Which three questions should be considered when designing an appropriate archiving strategy?
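One of the options mentioned, a scheduled purge, might look like the sketch below: export records past their retention window to offsite storage, then hard-delete them. The two-year cutoff and the Comment__c fields are assumptions, and hard delete requires the "Bulk API Hard Delete" permission:

```python
import csv
from simple_salesforce import Salesforce

sf = Salesforce(username="admin@example.com", password="...", security_token="...")

# Pull Comment__c records older than the (assumed) two-year retention window.
old = sf.query_all(
    "SELECT Id, Name, Topic__c FROM Comment__c WHERE CreatedDate < LAST_N_YEARS:2"
)["records"]

# Archive to offsite storage before anything is deleted.
with open("comment_archive.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["Id", "Name", "Topic__c"])
    writer.writeheader()
    for rec in old:
        writer.writerow({k: rec[k] for k in ("Id", "Name", "Topic__c")})

# Hard delete bypasses the Recycle Bin so storage is reclaimed immediately.
sf.bulk.Comment__c.hard_delete([{"Id": rec["Id"]} for rec in old])
```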
Question 9
Universal Containers has implemented Sales Cloud to manage patient and related health records. During a recent security audit of the system, it was discovered that some standard and custom fields need to be encrypted.
Which solution should a data architect recommend to encrypt existing fields?
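Enabling encryption (classic Encrypted Text or Shield Platform Encryption) is a Setup/metadata change rather than code, but a quick field audit can clarify the starting point. A sketch that lists classic encrypted fields via the describe call; Shield encryption is not visible this way and must be verified in Setup, and the object choice here is an assumption:

```python
from simple_salesforce import Salesforce

sf = Salesforce(username="auditor@example.com", password="...", security_token="...")

# Classic "Encrypted Text" fields report type 'encryptedstring' in describe
# results; Shield Platform Encryption does not change the field type.
desc = sf.Contact.describe()
for field in desc["fields"]:
    if field["type"] == "encryptedstring":
        print(field["name"], "is a classic encrypted field")
```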
Question 10
Universal Containers (UC) has a Salesforce org with multiple automated processes defined for group membership processing. UC also has multiple admins on staff who perform manual adjustments to the role hierarchy. The automated tasks and manual tasks overlap daily, and UC is experiencing "lock errors" consistently.
What should a data architect recommend to mitigate these errors?
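For background on the usual mitigation: group-membership and role-hierarchy operations take org-wide locks, so automated loads that touch them are often run in serial mode (one batch at a time) and scheduled away from manual admin work; granular locking can also be enabled by Salesforce support. A sketch of a serial-mode bulk update with simple-salesforce; the object, field, and IDs are hypothetical placeholders:

```python
from simple_salesforce import Salesforce

sf = Salesforce(username="integration@example.com", password="...", security_token="...")

# Hypothetical ownership update that indirectly recalculates sharing and
# group membership. use_serial=True processes one batch at a time, so
# batches cannot contend with each other for the group-membership lock.
updates = [
    {"Id": "001000000000001AAA", "OwnerId": "005000000000001AAA"},  # placeholder IDs
]
results = sf.bulk.Account.update(updates, batch_size=2000, use_serial=True)
print(results)
```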