A user, analyst_user, has been granted the analyst_role and is deploying a SnowSQL script to run as a background service to extract data from Snowflake. What steps should be taken to ensure the user can only access Snowflake from specific IP addresses? (Select TWO).
Correct Answer: B,D
To ensure that analyst_user can only access Snowflake from specific IP addresses, the following steps are required:
* Option B: This alters the network policy directly linked to analyst_user. Setting a network policy at the user level is effective and ensures that the specified network restrictions apply directly and exclusively to this user.
* Option D: Before a network policy can be set or altered, the appropriate role with permission to manage network policies must be used. SECURITYADMIN is typically the role that has privileges to create and manage network policies in Snowflake. Creating a network policy that specifies allowed IP addresses ensures that only requests coming from those IPs can access Snowflake under this policy. After creation, this policy can be linked to specific users or roles as needed.
Options A and E mention altering roles or using the wrong role (USERADMIN typically does not manage network security settings), and option C incorrectly attempts to set a network policy directly as an IP address, which is neither syntactically nor functionally valid.
References: Snowflake's security management documentation covering network policies and role-based access controls.
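A minimal sketch of these two steps follows; the policy name and IP ranges are illustrative:

    USE ROLE SECURITYADMIN;

    -- Create a network policy listing the allowed addresses
    -- (policy name and CIDR ranges are illustrative)
    CREATE NETWORK POLICY analyst_policy
      ALLOWED_IP_LIST = ('192.168.1.0/24', '203.0.113.10');

    -- Attach the policy directly to the user so the restriction
    -- applies exclusively to analyst_user
    ALTER USER analyst_user SET NETWORK_POLICY = 'analyst_policy';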
Question 127
A stream stores data with the same columns as the source data but with additional columns. What are those additional columns?
Correct Answer: A,B,D
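For reference, the additional columns a stream exposes are the change-tracking metadata columns METADATA$ACTION, METADATA$ISUPDATE, and METADATA$ROW_ID. A minimal sketch, using a hypothetical orders source table:

    -- Create a stream that records changes to the source table
    CREATE STREAM orders_stream ON TABLE orders;

    -- A query against the stream returns the source columns plus
    -- three metadata columns describing each change record
    SELECT order_id,           -- source column (illustrative)
           METADATA$ACTION,    -- INSERT or DELETE
           METADATA$ISUPDATE,  -- TRUE when the change is part of an UPDATE
           METADATA$ROW_ID     -- unique, immutable row identifier
    FROM orders_stream;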
Question 128
What actions are permitted when using the Snowflake SQL REST API? (Select TWO).
Correct Answer: A,D
Question 129
A company has several sites in different regions from which the company wants to ingest data. Which of the following will enable this type of data ingestion?
Correct Answer: D
This is the correct answer because it allows the company to ingest data from different regions using a storage integration for the external stage. A storage integration is a feature that enables secure and easy access to files in external cloud storage from Snowflake. A storage integration can be used to create an external stage, which is a named location that references the files in the external storage. An external stage can be used to load data into Snowflake tables using the COPY INTO <table> command, or to unload data from Snowflake tables using the COPY INTO <location> command. A storage integration can support multiple regions and cloud platforms, as long as the external storage service is compatible with Snowflake.
References:
* Snowflake Documentation: Storage Integrations
* Snowflake Documentation: External Stages
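A minimal sketch of this pattern follows; the bucket, IAM role ARN, and object names are illustrative:

    -- Storage integration granting Snowflake secure access to the bucket
    CREATE STORAGE INTEGRATION s3_int
      TYPE = EXTERNAL_STAGE
      STORAGE_PROVIDER = 'S3'
      ENABLED = TRUE
      STORAGE_AWS_ROLE_ARN = 'arn:aws:iam::123456789012:role/snowflake_access'
      STORAGE_ALLOWED_LOCATIONS = ('s3://company-ingest/');

    -- External stage that references files through the integration
    CREATE STAGE ingest_stage
      URL = 's3://company-ingest/'
      STORAGE_INTEGRATION = s3_int;

    -- Load the staged files into a target table
    COPY INTO raw_events
      FROM @ingest_stage
      FILE_FORMAT = (TYPE = 'CSV');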
Question 130
A company is using Snowflake on Azure in the Netherlands. The company's analyst team also has data in JSON format, stored in an Amazon S3 bucket in the AWS Singapore region, that the team wants to analyze. The Architect has been given the following requirements:
1. Provide access to frequently changing data
2. Keep egress costs to a minimum
3. Maintain low latency
How can these requirements be met with the LEAST amount of operational overhead?
Correct Answer: A
Option A is the best design to meet the requirements because it uses a materialized view on top of an external table against the S3 bucket in AWS Singapore. A materialized view is a database object that contains the results of a query and can be refreshed periodically to reflect changes in the underlying data. An external table is a table that references data files stored in a cloud storage service, such as Amazon S3. By using a materialized view on top of an external table, the company can provide access to frequently changing data, keep egress costs to a minimum, and maintain low latency: the materialized view caches the query results in Snowflake, reducing the need to access the external data files and incur network charges, and it improves query performance by avoiding a scan of the external data files on every query. The materialized view can be refreshed on a schedule or on demand to capture changes in the external data files.
Option B is not the best design because it uses an external table against the S3 bucket in AWS Singapore and copies the data into transient tables. A transient table is not protected by Fail-safe, supports only limited Time Travel, and persists until explicitly dropped. By using an external table and copying the data into transient tables, the company will incur more egress costs and operational overhead than with a materialized view: the external table accesses the external data files every time a query is executed, and the copy operation also transfers data from S3 to Snowflake. The transient tables additionally consume storage in Snowflake and require manual maintenance to keep them up to date.
Option C is not the best design because it copies the data between providers, from S3 to Azure Blob storage, to collocate it, and then uses Snowpipe for data ingestion. Snowpipe is a service that automates the loading of data from external sources into Snowflake tables. Copying the data between providers incurs high egress costs and latency, as well as operational complexity and infrastructure maintenance. Snowpipe also adds another layer of processing and storage in Snowflake, which may be unnecessary if the external data files are already in a queryable format.
Option D is not the best design because it uses AWS Transfer Family to replicate data between the S3 bucket in AWS Singapore and Azure Blob storage in the Netherlands, then uses an external table against the Blob storage. AWS Transfer Family is a service that enables secure and seamless transfer of files over SFTP, FTPS, and FTP to and from Amazon S3 or Amazon EFS. This approach incurs high egress costs and latency, plus operational complexity and infrastructure maintenance, and the external table still accesses the external data files on every query, which may affect performance.
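A minimal sketch of the Option A design follows; the stage, table, and column names are illustrative, and materialized views require Enterprise Edition or higher:

    -- External table over the JSON files in the Singapore bucket
    -- (@sg_stage is a hypothetical external stage over that bucket)
    CREATE EXTERNAL TABLE raw_json_ext
      WITH LOCATION = @sg_stage
      AUTO_REFRESH = TRUE
      FILE_FORMAT = (TYPE = 'JSON');

    -- The materialized view caches the parsed results inside Snowflake,
    -- so repeated analyst queries avoid re-reading the S3 files
    CREATE MATERIALIZED VIEW analyst_mv AS
      SELECT value:customer_id::STRING AS customer_id,  -- illustrative fields
             value:amount::NUMBER      AS amount
      FROM raw_json_ext;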