What criteria does Snowflake use to determine the current role when initiating a session? (Select TWO).
If a role was specified as part of the connection and that role has been granted to the Snowflake user, the specified role becomes the current role.
If no role was specified as part of the connection and a default role has been defined for the Snowflake user, that role becomes the current role.
If no role was specified as part of the connection and a default role has not been set for the Snowflake user, the session will not be initiated and the login will fail.
If a role was specified as part of the connection and that role has not been granted to the Snowflake user, it will be ignored and the default role will become the current role.
If a role was specified as part of the connection and that role has not been granted to the Snowflake user, the role is automatically granted and it becomes the current role.
When initiating a session in Snowflake, the system determines the current role based on the user's connection details and role assignments. If a user specifies a role during the connection, and that role is already granted to them, Snowflake sets it as the current role for the session. Alternatively, if no role is specified during the connection, but the user has a default role assigned, Snowflake will use this default role as the current session role. These mechanisms ensure that users operate within their permissions, enhancing security and governance within Snowflake environments.
What activities can a user with the ORGADMIN role perform? (Select TWO).
Create an account for an organization.
Edit the account data for an organization.
Delete the account data for an organization.
View usage information for all accounts in an organization.
Select all the data in tables for all accounts in an organization.
The ORGADMIN role in Snowflake is an organization-level role that provides administrative capabilities across the entire organization, rather than being limited to a single Snowflake account. Users with this role can create accounts for the organization and view usage information for all accounts in the organization.
A user wants to add additional privileges to the system-defined roles for their virtual warehouse. How does Snowflake recommend they accomplish this?
Grant the additional privileges to a custom role.
Grant the additional privileges to the ACCOUNTADMIN role.
Grant the additional privileges to the SYSADMIN role.
Grant the additional privileges to the ORGADMIN role.
Snowflake recommends enhancing the granularity and management of privileges by creating and utilizing custom roles. When additional privileges are needed beyond those provided by the system-defined roles for a virtual warehouse or any other resource, these privileges should be granted to a custom role. This approach allows for more precise control over access rights and the ability to tailor permissions to the specific needs of different user groups or applications within the organization, while also maintaining the integrity and security model of system-defined roles.
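A minimal sketch of this pattern, assuming a custom role named ANALYST_WH_ROLE and a warehouse named ANALYTICS_WH (both hypothetical names):

```sql
-- Create a custom role, grant it the extra warehouse privileges,
-- then attach the custom role to the role hierarchy under SYSADMIN.
CREATE ROLE analyst_wh_role;
GRANT OPERATE, MONITOR ON WAREHOUSE analytics_wh TO ROLE analyst_wh_role;
GRANT ROLE analyst_wh_role TO ROLE sysadmin;
```

Granting into a custom role keeps the system-defined roles unmodified while still making the privileges available through the hierarchy.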
Which role has the ability to create a share from a shared database by default?
ACCOUNTADMIN
SECURITYADMIN
SYSADMIN
ORGADMIN
By default, the ACCOUNTADMIN role in Snowflake has the ability to create a share from a shared database. This role has the highest level of access within a Snowflake account, including the management of all aspects of the account, such as users, roles, warehouses, and databases, as well as the creation and management of shares for secure data sharing with other Snowflake accounts.
How does Snowflake describe its unique architecture?
A single-cluster shared data architecture using a central data repository and massively parallel processing (MPP)
A multi-cluster shared nothing architecture using a siloed data repository and massively parallel processing (MPP)
A single-cluster shared nothing architecture using a sliced data repository and symmetric multiprocessing (SMP)
A multi-cluster shared nothing architecture using a siloed data repository and symmetric multiprocessing (SMP)
Snowflake's unique architecture is described as a multi-cluster, shared data architecture that leverages massively parallel processing (MPP). This architecture separates compute and storage resources, enabling Snowflake to scale them independently. It does not use a single cluster or rely solely on symmetric multiprocessing (SMP); rather, it uses a combination of shared-nothing architecture for compute clusters (virtual warehouses) and a centralized storage layer for data, optimizing for both performance and scalability.
What should be used when creating a CSV file format where the columns are wrapped by single quotes or double quotes?
BINARY_FORMAT
ESCAPE_UNENCLOSED_FIELD
FIELD_OPTIONALLY_ENCLOSED_BY
SKIP_BYTE_ORDER_MARK
When creating a CSV file format in Snowflake and the requirement is to wrap columns by single quotes or double quotes, the FIELD_OPTIONALLY_ENCLOSED_BY parameter should be used in the file format specification. This parameter allows you to define a character (either a single quote or a double quote) that can optionally enclose each field in the CSV file, providing flexibility in handling fields that contain special characters or delimiters as part of their data.
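As a sketch, a CSV file format definition using this parameter might look like the following (the format name and delimiter are illustrative):

```sql
CREATE OR REPLACE FILE FORMAT my_csv_format
  TYPE = 'CSV'
  FIELD_DELIMITER = ','
  FIELD_OPTIONALLY_ENCLOSED_BY = '"';  -- fields may be wrapped in double quotes
```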
Which Snowflake edition offers the highest level of security for organizations that have the strictest requirements?
Standard
Enterprise
Business Critical
Virtual Private Snowflake (VPS)
The Virtual Private Snowflake (VPS) edition offers the highest level of security for organizations with the strictest security requirements. This edition provides a dedicated and isolated instance of Snowflake, including enhanced security features and compliance certifications to meet the needs of highly regulated industries or any organization requiring the utmost in data protection and privacy.
Which command should be used to unload all the rows from a table into one or more files in a named stage?
COPY INTO
GET
INSERT INTO
PUT
To unload data from a table into one or more files in a named stage, the COPY INTO <location> command should be used. This command exports the result of a query, such as selecting all rows from a table, into files stored in the specified stage. The COPY INTO command is versatile, supporting various file formats and compression options for efficient data unloading.
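A hedged example of unloading a table to a named stage (stage and table names are illustrative):

```sql
-- Export all rows of my_table to gzip-compressed CSV files in @my_stage.
COPY INTO @my_stage/unload/
  FROM my_table
  FILE_FORMAT = (TYPE = 'CSV' COMPRESSION = 'GZIP')
  OVERWRITE = TRUE;
```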
A Snowflake user is writing a User-Defined Function (UDF) that includes some unqualified object names.
How will those object names be resolved during execution?
Snowflake will resolve them according to the SEARCH_PATH parameter.
Snowflake will only check the schema the UDF belongs to.
Snowflake will first check the current schema, and then the schema the previous query used.
Snowflake will first check the current schema, and then the PUBLIC schema of the current database.
Which view can be used to determine if a table has frequent row updates or deletes?
TABLES
TABLE_STORAGE_METRICS
STORAGE_DAILY_HISTORY
STORAGE_USAGE
The TABLE_STORAGE_METRICS view can be used to determine if a table has frequent row updates or deletes. This view provides detailed metrics on the storage utilization of tables within Snowflake, including metrics that reflect the impact of DML operations such as updates and deletes on table storage. For example, metrics related to the number of active and deleted rows can help identify tables that experience high levels of row modifications, indicating frequent updates or deletions.
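A sketch of how one might inspect these metrics (the database filter is a hypothetical name; the columns are from the ACCOUNT_USAGE schema):

```sql
-- Tables with large Time Travel or Fail-safe byte counts relative to their
-- active bytes tend to be tables with frequent updates or deletes.
SELECT table_name, active_bytes, time_travel_bytes, failsafe_bytes
FROM snowflake.account_usage.table_storage_metrics
WHERE table_catalog = 'MY_DB';
```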
Which Snowflake data type is used to store JSON key value pairs?
TEXT
BINARY
STRING
VARIANT
The VARIANT data type in Snowflake is used to store JSON key-value pairs along with other semi-structured data formats like AVRO, BSON, and XML. The VARIANT data type allows for flexible and dynamic data structures within a single column, accommodating complex and nested data. This data type is crucial for handling semi-structured data in Snowflake, enabling users to perform SQL operations on JSON objects and arrays directly.
How does Snowflake reorganize data when it is loaded? (Select TWO).
Binary format
Columnar format
Compressed format
Raw format
Zipped format
When data is loaded into Snowflake, it undergoes a reorganization process where the data is stored in a columnar format and compressed. The columnar storage format enables efficient querying and data retrieval, as it allows for reading only the necessary columns for a query, thereby reducing IO operations. Additionally, Snowflake uses advanced compression techniques to minimize storage costs and improve performance. This combination of columnar storage and compression is key to Snowflake's data warehousing capabilities.
What does the worksheet and database explorer feature in Snowsight allow users to do?
Add or remove users from a worksheet.
Move a worksheet to a folder or a dashboard.
Combine multiple worksheets into a single worksheet.
Tag frequently accessed worksheets for ease of access.
The worksheet and database explorer feature in Snowsight allows users to move a worksheet into a folder or a dashboard. This functionality helps users organize their worksheets and streamline the data exploration and analysis process within Snowsight, Snowflake's web-based query and visualization interface.
What is the default value in the Snowflake Web Interface (UI) for auto suspending a Virtual Warehouse?
1 minute
5 minutes
10 minutes
15 minutes
The default value for auto-suspending a Virtual Warehouse in the Snowflake Web Interface (UI) is 10 minutes. This setting helps manage compute costs by automatically suspending warehouses that are not in use, ensuring that compute resources are efficiently allocated and not wasted on idle warehouses.
Which Snowflake layer is associated with virtual warehouses?
Cloud services
Query processing
Elastic memory
Database storage
The layer of Snowflake's architecture associated with virtual warehouses is the Query Processing layer. Virtual warehouses in Snowflake are dedicated compute clusters that execute SQL queries against the stored data. This layer is responsible for the entire query execution process, including parsing, optimization, and the actual computation. It operates independently of the storage layer, enabling Snowflake to scale compute and storage resources separately for efficiency and cost-effectiveness.
A Snowflake user wants to temporarily bypass a network policy by configuring the user object property MINS_TO_BYPASS_NETWORK_POLICY.
What should they do?
Use the SECURITYADMIN role.
Use the SYSADMIN role.
Use the USERADMIN role.
Contact Snowflake Support.
The MINS_TO_BYPASS_NETWORK_POLICY user object property cannot be set by any role within the account; it can only be set by Snowflake. To temporarily bypass a network policy for a user, the administrator should therefore contact Snowflake Support, who can configure the property to allow the user to connect for a specified number of minutes.
What are characteristics of transient tables in Snowflake? (Select TWO).
Transient tables have a Fail-safe period of 7 days.
Transient tables can be cloned to permanent tables.
Transient tables persist until they are explicitly dropped.
Transient tables can be altered to make them permanent tables.
Transient tables have Time Travel retention periods of 0 or 1 day.
Transient tables in Snowflake are designed for temporary or intermediate workloads. They persist until they are explicitly dropped, have a Time Travel retention period of 0 or 1 day, and have no Fail-safe period.
Which URL provides access to files in Snowflake without authorization?
File URL
Scoped URL
Pre-signed URL
Scoped file URL
A Pre-signed URL provides access to files stored in Snowflake without requiring authorization at the time of access. This feature allows users to generate a URL with a limited validity period that grants temporary access to a file in a secure manner. It's particularly useful for sharing data with external parties or applications without the need for them to authenticate directly with Snowflake.
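A sketch of generating such a URL with the GET_PRESIGNED_URL function (the stage, file path, and expiry are illustrative):

```sql
-- Returns a URL valid for 3600 seconds; no Snowflake authentication is
-- required by whoever uses the URL within that window.
SELECT GET_PRESIGNED_URL(@my_stage, 'reports/summary.pdf', 3600);
```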
Which Snowflake object does not consume any storage costs?
Secure view
Materialized view
Temporary table
Transient table
A secure view does not consume storage costs. A view, secure or otherwise, stores only a query definition, not data. By contrast, materialized views, temporary tables, and transient tables all store data and therefore incur storage charges; temporary tables contribute to storage costs for as long as the session in which they were created remains active.
If a virtual warehouse runs for 61 seconds, shuts down, and then restarts and runs for 30 seconds, for how many seconds is it billed?
60
91
120
121
Snowflake bills virtual warehouse usage per second, with a 60-second minimum each time the warehouse starts or resumes. The first run of 61 seconds exceeds the minimum, so it is billed as 61 seconds. The second run of 30 seconds is below the minimum, so it is billed as 60 seconds. The total billed time is therefore 61 + 60 = 121 seconds.
Which command can be used to list all the file formats for which a user has access privileges?
LIST
ALTER FILE FORMAT
DESCRIBE FILE FORMAT
SHOW FILE FORMATS
The command to list all the file formats for which a user has access privileges in Snowflake is SHOW FILE FORMATS. This command provides a list of all file formats defined in the user's current session or specified database/schema, along with details such as the name, type, and creation time of each file format. It is a valuable tool for users to understand and manage the file formats available for data loading and unloading operations.
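A short illustration (the database and schema names are hypothetical):

```sql
SHOW FILE FORMATS;                            -- formats accessible to the current role
SHOW FILE FORMATS IN SCHEMA my_db.my_schema;  -- scoped to one schema
```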
What are the benefits of the replication feature in Snowflake? (Select TWO).
Disaster recovery
Time Travel
Fail-safe
Database failover and fallback
Data security
The replication feature in Snowflake provides several benefits, with disaster recovery and database failover and fallback being two of the primary advantages. Replication allows for the continuous copying of data from one Snowflake account to another, ensuring that a secondary copy of the data is available in case of outages or disasters. This capability supports disaster recovery strategies by allowing operations to quickly switch to the replicated data in a different account or region. Additionally, it facilitates database failover and fallback procedures, ensuring business continuity and minimizing downtime.
Which function should be used to insert JSON format string data into a VARIANT field?
FLATTEN
CHECK_JSON
PARSE_JSON
TO_VARIANT
To insert JSON formatted string data into a VARIANT field in Snowflake, the correct function is PARSE_JSON. PARSE_JSON interprets a JSON formatted string and converts it into a VARIANT value, Snowflake's flexible type for semi-structured data such as JSON, XML, and Avro. This preserves the structure of the data so that JSON objects and arrays can be queried directly with SQL.
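A minimal sketch (the table name and JSON content are illustrative):

```sql
CREATE OR REPLACE TABLE events (payload VARIANT);

-- PARSE_JSON converts the string into a VARIANT before insertion.
INSERT INTO events
  SELECT PARSE_JSON('{"user": "alice", "action": "login"}');
```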
How are network policies defined in Snowflake?
They are a set of rules that define the network routes within Snowflake.
They are a set of rules that dictate how Snowflake accounts can be used between multiple users.
They are a set of rules that define how data can be transferred between different Snowflake accounts within an organization.
They are a set of rules that control access to Snowflake accounts by specifying the IP addresses or ranges of IP addresses that are allowed to connect to Snowflake.
Network policies in Snowflake are defined as a set of rules that manage the network-level access to Snowflake accounts. These rules specify which IP addresses or IP ranges are permitted to connect to Snowflake, enhancing the security of Snowflake accounts by preventing unauthorized access. Network policies are an essential aspect of Snowflake's security model, allowing administrators to enforce access controls based on network locations.
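As a sketch, a network policy might be created and applied at the account level like this (the policy name and CIDR ranges are illustrative):

```sql
CREATE NETWORK POLICY corp_policy
  ALLOWED_IP_LIST = ('192.168.1.0/24')   -- only this range may connect
  BLOCKED_IP_LIST = ('192.168.1.99');    -- except this specific address

ALTER ACCOUNT SET NETWORK_POLICY = corp_policy;
```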
Which privilege is required to use the search optimization service in Snowflake?
GRANT SEARCH OPTIMIZATION ON SCHEMA
GRANT SEARCH OPTIMIZATION ON DATABASE
GRANT ADD SEARCH OPTIMIZATION ON SCHEMA
GRANT ADD SEARCH OPTIMIZATION ON DATABASE
To use the search optimization service in Snowflake, the privilege must be granted at the schema level using the ADD SEARCH OPTIMIZATION keywords: GRANT ADD SEARCH OPTIMIZATION ON SCHEMA.
Options A and B do not include the correct verb "ADD," which is necessary for this specific type of grant command in Snowflake. Option D incorrectly mentions the database level, as search optimization privileges are typically configured at the schema level, not the database level.References: Snowflake documentation on the use of GRANT statements for configuring search optimization.
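A hedged example of the grant and its use (schema, table, and role names are illustrative):

```sql
GRANT ADD SEARCH OPTIMIZATION ON SCHEMA my_db.my_schema TO ROLE my_role;

-- The role can then enable search optimization on tables in that schema.
ALTER TABLE my_db.my_schema.my_table ADD SEARCH OPTIMIZATION;
```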
When unloading data, which file format preserves the data values for floating-point number columns?
Avro
CSV
JSON
Parquet
When unloading data, the Parquet file format is known for its efficiency in preserving the data values for floating-point number columns. Parquet is a columnar storage file format that offers high compression ratios and efficient data encoding schemes. It is especially effective for floating-point data, as it maintains high precision and supports efficient querying and analysis.
Which activities are included in the Cloud Services layer? (Select TWO).
Data storage
Dynamic data masking
Partition scanning
User authentication
Infrastructure management
The Cloud Services layer in Snowflake is responsible for a wide range of services that facilitate the management and use of Snowflake, including user authentication, access control, infrastructure management, metadata management, and query parsing and optimization.
These services are part of Snowflake's fully managed, cloud-based architecture, which abstracts and automates many of the complexities associated with data warehousing.
Which Snowflake mechanism is used to limit the number of micro-partitions scanned by a query?
Caching
Cluster depth
Query pruning
Retrieval optimization
Query pruning in Snowflake is the mechanism used to limit the number of micro-partitions scanned by a query. By analyzing the filters and conditions applied in a query, Snowflake can skip over micro-partitions that do not contain relevant data, thereby reducing the amount of data processed and improving query performance. This technique is particularly effective for large datasets and is a key component of Snowflake's performance optimization features.
How does a Snowflake user extract the URL of a directory table on an external stage for further transformation?
Use the SHOW STAGES command.
Use the DESCRIBE STAGE command.
Use the GET_ABSOLUTE_PATH function.
Use the GET_STAGE_LOCATION function.
To extract the URL of a directory table on an external stage for further transformation in Snowflake, the GET_ABSOLUTE_PATH function can be used. This function returns the full path of a file or directory within a specified stage, enabling users to dynamically construct URLs for accessing or processing data stored in external stages.
What is the MAXIMUM number of clusters that can be provisioned with a multi-cluster virtual warehouse?
1
5
10
100
In Snowflake, the maximum number of clusters that can be provisioned within a multi-cluster virtual warehouse is 10. This allows for significant scalability and performance management by enabling Snowflake to handle varying levels of query load by adjusting the number of active clusters within the warehouse.References: Snowflake documentation on virtual warehouses, particularly the scalability options available in multi-cluster configurations.
User1, who has the SYSADMIN role, executed a query on Snowsight. User2, who is in the same Snowflake account, wants to view the result set of the query executed by User1 using the Snowsight query history.
What will happen if User2 tries to access the query history?
If User2 has the sysadmin role they will be able to see the results.
If User2 has the securityadmin role they will be able to see the results.
If User2 has the ACCOUNTADMIN role they will be able to see the results.
User2 will be unable to view the result set of the query executed by User1.
In Snowflake, the result set of a query is only accessible to the user who executed the query. Other users, including those with the ACCOUNTADMIN role, can see the query text in the account's query history, but they cannot view another user's query results. Therefore, User2 will be unable to view the result set of the query executed by User1, regardless of role.
When using the ALLOW_CLIENT_MFA_CACHING parameter, how long is a cached Multi-Factor Authentication (MFA) token valid for?
1 hour
2 hours
4 hours
8 hours
When using the ALLOW_CLIENT_MFA_CACHING parameter, a cached Multi-Factor Authentication (MFA) token is valid for up to 4 hours. This allows for continuous, secure connectivity without users needing to respond to an MFA prompt at the start of each connection attempt to Snowflake within this timeframe.
What tasks can an account administrator perform in the Data Exchange? (Select TWO).
Add and remove members.
Delete data categories.
Approve and deny listing approval requests.
Transfer listing ownership.
Transfer ownership of a provider profile.
An account administrator in the Data Exchange can perform tasks such as adding and removing members and approving or denying listing approval requests. These tasks are part of managing the Data Exchange and ensuring that only authorized listings and members are part of it.
Which Snowflake view is used to support compliance auditing?
ACCESS_HISTORY
COPY_HISTORY
QUERY_HISTORY
ROW_ACCESS_POLICIES
The ACCESS_HISTORY view in Snowflake is utilized to support compliance auditing. It provides detailed information on data access within Snowflake, including reads and writes by user queries. This view is essential for regulatory compliance auditing as it offers insights into the usage of tables and columns, and maintains a direct link between the user, the query, and the accessed data.
References: [COF-C02] SnowPro Core Certification Exam Study Guide
What happens to the objects in a reader account when the DROP MANAGED ACCOUNT command is executed?
The objects are dropped.
The objects enter the Fail-safe period.
The objects enter the Time Travel period.
The objects are immediately moved to the provider account.
When the DROP MANAGED ACCOUNT command is executed in Snowflake, it removes the managed account, including all objects created within the account, and access to the account is immediately restricted.
References: [COF-C02] SnowPro Core Certification Exam Study Guide
How can a Snowflake administrator determine which user has accessed a database object that contains sensitive information?
Review the granted privileges to the database object.
Review the row access policy for the database object.
Query the ACCESS_HISTORY view in the ACCOUNT_USAGE schema.
Query the REPLICATION_USAGE_HISTORY view in the ORGANIZATION_USAGE schema.
To determine which user has accessed a database object containing sensitive information, a Snowflake administrator can query the ACCESS_HISTORY view in the ACCOUNT_USAGE schema, which provides information about access to database objects.
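A sketch of such an audit query (the fully qualified object name is illustrative):

```sql
-- DIRECT_OBJECTS_ACCESSED is an array; flatten it to filter on one object.
SELECT ah.user_name, ah.query_id, ah.query_start_time
FROM snowflake.account_usage.access_history ah,
     LATERAL FLATTEN(input => ah.direct_objects_accessed) obj
WHERE obj.value:"objectName"::STRING = 'MY_DB.MY_SCHEMA.SENSITIVE_TABLE';
```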
A Snowflake account has activated federated authentication.
What will occur when a user with a password that was defined by Snowflake attempts to log in to Snowflake?
The user will be unable to enter a password.
The user will encounter an error, and will not be able to log in.
The user will be able to log into Snowflake successfully.
After entering the username and password, the user will be redirected to an Identity Provider (IdP) login page.
When federated authentication is activated in Snowflake, users authenticate via an external identity provider (IdP) rather than using Snowflake-managed credentials. Therefore, a user with a password defined by Snowflake will be unable to enter a password and must use their IdP credentials to log in.
What value provides information about disk usage for operations where intermediate results do not fit in memory in a Query Profile?
IO
Network
Pruning
Spilling
In Snowflake, when a query execution requires more memory than what is available, Snowflake handles these situations by spilling the intermediate results to disk. This process is known as "spilling." The Query Profile in Snowflake includes a metric that helps users identify when and how much data spilling occurs during the execution of a query. This information is crucial for optimizing queries as excessive spilling can significantly slow down query performance. The value that provides this information about disk usage due to intermediate results not fitting in memory is appropriately labeled as "Spilling" in the Query Profile.
Which commands can only be executed using SnowSQL? (Select TWO).
COPY INTO
GET
LIST
PUT
REMOVE
The GET and PUT commands cannot be executed from within Snowflake worksheets; they must be run from a client such as SnowSQL. PUT is used to upload files from a local file system to a stage, and GET is used to download files from a stage to the local file system. References: [COF-C02] SnowPro Core Certification Exam Study Guide
What is the minimum Snowflake Edition that supports secure storage of Protected Health Information (PHI) data?
Standard Edition
Enterprise Edition
Business Critical Edition
Virtual Private Snowflake Edition
The minimum Snowflake Edition that supports secure storage of Protected Health Information (PHI) data is the Business Critical Edition. This edition offers enhanced security features necessary for compliance with regulations such as HIPAA and HITRUST CSF4.
What are key characteristics of virtual warehouses in Snowflake? (Select TWO).
Warehouses that are multi-cluster can have nodes of different sizes.
Warehouses can be started and stopped at any time.
Warehouses can be resized at any time, even while running.
Warehouses are billed on a per-minute usage basis.
Warehouses can only be used for querying and cannot be used for data loading.
Virtual warehouses in Snowflake can be started and stopped at any time, providing flexibility in managing compute resources. They can also be resized at any time, even while running, to accommodate varying workloads910. References: [COF-C02] SnowPro Core Certification Exam Study Guide
How is unstructured data retrieved from data storage?
SQL functions like the GET command can be used to copy the unstructured data to a location on the client.
SQL functions can be used to create different types of URLs pointing to the unstructured data. These URLs can be used to download the data to a client.
SQL functions can be used to retrieve the data from the query results cache. When the query results are output to a client, the unstructured data will be output to the client as files.
SQL functions can call on different web extensions designed to display different types of files as a web page. The web extensions will allow the files to be downloaded to the client.
Unstructured data stored in Snowflake can be retrieved by using SQL functions to generate URLs that point to the data. These URLs can then be used to download the data directly to a client.
What is the relationship between a Query Profile and a virtual warehouse?
A Query Profile can help users right-size virtual warehouses.
A Query Profile defines the hardware specifications of the virtual warehouse.
A Query Profile can help determine the number of virtual warehouses available.
A Query Profile automatically scales the virtual warehouse based on the query complexity.
A Query Profile provides detailed execution information for a query, which can be used to analyze the performance and behavior of queries. This information can help users optimize and right-size their virtual warehouses for better efficiency. References: [COF-C02] SnowPro Core Certification Exam Study Guide
When enabling access to unstructured data, which URL permits temporary access to a staged file without the need to grant privileges to the stage or to issue access tokens?
File URL
Scoped URL
Relative URL
Pre-Signed URL
A Scoped URL permits temporary access to a staged file without the need to grant privileges to the stage or to issue access tokens. It provides a secure way to share access to files stored in Snowflake.
How can a Snowflake user traverse semi-structured data?
Insert a colon (:) between the VARIANT column name and any first-level element.
Insert a colon (:) between the VARIANT column name and any second-level element.
Insert a double colon (::) between the VARIANT column name and any first-level element.
Insert a double colon (::) between the VARIANT column name and any second-level element.
To traverse semi-structured data in Snowflake, a user can insert a colon (:) between the VARIANT column name and any first-level element. This path syntax is used to retrieve elements in a VARIANT column.
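A brief illustration (the column, table, and keys are hypothetical; dot notation reaches deeper levels after the first colon):

```sql
-- src is a VARIANT column containing e.g. {"name": {"first": "Ada"}}
SELECT src:name.first FROM my_table;
```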
What type of query will benefit from the query acceleration service?
Queries without filters or aggregation
Queries with large scans and selective filters
Queries where the GROUP BY has high cardinality
Queries of tables that have search optimization service enabled
The query acceleration service in Snowflake is designed to benefit queries that involve large scans and selective filters. This service can offload portions of the query processing work to shared compute resources, which can handle these types of workloads more efficiently by performing more work in parallel and reducing the wall-clock time spent in scanning and filtering. References: [COF-C02] SnowPro Core Certification Exam Study Guide
What does the LATERAL modifier for the FLATTEN function do?
Casts the values of the flattened data
Extracts the path of the flattened data
Joins information outside the object with the flattened data
Retrieves a single instance of a repeating element in the flattened data
The LATERAL modifier for the FLATTEN function allows joining information outside the object (such as other columns in the source table) with the flattened data, creating a lateral view that correlates with the preceding tables in the FROM clause. References: [COF-C02] SnowPro Core Certification Exam Study Guide
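A sketch of a lateral flatten (the table and JSON structure are illustrative):

```sql
-- Each row of orders holds a JSON array of items in a VARIANT column.
-- LATERAL FLATTEN produces one output row per array element, correlated
-- with the other columns of the same orders row.
SELECT o.order_id, f.value:"sku"::STRING AS sku
FROM orders o,
     LATERAL FLATTEN(input => o.items) f;
```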
What does SnowCD help Snowflake users to do?
Copy data into files.
Manage different databases and schemas.
Troubleshoot network connections to Snowflake.
Write SELECT queries to retrieve data from external tables.
SnowCD is a connectivity diagnostic tool that helps users troubleshoot network connections to Snowflake. It performs a series of checks to evaluate the network connection and provides suggestions for resolving any issues.
A tag object has been assigned to a table (TABLE_A) in a schema within a Snowflake database.
Which CREATE object statement will automatically assign the TABLE_A tag to a target object?
CREATE TABLE
CREATE VIEW
CREATE TABLE
CREATE MATERIALIZED VIEW
Tag assignments are inherited when an object is cloned. Therefore, a statement such as CREATE TABLE <table_name> CLONE TABLE_A will automatically assign the tag on TABLE_A to the newly created table.
Which Snowflake table objects can be shared with other accounts? (Select TWO).
Temporary tables
Permanent tables
Transient tables
External tables
User-Defined Table Functions (UDTFs)
In Snowflake, permanent tables and external tables can be shared with other accounts using Secure Data Sharing. Temporary tables, transient tables, and UDTFs are not shareable objects.
What does a masking policy consist of in Snowflake?
A single data type, with one or more conditions, and one or more masking functions
A single data type, with only one condition, and only one masking function
Multiple data types, with only one condition, and one or more masking functions
Multiple data types, with one or more conditions, and one or more masking functions
A masking policy in Snowflake consists of a single data type, with one or more conditions, and one or more masking functions. These components define how the data is masked based on the specified conditions.
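A minimal sketch of such a policy: one data type (STRING), conditions expressed in a CASE, and two masking outcomes. The role and column names are illustrative:

```sql
-- Condition: analysts see the raw value; everyone else sees a mask.
CREATE OR REPLACE MASKING POLICY email_mask AS (val STRING) RETURNS STRING ->
  CASE
    WHEN CURRENT_ROLE() IN ('ANALYST_ROLE') THEN val   -- condition 1: reveal
    ELSE '*** MASKED ***'                              -- condition 2: mask
  END;

-- Attach the policy to a column (hypothetical table/column names):
-- ALTER TABLE customers MODIFY COLUMN email SET MASKING POLICY email_mask;
```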
Which Snowflake function will parse a JSON-null into a SQL-null?
TO_CHAR
TO_VARIANT
TO_VARCHAR
STRIP_NULL_VALUE
The STRIP_NULL_VALUE function in Snowflake is used to convert a JSON null value into a SQL NULL value.
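The distinction can be seen directly, since a JSON-null inside a VARIANT is not the same as a SQL NULL:

```sql
SELECT
  PARSE_JSON('null')                    AS json_null,  -- VARIANT holding JSON null
  STRIP_NULL_VALUE(PARSE_JSON('null'))  AS sql_null;   -- converted to a real SQL NULL
```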
Who can activate and enforce a network policy for all users in a Snowflake account? (Select TWO).
A user with a USERADMIN or higher role
A user with a SECURITYADMIN or higher role
A role that has been granted the ATTACH POLICY privilege
A role that has the NETWORK_POLICY account parameter set
A role that has the ownership of the network policy
In Snowflake, only a user with the SECURITYADMIN role (or higher), or a role that has been granted the global ATTACH POLICY privilege, can activate and enforce a network policy for all users in an account.
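A short sketch of activating a policy account-wide (policy name and IP range are illustrative; the session must use SECURITYADMIN or a role holding the global ATTACH POLICY privilege):

```sql
-- Define which client IPs may connect.
CREATE NETWORK POLICY corp_only ALLOWED_IP_LIST = ('192.168.1.0/24');

-- Activate it for every user in the account.
ALTER ACCOUNT SET NETWORK_POLICY = corp_only;
```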
Which parameter can be set at the account level to set the minimum number of days for which Snowflake retains historical data in Time Travel?
DATA_RETENTION_TIME_IN_DAYS
MAX_DATA_EXTENSION_TIME_IN_DAYS
MIN_DATA_RETENTION_TIME_IN_DAYS
MAX_CONCURRENCY_LEVEL
The parameter MIN_DATA_RETENTION_TIME_IN_DAYS can be set at the account level to enforce a minimum number of days for which Snowflake retains historical data for Time Travel; objects in the account cannot be given a shorter retention period. (DATA_RETENTION_TIME_IN_DAYS, by contrast, sets the default retention period.)
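The two parameters can be contrasted in a short sketch (values are illustrative; both statements require administrative privileges):

```sql
-- Account-level floor: no object may set a retention period below this value.
ALTER ACCOUNT SET MIN_DATA_RETENTION_TIME_IN_DAYS = 7;

-- Default/effective retention period for objects that do not override it.
ALTER ACCOUNT SET DATA_RETENTION_TIME_IN_DAYS = 30;
```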
What does a Notify & Suspend action for a resource monitor do?
Send an alert notification to all account users who have notifications enabled.
Send an alert notification to all virtual warehouse users when thresholds over 100% have been met.
Send a notification to all account administrators who have notifications enabled, and suspend all assigned warehouses after all statements being executed by the warehouses have completed.
Send a notification to all account administrators who have notifications enabled, and suspend all assigned warehouses immediately, canceling any statements being executed by the warehouses.
The Notify & Suspend action for a resource monitor in Snowflake sends a notification to all account administrators who have notifications enabled and suspends all assigned warehouses. However, the suspension only occurs after all currently running statements in the warehouses have been completed. References: [COF-C02] SnowPro Core Certification Exam Study Guide
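An illustrative monitor showing the difference between the graceful and immediate suspend actions (monitor, quota, and warehouse names are hypothetical):

```sql
CREATE OR REPLACE RESOURCE MONITOR monthly_quota
  WITH CREDIT_QUOTA = 100
  TRIGGERS ON 80  PERCENT DO NOTIFY             -- alert only
           ON 100 PERCENT DO SUSPEND            -- Notify & Suspend: waits for running statements
           ON 110 PERCENT DO SUSPEND_IMMEDIATE; -- cancels running statements

-- Assign the monitor to a warehouse.
ALTER WAREHOUSE my_wh SET RESOURCE_MONITOR = monthly_quota;
```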
What step can reduce data spilling in Snowflake?
Using a larger virtual warehouse
Increasing the virtual warehouse maximum timeout limit
Increasing the amount of remote storage for the virtual warehouse
Using a common table expression (CTE) instead of a temporary table
To reduce data spilling in Snowflake, using a larger virtual warehouse is effective because it provides more memory and local disk space, which can accommodate larger data operations and minimize the need to spill data to disk or remote storage. References: [COF-C02] SnowPro Core Certification Exam Study Guide
A permanent table and temporary table have the same name, TBL1, in a schema.
What will happen if a user executes SELECT * FROM TBL1;?
The temporary table will take precedence over the permanent table.
The permanent table will take precedence over the temporary table.
An error will say there cannot be two tables with the same name in a schema.
The table that was created most recently will take precedence over the older table.
In Snowflake, if a temporary table and a permanent table have the same name within the same schema, the temporary table takes precedence over the permanent table within the session where the temporary table was created.
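The precedence rule can be demonstrated in a single session (names are illustrative):

```sql
CREATE OR REPLACE TABLE tbl1 (src STRING);           -- permanent table
INSERT INTO tbl1 VALUES ('permanent');

CREATE OR REPLACE TEMPORARY TABLE tbl1 (src STRING); -- same name, session-scoped
INSERT INTO tbl1 VALUES ('temporary');

-- Within this session, the temporary table shadows the permanent one.
SELECT * FROM tbl1;
```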
A column named "Data" contains VARIANT data and stores values as follows:
How will Snowflake extract the employee's name from the column data?
Data:employee.name
DATA:employee.name
data:Employee.name
data:employee.name
In Snowflake, to extract a specific value from a VARIANT column, you use the column name followed by a colon and then the key. The keys are case-sensitive. Therefore, to extract the employee’s name from the “Data” column, the correct syntax is data:employee.name.
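A small reproduction of this behavior (the row contents are illustrative; note that the keys inside the VARIANT are case-sensitive while the unquoted column identifier is not):

```sql
CREATE OR REPLACE TEMPORARY TABLE staff (data VARIANT);
INSERT INTO staff SELECT PARSE_JSON('{"employee": {"name": "Ada"}}');

-- Colon enters the VARIANT; dot notation walks the nested keys.
SELECT data:employee.name::STRING AS employee_name FROM staff;
```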
A user wants to access files stored in a stage without authenticating into Snowflake. Which type of URL should be used?
File URL
Staged URL
Scoped URL
Pre-signed URL
A Pre-signed URL should be used to access files stored in a Snowflake stage without requiring authentication into Snowflake. Pre-signed URLs are simple HTTPS URLs that provide temporary access to a file via a web browser, using a pre-signed access token. The expiration time for the access token is configurable, and this type of URL allows users or applications to directly access or download the files without needing to authenticate into Snowflake.
References: [COF-C02] SnowPro Core Certification Exam Study Guide
Which command is used to start configuring Snowflake for Single Sign-On (SSO)?
CREATE SESSION POLICY
CREATE NETWORK RULE
CREATE SECURITY INTEGRATION
CREATE PASSWORD POLICY
To start configuring Snowflake for Single Sign-On (SSO), the CREATE SECURITY INTEGRATION command is used. This command sets up a security integration object in Snowflake, which is necessary for enabling SSO with external identity providers using SAML 2.0.
References: [COF-C02] SnowPro Core Certification Exam Study Guide
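A sketch of such an integration; every value below is a placeholder for the identity provider's actual settings:

```sql
CREATE SECURITY INTEGRATION my_idp
  TYPE = SAML2
  ENABLED = TRUE
  SAML2_ISSUER = 'https://idp.example.com'       -- IdP entity ID (placeholder)
  SAML2_SSO_URL = 'https://idp.example.com/sso'  -- IdP SSO endpoint (placeholder)
  SAML2_PROVIDER = 'CUSTOM'
  SAML2_X509_CERT = '<base64-encoded-certificate>';
```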
What information is found within the Statistic output in the Query Profile Overview?
Operator tree
Table pruning
Most expensive nodes
Nodes by execution time
The Statistic output in the Query Profile Overview of Snowflake provides detailed insights into the performance of different parts of the query. Specifically, it highlights the "Most expensive nodes," which are the operations or steps within the query execution that consume the most resources, such as CPU and memory. Identifying these nodes helps in pinpointing performance bottlenecks and optimizing query execution by focusing efforts on the most resource-intensive parts of the query.
QUESTION NO: 582
How do secure views compare to non-secure views in Snowflake?
A. Secure views execute slowly compared to non-secure views.
B. Non-secure views are preferred over secure views when sharing data.
C. Secure views are similar to materialized views in that they are the most performant.
D. There are no performance differences between secure and non-secure views.
Answer: D
Secure views and non-secure views in Snowflake are differentiated primarily by their handling of data access and security rather than performance characteristics. A secure view enforces row-level security and ensures that the view definition is hidden from the users. However, in terms of performance, secure views do not inherently execute slower or faster than non-secure views. The performance of both types of views depends more on other factors such as underlying table design, query complexity, and system workload rather than the security features embedded in the views themselves.
QUESTION NO: 583
When using SnowSQL, which configuration options are required when unloading data from a SQL query run on a local machine? (Select TWO).
A. echo
B. quiet
C. output_file
D. output_format
E. force_put_overwrite
Answer: C, D
When unloading data from SnowSQL (Snowflake's command-line client) to a file on a local machine, you need to specify configuration options that determine how and where the data is outputted. The required options are output_file (the local file to which query results are written) and output_format (the format of the unloaded data, such as csv).
These options are specified in the SnowSQL configuration file or directly in the SnowSQL command line. The configuration file allows users to set defaults and customize their usage of SnowSQL, including output preferences for unloading data.
QUESTION NO: 584
How can a Snowflake user post-process the result of SHOW FILE FORMATS?
A. Use the RESULT_SCAN function.
B. Create a CURSOR for the command.
C. Put it in the FROM clause in brackets.
D. Assign the command to RESULTSET.
Answer: A
First run SHOW FILE FORMATS, then post-process its output with SELECT * FROM TABLE(RESULT_SCAN(LAST_QUERY_ID(-1))).
https://docs.snowflake.com/en/sql-reference/functions/result_scan#usage-notes
QUESTION NO: 585
Which file function gives a user or application access to download unstructured data from a Snowflake stage?
A. BUILD_SCOPED_FILE_URL
B. BUILD_STAGE_FILE_URL
C. GET_PRESIGNED_URL
D. GET_STAGE_LOCATION
Answer: C
The function that provides access to download unstructured data from a Snowflake stage is GET_PRESIGNED_URL. It generates a temporary pre-signed URL to a staged file, which can be used to download the file without authenticating into Snowflake.
Example usage:
SELECT GET_PRESIGNED_URL(@stage_name, 'file_path');
This function simplifies the process of securely sharing or accessing files stored in Snowflake stages with external systems or users.
QUESTION NO: 586
When should a multi-cluster virtual warehouse be used in Snowflake?
A. When queuing is delaying query execution on the warehouse
B. When there is significant disk spilling shown on the Query Profile
C. When dynamic vertical scaling is being used in the warehouse
D. When there are no concurrent queries running on the warehouse
Answer: A
A multi-cluster virtual warehouse in Snowflake is designed to handle high concurrency and workload demands by allowing multiple clusters of compute resources to operate simultaneously. The correct scenario in which to use one is when queuing is delaying query execution on the warehouse: additional clusters are started automatically to absorb the concurrent load.
This is especially useful in scenarios with fluctuating workloads or where it's critical to maintain low response times for a large number of concurrent queries.
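A sketch of a multi-cluster warehouse configured for auto-scaling (the name, size, and cluster counts are illustrative):

```sql
-- Snowflake adds clusters (up to 3) when queries start to queue,
-- and removes them again as the concurrent load drops.
CREATE OR REPLACE WAREHOUSE reporting_wh
  WAREHOUSE_SIZE = 'MEDIUM'
  MIN_CLUSTER_COUNT = 1
  MAX_CLUSTER_COUNT = 3
  SCALING_POLICY = 'STANDARD';
```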
QUESTION NO: 587
A JSON object is loaded into a column named data using a Snowflake variant datatype. The root node of the object is BIKE. The child attribute for this root node is BIKEID.
Which statement will allow the user to access BIKEID?
A. select data:BIKEID
B. select data.BIKE.BIKEID
C. select data:BIKE.BIKEID
D. select data:BIKE:BIKEID
Answer: C
In Snowflake, when accessing elements within a JSON object stored in a variant column, the correct syntax involves using a colon (:) to navigate the JSON structure. The BIKEID attribute, which is a child of the BIKE root node in the JSON object, is accessed using data:BIKE.BIKEID. This syntax correctly references the path through the JSON object, utilizing the colon for JSON field access and dot notation to traverse the hierarchy within the variant structure. References: Snowflake documentation on accessing semi-structured data, which outlines how to use the colon and dot notations for navigating JSON structures stored in variant columns.
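The question's structure can be reproduced with an illustrative value:

```sql
CREATE OR REPLACE TEMPORARY TABLE rides (data VARIANT);
INSERT INTO rides SELECT PARSE_JSON('{"BIKE": {"BIKEID": 42}}');

-- Colon enters the VARIANT at the root node; dot walks to the child attribute.
SELECT data:BIKE.BIKEID FROM rides;
```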
QUESTION NO: 588
Which Snowflake tool is recommended for data batch processing?
A. SnowCD
B. SnowSQL
C. Snowsight
D. The Snowflake API
Answer: B
For data batch processing in Snowflake, the recommended tool is SnowSQL.
SnowSQL provides a flexible and powerful way to interact with Snowflake, supporting operations such as loading and unloading data, executing complex queries, and managing Snowflake objects from the command line or through scripts.
QUESTION NO: 589
How does the Snowflake search optimization service improve query performance?
A. It improves the performance of range searches.
B. It defines different clustering keys on the same source table.
C. It improves the performance of all queries running against a given table.
D. It improves the performance of equality searches.
Answer: D
The Snowflake Search Optimization Service is designed to enhance the performance of specific types of queries on large tables: it improves the performance of equality searches (point lookups on supported data types).
This optimization is particularly beneficial for large tables where traditional scans might be inefficient for equality searches. By using the Search Optimization Service, Snowflake can leverage the search indexes to quickly locate the rows that match the search criteria without scanning the entire table.
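Enabling the service is a single table-level statement; the table and predicate below are illustrative:

```sql
-- Build and maintain search access paths for the table.
ALTER TABLE big_events ADD SEARCH OPTIMIZATION;

-- Selective equality lookups like this can then avoid full partition scans.
SELECT * FROM big_events WHERE event_id = 12345;
```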
QUESTION NO: 590
What compute resource is used when loading data using Snowpipe?
A. Snowpipe uses virtual warehouses provided by the user.
B. Snowpipe uses an Apache Kafka server for its compute resources.
C. Snowpipe uses compute resources provided by Snowflake.
D. Snowpipe uses cloud platform compute resources provided by the user.
Answer: C
Snowpipe is Snowflake's continuous data ingestion service that allows for loading data as soon as it's available in a cloud storage stage. Snowpipe uses compute resources managed by Snowflake, separate from the virtual warehouses that users create for querying data. This means that Snowpipe operations do not consume the computational credits of user-created virtual warehouses, offering an efficient and cost-effective way to continuously load data into Snowflake.
QUESTION NO: 591
What is one of the characteristics of data shares?
A. Data shares support full DML operations.
B. Data shares work by copying data to consumer accounts.
C. Data shares utilize secure views for sharing view objects.
D. Data shares are cloud agnostic and can cross regions by default.
Answer: C
Data sharing in Snowflake allows for live, read-only access to data across different Snowflake accounts without the need to copy or transfer the data. One of the characteristics of data shares is the ability to use secure views. Secure views are used within data shares to restrict the access of shared data, ensuring that consumers can only see the data that the provider intends to share, thereby preserving privacy and security.
QUESTION NO: 592
Which DDL/DML operation is allowed on an inbound data share?
A. ALTER TABLE
B. INSERT INTO
C. MERGE
D. SELECT
Answer: D
In Snowflake, an inbound data share refers to the data shared with an account by another account. The only DDL/DML operation allowed on an inbound data share is SELECT. This restriction ensures that the shared data remains read-only for the consuming account, maintaining the integrity and ownership of the data by the sharing account.
QUESTION NO: 593
In Snowflake, the use of federated authentication enables which Single Sign-On (SSO) workflow activities? (Select TWO).
A. Authorizing users
B. Initiating user sessions
C. Logging into Snowflake
D. Logging out of Snowflake
E. Performing role authentication
Answer: B, C
Federated authentication in Snowflake allows users to use their organizational credentials to log in to Snowflake, leveraging Single Sign-On (SSO). The key activities enabled by this setup are initiating user sessions and logging into Snowflake through the external identity provider.
QUESTION NO: 594
A user wants to upload a file to an internal Snowflake stage using a put command.
Which tools and or connectors could be used to execute this command? (Select TWO).
A. SnowCD
B. SnowSQL
C. SQL API
D. Python connector
E. Snowsight worksheets
Answer: B, D
To upload a file to an internal Snowflake stage with the PUT command, you can use SnowSQL or the Python connector. PUT is a client-side command; it cannot be executed from Snowsight worksheets or through the SQL API.
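A sketch of the PUT workflow in a SnowSQL session (the local path and stage path are illustrative):

```sql
-- Upload a local file to the user stage; PUT runs client-side in SnowSQL.
PUT file:///tmp/data.csv @~/staged AUTO_COMPRESS=TRUE;

-- Confirm the upload.
LIST @~/staged;
```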
How should clustering be used to optimize the performance of queries that run on a very large table?
Manually re-cluster the table regularly.
Choose one high cardinality column as the clustering key.
Use the column that is most-frequently used in query select clauses as the clustering key.
Assess the average table depth to identify how clustering is impacting the query.
For optimizing the performance of queries that run on a very large table, it is recommended to choose one high cardinality column as the clustering key. This helps to co-locate similar rows in the same micro-partitions, improving scan efficiency in queries by skipping data that does not match filtering predicates.
References: [COF-C02] SnowPro Core Certification Exam Study Guide
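A sketch of defining and inspecting a clustering key (table and column names are illustrative):

```sql
-- Cluster the table on a high-cardinality column used in filters.
ALTER TABLE sales CLUSTER BY (customer_id);

-- Inspect clustering quality (including average depth) for that key.
SELECT SYSTEM$CLUSTERING_INFORMATION('sales', '(customer_id)');
```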
Which type of loop requires a BREAK statement to stop executing?
FOR
LOOP
REPEAT
WHILE
The LOOP type of loop in Snowflake Scripting does not have a built-in termination condition and requires a BREAK statement to stop executing.
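A minimal Snowflake Scripting block illustrating the point:

```sql
DECLARE
  counter INTEGER DEFAULT 0;
BEGIN
  LOOP
    counter := counter + 1;
    IF (counter >= 5) THEN
      BREAK;  -- without this, the bare LOOP would never terminate
    END IF;
  END LOOP;
  RETURN counter;
END;
```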
Which of the following can be executed/called with Snowpipe?
A User Defined Function (UDF)
A stored procedure
A single COPY INTO statement
A single INSERT INTO statement
Snowpipe is used for continuous, automated data loading into Snowflake. It uses a single COPY INTO statement, defined in the pipe object, to load data from files as soon as they become available in a stage.
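A minimal sketch of a pipe wrapping its COPY INTO statement (pipe, table, and stage names are illustrative):

```sql
CREATE OR REPLACE PIPE my_pipe
  AUTO_INGEST = TRUE  -- load automatically on cloud storage event notifications
  AS
  COPY INTO raw_events
  FROM @my_stage
  FILE_FORMAT = (TYPE = 'JSON');
```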