What is the default File Format used in the COPY command if one is not specified?
CSV
JSON
Parquet
XML
The default file format for the COPY command in Snowflake, when not specified, is CSV (Comma-Separated Values). This format is widely used for data exchange because it is simple, easy to read, and supported by many data analysis tools.
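For illustration, a COPY command with no FILE_FORMAT clause falls back to CSV, assuming no file format is attached to the table or stage (the table and stage names here are hypothetical):
COPY INTO my_table FROM @my_stage; -- no file format specified, so CSV is assumed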
What is the purpose of an External Function?
To call code that executes outside of Snowflake
To run a function in another Snowflake database
To share data in Snowflake with external parties
To ingest data from on-premises data sources
The purpose of an External Function in Snowflake is to call code that executes outside of the Snowflake environment. This allows Snowflake to interact with external services and leverage functionality that is not natively available within Snowflake, such as calling APIs or running custom code hosted on cloud services.
https://docs.snowflake.com/en/sql-reference/external-functions.html
Which command can be used to load data into an internal stage?
LOAD
COPY
GET
PUT
The PUT command is used to load data into an internal stage in Snowflake. This command uploads data files from a local file system to a named internal stage, making the data available for subsequent loading into a Snowflake table using the COPY INTO command.
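A minimal sketch of this two-step flow, using hypothetical file paths and object names (PUT must be run from a client such as SnowSQL):
PUT file:///tmp/data.csv @my_int_stage; -- upload the local file to the internal stage
COPY INTO my_table FROM @my_int_stage; -- load the staged file into the table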
What are the default Time Travel and Fail-safe retention periods for transient tables?
Time Travel - 1 day. Fail-safe - 1 day
Time Travel - 0 days. Fail-safe - 1 day
Time Travel - 1 day. Fail-safe - 0 days
Transient tables are retained in neither Fail-safe nor Time Travel
Transient tables in Snowflake have a default Time Travel retention period of 1 day, which allows users to access historical data within the last 24 hours. However, transient tables do not have a Fail-safe period. Fail-safe is an additional layer of data protection that retains data beyond the Time Travel period for recovery purposes in case of extreme data loss. Since transient tables are designed for temporary or intermediate workloads with no requirement for long-term durability, they do not include a Fail-safe period.
True or False: A 4X-Large Warehouse may, at times, take longer to provision than an X-Small Warehouse.
True
False
Provisioning time can vary based on the size of the warehouse. A 4X-Large warehouse has many more compute resources and may, at times, take longer to provision than an X-Small warehouse, which has fewer resources and can generally be provisioned more quickly.
Which cache type is used to cache data output from SQL queries?
Metadata cache
Result cache
Remote cache
Local file cache
The Result cache is used in Snowflake to cache the data output from SQL queries. This feature is designed to improve performance by storing the results of queries for a period of time. When the same or similar query is executed again, Snowflake can retrieve the result from this cache instead of re-computing the result, which saves time and computational resources.
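The reuse of cached results can be controlled per session with the USE_CACHED_RESULT parameter, for example:
ALTER SESSION SET USE_CACHED_RESULT = FALSE; -- force re-computation (useful for benchmarking)
ALTER SESSION SET USE_CACHED_RESULT = TRUE;  -- allow reuse of cached query results (the default)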
What features does Snowflake Time Travel enable?
Querying data-related objects that were created within the past 365 days
Restoring data-related objects that have been deleted within the past 90 days
Conducting point-in-time analysis for BI reporting
Analyzing data usage/manipulation over all periods of time
Snowflake Time Travel is a powerful feature that allows users to access historical data within a defined period. It enables two key capabilities:
B. Restoring data-related objects that have been deleted within the past 90 days: Time Travel can be used to restore tables, schemas, and databases that have been accidentally or intentionally deleted within the Time Travel retention period.
C. Conducting point-in-time analysis for BI reporting: It allows users to query historical data as it appeared at a specific point in time within the Time Travel retention period, which is crucial for business intelligence and reporting purposes.
While Time Travel does allow querying of past data, it is limited to the retention period set for the Snowflake account, which is typically 1 day for standard accounts and can be extended up to 90 days for enterprise accounts. It does not enable querying or restoring objects created or deleted beyond the retention period, nor does it provide analysis over all periods of time.
Which of the following compute resources or features are managed by Snowflake? (Select TWO).
Execute a COPY command
Updating data
Snowpipe
AUTOMATIC_CLUSTERING
Scaling up a warehouse
Snowflake manages the compute resources behind its serverless features, including Snowpipe and Automatic Clustering (AUTOMATIC_CLUSTERING). Snowpipe is Snowflake's continuous data ingestion service, which loads data using Snowflake-managed compute as soon as files become available. Automatic Clustering is a serverless background service that maintains the clustering of tables without user intervention. By contrast, executing a COPY command, updating data, and scaling up a warehouse all run on user-managed virtual warehouses.
A user is loading JSON documents composed of a huge array containing multiple records into Snowflake. The user enables the STRIP_OUTER_ARRAY file format option.
What does the STRIP_OUTER_ARRAY file format do?
It removes the last element of the outer array.
It removes the outer array structure and loads the records into separate table rows.
It removes the trailing spaces in the last element of the outer array and loads the records into separate table columns
It removes the NULL elements from the JSON object eliminating invalid data and enables the ability to load the records
The STRIP_OUTER_ARRAY file format option in Snowflake is used when loading JSON documents that are composed of a large array containing multiple records. When this option is enabled, it removes the outer array structure, which allows each record within the array to be loaded as a separate row in the table. This is particularly useful for efficiently loading JSON data that is structured as an array of records.
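For illustration, a sketch with hypothetical table and stage names:
COPY INTO my_table
FROM @my_stage/records.json
FILE_FORMAT = (TYPE = 'JSON' STRIP_OUTER_ARRAY = TRUE); -- each array element becomes its own row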
What is a key feature of Snowflake architecture?
Zero-copy cloning creates a mirror copy of a database that updates with the original
Software updates are automatically applied on a quarterly basis
Snowflake eliminates resource contention with its virtual warehouse implementation
Multi-cluster warehouses allow users to run a query that spans across multiple clusters
Snowflake automatically sorts DATE columns during ingest for fast retrieval by date
One of the key features of Snowflake’s architecture is its unique approach to eliminating resource contention through the use of virtual warehouses. This is achieved by separating storage and compute resources, allowing multiple virtual warehouses to operate independently on the same data without affecting each other. This means that different workloads, such as loading data, running queries, or performing complex analytics, can be processed simultaneously without any performance degradation due to resource contention.
What happens when a virtual warehouse is resized?
When increasing the size of an active warehouse, the compute resources for all running and queued queries on the warehouse are affected.
When reducing the size of a warehouse the compute resources are removed only when they are no longer being used to execute any current statements.
The warehouse will be suspended while the new compute resource is provisioned and will resume automatically once provisioning is complete.
Users who are trying to use the warehouse will receive an error message until the resizing is complete
When a running warehouse in Snowflake is resized up, the additional compute resources, once provisioned, are used for queued and new queries; queries that are already running are not affected. Conversely, when the size of a warehouse is reduced, the compute resources are not removed until they are no longer being used to execute any current statements.
How long is Snowpipe data load history retained?
As configured in the create pipe settings
Until the pipe is dropped
64 days
14 days
Snowpipe data load history is retained for 64 days. This retention period allows users to review and audit the data load operations performed by Snowpipe over a significant period of time, which can be crucial for troubleshooting and ensuring data integrity.
What data is stored in the Snowflake storage layer? (Select TWO).
Snowflake parameters
Micro-partitions
Query history
Persisted query results
Standard and secure view results
The Snowflake storage layer is responsible for storing data in an optimized, compressed, columnar format. This includes micro-partitions, which are the fundamental storage units that contain the actual data stored in Snowflake. Additionally, persisted query results, which are the results of queries that have been materialized and stored for future use, are also kept within this layer. This design allows for efficient data retrieval and management within the Snowflake architecture.
What is a machine learning and data science partner within the Snowflake Partner Ecosystem?
Informatica
Power BI
Adobe
DataRobot
DataRobot is recognized as a machine learning and data science partner within the Snowflake Partner Ecosystem. It provides an enterprise AI platform that enables users to build and deploy accurate predictive models quickly. As a partner, DataRobot integrates with Snowflake to enhance data science capabilities.
What is a best practice after creating a custom role?
Create the custom role using the SYSADMIN role.
Assign the custom role to the SYSADMIN role
Assign the custom role to the PUBLIC role
Add _CUSTOM to all custom role names
Assigning the custom role to the SYSADMIN role is considered a best practice because it allows the SYSADMIN role to manage objects created by the custom role. This is important for maintaining proper access control and ensuring that the SYSADMIN can perform necessary administrative tasks on objects created by users with the custom role.
What Snowflake features allow virtual warehouses to handle high concurrency workloads? (Select TWO)
The ability to scale up warehouses
The use of warehouse auto scaling
The ability to resize warehouses
Use of multi-clustered warehouses
The use of warehouse indexing
Snowflake’s architecture is designed to handle high concurrency workloads through several features, two of which are particularly effective:
B. The use of warehouse auto scaling: This feature allows Snowflake to automatically adjust the compute resources allocated to a virtual warehouse in response to the workload. If there is an increase in concurrent queries, Snowflake can scale up the resources to maintain performance.
D. Use of multi-clustered warehouses: Multi-clustered warehouses enable Snowflake to run multiple clusters of compute resources simultaneously. This allows for the distribution of queries across clusters, thereby reducing the load on any single cluster and improving the system’s ability to handle a high number of concurrent queries.
These features ensure that Snowflake can manage varying levels of demand without manual intervention, providing a seamless experience even during peak usage.
In the query profiler view for a query, which components represent areas that can be used to help optimize query performance? (Select TWO)
Bytes scanned
Bytes sent over the network
Number of partitions scanned
Percentage scanned from cache
External bytes scanned
In the query profiler view, the components that represent areas that can be used to help optimize query performance include ‘Bytes scanned’ and ‘Number of partitions scanned’. ‘Bytes scanned’ indicates the total amount of data the query had to read and is a direct indicator of the query’s efficiency. Reducing the bytes scanned can lead to lower data transfer costs and faster query execution. ‘Number of partitions scanned’ reflects how well the data is clustered; fewer partitions scanned typically means better performance because the system can skip irrelevant data more effectively.
A virtual warehouse's auto-suspend and auto-resume settings apply to which of the following?
The primary cluster in the virtual warehouse
The entire virtual warehouse
The database in which the virtual warehouse resides
The queries currently being run on the virtual warehouse
The auto-suspend and auto-resume settings in Snowflake apply to the entire virtual warehouse. These settings allow the warehouse to automatically suspend when it’s not in use, helping to save on compute costs. When queries or tasks are submitted to the warehouse, it can automatically resume operation. This functionality is designed to optimize resource usage and cost-efficiency.
Which of the following Snowflake capabilities are available in all Snowflake editions? (Select TWO)
Customer-managed encryption keys through Tri-Secret Secure
Automatic encryption of all data
Up to 90 days of data recovery through Time Travel
Object-level access control
Column-level security to apply data masking policies to tables and views
In all Snowflake editions, two key capabilities are universally available:
B. Automatic encryption of all data: Snowflake automatically encrypts all data stored in its platform, ensuring security and compliance with various regulations. This encryption is transparent to users and does not require any configuration or management.
D. Object-level access control: Snowflake provides granular access control mechanisms that allow administrators to define permissions at the object level, including databases, schemas, tables, and views. This ensures that only authorized users can access specific data objects.
These features are part of Snowflake’s commitment to security and governance, and they are included in every edition of the Snowflake Data Cloud.
Will data cached in a warehouse be lost when the warehouse is resized?
Possibly, if the warehouse is resized to a smaller size and the cache no longer fits.
Yes, because the compute resource is replaced in its entirety with a new compute resource.
No, because the size of the cache is independent of the warehouse size.
Yes, because the new compute resource will no longer have access to the cache encryption key.
When a Snowflake virtual warehouse is resized, the data cached in the warehouse is not lost. This is because the cache is maintained independently of the warehouse size. Resizing a warehouse, whether scaling up or down, does not affect the cached data, ensuring that query performance is not impacted by such changes.
What is the recommended file sizing for data loading using Snowpipe?
A compressed file size greater than 100 MB, and up to 250 MB
A compressed file size greater than 100 GB, and up to 250 GB
A compressed file size greater than 10 MB, and up to 100 MB
A compressed file size greater than 1 GB, and up to 2 GB
For data loading using Snowpipe, the recommended file size is a compressed file greater than 10 MB and up to 100 MB. This size range is optimal for Snowpipe's continuous, micro-batch loading process, allowing for efficient and timely data ingestion without overwhelming the system with files that are too large or too small.
The PUT command can be used to stage local files from which Snowflake interface?
SnowSQL
Snowflake classic web interface (UI)
Snowsight
.NET driver
SnowSQL is the command-line client for Snowflake that allows users to execute SQL queries and perform all DDL and DML operations, including staging files for bulk data loading. It is specifically designed for scripting and automating tasks.
What is a limitation of a Materialized View?
A Materialized View cannot support any aggregate functions
A Materialized View can only reference up to two tables
A Materialized View cannot be joined with other tables
A Materialized View cannot be defined with a JOIN
Materialized Views in Snowflake are designed to store the result of a query and can be refreshed to maintain up-to-date data. However, they have certain limitations, one of which is that they cannot be defined using a JOIN clause. This means that a Materialized View can only be created based on a single source table and cannot combine data from multiple tables using JOIN operations.
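A minimal sketch of a valid definition, assuming a hypothetical single table named sales (aggregation is allowed; a JOIN is not):
CREATE MATERIALIZED VIEW mv_region_sales AS
SELECT region, SUM(amount) AS total_amount
FROM sales
GROUP BY region; -- referencing a second table via JOIN here would be rejected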
The fail-safe retention period is how many days?
1 day
7 days
45 days
90 days
Fail-safe is a feature in Snowflake that provides an additional layer of data protection. After the Time Travel retention period ends, Fail-safe offers a non-configurable 7-day period during which historical data may be recoverable by Snowflake. This period is designed to protect against accidental data loss and is not intended for customer access.
References: Understanding and viewing Fail-safe | Snowflake Documentation
Which services does the Snowflake Cloud Services layer manage? (Select TWO).
Compute resources
Query execution
Authentication
Data storage
Metadata
The Snowflake Cloud Services layer manages a variety of services that are crucial for the operation of the Snowflake platform. Among these services, Authentication and Metadata management are key components. Authentication is essential for controlling access to the Snowflake environment, ensuring that only authorized users can perform actions within the platform. Metadata management involves handling all the metadata related to objects within Snowflake, such as tables, views, and databases, which is vital for the organization and retrieval of data.
What is the default character set used when loading CSV files into Snowflake?
UTF-8
UTF-16
ISO 8859-1
ANSI_X3.4
https://docs.snowflake.com/en/user-guide/intro-summary-loading.html#:~:text=For%20delimited%20files%20(CSV%2C%20TSV,encoding%20to%20use%20for%20loading.
For delimited files (CSV, TSV, etc.), the default character set is UTF-8. To use any other character set, you must explicitly specify the encoding to use for loading. See the Snowflake documentation for the list of supported character sets for delimited files.
True or False: Reader Accounts are able to extract data from shared data objects for use outside of Snowflake.
True
False
Reader accounts in Snowflake are designed to allow users to read data shared with them but do not have the capability to extract data for use outside of Snowflake. They are intended for consuming shared data within the Snowflake environment only.
Which Snowflake object enables loading data from files as soon as they are available in a cloud storage location?
Pipe
External stage
Task
Stream
In Snowflake, a Pipe is the object designed to enable the continuous, near-real-time loading of data from files as soon as they are available in a cloud storage location. Pipes use Snowflake’s COPY command to load data and can be associated with a Stage object to monitor for new files. When new data files appear in the stage, the pipe automatically loads the data into the target table.
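A sketch, assuming an external stage with cloud event notifications already configured (all names are hypothetical):
CREATE PIPE my_pipe AUTO_INGEST = TRUE AS
COPY INTO my_table FROM @my_ext_stage; -- new files landing in the stage are loaded automatically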
What are value types that a VARIANT column can store? (Select TWO)
STRUCT
OBJECT
BINARY
ARRAY
CLOB
A VARIANT column in Snowflake can store semi-structured data types. This includes:
B. OBJECT: An object is a collection of key-value pairs in JSON, and a VARIANT column can store this type of data structure.
D. ARRAY: An array is an ordered list of zero or more values, which can be of any variant-supported data type, including objects or other arrays.
The VARIANT data type is specifically designed to handle semi-structured data like JSON, Avro, ORC, Parquet, or XML, allowing for the storage of nested and complex data structures.
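For illustration, loading a JSON value that contains both an OBJECT and an ARRAY into a VARIANT column (names are hypothetical):
CREATE TABLE raw_events (v VARIANT);
INSERT INTO raw_events
SELECT PARSE_JSON('{"tags": ["a", "b"], "owner": {"name": "Ana"}}'); -- one VARIANT holding an array and a nested object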
Which semi-structured file formats are supported when unloading data from a table? (Select TWO).
ORC
XML
Avro
Parquet
JSON
Snowflake supports unloading data in the semi-structured file formats JSON and Parquet. The remaining listed formats (ORC, XML, and Avro) are supported for loading data only, not for unloading.
https://docs.snowflake.com/en/user-guide/data-unload-prepare.html
User-level network policies can be created by which of the following roles? (Select TWO).
ROLEADMIN
ACCOUNTADMIN
SYSADMIN
SECURITYADMIN
USERADMIN
User-level network policies in Snowflake can be created by roles with the necessary privileges to manage security and account settings. The ACCOUNTADMIN role has the highest level of privileges across the account, including the ability to manage network policies. The SECURITYADMIN role is specifically responsible for managing security objects within Snowflake, which includes the creation and management of network policies.
Which statement about billing applies to Snowflake credits?
Credits are billed per-minute with a 60-minute minimum
Credits are used to pay for cloud data storage usage
Credits are consumed based on the number of credits billed for each hour that a warehouse runs
Credits are consumed based on the warehouse size and the time the warehouse is running
Snowflake credits are the unit of measure for the compute resources used in Snowflake. The number of credits consumed depends on the size of the virtual warehouse and the time it is running. Larger warehouses consume more credits per hour than smaller ones, and credits are billed for the time the warehouse is active, regardless of the actual usage within that time.
Which is the MINIMUM required Snowflake edition that a user must have if they want to use AWS/Azure PrivateLink or Google Cloud Private Service Connect?
Standard
Premium
Enterprise
Business Critical
AWS PrivateLink, Azure Private Link, and Google Cloud Private Service Connect are supported starting with the Business Critical edition.
https://docs.snowflake.com/en/user-guide/admin-security-privatelink.html
True or False: It is possible for a user to run a query against the query result cache without requiring an active Warehouse.
True
False
Snowflake’s architecture allows for the use of a query result cache that stores the results of queries for a period of time. If the same query is run again and the underlying data has not changed, Snowflake can retrieve the result from this cache without needing to re-run the query on an active warehouse, thus saving on compute resources.
What happens when an external or an internal stage is dropped? (Select TWO).
When dropping an external stage, the files are not removed and only the stage is dropped
When dropping an external stage, both the stage and the files within the stage are removed
When dropping an internal stage, the files are deleted with the stage and the files are recoverable
When dropping an internal stage, the files are deleted with the stage and the files are not recoverable
When dropping an internal stage, only selected files are deleted with the stage and are not recoverable
When an external stage is dropped in Snowflake, the reference to the external storage location is removed, but the actual files within the external storage (like Amazon S3, Google Cloud Storage, or Microsoft Azure) are not deleted. This means that the data remains intact in the external storage location, and only the stage object in Snowflake is removed.
On the other hand, when an internal stage is dropped, any files that were uploaded to the stage are deleted along with the stage itself. These files are not recoverable once the internal stage is dropped, as they are permanently removed from Snowflake’s storage.
Which of the following objects can be shared through secure data sharing?
Masking policy
Stored procedure
Task
External table
Secure data sharing in Snowflake allows users to share various objects between Snowflake accounts without physically copying the data, thus not consuming additional storage. Among the options provided, external tables can be shared through secure data sharing. External tables are used to query data directly from files in a stage without loading the data into Snowflake tables, making them suitable for sharing across different Snowflake accounts.
A user needs to create a materialized view in the schema MYDB.MYSCHEMA.
Which statements will provide this access?
GRANT ROLE MYROLE TO USER USER1; GRANT CREATE MATERIALIZED VIEW ON SCHEMA MYDB.MYSCHEMA TO ROLE MYROLE;
GRANT ROLE MYROLE TO USER USER1; GRANT CREATE MATERIALIZED VIEW ON SCHEMA MYDB.MYSCHEMA TO USER USER1;
GRANT ROLE MYROLE TO USER USER1; GRANT CREATE MATERIALIZED VIEW ON SCHEMA MYDB.MYSCHEMA TO USER1;
GRANT ROLE MYROLE TO USER USER1; GRANT CREATE MATERIALIZED VIEW ON SCHEMA MYDB.MYSCHEMA TO MYROLE;
In Snowflake, to create a materialized view, the user must have the necessary privileges on the schema where the view will be created. These privileges are granted through roles, not directly to individual users. Therefore, the correct process is to grant the role to the user and then grant the privilege to create the materialized view to the role itself.
The statement GRANT ROLE MYROLE TO USER USER1; grants the specified role to the user, allowing them to assume that role and exercise its privileges. The subsequent statement GRANT CREATE MATERIALIZED VIEW ON SCHEMA MYDB.MYSCHEMA TO MYROLE; grants the privilege to create materialized views within the specified schema to the role MYROLE. Any user who has been granted MYROLE can then create materialized views in MYDB.MYSCHEMA.
Which of the following Snowflake features provide continuous data protection automatically? (Select TWO).
Internal stages
Incremental backups
Time Travel
Zero-copy clones
Fail-safe
Snowflake’s Continuous Data Protection (CDP) encompasses a set of features that help protect data stored in Snowflake against human error, malicious acts, and software failure. Time Travel allows users to access historical data (i.e., data that has been changed or deleted) for a defined period, enabling querying and restoring of data. Fail-safe is an additional layer of data protection that provides a recovery option in the event of significant data loss or corruption, which can only be performed by Snowflake.
Which data type can be used to store geospatial data in Snowflake?
Variant
Object
Geometry
Geography
Snowflake supports two geospatial data types: GEOGRAPHY and GEOMETRY. The GEOGRAPHY data type is used to store geospatial data that models the Earth as a perfect sphere, which is suitable for global geospatial data. This data type follows the WGS 84 standard and is used for storing points, lines, and polygons on the Earth's surface. The GEOMETRY data type, on the other hand, represents features in a planar (Euclidean, Cartesian) coordinate system and is typically used for local spatial reference systems. Since the question specifically asks about geospatial data, which commonly refers to Earth-related spatial data, the correct answer is GEOGRAPHY. References: [COF-C02] SnowPro Core Certification Exam Study Guide
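A short example of storing a point using the GEOGRAPHY type (the table name is hypothetical):
CREATE TABLE places (location GEOGRAPHY);
INSERT INTO places
SELECT TO_GEOGRAPHY('POINT(-122.35 37.55)'); -- WKT input: longitude, then latitude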
A sales table FCT_SALES has 100 million records.
The following query was executed:
SELECT COUNT(1) FROM FCT_SALES;
How did Snowflake fulfill this query?
Query against the result set cache
Query against a virtual warehouse cache
Query against the most-recently created micro-partition
Query against the metadata cache
Snowflake is designed to optimize query performance by utilizing metadata for certain types of queries. When executing a COUNT query, Snowflake can often fulfill the request by accessing metadata about the table’s row count, rather than scanning the entire table or micro-partitions. This is particularly efficient for large tables like FCT_SALES with a significant number of records. The metadata layer maintains statistics about the table, including the row count, which enables Snowflake to quickly return the result of a COUNT query without the need to perform a full scan.
What is a responsibility of Snowflake's virtual warehouses?
Infrastructure management
Metadata management
Query execution
Query parsing and optimization
Management of the storage layer
The primary responsibility of Snowflake’s virtual warehouses is to execute queries. Virtual warehouses are one of the key components of Snowflake’s architecture, providing the compute power required to perform data processing tasks such as running SQL queries, performing joins, aggregations, and other data manipulations.
A developer is granted ownership of a table that has a masking policy. The developer's role is not able to see the masked data. Will the developer be able to modify the table to read the masked data?
Yes, because a table owner has full control and can unset masking policies.
Yes, because masking policies only apply to cloned tables.
No, because masking policies must always reference specific access roles.
No, because ownership of a table does not include the ability to change masking policies.
Even if a developer is granted ownership of a table with a masking policy, they will not be able to modify the table to read the masked data if their role does not have the necessary permissions. Ownership of a table does not automatically confer the ability to alter masking policies, which are designed to protect sensitive data. Masking policies are schema-level objects and require specific privileges to modify.
True or False: When you create a custom role, it is a best practice to immediately grant that role to ACCOUNTADMIN.
True
False
The ACCOUNTADMIN role is the most powerful role in Snowflake and should be limited to a select number of users within an organization. It is responsible for account-level configurations and should not be used for day-to-day object creation or management. Granting a custom role to ACCOUNTADMIN could inadvertently give broad access to users with this role, which is not a recommended security practice.
How often are encryption keys automatically rotated by Snowflake?
30 Days
60 Days
90 Days
365 Days
Snowflake automatically rotates encryption keys when they are more than 30 days old. Active keys are retired, and new keys are created. This process is part of Snowflake’s comprehensive security measures to ensure data protection and is managed entirely by the Snowflake service without requiring user intervention.
Which of the following are benefits of micro-partitioning? (Select TWO)
Micro-partitions cannot overlap in their range of values
Micro-partitions are immutable objects that support the use of Time Travel.
Micro-partitions can reduce the amount of I/O from object storage to virtual warehouses
Rows are automatically stored in sorted order within micro-partitions
Micro-partitions can be defined on a schema-by-schema basis
Micro-partitions in Snowflake are immutable objects, which means once they are written, they cannot be modified. This immutability supports the use of Time Travel, allowing users to access historical data within a defined period. Additionally, micro-partitions can significantly reduce the amount of I/O from object storage to virtual warehouses. This is because Snowflake's query optimizer can skip over micro-partitions that do not contain relevant data for a query, thus reducing the amount of data that needs to be scanned and transferred.
What tasks can be completed using the copy command? (Select TWO)
Columns can be aggregated
Columns can be joined with an existing table
Columns can be reordered
Columns can be omitted
Data can be loaded without the need to spin up a virtual warehouse
The COPY command in Snowflake allows for the reordering of columns as they are loaded into a table, and it also permits the omission of columns from the source file during the load process. This provides flexibility in handling the schema of the data being ingested. References: [COF-C02] SnowPro Core Certification Exam Study Guide
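A sketch of both transformations in one statement, using positional column references against a hypothetical stage:
COPY INTO my_table
FROM (SELECT t.$3, t.$1 FROM @my_stage t); -- loads the file's third and first columns, in that order, omitting the rest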
In which scenarios would a user have to pay Cloud Services costs? (Select TWO).
Compute Credits = 50, Cloud Services Credits = 10
Compute Credits = 80, Cloud Services Credits = 5
Compute Credits = 10, Cloud Services Credits = 9
Compute Credits = 120, Cloud Services Credits = 10
Compute Credits = 200, Cloud Services Credits = 26
In Snowflake, Cloud Services usage is billed only when it exceeds 10% of the daily compute (warehouse) credit usage. In scenario A, the 10 Cloud Services credits exceed 10% of the 50 compute credits (5 credits), and in scenario E, 26 exceeds 10% of 200 (20 credits), so both scenarios incur Cloud Services charges.
True or False: Loading data into Snowflake requires that source data files be no larger than 16MB.
True
False
Snowflake does not require source data files to be no larger than 16MB. In fact, Snowflake recommends that for optimal load performance, data files should be roughly 100-250 MB in size when compressed. However, it is not recommended to load very large files (e.g., 100 GB or larger) due to potential delays and wasted credits if errors occur. Smaller files should be aggregated to minimize processing overhead, and larger files should be split to distribute the load among compute resources in an active warehouse.
Which views are included in the DATA_SHARING_USAGE schema? (Select TWO).
ACCESS_HISTORY
DATA_TRANSFER_HISTORY
WAREHOUSE_METERING_HISTORY
MONETIZED_USAGE_DAILY
LISTING_TELEMETRY_DAILY
The DATA_SHARING_USAGE schema includes views that display information about listings published in the Snowflake Marketplace or a data exchange, including MONETIZED_USAGE_DAILY and LISTING_TELEMETRY_DAILY.
Which type of loop requires a BREAK statement to stop executing?
FOR
LOOP
REPEAT
WHILE
The LOOP type of loop in Snowflake Scripting does not have a built-in termination condition and requires a BREAK statement to stop executing.
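A minimal Snowflake Scripting sketch of such a loop (the variable name is arbitrary):
EXECUTE IMMEDIATE $$
DECLARE
  counter INTEGER DEFAULT 0;
BEGIN
  LOOP
    counter := counter + 1;
    IF (counter >= 5) THEN
      BREAK; -- without this, the LOOP would never terminate
    END IF;
  END LOOP;
  RETURN counter;
END;
$$;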
What is the purpose of the Snowflake SPLIT_TO_TABLE function?
To count the number of characters in a string
To split a string into an array of sub-strings
To split a string and flatten the results into rows
To split a string and flatten the results into columns
The purpose of the Snowflake SPLIT_TO_TABLE function is to split a string based on a specified delimiter and flatten the results into rows. This table function is useful for transforming a delimited string into a set of rows that can be further processed or queried.
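For example:
SELECT value
FROM TABLE(SPLIT_TO_TABLE('a,b,c', ',')); -- returns three rows: a, b, c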
Which Snowflake feature allows administrators to identify unused data that may be archived or deleted?
Access history
Data classification
Dynamic Data Masking
Object tagging
The Access History feature in Snowflake allows administrators to track data access patterns and identify unused data. This information can be used to make decisions about archiving or deleting data to optimize storage and reduce costs.
Which Snowflake feature allows a user to track sensitive data for compliance, discovery, protection, and resource usage?
Tags
Comments
Internal tokenization
Row access policies
Tags in Snowflake allow users to track sensitive data for compliance, discovery, protection, and resource usage. They enable the categorization and tracking of data, supporting compliance with privacy regulations. References: [COF-C02] SnowPro Core Certification Exam Study Guide
What are the least privileges needed to view and modify resource monitors? (Select TWO).
SELECT
OWNERSHIP
MONITOR
MODIFY
USAGE
To view and modify resource monitors, the least privileges needed are MONITOR and MODIFY. These privileges allow a user to monitor credit usage and make changes to resource monitors.
Who can activate and enforce a network policy for all users in a Snowflake account? (Select TWO).
A user with an USERADMIN or higher role
A user with a SECURITYADMIN or higher role
A role that has been granted the ATTACH POLICY privilege
A role that has the NETWORK_POLICY account parameter set
A role that has the ownership of the network policy
In Snowflake, a user with the SECURITYADMIN role or higher can activate and enforce a network policy for all users in an account. Additionally, a role that has ownership of the network policy can also activate and enforce it.
Which statements describe benefits of Snowflake's separation of compute and storage? (Select TWO).
The separation allows independent scaling of computing resources.
The separation ensures consistent data encryption across all virtual data warehouses.
The separation supports automatic conversion of semi-structured data into structured data for advanced data analysis.
Storage volume growth and compute usage growth can be tightly coupled.
Compute can be scaled up or down without the requirement to add more storage.
Snowflake’s architecture allows for the independent scaling of compute resources, meaning you can increase or decrease the computational power as needed without affecting storage. This separation also means that storage can grow independently of compute usage, allowing for more flexible and cost-effective data management.
Which function unloads data from a relational table to JSON?
TO_OBJECT
TO_JSON
TO_VARIANT
OBJECT_CONSTRUCT
The TO_JSON function is used to convert a VARIANT value into a string containing the JSON representation of the value. This function is suitable for unloading data from a relational table to JSON format. References: [COF-C02] SnowPro Core Certification Exam Study Guide
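A short sketch (the table and column names are hypothetical); OBJECT_CONSTRUCT builds a JSON object per row and TO_JSON serializes it to a string:
SELECT TO_JSON(OBJECT_CONSTRUCT('id', id, 'name', name)) AS json_row
FROM my_table; -- each row becomes a JSON string such as {"id":1,"name":"Ana"}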
Which statistics can be used to identify queries that have inefficient pruning? (Select TWO).
Bytes scanned
Bytes written to result
Partitions scanned
Partitions total
Percentage scanned from cache
The statistics that can be used to identify queries with inefficient pruning are 'Partitions scanned' and 'Partitions total'. These statistics indicate how much of the data was actually needed and scanned versus the total available, which can highlight inefficiencies in data pruning.
What does a masking policy consist of in Snowflake?
A single data type, with one or more conditions, and one or more masking functions
A single data type, with only one condition, and only one masking function
Multiple data types, with only one condition, and one or more masking functions
Multiple data types, with one or more conditions, and one or more masking functions
A masking policy in Snowflake consists of a single data type, with one or more conditions, and one or more masking functions. These components define how the data is masked based on the specified conditions.
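A minimal sketch of that structure (the policy and role names are hypothetical): one input data type (STRING), a condition in the CASE, and a masking expression for each branch:
CREATE MASKING POLICY email_mask AS (val STRING) RETURNS STRING ->
  CASE
    WHEN CURRENT_ROLE() IN ('ANALYST') THEN val -- condition: authorized role sees the value
    ELSE '*****' -- masked output for everyone else
  END;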
Which operation can be performed on Snowflake external tables?
INSERT
JOIN
RENAME
ALTER
Snowflake external tables are read-only, so DML operations such as INSERT cannot be performed on them. External tables can, however, be queried and used in JOIN operations.
What are key characteristics of virtual warehouses in Snowflake? (Select TWO).
Warehouses that are multi-cluster can have nodes of different sizes.
Warehouses can be started and stopped at any time.
Warehouses can be resized at any time, even while running.
Warehouses are billed on a per-minute usage basis.
Warehouses can only be used for querying and cannot be used for data loading.
Virtual warehouses in Snowflake can be started and stopped at any time, providing flexibility in managing compute resources. They can also be resized at any time, even while running, to accommodate varying workloads. References: [COF-C02] SnowPro Core Certification Exam Study Guide
Which privilege must be granted by one role to another role, and cannot be revoked?
MONITOR
OPERATE
OWNERSHIP
ALL
The OWNERSHIP privilege is unique in that it must be granted by one role to another and cannot be revoked. This ensures that the transfer of ownership is deliberate and permanent, reflecting the importance of ownership in managing access and permissions.
What is the purpose of the STRIP_NULL_VALUES file format option when loading semi-structured data files into Snowflake?
It removes null values from all columns in the data.
It converts null values to empty strings during loading.
It skips rows with null values during the loading process.
It removes object or array elements containing null values.
The STRIP_NULL_VALUES file format option, when set to TRUE, removes object or array elements that contain null values during the loading process of semi-structured data files into Snowflake. This ensures that the data loaded into Snowflake tables does not contain these null elements, which can be useful when the "null" values in files indicate missing values and have no other special meaning.
Which Snowflake feature provides increased login security for users connecting to Snowflake that is powered by Duo Security service?
OAuth
Network policies
Single Sign-On (SSO)
Multi-Factor Authentication (MFA)
Multi-Factor Authentication (MFA) provides increased login security for users connecting to Snowflake. Snowflake’s MFA is powered by Duo Security service, which adds an additional layer of security during the login process.
What does the LATERAL modifier for the FLATTEN function do?
Casts the values of the flattened data
Extracts the path of the flattened data
Joins information outside the object with the flattened data
Retrieves a single instance of a repeating element in the flattened data
The LATERAL modifier for the FLATTEN function allows joining information outside the object (such as other columns in the source table) with the flattened data, creating a lateral view that correlates with the preceding tables in the FROM clause. References: [COF-C02] SnowPro Core Certification Exam Study Guide
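For illustration, joining a table's scalar column with its flattened VARIANT data (names are hypothetical):
SELECT e.id, f.value::STRING AS tag
FROM raw_events e,
LATERAL FLATTEN(INPUT => e.v:tags) f; -- e.id comes from outside the object being flattened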
When enabling access to unstructured data, which URL permits temporary access to a staged file without the need to grant privileges to the stage or to issue access tokens?
File URL
Scoped URL
Relative URL
Pre-Signed URL
A Scoped URL permits temporary access to a staged file without the need to grant privileges to the stage or to issue access tokens. It provides a secure way to share access to files stored in Snowflake.
What type of query will benefit from the query acceleration service?
Queries without filters or aggregation
Queries with large scans and selective filters
Queries where the GROUP BY has high cardinality
Queries of tables that have search optimization service enabled
The query acceleration service in Snowflake is designed to benefit queries that involve large scans and selective filters. This service can offload portions of the query processing work to shared compute resources, which can handle these types of workloads more efficiently by performing more work in parallel and reducing the wall-clock time spent in scanning and filtering. References: [COF-C02] SnowPro Core Certification Exam Study Guide
What function can be used with the recursive argument to return a list of distinct key names in all nested elements in an object?
FLATTEN
GET_PATH
CHECK_JSON
PARSE_JSON
The FLATTEN function can be used with the RECURSIVE argument to return a list of distinct key names in all nested elements within an object. This function is particularly useful for working with semi-structured data in Snowflake.
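A sketch against a hypothetical VARIANT column:
SELECT DISTINCT f.key
FROM raw_events e,
LATERAL FLATTEN(INPUT => e.v, RECURSIVE => TRUE) f
WHERE f.key IS NOT NULL; -- distinct key names from all nesting levels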
Which Snowflake data types can be used to build nested hierarchical data? (Select TWO)
INTEGER
OBJECT
VARIANT
VARCHAR
LIST
The Snowflake data types that can be used to build nested hierarchical data are OBJECT and VARIANT. These data types support the storage and querying of semi-structured data, allowing for the creation of complex, nested data structures.
Who can grant object privileges in a regular schema?
Object owner
Schema owner
Database owner
SYSADMIN
In a regular schema within Snowflake, the object owner has the privilege to grant object privileges. The object owner is typically the role that created the object or to whom ownership of the object has been transferred.
References: [COF-C02] SnowPro Core Certification Exam Study Guide
What is the purpose of a Query Profile?
To profile how many times a particular query was executed and analyze its usage statistics over time.
To profile a particular query to understand the mechanics of the query, its behavior, and performance.
To profile the user and/or executing role of a query and all privileges and policies applied on the objects within the query.
To profile which queries are running in each warehouse and identify proper warehouse utilization and sizing for better performance and cost balancing.
The purpose of a Query Profile is to provide a detailed analysis of a particular query's execution plan, including the mechanics, behavior, and performance. It helps in identifying potential performance bottlenecks and areas for optimization.
A user wants to access files stored in a stage without authenticating into Snowflake. Which type of URL should be used?
File URL
Staged URL
Scoped URL
Pre-signed URL
A Pre-signed URL should be used to access files stored in a Snowflake stage without requiring authentication into Snowflake. Pre-signed URLs are simple HTTPS URLs that provide temporary access to a file via a web browser, using a pre-signed access token. The expiration time for the access token is configurable, and this type of URL allows users or applications to directly access or download the files without needing to authenticate into Snowflake.
For which use cases is running a virtual warehouse required? (Select TWO).
When creating a table
When loading data into a table
When unloading data from a table
When executing a show command
When executing a list command
Running a virtual warehouse is required when loading data into a table and when unloading data from a table, because these operations require compute resources that are provided by the virtual warehouse.
What will prevent unauthorized access to a Snowflake account from an unknown source?
Network policy
End-to-end encryption
Multi-Factor Authentication (MFA)
Role-Based Access Control (RBAC)
A network policy in Snowflake is used to restrict access to the Snowflake account from unauthorized or unknown sources. It allows administrators to specify allowed IP address ranges, thus preventing access from any IP addresses not listed in the policy.
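A minimal sketch (the policy name and CIDR range are hypothetical; setting the account-level policy requires an appropriately privileged role):
CREATE NETWORK POLICY corp_only ALLOWED_IP_LIST = ('203.0.113.0/24');
ALTER ACCOUNT SET NETWORK_POLICY = corp_only; -- connections from other IP addresses are rejected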
Which Snowflake database object can be used to track data changes made to table data?
Tag
Task
Stream
Stored procedure
A Stream object in Snowflake is used for change data capture (CDC), which records data manipulation language (DML) changes made to tables, including inserts, updates, and deletes.
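For illustration (names are hypothetical):
CREATE STREAM my_stream ON TABLE my_table;
-- after INSERT/UPDATE/DELETE statements run against my_table:
SELECT * FROM my_stream; -- returns the changed rows plus METADATA$ACTION and related columns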
A Snowflake account has activated federated authentication.
What will occur when a user with a password that was defined by Snowflake attempts to log in to Snowflake?
The user will be unable to enter a password.
The user will encounter an error, and will not be able to log in.
The user will be able to log into Snowflake successfully.
After entering the username and password, the user will be redirected to an Identity Provider (IdP) login page.
When federated authentication is activated in Snowflake, users authenticate via an external identity provider (IdP) rather than using Snowflake-managed credentials. Therefore, a user with a password defined by Snowflake will be unable to enter a password and must use their IdP credentials to log in.
What value provides information about disk usage for operations where intermediate results do not fit in memory in a Query Profile?
IO
Network
Pruning
Spilling
In Snowflake, when a query execution requires more memory than what is available, Snowflake handles these situations by spilling the intermediate results to disk. This process is known as "spilling." The Query Profile in Snowflake includes a metric that helps users identify when and how much data spilling occurs during the execution of a query. This information is crucial for optimizing queries as excessive spilling can significantly slow down query performance. The value that provides this information about disk usage due to intermediate results not fitting in memory is appropriately labeled as "Spilling" in the Query Profile.
What metadata does Snowflake store for rows in micro-partitions? (Select TWO).
Range of values
Distinct values
Index values
Sorted values
Null values
Snowflake stores metadata for rows in micro-partitions, including the range of values for each column and the number of distinct values.
Which Snowflake function is maintained separately from the data and helps to support features such as Time Travel, Secure Data Sharing, and pruning?
Column compression
Data clustering
Micro-partitioning
Metadata management
Metadata management is the Snowflake function that is maintained separately from the data and supports features such as Time Travel, Secure Data Sharing, and pruning. Snowflake keeps extensive metadata about the data stored in each micro-partition (for example, value ranges and row counts), which allows these features to operate without scanning or copying the underlying data.
How can a dropped internal stage be restored?
Enable Time Travel.
Clone the dropped stage.
Execute the UNDROP command.
Recreate the dropped stage.
Once an internal stage is dropped in Snowflake, it cannot be recovered using Time Travel or the UNDROP command. The only option is to recreate the dropped stage.
Which solution improves the performance of point lookup queries that return a small number of rows from large tables using highly selective filters?
Automatic clustering
Materialized views
Query acceleration service
Search optimization service
The search optimization service improves the performance of point lookup queries on large tables by using selective filters to quickly return a small number of rows. It creates an optimized data structure that helps in pruning the micro-partitions that do not contain the queried values. References: [COF-C02] SnowPro Core Certification Exam Study Guide
How can performance be optimized for a query that returns a small amount of data from a very large base table?
Use clustering keys
Create materialized views
Use the search optimization service
Use the query acceleration service
The search optimization service in Snowflake is designed to improve the performance of selective point lookup queries on large tables, which is ideal for scenarios where a query returns a small amount of data from a very large base table. References: [COF-C02] SnowPro Core Certification Exam Study Guide
What transformations are supported when loading data into a table using the COPY INTO command?