Splunk SPLK-2002 Splunk Enterprise Certified Architect Exam Practice Test
Splunk Enterprise Certified Architect Questions and Answers
What is needed to ensure that high-velocity sources will not have forwarding delays to the indexers?
Options:
Increase the default value of sessionTimeout in server.conf.
Increase the default limit for maxKBps in limits.conf.
Decrease the value of forceTimebasedAutoLB in outputs.conf.
Decrease the default value of phoneHomeIntervalInSecs in deploymentclient.conf.
Answer:
B
Explanation:
To ensure that high-velocity sources will not have forwarding delays to the indexers, the default limit for maxKBps in limits.conf should be increased. This parameter controls the maximum bandwidth that a forwarder can use to send data to the indexers. By default, it is set to 256 KBps, which may not be sufficient for high-volume data sources. Increasing this limit can reduce the forwarding latency and improve the performance of the forwarders. However, this should be done with caution, as it may affect the network bandwidth and the indexer load. Option B is the correct answer. Option A is incorrect because the sessionTimeout parameter in server.conf controls the duration of a TCP connection between a forwarder and an indexer, not the bandwidth limit. Option C is incorrect because the forceTimebasedAutoLB parameter in outputs.conf controls the frequency of load balancing among the indexers, not the bandwidth limit. Option D is incorrect because the phoneHomeIntervalInSecs parameter in deploymentclient.conf controls the interval at which a forwarder contacts the deployment server, not the bandwidth limit12
1: https://docs.splunk.com/Documentation/Splunk/9.1.2/Admin/Limitsconf#limits.conf.spec 2: https://docs.splunk.com/Documentation/Splunk/9.1.2/Forwarding/Routeandfilterdatad#Set_the_maximum_bandwidth_usage_for_a_forwarder
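For illustration, a sketch of the relevant limits.conf stanza on the forwarder is shown below; the value 1024 is only an example, and 0 removes the cap entirely:
[thruput]
# default is 256 KBps on universal forwarders; raise (or set to 0) for high-velocity sources
maxKBps = 1024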
Which of the following items are important sizing parameters when architecting a Splunk environment? (select all that apply)
Options:
Number of concurrent users.
Volume of incoming data.
Existence of premium apps.
Number of indexes.
Answer:
A, B, C
Explanation:
Number of concurrent users: This is an important factor because it affects the search performance and resource utilization of the Splunk environment. More users mean more concurrent searches, which require more CPU, memory, and disk I/O. The number of concurrent users also determines the search head capacity and the search head clustering configuration12
Volume of incoming data: This is another crucial factor because it affects the indexing performance and storage requirements of the Splunk environment. More data means more indexing throughput, which requires more CPU, memory, and disk I/O. The volume of incoming data also determines the indexer capacity and the indexer clustering configuration13
Existence of premium apps: This is a relevant factor because some premium apps, such as Splunk Enterprise Security and Splunk IT Service Intelligence, have additional requirements and recommendations for the Splunk environment. For example, Splunk Enterprise Security requires a dedicated search head cluster and a minimum of 12 CPU cores per search head. Splunk IT Service Intelligence requires a minimum of 16 CPU cores and 64 GB of RAM per search head45
(Which of the following must be included in a deployment plan?)
Options:
Future topology diagrams of the IT environment.
A comprehensive list of stakeholders, either direct or indirect.
Current logging details and data source inventory.
Business continuity and disaster recovery plans.
Answer:
C
Explanation:
According to Splunk’s Deployment Planning and Implementation Guidelines, one of the most critical elements of a Splunk deployment plan is a comprehensive data source inventory and current logging details. This information defines the scope of data ingestion and directly influences sizing, architecture design, and licensing.
A proper deployment plan should identify:
All data sources (such as syslogs, application logs, network devices, OS logs, databases, etc.)
Expected daily ingest volume per source
Log formats and sourcetypes
Retention requirements and compliance constraints
This data forms the foundation for index sizing, forwarder configuration, and storage planning. Without a well-defined data inventory, Splunk architects cannot accurately determine hardware capacity, indexing load, or network throughput requirements.
While stakeholder mapping, topology diagrams, and continuity plans (Options A, B, D) are valuable in a broader IT project, Splunk’s official guidance emphasizes logging details and source inventory as mandatory for a deployment plan. It ensures that the Splunk environment is properly sized, licensed, and aligned with business data visibility goals.
References (Splunk Enterprise Documentation):
• Splunk Enterprise Deployment Planning Manual – Data Source Inventory Requirements
• Capacity Planning for Indexer and Search Head Sizing
• Planning Data Onboarding and Ingestion Strategies
• Splunk Architecture and Implementation Best Practices
Which command should be run to re-sync a stale KV Store member in a search head cluster?
Options:
splunk clean kvstore -local
splunk resync kvstore -remote
splunk resync kvstore -local
splunk clean eventdata -local
Answer:
A
Explanation:
To resync a stale KV Store member in a search head cluster, you need to stop the search head that has the stale KV Store member, run the command splunk clean kvstore --local, and then restart the search head. This triggers the initial synchronization from other KV Store members12.
The command splunk resync kvstore [-source sourceId] is used to resync the entire KV Store cluster from one of the members, not a single member. This command can only be invoked from the node that is operating as search head cluster captain2.
The command splunk clean eventdata -local is used to delete all indexed data from a standalone indexer or a cluster peer node, not to resync the KV Store3.
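For reference, a minimal sketch of that documented sequence, run on the stale member:
splunk stop
splunk clean kvstore --local
splunk start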
What is the logical first step when starting a deployment plan?
Options:
Inventory the currently deployed logging infrastructure.
Determine what apps and use cases will be implemented.
Gather statistics on the expected adoption of Splunk for sizing.
Collect the initial requirements for the deployment from all stakeholders.
Answer:
D
Explanation:
The logical first step when starting a deployment plan is to collect the initial requirements for the deployment from all stakeholders. This includes identifying the business objectives, the data sources, the use cases, the security and compliance needs, the scalability and availability expectations, and the budget and timeline constraints. Collecting the initial requirements helps to define the scope and the goals of the deployment, and to align the expectations of all the parties involved.
Inventorying the currently deployed logging infrastructure, determining what apps and use cases will be implemented, and gathering statistics on the expected adoption of Splunk for sizing are all important steps in the deployment planning process, but they are not the logical first step. These steps can be done after collecting the initial requirements, as they depend on the information gathered from the stakeholders.
Which of the following is true regarding Splunk Enterprise's performance? (Select all that apply.)
Options:
Adding search peers increases the maximum size of search results.
Adding RAM to existing search heads provides additional search capacity.
Adding search peers increases the search throughput as the search load increases.
Adding search heads provides additional CPU cores to run more concurrent searches.
Answer:
C, D
Explanation:
The following statements are true regarding Splunk Enterprise performance:
Adding search peers increases the search throughput as search load increases. This is because adding more search peers distributes the search workload across more indexers, which reduces the load on each indexer and improves the search speed and concurrency.
Adding search heads provides additional CPU cores to run more concurrent searches. This is because adding more search heads increases the number of search processes that can run in parallel, which improves the search performance and scalability. The following statements are false regarding Splunk Enterprise performance:
Adding search peers does not increase the maximum size of search results. The maximum size of search results is determined by the maxresultrows setting in the limits.conf file, which is independent of the number of search peers.
Adding RAM to an existing search head does not provide additional search capacity. The search capacity of a search head is determined by the number of CPU cores, not the amount of RAM. Adding RAM to a search head may improve the search performance, but not the search capacity. For more information, see Splunk Enterprise performance in the Splunk documentation.
When troubleshooting a situation where some files within a directory are not being indexed, the ignored files are discovered to have long headers. What is the first thing that should be added to inputs.conf?
Options:
Decrease the value of initCrcLength.
Add a crcSalt=
Increase the value of initCrcLength.
Add a crcSalt=
Answer:
C
Explanation:
inputs.conf is a configuration file that contains settings for various types of data inputs, such as files, directories, network ports, scripts, and so on1.
initCrcLength is a setting that specifies the number of characters that the input uses to calculate the CRC (cyclic redundancy check) of a file1. The CRC is a value that uniquely identifies a file based on its content2.
crcSalt is another setting that adds a string to the CRC calculation to force the input to consume files that have matching CRCs1. This can be useful when files have identical headers or when files are renamed or rolled over2.
When troubleshooting a situation where some files within a directory are not being indexed, the ignored files are discovered to have long headers, the first thing that should be added to inputs.conf is to increase the value of initCrcLength. This is because by default, the input only performs CRC checks against the first 256 bytes of a file, which means that files with long headers may have matching CRCs and be skipped by the input2. By increasing the value of initCrcLength, the input can use more characters from the file to calculate the CRC, which can reduce the chances of CRC collisions and ensure that different files are indexed3.
Option C is the correct answer because it reflects the best practice for troubleshooting this situation. Option A is incorrect because decreasing the value of initCrcLength would make the CRC calculation less reliable and more prone to collisions. Option B is incorrect because adding a crcSalt with a static string would not help differentiate files with long headers, as they would still have matching CRCs. Option D is incorrect because adding a crcSalt with the
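As a sketch, a monitor stanza along these lines in inputs.conf raises the CRC window beyond the 256-byte default (the path and value are illustrative):
[monitor:///var/log/myapp]
initCrcLength = 1024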
If there is a deployment server with many clients and one deployment client is not updating apps, which of the following should be done first?
Options:
Choose a longer phone home interval for all of the deployment clients.
Increase the number of CPU cores for the deployment server.
Choose a corrective action based on the splunkd.log of the deployment client.
Increase the amount of memory for the deployment server.
Answer:
C
Explanation:
The correct action to take first if a deployment client is not updating apps is to choose a corrective action based on the splunkd.log of the deployment client. This log file contains information about the communication between the deployment server and the deployment client, and it can help identify the root cause of the problem1. The other actions may or may not help, depending on the situation, but they are not the first steps to take. Choosing a longer phone home interval may reduce the load on the deployment server, but it will also delay the updates for the deployment clients2. Increasing the number of CPU cores or the amount of memory for the deployment server may improve its performance, but it will not fix the issue if the problem is on the deployment client side3. Therefore, option C is the correct answer, and options A, B, and D are incorrect.
1: Troubleshoot deployment server issues 2: Configure deployment clients 3: Hardware and software requirements for the deployment server
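As a quick sketch, the relevant entries can be inspected directly on the client, assuming a default installation path:
grep -i deploymentclient $SPLUNK_HOME/var/log/splunk/splunkd.log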
Which of the following can a Splunk diag contain?
Options:
Search history, Splunk users and their roles, running processes, indexed data
Server specs, current open connections, internal Splunk log files, index listings
KV store listings, internal Splunk log files, search peer bundles listings, indexed data
Splunk platform configuration details, Splunk users and their roles, current open connections, index listings
Answer:
B
Explanation:
The following artifacts are included in a Splunk diag file:
Server specs. These are the specifications of the server that Splunk runs on, such as the CPU model, the memory size, the disk space, and the network interface. These specs can help understand the Splunk hardware requirements and performance.
Current open connections. These are the connections that Splunk has established with other Splunk instances or external sources, such as forwarders, indexers, search heads, license masters, deployment servers, and data inputs. These connections can help understand the Splunk network topology and communication.
Internal Splunk log files. These are the log files that Splunk generates to record its own activities, such as splunkd.log, metrics.log, audit.log, and others. These logs can help troubleshoot Splunk issues and monitor Splunk performance.
Index listings. These are the listings of the indexes that Splunk has created and configured, such as the index name, the index location, the index size, and the index attributes. These listings can help understand the Splunk data management and retention. The following artifacts are not included in a Splunk diag file:
Search history. This is the history of the searches that Splunk has executed, such as the search query, the search time, the search results, and the search user. This history is not part of the Splunk diag file, but it can be accessed from the Splunk Web interface or the audit.log file.
Splunk users and their roles. These are the users that Splunk has created and assigned roles to, such as the user name, the user password, the user role, and the user capabilities. These users and roles are not part of the Splunk diag file, but they can be accessed from the Splunk Web interface or the authentication.conf and authorize.conf files.
KV store listings. These are the listings of the KV store collections and documents that Splunk has created and stored, such as the collection name, the collection schema, the document ID, and the document fields. These listings are not part of the Splunk diag file, but they can be accessed from the Splunk Web interface or the mongod.log file.
Indexed data. These are the data that Splunk indexes and makes searchable, such as the rawdata and the tsidx files. These data are not part of the Splunk diag file, as they may contain sensitive or confidential information. For more information, see Generate a diagnostic snapshot of your Splunk Enterprise deployment in the Splunk documentation.
Which of the following is unsupported in a production environment?
Options:
Cluster Manager can run on the Monitoring Console instance in smaller environments.
Search Head Cluster Deployer can run on the Monitoring Console instance in smaller environments.
Search heads in a Search Head Cluster can run on virtual machines.
Indexers in an indexer cluster can run on virtual machines.
Answer:
A, D
Explanation:
Comprehensive and Detailed Explanation (From Splunk Enterprise Documentation)
Splunk Enterprise documentation clarifies that none of the listed configurations are prohibited in production. Splunk allows the Cluster Manager to be colocated with the Monitoring Console in small deployments because both are management-plane functions and do not handle ingestion or search traffic. The documentation also states that the Search Head Cluster Deployer is not a runtime component and has minimal performance requirements, so it may be colocated with the Monitoring Console or Licensing Master when hardware resources permit.
Splunk also supports virtual machines for both search heads and indexers, provided they are deployed with dedicated CPU, storage throughput, and predictable performance. Splunk’s official hardware guidance specifies that while bare metal often yields higher performance, virtualized deployments are fully supported in production as long as sizing principles are met.
Because Splunk explicitly supports all four configurations under proper sizing and best-practice guidelines, there is no correct selection for “unsupported.” The question is outdated relative to current Splunk Enterprise recommendations.
Which of the following most improves KV Store resiliency?
Options:
Decrease latency between search heads.
Add faster storage to the search heads to improve artifact replication.
Add indexer CPU and memory to decrease search latency.
Increase the size of the Operations Log.
Answer:
A
Explanation:
KV Store is a feature of Splunk Enterprise that allows apps to store and retrieve data within the context of an app1.
KV Store resides on search heads and replicates data across the members of a search head cluster1.
KV Store resiliency refers to the ability of KV Store to maintain data availability and consistency in the event of failures or disruptions2.
One of the factors that affects KV Store resiliency is the network latency between search heads, which can impact the speed and reliability of data replication2.
Decreasing latency between search heads can improve KV Store resiliency by reducing the chances of data loss, inconsistency, or corruption2.
The other options are not directly related to KV Store resiliency. Faster storage, indexer CPU and memory, and Operations Log size may affect other aspects of Splunk performance, but not KV Store345.
(A high-volume source and a low-volume source feed into the same index. Which of the following items best describe the impact of this design choice?)
Options:
Low volume data will improve the compression factor of the high volume data.
Search speed on low volume data will be slower than necessary.
Low volume data may move out of the index based on volume rather than age.
High volume data is optimized by the presence of low volume data.
Answer:
B, C
Explanation:
The Splunk Managing Indexes and Storage Documentation explains that when multiple data sources with significantly different ingestion rates share a single index, index bucket management is governed by volume-based rotation, not by source or time. This means that high-volume data causes buckets to fill and roll more quickly, which in turn causes low-volume data to age out prematurely, even if it is relatively recent — hence Option C is correct.
Additionally, because Splunk organizes data within index buckets based on event time and storage characteristics, low-volume data mixed with high-volume data results in inefficient searches for smaller datasets. Queries that target the low-volume source will have to scan through the same large number of buckets containing the high-volume data, leading to slower-than-necessary search performance — Option B.
Compression efficiency (Option A) and performance optimization through data mixing (Option D) are not influenced by mixing volume patterns; these are determined by the event structure and compression algorithm, not source diversity. Splunk best practices recommend separating data sources into different indexes based on usage, volume, and retention requirements to optimize both performance and lifecycle management.
References (Splunk Enterprise Documentation):
• Managing Indexes and Storage – How Splunk Manages Buckets and Data Aging
• Splunk Indexing Performance and Data Organization Best Practices
• Splunk Enterprise Architecture and Data Lifecycle Management
• Best Practices for Data Volume Segregation and Retention Policies
When configuring a Splunk indexer cluster, what are the default values for replication and search factor?
Options:
replication_factor = 2, search_factor = 2
replication_factor = 2, search_factor = 3
replication_factor = 3, search_factor = 2
replication_factor = 3, search_factor = 3
Answer:
C
Explanation:
The replication factor and the search factor are two important settings for a Splunk indexer cluster. The replication factor determines how many copies of each bucket are maintained across the set of peer nodes. The search factor determines how many searchable copies of each bucket are maintained. The default values are a replication factor of 3 and a search factor of 2, which means that each bucket has three copies, two of which are searchable.
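For reference, a minimal sketch of the corresponding server.conf stanza on the cluster manager, showing the default replication and search factor values (the mode attribute naming varies by version):
[clustering]
# on older releases this is mode = master
mode = manager
replication_factor = 3
search_factor = 2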
Before users can use a KV store, an admin must create a collection. Where is a collection defined?
Options:
kvstore.conf
collection.conf
collections.conf
kvcollections.conf
Answer:
C
Explanation:
A collection is defined in the collections.conf file, which specifies the name, schema, and permissions of the collection. The kvstore.conf file is used to configure the KV store settings, such as the port, SSL, and replication factor. The other two files do not exist1
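A minimal collections.conf sketch, with hypothetical collection and field names:
[my_collection]
field.src_ip = string
field.request_count = number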
Splunk configuration parameter settings can differ between multiple .conf files of the same name contained within different apps. Which of the following directories has the highest precedence?
Options:
System local directory.
System default directory.
App local directories, in ASCII order.
App default directories, in ASCII order.
Answer:
A
Explanation:
The system local directory has the highest precedence among directories that contain Splunk configuration files of the same name. Splunk configuration files are stored in various directories under SPLUNK_HOME/etc, and the precedence of these directories determines which settings take effect when there are conflicts or overlaps. The system local directory, located at SPLUNK_HOME/etc/system/local, has the highest precedence of all, because it contains the system-level configurations that are specific to the instance. The system default directory, located at SPLUNK_HOME/etc/system/default, has the lowest precedence of all, because it contains the system-level configurations that ship with Splunk and should not be modified.
The app local directories, located at SPLUNK_HOME/etc/apps/APP_NAME/local, have a higher precedence than the app default directories, located at SPLUNK_HOME/etc/apps/APP_NAME/default, because the local directories contain app-level configurations that are specific to the instance, while the default directories contain the configurations shipped with the app and should not be modified. Within the app local and app default layers, precedence is determined by the ASCII order of the app names, with apps whose names come earlier in ASCII order taking precedence.
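The effective, merged value of a setting and the file it comes from can be confirmed with btool; for example (the props stanza name is hypothetical):
splunk btool props list my_sourcetype --debug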
(When determining where a Splunk forwarder is trying to send data, which of the following searches can provide assistance?)
Options:
index=_internal sourcetype=internal metrics destHost | dedup destHost
index=_internal sourcetype=splunkd metrics inputHost | dedup inputHost
index=_metrics sourcetype=splunkd metrics destHost | dedup destHost
index=_internal sourcetype=splunkd metrics destHost | dedup destHost
Answer:
D
Explanation:
To determine where a Splunk forwarder is attempting to send its data, administrators can search within the _internal index using the metrics logs generated by the forwarder’s Splunkd process. The correct and documented search is:
index=_internal sourcetype=splunkd metrics destHost | dedup destHost
The _internal index contains detailed operational logs from the Splunkd process, including metrics on network connections, indexing pipelines, and output groups. The field destHost records the destination indexer(s) to which the forwarder is attempting to send data. Using dedup destHost ensures that only unique destination hosts are shown.
This search is particularly useful for troubleshooting forwarding issues, such as connection failures, misconfigurations in outputs.conf, or load-balancing behavior in multi-indexer setups.
Other listed options are invalid or incorrect because:
sourcetype=internal does not exist.
index=_metrics is not where Splunk stores forwarding telemetry.
The field inputHost identifies the source host, not the destination.
Thus, Option D aligns with Splunk’s official troubleshooting practices for forwarder-to-indexer communication validation.
References (Splunk Enterprise Documentation):
• Monitoring Forwarder Connections and Destinations
• Troubleshooting Forwarding Using Internal Logs
• _internal Index Reference – Metrics and destHost Fields
• outputs.conf – Verifying Forwarder Data Routing and Connectivity
Which of the following is an indexer clustering requirement?
Options:
Must use shared storage.
Must reside on a dedicated rack.
Must have at least three members.
Must share the same license pool.
Answer:
D
Explanation:
An indexer clustering requirement is that the cluster members must share the same license pool and license master. A license pool is a group of licenses that are assigned to a set of Splunk instances. A license master is a Splunk instance that manages the distribution and enforcement of licenses in a pool. In an indexer cluster, all cluster members must belong to the same license pool and report to the same license master, to ensure that the cluster does not exceed the license limit and that the license violations are handled consistently. An indexer cluster does not require shared storage, because each cluster member has its own local storage for the index data. An indexer cluster does not have to reside on a dedicated rack, because the cluster members can be located on different physical or virtual machines, as long as they can communicate with each other. An indexer cluster does not have to have at least three members, because a cluster can have as few as two members, although this is not recommended for high availability
Which two sections can be expanded using the Search Job Inspector?
Options:
Execution costs.
Saved search history.
Search job properties.
Optimization suggestions.
Answer:
C, D
Explanation:
The Search Job Inspector can be used to expand the following sections: Search job properties and Optimization suggestions. The Search Job Inspector is a tool that provides detailed information about a search job, such as the search parameters, the search statistics, the search timeline, and the search log. The Search Job Inspector can be accessed by clicking the Job menu in the Search bar and selecting Inspect Job. The Search Job Inspector has several sections that can be expanded or collapsed by clicking the arrow icon next to the section name. The Search job properties section shows the basic information about the search job, such as the SID, the status, the duration, the disk usage, and the scan count. The Optimization suggestions section shows the suggestions for improving the search performance, such as using transforming commands, filtering events, or reducing fields. The Execution costs and Saved search history sections are not part of the Search Job Inspector, and they cannot be expanded. The Execution costs section is part of the Search Dashboard, which shows the relative costs of each search component, such as commands, lookups, or subsearches. The Saved search history section is part of the Saved Searches page, which shows the history of the saved searches that have been run by the user or by a schedule
Which search will show all deployment client messages from the client (UF)?
Options:
index=_audit component=DC* host=
index=_audit component=DC* host=
index=_internal component=DC* host=
index=_internal component=DS* host=
Answer:
C
Explanation:
The index=_internal component=DC* host=
(What command will decommission a search peer from an indexer cluster?)
Options:
splunk disablepeer --enforce-counts
splunk decommission --enforce-counts
splunk offline --enforce-counts
splunk remove cluster-peers --enforce-counts
Answer:
C
Explanation:
The splunk offline --enforce-counts command is the official and documented method used to gracefully decommission a search peer (indexer) from an indexer cluster in Splunk Enterprise. This command ensures that all replication and search factors are maintained before the peer is removed.
When executed, Splunk initiates a controlled shutdown process for the peer node. The Cluster Manager verifies that sufficient replicated copies of all bucket data exist across the remaining peers according to the configured replication_factor (RF) and search_factor (SF). The --enforce-counts flag specifically enforces that replication and search counts remain intact before the peer fully detaches from the cluster, ensuring no data loss or availability gap.
The sequence typically includes:
Validating cluster state and replication health.
Rolling off the peer’s data responsibilities to other peers.
Removing the peer from the active cluster membership list once replication is complete.
Other options like disablepeer, decommission, or remove cluster-peers are not valid Splunk commands. Therefore, the correct documented method is to use:
splunk offline --enforce-counts
References (Splunk Enterprise Documentation):
• Indexer Clustering: Decommissioning a Peer Node
• Managing Peer Nodes and Maintaining Data Availability
• Splunk CLI Command Reference – splunk offline
• Cluster Manager and Peer Maintenance Procedures
When should a dedicated deployment server be used?
Options:
When there are more than 50 search peers.
When there are more than 50 apps to deploy to deployment clients.
When there are more than 50 deployment clients.
When there are more than 50 server classes.
Answer:
C
Explanation:
A dedicated deployment server is a Splunk instance that manages the distribution of configuration updates and apps to a set of deployment clients, such as forwarders, indexers, or search heads. A dedicated deployment server should be used when there are more than 50 deployment clients, because this number exceeds the recommended limit for a non-dedicated deployment server. A non-dedicated deployment server is a Splunk instance that also performs other roles, such as indexing or searching. Using a dedicated deployment server can improve the performance, scalability, and reliability of the deployment process. Option C is the correct answer. Option A is incorrect because the number of search peers does not affect the need for a dedicated deployment server. Search peers are indexers that participate in a distributed search. Option B is incorrect because the number of apps to deploy does not affect the need for a dedicated deployment server. Apps are packages of configurations and assets that provide specific functionality or views in Splunk. Option D is incorrect because the number of server classes does not affect the need for a dedicated deployment server. Server classes are logical groups of deployment clients that share the same configuration updates and apps12
1: https://docs.splunk.com/Documentation/Splunk/9.1.2/Updating/Aboutdeploymentserver 2: https://docs.splunk.com/Documentation/Splunk/9.1.2/Updating/Whentousedeploymentserver
Which of the following would be the least helpful in troubleshooting contents of Splunk configuration files?
Options:
crash logs
search.log
btool output
diagnostic logs
Answer:
A
Explanation:
Splunk configuration files are files that contain settings that control various aspects of Splunk behavior, such as data inputs, outputs, indexing, searching, clustering, and so on1. Troubleshooting Splunk configuration files involves identifying and resolving issues that affect the functionality or performance of Splunk due to incorrect or conflicting configuration settings. Some of the tools and methods that can help with troubleshooting Splunk configuration files are:
search.log: This is a file that contains detailed information about the execution of a search, such as the search pipeline, the search commands, the search results, the search errors, and the search performance2. This file can help troubleshoot issues related to search configuration, such as props.conf, transforms.conf, macros.conf, and so on3.
btool output: This is a command-line tool that displays the effective configuration settings for a given Splunk component, such as inputs, outputs, indexes, props, and so on4. This tool can help troubleshoot issues related to configuration precedence, inheritance, and merging, as well as identify the source of a configuration setting5.
diagnostic logs: These are files that contain information about the Splunk system, such as the Splunk version, the operating system, the hardware, the license, the indexes, the apps, the users, the roles, the permissions, the configuration files, the log files, and the metrics6. These files can help troubleshoot issues related to Splunk installation, deployment, performance, and health7.
Option A is the correct answer because crash logs are the least helpful in troubleshooting Splunk configuration files. Crash logs are files that contain information about the Splunk process when it crashes, such as the stack trace, the memory dump, and the environment variables8. These files can help troubleshoot issues related to Splunk stability, reliability, and security, but not necessarily related to Splunk configuration9.
Which tool(s) can be leveraged to diagnose connection problems between an indexer and forwarder? (Select all that apply.)
Options:
telnet
tcpdump
splunk btool
splunk btprobe
Answer:
A, B
Explanation:
The telnet and tcpdump tools can be leveraged to diagnose connection problems between an indexer and forwarder. The telnet tool can be used to test the connectivity and port availability between the indexer and forwarder. The tcpdump tool can be used to capture and analyze the network traffic between the indexer and forwarder. The splunk btool command can be used to check the configuration files of the indexer and forwarder, but it cannot diagnose the connection problems. The splunk btprobe command queries the fishbucket for checkpoints of monitored inputs, so it is not a tool for diagnosing network connection problems either.
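For example, assuming the indexer listens on the default receiving port 9997 (hostname and interface are placeholders):
telnet idx01.example.com 9997
tcpdump -i eth0 host idx01.example.com and port 9997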
Splunk Enterprise platform instrumentation refers to data that the Splunk Enterprise deployment logs in the _introspection index. Which of the following logs are included in this index? (Select all that apply.)
Options:
audit.log
metrics.log
disk_objects.log
resource_usage.log
Answer:
C, D
Explanation:
The following logs are included in the _introspection index, which contains data that the Splunk Enterprise deployment logs for platform instrumentation:
disk_objects.log. This log contains information about the disk objects that Splunk creates and manages, such as buckets, indexes, and files. This log can help monitor the disk space usage and the bucket lifecycle.
resource_usage.log. This log contains information about the resource usage of Splunk processes, such as CPU, memory, disk, and network. This log can help monitor the Splunk performance and identify any resource bottlenecks. The following logs are not included in the _introspection index, but rather in the _internal index, which contains data that Splunk generates for internal logging:
audit.log. This log contains information about the audit events that Splunk records, such as user actions, configuration changes, and search activity. This log can help audit the Splunk operations and security.
metrics.log. This log contains information about the performance metrics that Splunk collects, such as data throughput, data latency, search concurrency, and search duration. This log can help measure the Splunk performance and efficiency. For more information, see About Splunk Enterprise logging and [About the _introspection index] in the Splunk documentation.
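A quick way to confirm that platform instrumentation data is arriving is to search the index directly; the sourcetype names below follow the default platform instrumentation naming:
index=_introspection sourcetype=splunk_resource_usage | head 5
index=_introspection sourcetype=splunk_disk_objects | head 5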
A customer has a multisite cluster with site1 and site2 configured. They want to configure search heads in these sites to get search results only from data stored on their local sites. Which step prevents this behavior?
Options:
Set site=site0 in the [general] stanza of server.conf on the search head.
Configure site_search_factor = site1:1, total:2.
Implement only two indexers per site.
Configure site_search_factor = site1:2, total:3.
Answer:
A
Explanation:
Comprehensive and Detailed Explanation (From Splunk Enterprise Documentation)
Splunk’s multisite clustering documentation describes that search affinity is controlled by the site attribute in server.conf on the search head. Splunk explicitly states that assigning site=site0 on a search head removes site affinity, causing the search head to treat all sites as equal and search remotely as needed. The documentation describes site0 as the special value that disables local-site preference and forces the system to behave like a single-site cluster.
The customer wants each site’s search head to pull results only from its local site. This behavior works only if the search head’s site value matches the local site name (e.g., site1 or site2). By setting it to site0, all locality restrictions are removed, which prevents the desired reduction of network traffic.
The site search factor options (B and D) affect replication and searchable copy placement on indexers, not search head behavior. The number of indexers per site (C) also does not disable search affinity. Therefore only option A disables local-only searching.
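For illustration, the search head in site1 would carry its actual site name in the [general] stanza of server.conf to keep its searches local:
[general]
site = site1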
To improve Splunk performance, parallelIngestionPipelines setting can be adjusted on which of the following components in the Splunk architecture? (Select all that apply.)
Options:
Indexers
Forwarders
Search head
Cluster master
Answer:
A, B
Explanation:
The parallelIngestionPipelines setting can be adjusted on the indexers and forwarders to improve Splunk performance. The parallelIngestionPipelines setting determines how many concurrent data pipeline sets are used to process incoming data. Increasing it can improve data ingestion and indexing throughput, especially for high-volume data sources, provided sufficient CPU and I/O capacity is available. The setting is configured in the [general] stanza of server.conf on the indexers and forwarders. It is not adjusted on the search head or the cluster master, because they are not involved in the data ingestion and indexing process.
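A minimal sketch of the setting, placed in server.conf on an indexer or forwarder; the value 2 is illustrative and should follow Splunk's sizing guidance:
[general]
parallelIngestionPipelines = 2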
Search dashboards in the Monitoring Console indicate that the distributed deployment is approaching its capacity. Which of the following options will provide the most search performance improvement?
Options:
Replace the indexer storage to solid state drives (SSD).
Add more search heads and redistribute users based on the search type.
Look for slow searches and reschedule them to run during an off-peak time.
Add more search peers and make sure forwarders distribute data evenly across all indexers.
Answer:
D
Explanation:
Adding more search peers and making sure forwarders distribute data evenly across all indexers will provide the most search performance improvement when the distributed deployment is approaching its capacity. Adding more search peers will increase the search concurrency and reduce the load on each indexer. Distributing data evenly across all indexers will ensure that the search workload is balanced and no indexer becomes a bottleneck. Replacing the indexer storage to SSD will improve the search performance, but it is a costly and time-consuming option. Adding more search heads will not improve the search performance if the indexers are the bottleneck. Rescheduling slow searches to run during an off-peak time will reduce the search contention, but it will not improve the search performance for each individual search. For more information, see [Scale your indexer cluster] and [Distribute data across your indexers] in the Splunk documentation.
Where does the Splunk deployer send apps by default?
Options:
etc/slave-apps/
etc/deploy-apps/
etc/apps/
etc/shcluster/
Answer:
D
Explanation:
The Splunk deployer stages the apps that it distributes to search head cluster members under the etc/shcluster/ path by default.
Splunk’s documentation recommends placing the configuration bundle in the $SPLUNK_HOME/etc/shcluster/apps directory on the deployer, which then gets distributed to the search head cluster members. However, it should be noted that within each app's directory, configurations can be under default or local subdirectories, with local taking precedence over default for configurations. The reference to etc/shcluster/ in the answer therefore refers to this staging directory on the deployer.
What is the default log size for Splunk internal logs?
Options:
10 MB
20 MB
25 MB
30 MB
Answer:
C
Explanation:
Splunk internal logs are stored in the SPLUNK_HOME/var/log/splunk directory by default. The default log size for Splunk internal logs is 25 MB, which means that when a log file reaches 25 MB, Splunk rolls it to a backup file and creates a new log file. The default number of backup files is 5, which means that Splunk keeps up to 5 backup files for each log file.
A three-node search head cluster is skipping a large number of searches across time. What should be done to increase scheduled search capacity on the search head cluster?
Options:
Create a job server on the cluster.
Add another search head to the cluster.
server.conf captain_is_adhoc_searchhead = true.
Change limits.conf value for max_searches_per_cpu to a higher value.
Answer:
D
Explanation:
Changing the limits.conf value for max_searches_per_cpu to a higher value is the best option to increase scheduled search capacity on the search head cluster when a large number of searches are skipped across time. This value determines how many concurrent searches can run on each CPU core of the search head, which in turn sets the ceiling for scheduled searches. Increasing this value allows more scheduled searches to run at the same time, which reduces the number of skipped searches. Creating a job server on the cluster, setting captain_is_adhoc_searchhead = true in server.conf, or adding another search head to the cluster are not the best options to increase scheduled search capacity on the search head cluster. For more information, see [Configure limits.conf] in the Splunk documentation.
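A minimal limits.conf sketch on the search head cluster members; the value shown is only an example and should be raised cautiously, since it increases CPU and memory consumption:
[search]
max_searches_per_cpu = 2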
What information is needed about the current environment before deploying Splunk? (select all that apply)
Options:
List of vendors for network devices.
Overall goals for the deployment.
Key users.
Data sources.
Answer:
B, C, D
Explanation:
Before deploying Splunk, it is important to gather some information about the current environment, such as:
Overall goals for the deployment: This includes the business objectives, the use cases, the expected outcomes, and the success criteria for the Splunk deployment. This information helps to define the scope, the requirements, the design, and the validation of the Splunk solution1.
Key users: This includes the roles, the responsibilities, the expectations, and the needs of the different types of users who will interact with the Splunk deployment, such as administrators, analysts, developers, and end users. This information helps to determine the user access, the user experience, the user training, and the user feedback for the Splunk solution1.
Data sources: This includes the types, the formats, the volumes, the locations, and the characteristics of the data that will be ingested, indexed, and searched by the Splunk deployment. This information helps to estimate the data throughput, the data retention, the data quality, and the data analysis for the Splunk solution1.
Option B, C, and D are the correct answers because they reflect the essential information that is needed before deploying Splunk. Option A is incorrect because the list of vendors for network devices is not a relevant information for the Splunk deployment. The network devices may be part of the data sources, but the vendors are not important for the Splunk solution.
Which Splunk cluster feature requires additional indexer storage?
Options:
Search Head Clustering
Indexer Discovery
Indexer Acknowledgement
Index Summarization
Answer:
D
Explanation:
Comprehensive and Detailed Explanation (From Splunk Enterprise Documentation)
Splunk’s documentation on summary indexing and data-model acceleration clarifies that summary data is stored as additional indexed data on the indexers. Summary indexing produces new events (aggregations, rollups, scheduled search outputs) and stores them in summary indexes. Splunk explains that these summaries accumulate over time and require additional bucket storage, retention considerations, and sizing adjustments.
The documentation for accelerated data models further confirms that acceleration summaries are stored alongside raw data on indexers, increasing disk usage proportional to the acceleration workload. This makes summary indexing the only listed feature that strictly increases indexer storage demand.
In contrast, Search Head Clustering replicates configuration and knowledge objects across search heads—not on indexers. Indexer Discovery affects forwarder behavior, not storage. Indexer Acknowledgement controls data-delivery guarantees but does not create extra indexed content.
Therefore, only Index Summarization (summary indexing) directly increases indexer storage requirements.
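As an illustration of why summaries consume indexer storage, a scheduled search can write aggregated events into a summary index with the collect command; the index and field names here are hypothetical:
index=web status=5* | stats count by host | collect index=web_summary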
Determining data capacity for an index is a non-trivial exercise. Which of the following are possible considerations that would affect daily indexing volume? (select all that apply)
Options:
Average size of event data.
Number of data sources.
Peak data rates.
Number of concurrent searches on data.
Answer:
A, B, C
Explanation:
According to the Splunk documentation1, determining data capacity for an index is a complex task that depends on several factors, such as:
Average size of event data. This is the average number of bytes per event that you send to Splunk. The larger the events, the more storage space they require and the more indexing time they consume.
Number of data sources. This is the number of different types of data that you send to Splunk, such as logs, metrics, network packets, etc. The more data sources you have, the more diverse and complex your data is, and the more processing and parsing Splunk needs to do to index it.
Peak data rates. This is the maximum amount of data that you send to Splunk per second, minute, hour, or day. The higher the peak data rates, the more load and pressure Splunk faces to index the data in a timely manner.
The other option is false because:
Number of concurrent searches on data. This is not a factor that affects daily indexing volume, as it is related to the search performance and the search scheduler, not the indexing process. However, it can affect the overall resource utilization and the responsiveness of Splunk2.
When adding or rejoining a member to a search head cluster, the following error is displayed:
Error pulling configurations from the search head cluster captain; consider performing a destructive configuration resync on this search head cluster member.
What corrective action should be taken?
Options:
Restart the search head.
Run the splunk apply shcluster-bundle command from the deployer.
Run the clean raft command on all members of the search head cluster.
Run the splunk resync shcluster-replicated-config command on this member.
Answer:
D
Explanation:
When adding or rejoining a member to a search head cluster, and the following error is displayed: Error pulling configurations from the search head cluster captain; consider performing a destructive configuration resync on this search head cluster member.
The corrective action that should be taken is to run the splunk resync shcluster-replicated-config command on this member. This command will delete the existing configuration files on this member and replace them with the latest configuration files from the captain. This will ensure that the member has the same configuration as the rest of the cluster. Restarting the search head, running the splunk apply shcluster-bundle command from the deployer, or running the clean raft command on all members of the search head cluster are not the correct actions to take in this scenario. For more information, see Resolve configuration inconsistencies across cluster members in the Splunk documentation.
Which Splunk tool offers a health check for administrators to evaluate the health of their Splunk deployment?
Options:
btool
DiagGen
SPL Clinic
Monitoring Console
Answer:
D
Explanation:
The Monitoring Console is the Splunk tool that offers a health check for administrators to evaluate the health of their Splunk deployment. The Monitoring Console provides dashboards and alerts that show the status and performance of various Splunk components, such as indexers, search heads, forwarders, license usage, and search activity. The Monitoring Console can also run health checks on the deployment and identify any issues or recommendations. The btool is a command-line tool that shows the effective settings of the configuration files, but it does not offer a health check. DiagGen is not a standard Splunk tool; diagnostic snapshots are generated with the splunk diag command, which also does not offer a health check. The SPL Clinic is a tool that analyzes and optimizes SPL queries, but it does not offer a health check. For more information, see About the Monitoring Console in the Splunk documentation.
What does setting site=site0 on all Search Head Cluster members do in a multi-site indexer cluster?
Options:
Disables search site affinity.
Sets all members to dynamic captaincy.
Enables multisite search artifact replication.
Enables automatic search site affinity discovery.
Answer:
A
Explanation:
Setting site=site0 on all Search Head Cluster members disables search site affinity. Search site affinity is a feature that allows search heads to preferentially search the peer nodes that are in the same site as the search head, to reduce network latency and bandwidth consumption. By setting site=site0, which is a special value that indicates no site, the search heads will search all peer nodes regardless of their site. Setting site=site0 does not set all members to dynamic captaincy, enable multisite search artifact replication, or enable automatic search site affinity discovery. Dynamic captaincy is a feature that allows any member to become the captain, and it is enabled by default. Multisite search artifact replication is a feature that allows search artifacts to be replicated across sites, and it is enabled by setting site_replication_factor to a value greater than 1. Automatic search site affinity discovery is not a separately configurable Splunk feature; a search head's site affinity comes from the site attribute explicitly set in its server.conf.
A single-site indexer cluster has a replication factor of 3, and a search factor of 2. What is true about this cluster?
Options:
The cluster will ensure there are at least two copies of each bucket, and at least three copies of searchable metadata.
The cluster will ensure there are at most three copies of each bucket, and at most two copies of searchable metadata.
The cluster will ensure only two search heads are allowed to access the bucket at the same time.
The cluster will ensure there are at least three copies of each bucket, and at least two copies of searchable metadata.
Answer:
D
Explanation:
A single-site indexer cluster is a group of Splunk Enterprise instances that index and replicate data across the cluster1. A bucket is a directory that contains indexed data, along with metadata and other information2. A replication factor is the number of copies of each bucket that the cluster maintains1. A search factor is the number of searchable copies of each bucket that the cluster maintains1. A searchable copy is a copy that contains both the raw data and the index files3. A search head is a Splunk Enterprise instance that coordinates the search activities across the peer nodes1.
Option D is the correct answer because it reflects the definitions of replication factor and search factor. The cluster will ensure that there are at least three copies of each bucket, spread across three different peer nodes, to satisfy the replication factor of 3. The cluster will also ensure that at least two of those copies are searchable, to satisfy the search factor of 2. The primary copy is the one that the search head uses to run searches, and another searchable copy can be promoted to primary if the original primary copy becomes unavailable3.
Option A is incorrect because it confuses the replication factor and the search factor. The cluster will ensure there are at least three copies of each bucket, not two, to meet the replication factor of 3. The cluster will ensure there are at least two copies of searchable metadata, not three, to meet the search factor of 2.
Option B is incorrect because it uses the wrong terms. The cluster will ensure there are at least, not at most, three copies of each bucket, to meet the replication factor of 3. The cluster will ensure there are at least, not at most, two copies of searchable metadata, to meet the search factor of 2.
Option C is incorrect because it has nothing to do with the replication factor or the search factor. The cluster does not limit the number of search heads that can access the bucket at the same time. The search head can search across multiple clusters, and the cluster can serve multiple search heads1.
1: The basics of indexer cluster architecture - Splunk Documentation 2: About buckets - Splunk Documentation 3: Search factor - Splunk Documentation
(The performance of a specific search is performing poorly. The search must run over All Time and is expected to have very few results. Analysis shows that the search accesses a very large number of buckets in a large index. What step would most significantly improve the performance of this search?)
Options:
Increase the disk I/O hardware performance.
Increase the number of indexing pipelines.
Set indexed_realtime_use_by_default = true in limits.conf.
Change this to a real-time search using an All Time window.
Answer:
A
Explanation:
As per Splunk Enterprise Search Performance documentation, the most significant factor affecting search performance when querying across a large number of buckets is disk I/O throughput. A search that spans “All Time” forces Splunk to inspect all historical buckets (hot, warm, cold, and potentially frozen if thawed), even if only a few events match the query. This dramatically increases the amount of data read from disk, making the search bound by I/O performance rather than CPU or memory.
Increasing the number of indexing pipelines (Option B) only benefits data ingestion, not search performance. Changing to a real-time search (Option D) does not help because real-time searches are optimized for streaming new data, not historical queries. The indexed_realtime_use_by_default setting (Option C) applies only to streaming indexed real-time searches, not historical “All Time” searches.
To improve performance for such searches, Splunk documentation recommends enhancing disk I/O capability — typically through SSD storage, increased disk bandwidth, or optimized storage tiers. Additionally, creating summary indexes or accelerated data models may help for repeated “All Time” queries, but the most direct improvement comes from faster disk performance since Splunk must scan large numbers of buckets for even small result sets.
References (Splunk Enterprise Documentation):
• Search Performance Tuning and Optimization
• Understanding Bucket Search Mechanics and Disk I/O Impact
• limits.conf Parameters for Search Performance
• Storage and Hardware Sizing Guidelines for Indexers and Search Heads
The master node distributes configuration bundles to peer nodes. Which directory peer nodes receive the bundles?
Options:
apps
deployment-apps
slave-apps
master-apps
Answer:
C
Explanation:
The master node distributes configuration bundles to peer nodes in the slave-apps directory under $SPLUNK_HOME/etc. The configuration bundle method is the only supported method for managing common configurations and app deployment across the set of peers. It ensures that all peers use the same versions of these files1. Bundles typically contain a subset of files (configuration files and assets) from $SPLUNK_HOME/etc/system, $SPLUNK_HOME/etc/apps, and $SPLUNK_HOME/etc/users2. The process of distributing knowledge bundles means that peers by default receive nearly the entire contents of the search head’s apps3.
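For reference, changes staged on the manager node under $SPLUNK_HOME/etc/master-apps (manager-apps on newer releases) are distributed to the peers' slave-apps directory with:
splunk apply cluster-bundle --answer-yes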
A customer has installed a 500GB Enterprise license. They also purchased and installed a 300GB, no enforcement license on the same license master. How much data can the customer ingest before the search is locked out?
Options:
300GB. After this limit, the search is locked out.
500GB. After this limit, the search is locked out.
800GB. After this limit, the search is locked out.
Search is not locked out. Violations are still recorded.
Answer:
DExplanation:
Search is not locked out when a customer has installed a 500GB Enterprise license and a 300GB, no enforcement license on the same license master. The no enforcement license allows the customer to exceed the license quota without locking search, but violations are still recorded. The customer can ingest up to 800GB of data per day without violating the license, but if they ingest more than that, they will incur a violation. However, the violation will not lock search, as the no enforcement license overrides the enforcement policy of the Enterprise license. For more information, see [No enforcement licenses] and [License violations] in the Splunk documentation.
Which of the following is a valid use case that a search head cluster addresses?
Options:
Provide redundancy in the event a search peer fails.
Search affinity.
Knowledge Object replication.
Increased Search Factor (SF).
Answer:
CExplanation:
The correct answer is C. Knowledge Object replication. This is a valid use case that a search head cluster addresses, as it ensures that all the search heads in the cluster have the same set of knowledge objects, such as saved searches, dashboards, reports, and alerts1. The search head cluster replicates the knowledge objects across the cluster members, and synchronizes any changes or updates1. This provides a consistent user experience and avoids data inconsistency or duplication1. The other options are not valid use cases that a search head cluster addresses. Option A, providing redundancy in the event a search peer fails, is not a use case for a search head cluster, but for an indexer cluster, which maintains multiple copies of the indexed data and can recover from indexer failures2. Option B, search affinity, is not a use case for a search head cluster, but for a multisite indexer cluster, which allows the search heads to preferentially search the data on the local site, rather than on a remote site3. Option D, increased Search Factor (SF), is not a use case for a search head cluster, but for an indexer cluster, which determines how many searchable copies of each bucket are maintained across the indexers4. Therefore, option C is the correct answer, and options A, B, and D are incorrect.
1: About search head clusters 2: About indexer clusters and index replication 3: Configure search affinity 4: Configure the search factor
A Splunk user successfully extracted an ip address into a field called src_ip. Their colleague cannot see that field in their search results with events known to have src_ip. Which of the following may explain the problem? (Select all that apply.)
Options:
The field was extracted as a private knowledge object.
The events are tagged as communicate, but are missing the network tag.
The Typing Queue, which does regular expression replacements, is blocked.
The colleague did not explicitly use the field in the search and the search was set to Fast Mode.
Answer:
A, DExplanation:
The following may explain why a colleague cannot see the src_ip field in their search results: the field was extracted as a private knowledge object, and the colleague did not explicitly use the field in a search that was set to Fast Mode.

A knowledge object is a Splunk entity that applies some knowledge or intelligence to the data, such as a field extraction, a lookup, or a macro. A knowledge object can have different permissions: private, app, or global. A private knowledge object is visible only to the user who created it until it is shared, so if a field extraction is created as a private knowledge object, only that user sees the extracted field in search results.

A search mode determines how Splunk processes and displays search results: Fast, Smart, or Verbose. Fast Mode is the fastest and most efficient mode, but it limits field discovery: it shows only the default fields, such as _time, host, source, sourcetype, and _raw, plus any fields that are explicitly used in the search. If a field is neither a default field nor referenced in the search, it will not be shown in Fast Mode.

The other two options do not explain the problem. Tags are labels applied to fields or field values to make them easier to search; they do not affect the visibility of fields unless they are used as filters in the search. The Typing Queue is the part of the indexing pipeline that performs regular expression replacements on the data (for example, SEDCMD substitutions); it does not affect search-time field extraction.
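For example, in Fast Mode the colleague could surface the field simply by referencing it explicitly in the search (the index and sourcetype names here are hypothetical):

index=web sourcetype=access_combined src_ip=* | stats count by src_ip

Alternatively, switching to Verbose or Smart mode, or having the owner share the field extraction at the app or global level, would make src_ip visible without modifying the search.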
A search head has successfully joined a single site indexer cluster. Which command is used to configure the same search head to join another indexer cluster?
Options:
splunk add cluster-config
splunk add cluster-master
splunk edit cluster-config
splunk edit cluster-master
Answer:
BExplanation:
The splunk add cluster-master command is used to configure the same search head to join another indexer cluster. A search head can search multiple indexer clusters by adding multiple cluster-master entries in its server.conf file. The splunk add cluster-master command can be used to add a new cluster-master entry to the server.conf file, by specifying the host name and port number of the master node of the other indexer cluster. The splunk add cluster-config command is used to configure the search head to join the first indexer cluster, not the second one. The splunk edit cluster-config command is used to edit the existing cluster configuration of the search head, not to add a new one. The splunk edit cluster-master command does not exist, and it is not a valid command.
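A hedged example of the syntax (the host name, port, and secret are placeholders):

splunk add cluster-master https://second-master.example.com:8089 -secret <security_key>
splunk restart

After the restart, server.conf on the search head should contain one [clustermaster:<label>] stanza for each indexer cluster it searches.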
Where in the Job Inspector can details be found to help determine where performance is affected?
Options:
Search Job Properties > runDuration
Search Job Properties > runtime
Job Details Dashboard > Total Events Matched
Execution Costs > Components
Answer:
DExplanation:
This is where in the Job Inspector details can be found to help determine where performance is affected, as it shows the time and resources spent by each component of the search, such as commands, subsearches, lookups, and post-processing1. The Execution Costs > Components section can help identify the most expensive or inefficient parts of the search, and suggest ways to optimize or improve the search performance1. The other options are not as useful as the Execution Costs > Components section for finding performance issues. Option A, Search Job Properties > runDuration, shows the total time, in seconds, that the search took to run2. This can indicate the overall performance of the search, but it does not provide any details on the specific components or factors that affected the performance. Option B, Search Job Properties > runtime, shows the time, in seconds, that the search took to run on the search head2. This can indicate the performance of the search head, but it does not account for the time spent on the indexers or the network. Option C, Job Details Dashboard > Total Events Matched, shows the number of events that matched the search criteria3. This can indicate the size and scope of the search, but it does not provide any information on the performance or efficiency of the search. Therefore, option D is the correct answer, and options A, B, and C are incorrect.
1: Execution Costs > Components 2: Search Job Properties 3: Job Details Dashboard
(Which deployer push mode should be used when pushing built-in apps?)
Options:
merge_to_default
local_only
full
default only
Answer:
BExplanation:
According to the Splunk Enterprise Search Head Clustering (SHC) Deployer documentation, the “local_only” push mode is the correct option when deploying built-in apps. This mode ensures that the deployer only pushes configurations from the local directory of built-in Splunk apps (such as search, learned, or launcher) without overwriting or merging their default app configurations.
In an SHC environment, the deployer is responsible for distributing configuration bundles to all search head members. Each push can be executed in different modes depending on how the admin wants to handle the app directories:
full: Overwrites both default and local folders of all apps in the bundle.
merge_to_default: Merges configurations into the default folder (used primarily for custom apps).
local_only: Pushes only local configurations, preserving default settings of built-in apps (the safest method for core Splunk apps).
default_only: Pushes only default folder configurations (rarely used and not ideal for built-in app updates).
Using the “local_only” mode ensures that default Splunk system apps are not modified, preventing corruption or overwriting of base configurations that are critical for Splunk operation. It is explicitly recommended for pushing Splunk-provided (built-in) apps like search, launcher, and user-prefs from the deployer to all SHC members.
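As a sketch of how the push mode is typically selected (this assumes the deployer_push_mode setting in each app's app.conf on the deployer; confirm against your Splunk version's documentation):

# $SPLUNK_HOME/etc/shcluster/apps/<app_name>/local/app.conf on the deployer
[shclustering]
deployer_push_mode = local_only

The bundle is then distributed with splunk apply shcluster-bundle -target https://<member>:8089, and the chosen mode governs how that app's default and local directories are handled on the members.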
References (Splunk Enterprise Documentation):
• Managing Configuration Bundles with the Deployer (Search Head Clustering)
• Deployer Push Modes and Their Use Cases
• Splunk Enterprise Admin Manual – SHC Deployment Management
• Best Practices for Maintaining Built-in Splunk Apps in SHC Environments
A monitored log file is changing on the forwarder. However, Splunk searches are not finding any new data that has been added. What are possible causes? (select all that apply)
Options:
An admin ran splunk clean eventdata -index
An admin has removed the Splunk fishbucket on the forwarder.
The last 256 bytes of the monitored file are not changing.
The first 256 bytes of the monitored file are not changing.
Answer:
B, CExplanation:
A monitored log file is changing on the forwarder, but Splunk searches are not finding any new data that has been added. This could be caused by two of the options. Option B is correct because the Splunk fishbucket is a directory that stores information about the files that Splunk has monitored, such as the file name, size, modification time, and CRC checksum. If an admin removes the fishbucket, Splunk loses track of the files that have been previously indexed and will not index any new data from those files. Option C is correct because Splunk uses the CRC checksum of the last 256 bytes of a monitored file to determine whether the file has changed since the last time it was read. If the last 256 bytes of the file are not changing, Splunk assumes the file is unchanged and will not index any new data from it. Option A is incorrect because running splunk clean eventdata against an index removes data that has already been indexed; it does not stop the forwarder from monitoring the file or prevent new data from being forwarded and indexed.
1: https://docs.splunk.com/Documentation/Splunk/9.1.2/Data/Monitorfilesanddirectories 2: https://docs.splunk.com/Documentation/Splunk/9.1.2/Troubleshooting/Didyouloseyourfishbucket
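If the CRC check is the suspected cause, inputs.conf on the forwarder provides settings that change how the checksum is computed; a minimal sketch (the monitored path is hypothetical):

[monitor:///var/log/app/app.log]
initCrcLength = 1024
crcSalt = <SOURCE>

initCrcLength widens the portion of the file used for the CRC beyond the 256-byte default, and crcSalt = <SOURCE> mixes the full path into the CRC so files that begin identically are still tracked as distinct sources.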
As a best practice, where should the internal licensing logs be stored?
Options:
Indexing layer.
License server.
Deployment layer.
Search head layer.
Answer:
BExplanation:
As a best practice, the internal licensing logs should be stored on the license server. The license server is a Splunk instance that manages the distribution and enforcement of licenses in a Splunk deployment. The license server generates internal licensing logs that contain information about the license usage, violations, warnings, and pools. The internal licensing logs should be stored on the license server itself, because they are relevant to the license server’s role and function. Storing the internal licensing logs on the license server also simplifies the license monitoring and troubleshooting process. The internal licensing logs should not be stored on the indexing layer, the deployment layer, or the search head layer, because they are not related to the roles and functions of these layers. Storing the internal licensing logs on these layers would also increase the network traffic and disk space consumption
An index has large text log entries with many unique terms in the raw data. Other than the raw data, which index components will take the most space?
Options:
Index files (*.tsidx files).
Bloom filters (bloomfilter files).
Index source metadata (sources.data files).
Index sourcetype metadata (SourceTypes.data files).
Answer:
AExplanation:
Index files (*.tsidx files) store the index's lexicon of unique terms and the postings that map each term to the events containing it; the raw data itself is stored separately in the compressed rawdata journal. Because every unique term adds entries to the lexicon, an index with large text events and many unique terms produces very large .tsidx files, so other than the raw data they take the most space. Bloom filters, source metadata, and sourcetype metadata are much smaller in comparison and do not grow significantly with the number of unique terms.
What is the algorithm used to determine captaincy in a Splunk search head cluster?
Options:
Raft distributed consensus.
Rapt distributed consensus.
Rift distributed consensus.
Round-robin distribution consensus.
Answer:
AExplanation:
The algorithm used to determine captaincy in a Splunk search head cluster is Raft distributed consensus. Raft is a consensus algorithm that is used to elect a leader among a group of nodes in a distributed system. In a Splunk search head cluster, Raft is used to elect a captain among the cluster members. The captain is the cluster member that is responsible for coordinating the search activities, replicating the configurations and apps, and pushing the knowledge bundles to the search peers. The captain is dynamically elected based on various criteria, such as CPU load, network latency, and search load. The captain can change over time, depending on the availability and performance of the cluster members. Rapt, Rift, and Round-robin are not valid algorithms for determining captaincy in a Splunk search head cluster
When designing the number and size of indexes, which of the following considerations should be applied?
Options:
Expected daily ingest volume, access controls, number of concurrent users
Number of installed apps, expected daily ingest volume, data retention time policies
Data retention time policies, number of installed apps, access controls
Expected daily ingest volumes, data retention time policies, access controls
Answer:
DExplanation:
When designing the number and size of indexes, the following considerations should be applied:
Expected daily ingest volumes: This is the amount of data that will be ingested and indexed by the Splunk platform per day. This affects the storage capacity, the indexing performance, and the license usage of the Splunk deployment. The number and size of indexes should be planned according to the expected daily ingest volumes, as well as the peak ingest volumes, to ensure that the Splunk deployment can handle the data load and meet the business requirements12.
Data retention time policies: This is the duration for which the data will be stored and searchable by the Splunk platform. This affects the storage capacity, the data availability, and the data compliance of the Splunk deployment. The number and size of indexes should be planned according to the data retention time policies, as well as the data lifecycle, to ensure that the Splunk deployment can retain the data for the desired period and meet the legal or regulatory obligations13.
Access controls: This is the mechanism for granting or restricting access to the data by the Splunk users or roles. This affects the data security, the data privacy, and the data governance of the Splunk deployment. The number and size of indexes should be planned according to the access controls, as well as the data sensitivity, to ensure that the Splunk deployment can protect the data from unauthorized or inappropriate access and meet the ethical or organizational standards14.
Option D is the correct answer because it reflects the most relevant and important considerations for designing the number and size of indexes. Option A is incorrect because the number of concurrent users is not a direct factor for designing the number and size of indexes, but rather a factor for designing the search head capacity and the search head clustering configuration5. Option B is incorrect because the number of installed apps is not a direct factor for designing the number and size of indexes, but rather a factor for designing the app compatibility and the app performance. Option C is incorrect because it omits the expected daily ingest volumes, which is a crucial factor for designing the number and size of indexes.
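A hedged indexes.conf and authorize.conf sketch tying these considerations together (the index name, sizes, and role are hypothetical):

# indexes.conf
[firewall]
homePath = $SPLUNK_DB/firewall/db
coldPath = $SPLUNK_DB/firewall/colddb
thawedPath = $SPLUNK_DB/firewall/thaweddb
maxTotalDataSizeMB = 500000
frozenTimePeriodInSecs = 7776000

# authorize.conf
[role_network_team]
srchIndexesAllowed = firewall
srchIndexesDefault = firewall

Here maxTotalDataSizeMB and frozenTimePeriodInSecs (90 days) implement the sizing and retention decisions in indexes.conf, while the role stanza in authorize.conf restricts who can search the index.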
Several critical searches that were functioning correctly yesterday are not finding a lookup table today. Which log file would be the best place to start troubleshooting?
Options:
btool.log
web_access.log
health.log
configuration_change.log
Answer:
BExplanation:
A lookup table is a file that contains a list of values that can be used to enrich or modify the data during search time1. Lookup tables can be stored in CSV files or in the KV Store1. Troubleshooting lookup tables involves identifying and resolving issues that prevent the lookup tables from being accessed, updated, or applied correctly by the Splunk searches. Some of the tools and methods that can help with troubleshooting lookup tables are:
web_access.log: This is a file that contains information about the HTTP requests and responses that occur between the Splunk web server and the clients2. This file can help troubleshoot issues related to lookup table permissions, availability, and errors, such as 404 Not Found, 403 Forbidden, or 500 Internal Server Error34.
btool output: This is a command-line tool that displays the effective configuration settings for a given Splunk component, such as inputs, outputs, indexes, props, and so on5. This tool can help troubleshoot issues related to lookup table definitions, locations, and precedence, as well as identify the source of a configuration setting6.
search.log: This is a file that contains detailed information about the execution of a search, such as the search pipeline, the search commands, the search results, the search errors, and the search performance. This file can help troubleshoot issues related to lookup table commands, arguments, fields, and outputs, such as lookup, inputlookup, outputlookup, lookup_editor, and so on.
Option B is the correct answer because web_access.log is the best place to start troubleshooting lookup table issues, as it can provide the most relevant and immediate information about the lookup table access and status. Option A is incorrect because btool output is not a log file, but a command-line tool. Option C is incorrect because health.log is a file that contains information about the health of the Splunk components, such as the indexer cluster, the search head cluster, the license master, and the deployment server. This file can help troubleshoot issues related to Splunk deployment health, but not necessarily related to lookup tables. Option D is incorrect because configuration_change.log is a file that contains information about the changes made to the Splunk configuration files, such as the user, the time, the file, and the action. This file can help troubleshoot issues related to Splunk configuration changes, but not necessarily related to lookup tables.
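For example, btool output can quickly confirm whether the lookup definition is still visible and where it is coming from (the lookup name is hypothetical):

splunk btool transforms list my_lookup --debug
splunk btool props list --debug | grep -i LOOKUP

In parallel, the search.log of one of the failing jobs (reachable from the Job Inspector) can be scanned for lookup-related errors, such as messages stating that the lookup table does not exist or is not available.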
Users who receive a link to a search are receiving an "Unknown sid" error message when they open the link.
Why is this happening?
Options:
The users have insufficient permissions.
An add-on needs to be updated.
The search job has expired.
One or more indexers are down.
Answer:
CExplanation:
According to the Splunk documentation1, the “Unknown sid” error message means that the search job associated with the link has expired or been deleted. The sid (search ID) is a unique identifier for each search job, and it is used to retrieve the results of the search. If the sid is not found, the search cannot be displayed. The other options are false because:
The users having insufficient permissions would result in a different error message, such as “You do not have permission to view this page” or "You do not have permission to run this search"1.
An add-on needing to be updated would not affect the validity of the sid, unless the add-on changes the search syntax or the data source in a way that makes the search invalid or inaccessible1.
One or more indexers being down would not cause the “Unknown sid” error, as the sid is stored on the search head, not the indexers. However, it could cause other errors, such as “Unable to distribute to peer” or "Search peer has the following message: not enough disk space"1.
Which CLI command converts a Splunk instance to a license slave?
Options:
splunk add licenses
splunk list licenser-slaves
splunk edit licenser-localslave
splunk list licenser-localslave
Answer:
CExplanation:
The splunk edit licenser-localslave command is used to convert a Splunk instance to a license slave. This command will configure the Splunk instance to contact a license master and receive a license from it. This command should be used when the Splunk instance is part of a distributed deployment and needs to share a license pool with other instances. The splunk add licenses command is used to add a license to a Splunk instance, not to convert it to a license slave. The splunk list licenser-slaves command is used to list the license slaves that are connected to a license master, not to convert a Splunk instance to a license slave. The splunk list licenser-localslave command is used to list the license master that a license slave is connected to, not to convert a Splunk instance to a license slave. For more information, see Configure license slaves in the Splunk documentation.
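A hedged example of the command (the license master URI is a placeholder):

splunk edit licenser-localslave -master_uri https://license-master.example.com:8089
splunk restart

After the restart, the instance reports its usage to, and draws its license from, the specified license master.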
In the deployment planning process, when should a person identify who gets to see network data?
Options:
Deployment schedule
Topology diagramming
Data source inventory
Data policy definition
Answer:
DExplanation:
In the deployment planning process, a person should identify who gets to see network data in the data policy definition step. This step involves defining the data access policies and permissions for different users and roles in Splunk. The deployment schedule step involves defining the timeline and milestones for the deployment project. The topology diagramming step involves creating a visual representation of the Splunk architecture and components. The data source inventory step involves identifying and documenting the data sources and types that will be ingested by Splunk
The frequency in which a deployment client contacts the deployment server is controlled by what?
Options:
polling_interval attribute in outputs.conf
phoneHomeIntervalInSecs attribute in outputs.conf
polling_interval attribute in deploymentclient.conf
phoneHomeIntervalInSecs attribute in deploymentclient.conf
Answer:
DExplanation:
The frequency with which a deployment client contacts the deployment server is controlled by the phoneHomeIntervalInSecs attribute in deploymentclient.conf. This attribute specifies how often the deployment client checks in with the deployment server to get updates on the apps and configurations that it should receive. The other options are not valid: outputs.conf configures how forwarders send data to receivers and does not contain a polling_interval or phoneHomeIntervalInSecs attribute, and polling_interval is not a valid deploymentclient.conf attribute. For more information, see Configure deployment clients and Configure forwarders with outputs.conf in the Splunk documentation.
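A minimal deploymentclient.conf sketch (the deployment server host and the interval are placeholders, and the stanza placement follows the common documented example; confirm against deploymentclient.conf.spec for your version):

[deployment-client]

[target-broker:deploymentServer]
targetUri = deployserver.example.com:8089
phoneHomeIntervalInSecs = 300

With this setting, the client phones home every 5 minutes instead of the default interval.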
Which props.conf setting has the least impact on indexing performance?
Options:
SHOULD_LINEMERGE
TRUNCATE
CHARSET
TIME_PREFIX
Answer:
CExplanation:
According to the Splunk documentation1, the CHARSET setting in props.conf specifies the character set encoding of the source data. This setting has the least impact on indexing performance, as it only affects how Splunk interprets the bytes of the data, not how it processes or transforms the data. The other options are false because:
The SHOULD_LINEMERGE setting in props.conf determines whether Splunk merges multiple lines into a single event during event breaking. This setting has a significant impact on indexing performance, as it affects how Splunk parses the data and identifies the boundaries of the events2.
The TRUNCATE setting in props.conf specifies the maximum number of characters that Splunk indexes from a single line of a file. This setting has a moderate impact on indexing performance, as it affects how much data Splunk reads and writes to the index3.
The TIME_PREFIX setting in props.conf specifies the prefix that directly precedes the timestamp in the event data. This setting has a moderate impact on indexing performance, as it affects how Splunk extracts the timestamp and assigns it to the event
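An illustrative props.conf stanza combining these settings (the sourcetype name and values are hypothetical):

[my_app_logs]
CHARSET = UTF-8
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TRUNCATE = 10000
TIME_PREFIX = ^\[
TIME_FORMAT = %Y-%m-%d %H:%M:%S

Disabling line merging with an explicit LINE_BREAKER and anchoring the timestamp with TIME_PREFIX/TIME_FORMAT are common parsing optimizations, whereas CHARSET only tells Splunk how to decode the bytes.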
(Which of the following is a benefit of using SmartStore?)
Options:
Automatic selection of replication and search factors.
Separating storage from compute.
Knowledge Object replication.
Cluster Manager is no longer required.
Answer:
BExplanation:
According to the Splunk SmartStore Architecture Guide, the primary benefit of SmartStore is the separation of storage from compute resources within an indexer cluster. SmartStore enables Splunk to decouple indexer storage (data at rest) from the compute layer (indexers that perform searches and indexing).
With SmartStore, active (hot/warm) data remains on local disk for fast access, while older, less frequently searched (remote) data is stored in an external object storage system such as Amazon S3, Google Cloud Storage, or on-premises S3-compatible storage. This separation reduces the storage footprint on indexers, allowing organizations to scale compute and storage independently.
This architecture improves cost efficiency and scalability by:
Lowering on-premises storage costs using object storage for retention.
Enabling dynamic scaling of indexers without impacting total data availability.
Reducing replication overhead since SmartStore manages data objects efficiently.
SmartStore does not affect replication or search factors (Option A), does not handle Knowledge Object replication (Option C), and the Cluster Manager is still required (Option D) to coordinate cluster activities.
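A minimal indexes.conf sketch of a SmartStore-enabled index (the bucket, endpoint, and index names are placeholders):

[volume:remote_store]
storageType = remote
path = s3://my-smartstore-bucket
remote.s3.endpoint = https://s3.us-east-1.amazonaws.com

[web]
homePath = $SPLUNK_DB/web/db
coldPath = $SPLUNK_DB/web/colddb
thawedPath = $SPLUNK_DB/web/thaweddb
remotePath = volume:remote_store/$_index_name

With remotePath defined, warm buckets are uploaded to the remote object store and only a local cache plus hot buckets remain on the indexers, which is what lets compute and storage scale independently.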
References (Splunk Enterprise Documentation):
• SmartStore Overview and Architecture Guide
• SmartStore Deployment and Configuration Manual
• Managing Storage and Compute Independence in Indexer Clusters
• Splunk Enterprise Capacity Planning – SmartStore Sizing Guidelines
In which phase of the Splunk Enterprise data pipeline are indexed extraction configurations processed?
Options:
Input
Search
Parsing
Indexing
Answer:
DExplanation:
Indexed extraction configurations are processed in the indexing phase of the Splunk Enterprise data pipeline. The data pipeline is the process that Splunk uses to ingest, parse, index, and search data. Indexed extraction configurations are settings that determine how Splunk extracts fields from data at index time, rather than at search time. Indexed extraction can improve search performance, but it also increases the size of the index. Indexed extraction configurations are applied in the indexing phase, which is the phase where Splunk writes the data and the .tsidx files to the index. The input phase is the phase where Splunk receives data from various sources and formats. The parsing phase is the phase where Splunk breaks the data into events, timestamps, and hosts. The search phase is the phase where Splunk executes search commands and returns results.
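For reference, indexed extractions are enabled per sourcetype in props.conf; a minimal sketch (the sourcetype and field names are hypothetical):

[sales_csv]
INDEXED_EXTRACTIONS = csv
TIMESTAMP_FIELDS = transaction_time

Fields extracted this way are written into the index along with the events, which is why they increase index size but remove extraction work at search time.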
What is the minimum reference server specification for a Splunk indexer?
Options:
12 CPU cores, 12GB RAM, 800 IOPS
16 CPU cores, 16GB RAM, 800 IOPS
24 CPU cores, 16GB RAM, 1200 IOPS
28 CPU cores, 32GB RAM, 1200 IOPS
Answer:
AExplanation:
The minimum reference server specification for a Splunk indexer is 12 CPU cores, 12GB RAM, and 800 IOPS. This specification is based on the assumption that the indexer will handle an average indexing volume of 100GB per day, with a peak of 300GB per day, and a typical search load of 1 concurrent search per 1GB of indexing volume. The other specifications are either higher or lower than the minimum requirement. For more information, see [Reference hardware] in the Splunk documentation.
(If a license peer cannot communicate to a license manager for 72 hours or more, what will happen?)
Options:
The license peer is placed in violation, and a warning is generated.
A license warning is generated, and there is no impact to the license peer.
What happens depends on license type.
The license peer is placed in violation, and search is blocked.
Answer:
DExplanation:
Per the Splunk Enterprise Licensing Documentation, a license peer (such as an indexer or search head) must regularly communicate with its license manager to report data usage and verify license validity. Splunk allows a 72-hour grace period during which the peer continues operating normally even if communication with the license manager fails.
If this communication is not re-established within 72 hours, the peer enters a “license violation” state. In this state, the system blocks all search activities, including ad-hoc and scheduled searches, but continues to ingest and index data. Administrative and licensing-related searches may still run for diagnostic purposes, but user searches are restricted.
The intent of this design is to prevent prolonged unlicensed data ingestion while ensuring the environment remains compliant. The 72-hour rule is hard-coded in Splunk Enterprise and applies uniformly across license types (Enterprise or Distributed). This ensures consistent licensing enforcement across distributed deployments.
Warnings are generated during the grace period, but after 72 hours, searches are automatically blocked until the peer successfully reconnects to its license manager.
References (Splunk Enterprise Documentation):
• Managing Licenses in a Distributed Environment
• License Manager and Peer Communication Workflow
• Splunk License Enforcement and Violation Behavior
• Splunk Enterprise Admin Manual – License Usage and Reporting Policies