What is the second phase of Public Key Infrastructure (PKI) key/certificate life-cycle management?
Implementation Phase
Initialization Phase
Cancellation Phase
Issued Phase
The second phase of Public Key Infrastructure (PKI) key/certificate life-cycle management is the initialization phase. PKI is a system that uses public key cryptography and digital certificates to provide authentication, confidentiality, integrity, and non-repudiation for electronic transactions. PKI key/certificate life-cycle management is the process of managing the creation, distribution, usage, storage, revocation, and expiration of keys and certificates in a PKI system. The key/certificate life-cycle management consists of six phases: pre-certification, initialization, certification, operational, suspension, and termination. The initialization phase is the second phase, where the key pair and the certificate request are generated by the end entity or the registration authority (RA). The initialization phase involves registering the end entity, generating the key pair, and preparing the certificate request for submission to the certification authority (CA).
The other options are not the second phase of PKI key/certificate life-cycle management, but rather other phases. The implementation phase is not a phase of PKI key/certificate life-cycle management, but rather a phase of PKI system deployment, where the PKI components and policies are installed and configured. The cancellation phase is not a phase of PKI key/certificate life-cycle management, but rather a possible outcome of the termination phase, where the key pair and the certificate are permanently revoked and deleted. The issued phase is not a phase of PKI key/certificate life-cycle management, but rather a possible outcome of the certification phase, where the CA verifies and approves the certificate request and issues the certificate to the end entity or the RA.
Which component of the Security Content Automation Protocol (SCAP) specification contains the data required to estimate the severity of vulnerabilities identified by automated vulnerability assessments?
Common Vulnerabilities and Exposures (CVE)
Common Vulnerability Scoring System (CVSS)
Asset Reporting Format (ARF)
Open Vulnerability and Assessment Language (OVAL)
The component of the Security Content Automation Protocol (SCAP) specification that contains the data required to estimate the severity of vulnerabilities identified by automated vulnerability assessments is the Common Vulnerability Scoring System (CVSS). CVSS is a framework that provides a standardized and objective way to measure and communicate the characteristics and impacts of vulnerabilities. CVSS consists of three metric groups: base, temporal, and environmental. The base metric group captures the intrinsic and fundamental properties of a vulnerability that are constant over time and across user environments. The temporal metric group captures the characteristics of a vulnerability that change over time, such as the availability and effectiveness of exploits, patches, and workarounds. The environmental metric group captures the characteristics of a vulnerability that are relevant and unique to a user’s environment, such as the configuration and importance of the affected system. Each metric group has a set of metrics that are assigned values based on the vulnerability’s attributes. The values are then combined using a formula to produce a numerical score that ranges from 0 to 10, where 0 means no impact and 10 means critical impact. The score can also be translated into a qualitative severity rating of none, low, medium, high, or critical. CVSS provides a consistent and comprehensive way to estimate the severity of vulnerabilities and prioritize their remediation.
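For illustration, the short sketch below maps a CVSS v3.x base score to the qualitative severity rating described above; the band boundaries follow the published CVSS v3 qualitative rating scale, and the function name is just an example.

```python
def cvss_rating(score: float) -> str:
    """Map a CVSS v3.x base score (0.0-10.0) to its qualitative severity rating."""
    if not 0.0 <= score <= 10.0:
        raise ValueError("CVSS base scores range from 0.0 to 10.0")
    if score == 0.0:
        return "None"
    if score <= 3.9:
        return "Low"
    if score <= 6.9:
        return "Medium"
    if score <= 8.9:
        return "High"
    return "Critical"

print(cvss_rating(9.8))  # Critical
```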
The other options are not components of the SCAP specification that contain the data required to estimate the severity of vulnerabilities identified by automated vulnerability assessments, but rather components that serve other purposes. Common Vulnerabilities and Exposures (CVE) is a component that provides a standardized and unique identifier and description for each publicly known vulnerability. CVE facilitates the sharing and comparison of vulnerability information across different sources and tools. Asset Reporting Format (ARF) is a component that provides a standardized and extensible format for expressing the information about the assets and their characteristics, such as configuration, vulnerabilities, and compliance. ARF enables the aggregation and correlation of asset information from different sources and tools. Open Vulnerability and Assessment Language (OVAL) is a component that provides a standardized and expressive language for defining and testing the state of a system for the presence of vulnerabilities, configuration issues, patches, and other aspects. OVAL enables the automation and interoperability of vulnerability assessment and management.
Which of the following mobile code security models relies only on trust?
Code signing
Class authentication
Sandboxing
Type safety
Code signing is the mobile code security model that relies only on trust. Mobile code is a type of software that can be transferred from one system to another and executed without installation or compilation. Mobile code can be used for various purposes, such as web applications, applets, scripts, and macros. Mobile code can also pose various security risks, such as malicious code, unauthorized access, and data leakage. Mobile code security models are the techniques that are used to protect the systems and users from the threats of mobile code. Code signing is a mobile code security model that relies only on trust, which means that the security of the mobile code depends on the reputation and credibility of the code provider. Code signing works as follows: the code provider hashes the code, signs the hash with its private key, and distributes the code together with the signature and its certificate; the code consumer verifies the signature with the provider’s public key and then decides whether to trust and execute the code.
Code signing relies only on trust because it does not enforce any security restrictions or controls on the mobile code, but rather leaves the decision to the code consumer. Code signing also does not guarantee the quality or functionality of the mobile code, but rather the authenticity and integrity of the code provider. Code signing can be effective if the code consumer knows and trusts the code provider, and if the code provider follows the security standards and best practices. However, code signing can also be ineffective if the code consumer is unaware or careless of the code provider, or if the code provider is compromised or malicious.
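As a rough illustration of the mechanism, the sketch below uses the Python cryptography package with a freshly generated RSA key pair standing in for a real code-signing certificate; it shows that verification proves origin and integrity, while the trust decision remains with the consumer.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# Hypothetical key pair; in practice the public key comes from the provider's
# code-signing certificate, issued by a CA the consumer already trusts.
provider_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
mobile_code = b"print('hello from some mobile code')"

# Provider: sign a hash of the code with the private key.
signature = provider_key.sign(mobile_code, padding.PKCS1v15(), hashes.SHA256())

# Consumer: verify with the provider's public key. A valid signature proves
# origin and integrity only; whether to run the code is still a trust decision.
try:
    provider_key.public_key().verify(signature, mobile_code,
                                     padding.PKCS1v15(), hashes.SHA256())
    print("Signature valid - run only if the provider is trusted")
except InvalidSignature:
    print("Signature invalid - code altered or signed by someone else")
```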
The other options are not mobile code security models that rely only on trust, but rather on other techniques that limit or isolate the mobile code. Class authentication is a mobile code security model that verifies the permissions and capabilities of the mobile code based on its class or type, and allows or denies the execution of the mobile code accordingly. Sandboxing is a mobile code security model that executes the mobile code in a separate and restricted environment, and prevents the mobile code from accessing or affecting the system resources or data. Type safety is a mobile code security model that checks the validity and consistency of the mobile code, and prevents the mobile code from performing illegal or unsafe operations.
The use of private and public encryption keys is fundamental in the implementation of which of the following?
Diffie-Hellman algorithm
Secure Sockets Layer (SSL)
Advanced Encryption Standard (AES)
Message Digest 5 (MD5)
The use of private and public encryption keys is fundamental in the implementation of Secure Sockets Layer (SSL). SSL is a protocol that provides secure communication over the Internet by using public key cryptography and digital certificates. SSL works as follows: the server (and optionally the client) presents a digital certificate containing its public key; the parties authenticate each other and use public key cryptography to establish a shared session key; and the session key is then used to encrypt and protect the data exchanged over the connection.
The use of private and public encryption keys is fundamental in the implementation of SSL because it enables the authentication of the parties, the establishment of the shared secret key, and the protection of the data from eavesdropping, tampering, and replay attacks.
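A minimal sketch of this exchange, using Python's standard ssl module (which today implements TLS, the successor to SSL, but follows the same model); the host name is a placeholder.

```python
import socket
import ssl

host = "example.com"  # placeholder HTTPS endpoint

context = ssl.create_default_context()  # loads the trusted CA certificates
with socket.create_connection((host, 443)) as sock:
    # The handshake below authenticates the server via its certificate
    # (public key cryptography) and negotiates a symmetric session key
    # that encrypts everything sent over tls_sock afterwards.
    with context.wrap_socket(sock, server_hostname=host) as tls_sock:
        print(tls_sock.version())                  # e.g. 'TLSv1.3'
        print(tls_sock.getpeercert()["subject"])   # authenticated server identity
```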
The other options are not protocols or algorithms that use private and public encryption keys in their implementation. Diffie-Hellman algorithm is a method for generating a shared secret key between two parties, but it does not use private and public encryption keys, but rather public and private parameters. Advanced Encryption Standard (AES) is a symmetric encryption algorithm that uses the same key for encryption and decryption, but it does not use private and public encryption keys, but rather a single secret key. Message Digest 5 (MD5) is a hash function that produces a fixed-length output from a variable-length input, but it does not use private and public encryption keys, but rather a one-way mathematical function.
Which technique can be used to make an encryption scheme more resistant to a known plaintext attack?
Hashing the data before encryption
Hashing the data after encryption
Compressing the data after encryption
Compressing the data before encryption
Compressing the data before encryption is a technique that can be used to make an encryption scheme more resistant to a known plaintext attack. A known plaintext attack is a type of cryptanalysis where the attacker has access to some pairs of plaintext and ciphertext encrypted with the same key, and tries to recover the key or decrypt other ciphertexts. A known plaintext attack can exploit the statistical properties or patterns of the plaintext or the ciphertext to reduce the search space or guess the key. Compressing the data before encryption can reduce the redundancy and increase the entropy of the plaintext, making it harder for the attacker to find any correlations or similarities between the plaintext and the ciphertext. Compressing the data before encryption can also reduce the size of the plaintext, making it more difficult for the attacker to obtain enough plaintext-ciphertext pairs for a successful attack.
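A minimal compress-then-encrypt sketch, assuming the Python zlib module and the Fernet recipe from the cryptography package; any symmetric cipher would illustrate the same point.

```python
import zlib
from cryptography.fernet import Fernet

plaintext = b"CONFIDENTIAL REPORT " * 500   # highly redundant, pattern-rich data

key = Fernet.generate_key()
cipher = Fernet(key)

# Compress first: redundancy is removed and the entropy per byte rises,
# leaving fewer plaintext patterns for a known plaintext attack to exploit.
ciphertext = cipher.encrypt(zlib.compress(plaintext))

# Reverse the order on the way back: decrypt, then decompress.
recovered = zlib.decompress(cipher.decrypt(ciphertext))
assert recovered == plaintext
```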
The other options are not techniques that can be used to make an encryption scheme more resistant to a known plaintext attack, but rather techniques that can introduce other security issues or inefficiencies. Hashing the data before encryption is not a useful technique, as hashing is a one-way function that cannot be reversed, and the encrypted hash cannot be decrypted to recover the original data. Hashing the data after encryption is also not a useful technique, as hashing does not add any security to the encryption, and the hash can be easily computed by anyone who has access to the ciphertext. Compressing the data after encryption is not a recommended technique, as compression algorithms usually work better on uncompressed data, and compressing the ciphertext can introduce errors or vulnerabilities that can compromise the encryption.
Which security service is served by the process of encrypting plaintext with the sender’s private key and decrypting ciphertext with the sender’s public key?
Confidentiality
Integrity
Identification
Availability
The security service that is served by the process of encrypting plaintext with the sender’s private key and decrypting ciphertext with the sender’s public key is identification. Identification is the process of verifying the identity of a person or entity that claims to be who or what it is. Identification can be achieved by using public key cryptography and digital signatures, which are based on the process of encrypting plaintext with the sender’s private key and decrypting ciphertext with the sender’s public key. This process works as follows: the sender encrypts the message (or, in practice, a hash of it) with the sender’s private key; the receiver decrypts it with the sender’s public key; if the decryption succeeds and yields the expected content, the message must have come from the holder of the private key.
The process of encrypting plaintext with the sender’s private key and decrypting ciphertext with the sender’s public key serves identification because it ensures that only the sender can produce a valid ciphertext that can be decrypted by the receiver, and that the receiver can verify the sender’s identity by using the sender’s public key. This process also provides non-repudiation of origin, which means that the sender cannot later deny having sent the message, because only the sender’s private key could have produced the ciphertext.
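In modern cryptosystems this "encrypt with the private key, decrypt with the public key" operation is realized as a digital signature. The sketch below, using an Ed25519 key from the Python cryptography package, is illustrative only.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

sender_key = Ed25519PrivateKey.generate()        # known only to the sender
message = b"Transfer 100 units to account 42"

# Only the holder of the private key can produce this value.
signature = sender_key.sign(message)

# Anyone holding the sender's public key can check it, which identifies the
# sender and supports non-repudiation of origin.
try:
    sender_key.public_key().verify(signature, message)
    print("Verified: the message came from the holder of the private key")
except InvalidSignature:
    print("Verification failed: different sender or altered message")
```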
The other options are not the security services that are served by the process of encrypting plaintext with the sender’s private key and decrypting ciphertext with the sender’s public key. Confidentiality is the process of ensuring that the message is only readable by the intended parties, and it is achieved by encrypting plaintext with the receiver’s public key and decrypting ciphertext with the receiver’s private key. Integrity is the process of ensuring that the message is not modified or corrupted during transmission, and it is achieved by using hash functions and message authentication codes. Availability is the process of ensuring that the message is accessible and usable by the authorized parties, and it is achieved by using redundancy, backup, and recovery mechanisms.
Who in the organization is accountable for classification of data information assets?
Data owner
Data architect
Chief Information Security Officer (CISO)
Chief Information Officer (CIO)
The person in the organization who is accountable for the classification of data information assets is the data owner. The data owner is the person or entity that has the authority and responsibility for the creation, collection, processing, and disposal of a set of data. The data owner is also responsible for defining the purpose, value, and classification of the data, as well as the security requirements and controls for the data. The data owner should be able to determine the impact of the data on the mission of the organization, which means assessing the potential consequences of losing, compromising, or disclosing the data. The impact of the data on the mission of the organization is one of the main criteria for data classification, which helps to establish the appropriate level of protection and handling for the data. The data owner should also ensure that the data is properly labeled, stored, accessed, shared, and destroyed according to the data classification policy and procedures.
The other options are not the persons in the organization who are accountable for the classification of data information assets, but rather persons who have other roles or functions related to data management. The data architect is the person or entity that designs and models the structure, format, and relationships of the data, as well as the data standards, specifications, and lifecycle. The data architect supports the data owner by providing technical guidance and expertise on the data architecture and quality. The Chief Information Security Officer (CISO) is the person or entity that oversees the security strategy, policies, and programs of the organization, as well as the security performance and incidents. The CISO supports the data owner by providing security leadership and governance, as well as ensuring the compliance and alignment of the data security with the organizational objectives and regulations. The Chief Information Officer (CIO) is the person or entity that manages the information technology (IT) resources and services of the organization, as well as the IT strategy and innovation. The CIO supports the data owner by providing IT management and direction, as well as ensuring the availability, reliability, and scalability of the IT infrastructure and applications.
Recovery strategies of a Disaster Recovery Plan (DRP) MUST be aligned with which of the following?
Hardware and software compatibility issues
Applications’ criticality and downtime tolerance
Budget constraints and requirements
Cost/benefit analysis and business objectives
Recovery strategies of a Disaster Recovery Plan (DRP) must be aligned with the cost/benefit analysis and business objectives. A DRP is the part of a BCP/DRP that focuses on restoring the normal operation of the organization’s IT systems and infrastructure after a disruption or disaster. A DRP should include components such as the recovery strategies, the recovery objectives, the recovery team roles and responsibilities, and the recovery procedures and resources.
Recovery strategies of a DRP must be aligned with the cost/benefit analysis and business objectives, because this alignment ensures that the DRP is feasible and suitable, and that it can achieve the desired outcomes and objectives in a cost-effective and efficient manner. A cost/benefit analysis is a technique that compares the costs and benefits of different recovery strategies, and determines the optimal one that provides the best value for money. A business objective is a goal or a target that the organization wants to achieve through its IT systems and infrastructure, such as increasing productivity, profitability, or customer satisfaction. A recovery strategy that is aligned with the cost/benefit analysis and business objectives can help to ensure that the organization spends neither more nor less on recovery than the business requires, and that the recovery effort supports the objectives the business values most.
The other options are not the factors that the recovery strategies of a DRP must be aligned with, but rather factors that should be considered or addressed when developing or implementing the recovery strategies of a DRP. Hardware and software compatibility issues are factors that should be considered when developing the recovery strategies of a DRP, because they can affect the functionality and interoperability of the IT systems and infrastructure, and may require additional resources or adjustments to resolve them. Applications’ criticality and downtime tolerance are factors that should be addressed when implementing the recovery strategies of a DRP, because they can determine the priority and urgency of the recovery for different applications, and may require different levels of recovery objectives and resources. Budget constraints and requirements are factors that should be considered when developing the recovery strategies of a DRP, because they can limit the availability and affordability of the IT resources and funds for the recovery, and may require trade-offs or compromises to balance them.
Which of the following types of business continuity tests includes assessment of resilience to internal and external risks without endangering live operations?
Walkthrough
Simulation
Parallel
White box
Simulation is the type of business continuity test that includes assessment of resilience to internal and external risks without endangering live operations. Business continuity is the ability of an organization to maintain or resume its critical functions and operations in the event of a disruption or disaster. Business continuity testing is the process of evaluating and validating the effectiveness and readiness of the business continuity plan (BCP) and the disaster recovery plan (DRP) through various methods and scenarios. Business continuity testing can provide several benefits, such as identifying gaps or weaknesses in the plans, verifying that recovery objectives can be met, and familiarizing staff with their roles and responsibilities.
There are different types of business continuity tests, depending on the scope, purpose, and complexity of the test. Some of the common types are the walkthrough (a guided review and discussion of the plans), the simulation (a scenario-based exercise that rehearses the plans against a hypothetical disruption), the parallel test (activation of the alternate site or systems while the primary site keeps running), and the full-interruption test (a complete switch-over from the primary site to the alternate site).
Simulation is the type of business continuity test that includes assessment of resilience to internal and external risks without endangering live operations, because it can simulate various types of risks, such as natural, human, or technical, and assess how the organization and its systems can cope and recover from them, without actually causing any harm or disruption to the live operations. Simulation can also help to identify and mitigate any potential risks that might affect the live operations, and to improve the resilience and preparedness of the organization and its systems.
The other options are not the types of business continuity tests that include assessment of resilience to internal and external risks without endangering live operations, but rather types that have other objectives or effects. Walkthrough is a type of business continuity test that does not include assessment of resilience to internal and external risks, but rather a review and discussion of the BCP and DRP, without any actual testing or practice. Parallel is a type of business continuity test that does not endanger live operations, but rather maintains them, while activating and operating the alternate site or system. Full interruption is a type of business continuity test that does endanger live operations, by shutting them down and transferring them to the alternate site or system. White box is not a business continuity test type at all, but a software testing approach in which the tester has full knowledge of the internal structure of the code.
A continuous information security monitoring program can BEST reduce risk through which of the following?
Collecting security events and correlating them to identify anomalies
Facilitating system-wide visibility into the activities of critical user accounts
Encompassing people, process, and technology
Logging both scheduled and unscheduled system changes
A continuous information security monitoring program can best reduce risk through encompassing people, process, and technology. A continuous information security monitoring program is a process that involves maintaining the ongoing awareness of the security status, events, and activities of a system or network, by collecting, analyzing, and reporting the security data and information, using various methods and tools. A continuous information security monitoring program can provide several benefits, such as early detection of security events, ongoing visibility into the security posture, and timely input for risk-based decisions.
A continuous information security monitoring program can best reduce risk through encompassing people, process, and technology, because it can ensure that the program is holistic and comprehensive, and that it covers all the aspects and elements of the system or network security. People, process, and technology are the three pillars of a continuous information security monitoring program: people are the roles, responsibilities, and skills of those who perform and oversee the monitoring; process is the policies, procedures, and workflows that define what is monitored, how, and how often; and technology is the tools and systems that collect, analyze, and report the security data and information.
The other options are not the best ways to reduce risk through a continuous information security monitoring program, but rather specific or partial ways that can contribute to the risk reduction. Collecting security events and correlating them to identify anomalies is a specific way to reduce risk through a continuous information security monitoring program, but it is not the best way, because it only focuses on one aspect of the security data and information, and it does not address the other aspects, such as the security objectives and requirements, the security controls and measures, and the security feedback and improvement. Facilitating system-wide visibility into the activities of critical user accounts is a partial way to reduce risk through a continuous information security monitoring program, but it is not the best way, because it only covers one element of the system or network security, and it does not cover the other elements, such as the security threats and vulnerabilities, the security incidents and impacts, and the security response and remediation. Logging both scheduled and unscheduled system changes is a specific way to reduce risk through a continuous information security monitoring program, but it is not the best way, because it only focuses on one type of the security events and activities, and it does not focus on the other types, such as the security alerts and notifications, the security analysis and correlation, and the security reporting and documentation.
What should be the FIRST action to protect the chain of evidence when a desktop computer is involved?
Take the computer to a forensic lab
Make a copy of the hard drive
Start documenting
Turn off the computer
Making a copy of the hard drive should be the first action to protect the chain of evidence when a desktop computer is involved. A chain of evidence, also known as a chain of custody, is a process that documents and preserves the integrity and authenticity of the evidence collected from a crime scene, such as a desktop computer. A chain of evidence should include information such as who collected and handled the evidence, when and where it was collected, how it was stored and transported, and why and by whom it was accessed.
Making a copy of the hard drive should be the first action to protect the chain of evidence when a desktop computer is involved, because it can ensure that the original hard drive is not altered, damaged, or destroyed during the forensic analysis, and that the copy can be used as a reliable and admissible source of evidence. Making a copy of the hard drive should also involve using a write blocker, which is a device or a software that prevents any modification or deletion of the data on the hard drive, and generating a hash value, which is a unique and fixed identifier that can verify the integrity and consistency of the data on the hard drive.
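The hashing step can be as simple as the sketch below; the file paths are hypothetical, and in practice the original drive would be read through a write blocker before imaging.

```python
import hashlib

def sha256_of_image(path: str, chunk_size: int = 1 << 20) -> str:
    """Compute the SHA-256 digest of a disk image, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as image:
        for chunk in iter(lambda: image.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical paths: the digest of the working copy must match the digest
# recorded for the original drive for the copy to stand in as evidence.
original_hash = sha256_of_image("/evidence/original_drive.img")
copy_hash = sha256_of_image("/evidence/working_copy.img")
assert original_hash == copy_hash, "Working copy does not match the original"
```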
The other options are not the first actions to protect the chain of evidence when a desktop computer is involved, but rather actions that should be done after or along with making a copy of the hard drive. Taking the computer to a forensic lab is an action that should be done after making a copy of the hard drive, because it can ensure that the computer is transported and stored in a secure and controlled environment, and that the forensic analysis is conducted by qualified and authorized personnel. Starting documenting is an action that should be done along with making a copy of the hard drive, because it can ensure that the chain of evidence is maintained and recorded throughout the forensic process, and that the evidence can be traced and verified. Turning off the computer is an action that should be done after making a copy of the hard drive, because it can ensure that the computer is powered down and disconnected from any network or device, and that the computer is protected from any further damage or tampering.
When is a Business Continuity Plan (BCP) considered to be valid?
When it has been validated by the Business Continuity (BC) manager
When it has been validated by the board of directors
When it has been validated by all threat scenarios
When it has been validated by realistic exercises
A Business Continuity Plan (BCP) is considered to be valid when it has been validated by realistic exercises. A BCP is the part of a BCP/DRP that focuses on ensuring the continuous operation of the organization’s critical business functions and processes during and after a disruption or disaster. A BCP should include components such as the business impact analysis, the recovery strategies, the roles and responsibilities, the testing, training, and exercises, and the maintenance and review activities.
A BCP is considered to be valid when it has been validated by realistic exercises, because it can ensure that the BCP is practical and applicable, and that it can achieve the desired outcomes and objectives in a real-life scenario. Realistic exercises are a type of testing, training, and exercises that involve performing and practicing the BCP with the relevant stakeholders, using simulated or hypothetical scenarios, such as a fire drill, a power outage, or a cyberattack. Realistic exercises can provide several benefits, such as revealing gaps or weaknesses in the plan, familiarizing staff with their roles and responsibilities, and demonstrating whether the recovery objectives can actually be met.
The other options are not the criteria for considering a BCP to be valid, but rather the steps or parties that are involved in developing or approving a BCP. When it has been validated by the Business Continuity (BC) manager is not a criterion for considering a BCP to be valid, but rather a step that is involved in developing a BCP. The BC manager is the person who is responsible for overseeing and coordinating the BCP activities and processes, such as the business impact analysis, the recovery strategies, the BCP document, the testing, training, and exercises, and the maintenance and review. The BC manager can validate the BCP by reviewing and verifying the BCP components and outcomes, and ensuring that they meet the BCP standards and objectives. However, the validation by the BC manager is not enough to consider the BCP to be valid, as it does not test or demonstrate the BCP in a realistic scenario. When it has been validated by the board of directors is not a criterion for considering a BCP to be valid, but rather a party that is involved in approving a BCP. The board of directors is the group of people who are elected by the shareholders to represent their interests and to oversee the strategic direction and governance of the organization. The board of directors can approve the BCP by endorsing and supporting the BCP components and outcomes, and allocating the necessary resources and funds for the BCP. However, the approval by the board of directors is not enough to consider the BCP to be valid, as it does not test or demonstrate the BCP in a realistic scenario. When it has been validated by all threat scenarios is not a criterion for considering a BCP to be valid, but rather an unrealistic or impossible expectation for validating a BCP. A threat scenario is a description or a simulation of a possible or potential disruption or disaster that might affect the organization’s critical business functions and processes, such as a natural hazard, a human error, or a technical failure. A threat scenario can be used to test and validate the BCP by measuring and evaluating the BCP’s performance and effectiveness in responding and recovering from the disruption or disaster. However, it is not possible or feasible to validate the BCP by all threat scenarios, as there are too many or unknown threat scenarios that might occur, and some threat scenarios might be too severe or complex to simulate or test. Therefore, the BCP should be validated by the most likely or relevant threat scenarios, and not by all threat scenarios.
Which of the following is a PRIMARY advantage of using a third-party identity service?
Consolidation of multiple providers
Directory synchronization
Web based logon
Automated account management
Consolidation of multiple providers is the primary advantage of using a third-party identity service. A third-party identity service is a service that provides identity and access management (IAM) functions, such as authentication, authorization, and federation, for multiple applications or systems, using a single identity provider (IdP). A third-party identity service can offer various benefits, such as single sign-on (SSO), federation, and centralized administration of identities and access rights.
Consolidation of multiple providers is the primary advantage of using a third-party identity service, because it can simplify and streamline the IAM architecture and processes, by reducing the number of IdPs and IAM systems that are involved in managing the identities and access for multiple applications or systems. Consolidation of multiple providers can also help to avoid the issues or risks that might arise from having multiple IdPs and IAM systems, such as the inconsistency, redundancy, or conflict of the IAM policies and controls, or the inefficiency, vulnerability, or disruption of the IAM functions.
The other options are not the primary advantages of using a third-party identity service, but rather secondary or specific advantages for different aspects or scenarios of using a third-party identity service. Directory synchronization is an advantage of using a third-party identity service, but it is more relevant for the scenario where the organization has an existing directory service, such as LDAP or Active Directory, that stores and manages the user accounts and attributes, and wants to synchronize them with the third-party identity service, to enable the SSO or federation for the users. Web based logon is an advantage of using a third-party identity service, but it is more relevant for the aspect where the third-party identity service uses a web-based protocol, such as SAML or OAuth, to facilitate the SSO or federation for the users, by redirecting them to a web-based logon page, where they can enter their credentials or consent. Automated account management is an advantage of using a third-party identity service, but it is more relevant for the aspect where the third-party identity service provides the IAM functions, such as provisioning, deprovisioning, or updating, for the user accounts and access rights, using an automated or self-service mechanism, such as SCIM or JIT.
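As a small illustration of the web-based logon aspect, the sketch below builds an OAuth 2.0/OpenID Connect authorization request URL; the identity-provider endpoint, client identifier, and redirect URI are hypothetical.

```python
from urllib.parse import urlencode

# Hypothetical identity-provider endpoint and client registration values.
authorize_endpoint = "https://idp.example.com/oauth2/authorize"
params = {
    "response_type": "code",           # OAuth 2.0 authorization code flow
    "client_id": "my-web-app",
    "redirect_uri": "https://app.example.com/callback",
    "scope": "openid profile",
    "state": "af0ifjsldkj",            # anti-CSRF value checked on return
}

# The application redirects the user's browser to this URL; the IdP handles
# the logon and returns the user to the redirect_uri with an authorization code.
print(f"{authorize_endpoint}?{urlencode(params)}")
```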
With what frequency should monitoring of a control occur when implementing Information Security Continuous Monitoring (ISCM) solutions?
Continuously without exception for all security controls
Before and after each change of the control
At a rate concurrent with the volatility of the security control
Only during system implementation and decommissioning
Monitoring of a control should occur at a rate concurrent with the volatility of the security control when implementing Information Security Continuous Monitoring (ISCM) solutions. ISCM is a process that involves maintaining the ongoing awareness of the security status, events, and activities of a system or network, by collecting, analyzing, and reporting the security data and information, using various methods and tools. ISCM can provide several benefits, such as ongoing visibility into the security posture, early detection of security issues, and timely input for risk-based decisions.
A security control is a measure or mechanism that is implemented to protect the system or network from the security threats or risks, by preventing, detecting, or correcting the security incidents or impacts. A security control can have various types, such as administrative, technical, or physical, and various attributes, such as preventive, detective, or corrective. A security control can also have different levels of volatility, which is the degree or frequency of change or variation of the security control, due to various factors, such as the security requirements, the threat landscape, or the system or network environment.
Monitoring of a control should occur at a rate concurrent with the volatility of the security control when implementing ISCM solutions, because it can ensure that the ISCM solutions can capture and reflect the current and accurate state and performance of the security control, and can identify and report any issues or risks that might affect the security control. Monitoring of a control at a rate concurrent with the volatility of the security control can also help to optimize the ISCM resources and efforts, by allocating them according to the priority and urgency of the security control.
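The principle of tying monitoring frequency to control volatility can be expressed as simply as the sketch below; the volatility categories and intervals are illustrative assumptions, not values from any standard.

```python
# Illustrative mapping only: frequency tracks how quickly a control changes.
MONITORING_INTERVALS = {
    "high":   "daily",      # e.g. firewall rules, patch levels, malware signatures
    "medium": "weekly",     # e.g. account provisioning, configuration baselines
    "low":    "quarterly",  # e.g. physical controls, policy documents
}

def monitoring_interval(control_volatility: str) -> str:
    """Return how often a control should be assessed, given its volatility."""
    return MONITORING_INTERVALS[control_volatility]

print(monitoring_interval("high"))   # daily
print(monitoring_interval("low"))    # quarterly
```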
The other options are not the correct frequencies for monitoring of a control when implementing ISCM solutions, but rather incorrect or unrealistic frequencies that might cause problems or inefficiencies for the ISCM solutions. Continuously without exception for all security controls is an incorrect frequency for monitoring of a control when implementing ISCM solutions, because it is not feasible or necessary to monitor all security controls at the same and constant rate, regardless of their volatility or importance. Continuously monitoring all security controls without exception might cause the ISCM solutions to consume excessive or wasteful resources and efforts, and might overwhelm or overload the ISCM solutions with too much or irrelevant data and information. Before and after each change of the control is an incorrect frequency for monitoring of a control when implementing ISCM solutions, because it is not sufficient or timely to monitor the security control only when there is a change of the security control, and not during the normal operation of the security control. Monitoring the security control only before and after each change might cause the ISCM solutions to miss or ignore the security status, events, and activities that occur between the changes of the security control, and might delay or hinder the ISCM solutions from detecting and responding to the security issues or incidents that affect the security control. Only during system implementation and decommissioning is an incorrect frequency for monitoring of a control when implementing ISCM solutions, because it is not appropriate or effective to monitor the security control only during the initial or final stages of the system or network lifecycle, and not during the operational or maintenance stages of the system or network lifecycle. Monitoring the security control only during system implementation and decommissioning might cause the ISCM solutions to neglect or overlook the security status, events, and activities that occur during the regular or ongoing operation of the system or network, and might prevent or limit the ISCM solutions from improving and optimizing the security control.
What would be the MOST cost effective solution for a Disaster Recovery (DR) site given that the organization’s systems cannot be unavailable for more than 24 hours?
Warm site
Hot site
Mirror site
Cold site
A warm site is the most cost-effective solution for a disaster recovery (DR) site given that the organization’s systems cannot be unavailable for more than 24 hours. A DR site is a backup facility that can be used to restore the normal operation of the organization’s IT systems and infrastructure after a disruption or disaster. A DR site can have different levels of readiness and functionality, depending on the organization’s recovery objectives and budget. The main types of DR sites are the hot site (fully equipped and configured, ready to take over within minutes or hours), the warm site (partially equipped with hardware and connectivity, requiring data restoration and some configuration, and typically ready within hours to a day), the cold site (basic space and utilities only, requiring days or weeks to become operational), and the mirror site (a fully redundant, continuously synchronized duplicate of the primary site).
A warm site is the most cost effective solution for a disaster recovery (DR) site given that the organization’s systems cannot be unavailable for more than 24 hours, because it can provide a balance between the recovery time and the recovery cost. A warm site can enable the organization to resume its critical functions and operations within a reasonable time frame, without spending too much on the DR site maintenance and operation. A warm site can also provide some flexibility and scalability for the organization to adjust its recovery strategies and resources according to its needs and priorities.
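A toy decision sketch along these lines is shown below; the thresholds are illustrative assumptions drawn from the rough recovery times discussed above, not fixed rules.

```python
def select_dr_site(rto_hours: float) -> str:
    """Pick the least expensive DR site type that can still meet the
    recovery time objective (RTO). Thresholds are illustrative assumptions."""
    if rto_hours < 1:
        return "mirror or hot site"   # near-zero downtime tolerated
    if rto_hours <= 24:
        return "warm site"            # hours up to a day, as in this scenario
    return "cold site"                # days or weeks of downtime acceptable

print(select_dr_site(24))   # warm site
```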
The other options are not the most cost effective solutions for a disaster recovery (DR) site given that the organization’s systems cannot be unavailable for more than 24 hours, but rather solutions that are either too costly or too slow for the organization’s recovery objectives and budget. A hot site is a solution that is too costly for a disaster recovery (DR) site given that the organization’s systems cannot be unavailable for more than 24 hours, because it requires the organization to invest a lot of money on the DR site equipment, software, and services, and to pay for the ongoing operational and maintenance costs. A hot site may be more suitable for the organization’s systems that cannot be unavailable for more than a few hours or minutes, or that have very high availability and performance requirements. A mirror site is a solution that is too costly for a disaster recovery (DR) site given that the organization’s systems cannot be unavailable for more than 24 hours, because it requires the organization to duplicate its entire primary site, with the same hardware, software, data, and applications, and to keep them online and synchronized at all times. A mirror site may be more suitable for the organization’s systems that cannot afford any downtime or data loss, or that have very strict compliance and regulatory requirements. A cold site is a solution that is too slow for a disaster recovery (DR) site given that the organization’s systems cannot be unavailable for more than 24 hours, because it requires the organization to spend a lot of time and effort on the DR site installation, configuration, and restoration, and to rely on other sources of backup data and applications. A cold site may be more suitable for the organization’s systems that can be unavailable for more than a few days or weeks, or that have very low criticality and priority.
An organization is found lacking the ability to properly establish performance indicators for its Web hosting solution during an audit. What would be the MOST probable cause?
Absence of a Business Intelligence (BI) solution
Inadequate cost modeling
Improper deployment of the Service-Oriented Architecture (SOA)
Insufficient Service Level Agreement (SLA)
Insufficient Service Level Agreement (SLA) would be the most probable cause for an organization to lack the ability to properly establish performance indicators for its Web hosting solution during an audit. A Web hosting solution is a service that provides the infrastructure, resources, and tools for hosting and maintaining a website or a web application on the internet. A Web hosting solution can offer various benefits, such as scalability, availability, and managed maintenance of the underlying infrastructure.
A Service Level Agreement (SLA) is a contract or an agreement that defines the expectations, responsibilities, and obligations of the parties involved in a service, such as the service provider and the service consumer. An SLA can include components such as the service level indicators, the service level objectives, the service level reporting, and the service level penalties.
Insufficient SLA would be the most probable cause for an organization to lack the ability to properly establish performance indicators for its Web hosting solution during an audit, because it could mean that the SLA does not include or specify the appropriate service level indicators or objectives for the Web hosting solution, or that the SLA does not provide or enforce the adequate service level reporting or penalties for the Web hosting solution. This could affect the ability of the organization to measure and assess the Web hosting solution quality, performance, and availability, and to identify and address any issues or risks in the Web hosting solution.
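For example, an SLA that specifies an availability objective gives the organization a concrete performance indicator to measure, as in the sketch below (the figures are hypothetical).

```python
# Hypothetical figures for one month of web hosting service.
sla_target_pct = 99.9                 # availability promised in the SLA
total_minutes = 30 * 24 * 60
downtime_minutes = 50

measured_pct = (total_minutes - downtime_minutes) / total_minutes * 100
print(f"Measured availability: {measured_pct:.3f}%")
print("SLA met" if measured_pct >= sla_target_pct
      else "SLA breached - reporting and penalty clauses apply")
```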
The other options are not the most probable causes for an organization to lack the ability to properly establish performance indicators for its Web hosting solution during an audit, but rather the factors that could affect or improve the Web hosting solution in other ways. Absence of a Business Intelligence (BI) solution is a factor that could affect the ability of the organization to analyze and utilize the data and information from the Web hosting solution, such as the web traffic, behavior, or conversion. A BI solution is a system that involves the collection, integration, processing, and presentation of the data and information from various sources, such as the Web hosting solution, to support the decision making and planning of the organization. However, absence of a BI solution is not the most probable cause for an organization to lack the ability to properly establish performance indicators for its Web hosting solution during an audit, because it does not affect the definition or specification of the performance indicators for the Web hosting solution, but rather the analysis or usage of the performance indicators. Inadequate cost modeling is a factor that could affect the ability of the organization to estimate and optimize the cost and value of the Web hosting solution, such as the web hosting fees, maintenance costs, or return on investment. A cost model is a tool or a method that helps the organization to calculate and compare the cost and value of the Web hosting solution, and to identify and implement the best or most efficient Web hosting solution. However, inadequate cost modeling is not the most probable cause for an organization to lack the ability to properly establish performance indicators for its Web hosting solution during an audit, because it does not affect the definition or specification of the performance indicators for the Web hosting solution, but rather the estimation or optimization of the cost and value of the Web hosting solution. Improper deployment of the Service-Oriented Architecture (SOA) is a factor that could affect the ability of the organization to design and develop the Web hosting solution, such as the web services, components, or interfaces. A SOA is a software architecture that involves the modularization, standardization, and integration of the software components or services that provide the functionality or logic of the Web hosting solution. A SOA can offer benefits such as the reusability, modularity, and interoperability of the services or components.
However, improper deployment of the SOA is not the most probable cause for an organization to lack the ability to properly establish performance indicators for its Web hosting solution during an audit, because it does not affect the definition or specification of the performance indicators for the Web hosting solution, but rather the design or development of the Web hosting solution.
Which of the following is the FIRST step in the incident response process?
Determine the cause of the incident
Disconnect the system involved from the network
Isolate and contain the system involved
Investigate all symptoms to confirm the incident
Investigating all symptoms to confirm the incident is the first step in the incident response process. An incident is an event that violates or threatens the security, availability, integrity, or confidentiality of the IT systems or data. An incident response is a process that involves detecting, analyzing, containing, eradicating, recovering, and learning from an incident, using various methods and tools. An incident response can provide several benefits, such as limiting the damage and impact of the incident, restoring normal operations quickly, and preventing similar incidents from recurring.
Investigating all symptoms to confirm the incident is the first step in the incident response process, because it can ensure that the incident is verified and validated, and that the incident response is initiated and escalated. A symptom is a sign or an indication that an incident may have occurred or is occurring, such as an alert, a log, or a report. Investigating all symptoms to confirm the incident involves collecting and analyzing the relevant data and information from various sources, such as the IT systems, the network, the users, or the external parties, and determining whether an incident has actually happened or is happening, and how serious or urgent it is. Investigating all symptoms to confirm the incident can also help to avoid wasting response effort on false positives and to establish the scope and severity of the incident before further action is taken.
The other options are not the first steps in the incident response process, but rather steps that should be done after or along with investigating all symptoms to confirm the incident. Determining the cause of the incident is a step that should be done after investigating all symptoms to confirm the incident, because it can ensure that the root cause and source of the incident are identified and analyzed, and that the incident response is directed and focused. Determining the cause of the incident involves examining and testing the affected IT systems and data, and tracing and tracking the origin and path of the incident, using various techniques and tools, such as forensics, malware analysis, or reverse engineering. Determining the cause of the incident can also help to select the appropriate eradication and recovery actions and to prevent the incident from recurring.
Disconnecting the system involved from the network is a step that should be done along with investigating all symptoms to confirm the incident, because it can ensure that the system is isolated and protected from any external or internal influences or interferences, and that the incident response is conducted in a safe and controlled environment. Disconnecting the system involved from the network can also help to stop the incident from spreading to other systems and to preserve the evidence on the affected system.
Isolating and containing the system involved is a step that should be done after investigating all symptoms to confirm the incident, because it can ensure that the incident is confined and restricted, and that the incident response is continued and maintained. Isolating and containing the system involved involves applying and enforcing the appropriate security measures and controls to limit or stop the activity and impact of the incident on the IT systems and data, such as firewall rules, access policies, or encryption keys. Isolating and containing the system involved can also help to limit the damage and impact of the incident while the eradication and recovery steps are prepared.
A Business Continuity Plan/Disaster Recovery Plan (BCP/DRP) will provide which of the following?
Guaranteed recovery of all business functions
Minimization of the need for decision making during a crisis
Insurance against litigation following a disaster
Protection from loss of organization resources
Minimization of the need for decision making during a crisis is the main benefit that a Business Continuity Plan/Disaster Recovery Plan (BCP/DRP) will provide. A BCP/DRP is a set of policies, procedures, and resources that enable an organization to continue or resume its critical functions and operations in the event of a disruption or disaster. A BCP/DRP can provide several benefits, such as reducing the downtime and losses caused by a disruption, protecting the organization’s reputation and obligations, and giving staff predefined procedures to follow during a crisis.
Minimization of the need for decision making during a crisis is the main benefit that a BCP/DRP will provide, because it can ensure that the organization and its staff have a clear and consistent guidance and direction on how to respond and act during a disruption or disaster, and avoid any confusion, uncertainty, or inconsistency that might worsen the situation or impact. A BCP/DRP can also help to reduce the stress and pressure on the organization and its staff during a crisis, and increase their confidence and competence in executing the plans.
The other options are not the benefits that a BCP/DRP will provide, but rather unrealistic or incorrect expectations or outcomes of a BCP/DRP. Guaranteed recovery of all business functions is not a benefit that a BCP/DRP will provide, because it is not possible or feasible to recover all business functions after a disruption or disaster, especially if the disruption or disaster is severe or prolonged. A BCP/DRP can only prioritize and recover the most critical or essential business functions, and may have to suspend or terminate the less critical or non-essential business functions. Insurance against litigation following a disaster is not a benefit that a BCP/DRP will provide, because it is not a guarantee or protection that the organization will not face any legal or regulatory consequences or liabilities after a disruption or disaster, especially if the disruption or disaster is caused by the organization’s negligence or misconduct. A BCP/DRP can only help to mitigate or reduce the legal or regulatory risks, and may have to comply with or report to the relevant authorities or parties. Protection from loss of organization resources is not a benefit that a BCP/DRP will provide, because it is not a prevention or avoidance of any damage or destruction of the organization’s assets or resources during a disruption or disaster, especially if the disruption or disaster is physical or natural. A BCP/DRP can only help to restore or replace the lost or damaged assets or resources, and may have to incur some costs or losses.
What is the MOST important step during forensic analysis when trying to learn the purpose of an unknown application?
Disable all unnecessary services
Ensure chain of custody
Prepare another backup of the system
Isolate the system from the network
Isolating the system from the network is the most important step during forensic analysis when trying to learn the purpose of an unknown application. An unknown application is an application that is not recognized or authorized by the system or network administrator, and that may have been installed or executed without the user’s knowledge or consent. An unknown application may have various purposes, such as legitimate but unapproved functionality, adware or spyware, or malware such as a backdoor, keylogger, or botnet agent.
Forensic analysis is a process that involves examining and investigating the system or network for any evidence or traces of the unknown application, such as its origin, nature, behavior, and impact. Forensic analysis can provide several benefits, such as determining what the application does, how it arrived on the system, and what impact it has had.
Isolating the system from the network is the most important step during forensic analysis when trying to learn the purpose of an unknown application, because it can ensure that the system is isolated and protected from any external or internal influences or interferences, and that the forensic analysis is conducted in a safe and controlled environment. Isolating the system from the network can also help to prevent the unknown application from communicating with an external controller, spreading to other systems, or destroying evidence.
The other options are not the most important steps during forensic analysis when trying to learn the purpose of an unknown application, but rather steps that should be done after or along with isolating the system from the network. Disabling all unnecessary services is a step that should be done after isolating the system from the network, because it can ensure that the system is optimized and simplified for the forensic analysis, and that the system resources and functions are not consumed or affected by any irrelevant or redundant services. Ensuring chain of custody is a step that should be done along with isolating the system from the network, because it can ensure that the integrity and authenticity of the evidence are maintained and documented throughout the forensic process, and that the evidence can be traced and verified. Preparing another backup of the system is a step that should be done after isolating the system from the network, because it can ensure that the system data and configuration are preserved and replicated for the forensic analysis, and that the system can be restored and recovered in case of any damage or loss.
What is the PRIMARY reason for implementing change management?
Certify and approve releases to the environment
Provide version rollbacks for system changes
Ensure that all applications are approved
Ensure accountability for changes to the environment
Ensuring accountability for changes to the environment is the primary reason for implementing change management. Change management is a process that ensures that any changes to the system or network environment, such as the hardware, software, configuration, or documentation, are planned, approved, implemented, and documented in a controlled and consistent manner. Change management can provide several benefits, such as reducing the risk of unplanned outages, preventing unauthorized changes, and providing an auditable record of what was changed, by whom, and why.
Ensuring accountability for changes to the environment is the primary reason for implementing change management, because it can ensure that the changes are authorized, justified, and traceable, and that the parties involved in the changes are responsible and accountable for their actions and results. Accountability can also help to deter or detect any unauthorized or malicious changes that might compromise the system or network environment.
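A minimal change record that captures this accountability might look like the sketch below; the class, field names, and values are illustrative assumptions, not a prescribed schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ChangeRecord:
    """Minimal change-management record: every field supports accountability."""
    change_id: str
    description: str
    requested_by: str
    approved_by: str
    rollback_plan: str
    implemented_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

# Hypothetical entry in a change log.
change = ChangeRecord(
    change_id="CHG-1042",
    description="Open TCP 8443 on the DMZ firewall for the new web service",
    requested_by="app.owner@example.org",
    approved_by="change.advisory.board@example.org",
    rollback_plan="Remove the rule and reload the previous firewall policy",
)
print(f"{change.change_id} approved by {change.approved_by} "
      f"at {change.implemented_at:%Y-%m-%d %H:%M} UTC")
```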
The other options are not the primary reasons for implementing change management, but rather secondary or specific reasons for different aspects or phases of change management. Certifying and approving releases to the environment is a reason for implementing change management, but it is more relevant for the approval phase of change management, which is the phase that involves reviewing and validating the changes and their impacts, and granting or denying the permission to proceed with the changes. Providing version rollbacks for system changes is a reason for implementing change management, but it is more relevant for the implementation phase of change management, which is the phase that involves executing and monitoring the changes and their effects, and providing the backup and recovery options for the changes. Ensuring that all applications are approved is a reason for implementing change management, but it is more relevant for the application changes, which are the changes that affect the software components or services that provide the functionality or logic of the system or network environment.
Which of the following types of technologies would be the MOST cost-effective method to provide a reactive control for protecting personnel in public areas?
Install mantraps at the building entrances
Enclose the personnel entry area with polycarbonate plastic
Supply a duress alarm for personnel exposed to the public
Hire a guard to protect the public area
Supplying a duress alarm for personnel exposed to the public is the most cost-effective method to provide a reactive control for protecting personnel in public areas. A duress alarm is a device that allows a person to signal for help in case of an emergency, such as an attack, a robbery, or a medical condition. A duress alarm can be activated by pressing a button, pulling a cord, or speaking a code word. A duress alarm can alert security personnel, law enforcement, or other responders to the location and nature of the emergency, and initiate appropriate actions. A duress alarm is a reactive control because it responds to an incident after it has occurred, rather than preventing it from happening.
The other options are not as cost-effective as supplying a duress alarm, as they involve more expensive or complex technologies or resources. Installing mantraps at the building entrances is a preventive control that restricts the access of unauthorized persons to the facility, but it also requires more space, maintenance, and supervision. Enclosing the personnel entry area with polycarbonate plastic is a preventive control that protects the personnel from physical attacks, but it also reduces the visibility and ventilation of the area. Hiring a guard to protect the public area is a deterrent control that discourages potential attackers, but it also involves paying wages, benefits, and training costs.
What is the MOST important consideration from a data security perspective when an organization plans to relocate?
Ensure the fire prevention and detection systems are sufficient to protect personnel
Review the architectural plans to determine how many emergency exits are present
Conduct a gap analysis of the new facilities against existing security requirements
Revise the Disaster Recovery and Business Continuity (DR/BC) plan
When an organization plans to relocate, the most important consideration from a data security perspective is to conduct a gap analysis of the new facilities against the existing security requirements. A gap analysis is a process that identifies and evaluates the differences between the current state and the desired state of a system or a process. In this case, the gap analysis would compare the security controls and measures implemented in the old and new locations, and identify any gaps or weaknesses that need to be addressed. The gap analysis would also help to determine the costs and resources needed to implement the necessary security improvements in the new facilities.
The other options are not as important as conducting a gap analysis, as they do not directly address the data security risks associated with relocation. Ensuring the fire prevention and detection systems are sufficient to protect personnel is a safety issue, not a data security issue, and the same is true of reviewing the architectural plans to determine how many emergency exits are present. Revising the Disaster Recovery and Business Continuity (DR/BC) plan is good practice, but it is a reactive measure rather than a preventive one. A DR/BC plan is a document that outlines how an organization will recover from a disaster and resume its normal operations, and it should be updated regularly, not only when relocating.
Which of the following represents the GREATEST risk to data confidentiality?
Network redundancies are not implemented
Security awareness training is not completed
Backup tapes are generated unencrypted
Users have administrative privileges
Generating backup tapes unencrypted represents the greatest risk to data confidentiality, as it exposes the data to unauthorized access or disclosure if the tapes are lost, stolen, or intercepted. Backup tapes are often stored off-site or transported to remote locations, which increases the chances of them falling into the wrong hands. If the backup tapes are unencrypted, anyone who obtains them can read the data without any difficulty. Therefore, backup tapes should always be encrypted using strong algorithms and keys, and the keys should be protected and managed separately from the tapes.
The other options do not pose as much risk to data confidentiality as generating backup tapes unencrypted. Not implementing network redundancies affects the availability and reliability of the network, but not necessarily the confidentiality of the data. Not completing security awareness training increases the likelihood of human errors or negligence that could compromise the data, but not as directly as unencrypted backup tapes. Granting users administrative privileges gives them more access and control over the system and the data, but the exposure is narrower than that created by unencrypted backup tapes leaving the organization’s control.
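To illustrate the point, here is a minimal Python sketch of encrypting a backup archive before it is written to tape, with the key stored separately from the media; it assumes the third-party cryptography package is available, and the file names are illustrative.

```python
# Minimal sketch: encrypt a backup archive before writing it to tape,
# keeping the key separate from the backup media.
# Assumes the third-party "cryptography" package is installed;
# file names are illustrative.
from cryptography.fernet import Fernet

def encrypt_backup(plain_path: str, encrypted_path: str, key_path: str) -> None:
    key = Fernet.generate_key()  # symmetric key (AES-128-CBC with HMAC under the hood)
    with open(plain_path, "rb") as f:
        ciphertext = Fernet(key).encrypt(f.read())
    with open(encrypted_path, "wb") as f:
        f.write(ciphertext)
    # Store the key in a separate, protected location (e.g., a key vault),
    # never on the same tape as the encrypted data.
    with open(key_path, "wb") as f:
        f.write(key)

encrypt_backup("backup.tar", "backup.tar.enc", "backup.key")
```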
All of the following items should be included in a Business Impact Analysis (BIA) questionnaire EXCEPT questions that
determine the risk of a business interruption occurring
determine the technological dependence of the business processes
identify the operational impacts of a business interruption
identify the financial impacts of a business interruption
A Business Impact Analysis (BIA) is a process that identifies and evaluates the potential effects of natural and man-made disasters on business operations. The BIA questionnaire is a tool that collects information from business process owners and stakeholders about the criticality, dependencies, recovery objectives, and resources of their processes. The BIA questionnaire should include questions that determine the technological dependence of the business processes, identify the operational impacts of a business interruption, and identify the financial impacts of a business interruption, along with the recovery priorities and resources required.
The BIA questionnaire should not include questions that determine the risk of a business interruption occurring, as this is part of the risk assessment process, which is a separate activity from the BIA. The risk assessment process identifies and analyzes the threats and vulnerabilities that could cause a business interruption, and estimates the likelihood and impact of such events. The risk assessment process also evaluates the existing controls and mitigation strategies, and recommends additional measures to reduce the risk to an acceptable level.
Which of the following actions will reduce risk to a laptop before traveling to a high risk area?
Examine the device for physical tampering
Implement more stringent baseline configurations
Purge or re-image the hard disk drive
Change access codes
Purging or re-imaging the hard disk drive of a laptop before traveling to a high risk area will reduce the risk of data compromise or theft in case the laptop is lost, stolen, or seized by unauthorized parties. Purging securely erases all data and applications on the drive, while re-imaging replaces its contents with a clean, standard image containing only the operating system and essential software. Either approach minimizes the exposure of sensitive or confidential information that could be accessed by malicious actors. Purging should be done using secure methods that prevent data recovery, such as overwriting, degaussing, or physical destruction.
The other options will not reduce the risk to the laptop as effectively as purging or re-imaging the hard disk drive. Examining the device for physical tampering will only detect if the laptop has been compromised after the fact, but will not prevent it from happening. Implementing more stringent baseline configurations will improve the security settings and policies of the laptop, but will not protect the data if the laptop is bypassed or breached. Changing access codes will make it harder for unauthorized users to log in to the laptop, but will not prevent them from accessing the data if they use other methods, such as booting from a removable media or removing the hard disk drive.
When assessing an organization’s security policy according to standards established by the International Organization for Standardization (ISO) 27001 and 27002, when can management responsibilities be defined?
Only when assets are clearly defined
Only when standards are defined
Only when controls are put in place
Only when procedures are defined
When assessing an organization’s security policy according to standards established by the ISO 27001 and 27002, management responsibilities can be defined only when standards are defined. Standards are the specific rules, guidelines, or procedures that support the implementation of the security policy. Standards define the minimum level of security that must be achieved by the organization, and provide the basis for measuring compliance and performance. Standards also assign roles and responsibilities to different levels of management and staff, and specify the reporting and escalation procedures.
Management responsibilities are the duties and obligations that managers have to ensure the effective and efficient execution of the security policy and standards. Management responsibilities include providing leadership, direction, support, and resources for the security program, establishing and communicating the security objectives and expectations, ensuring compliance with the legal and regulatory requirements, monitoring and reviewing the security performance and incidents, and initiating corrective and preventive actions when needed.
Management responsibilities cannot be defined without standards, as standards provide the framework and criteria for defining what managers need to do and how they need to do it. Management responsibilities also depend on the scope and complexity of the security policy and standards, which may vary depending on the size, nature, and context of the organization. Therefore, standards must be defined before management responsibilities can be defined.
The other options are not correct, as they are not prerequisites for defining management responsibilities. Assets are the resources that need to be protected by the security policy and standards, but they do not determine the management responsibilities. Controls are the measures that are implemented to reduce the security risks and achieve the security objectives, but they do not determine the management responsibilities. Procedures are the detailed instructions that describe how to perform the security tasks and activities, but they do not determine the management responsibilities.
Intellectual property rights are PRIMARILY concerned with which of the following?
Owner’s ability to realize financial gain
Owner’s ability to maintain copyright
Right of the owner to enjoy their creation
Right of the owner to control delivery method
Intellectual property rights are primarily concerned with the owner’s ability to realize financial gain from their creation. Intellectual property is a category of intangible assets that are the result of human creativity and innovation, such as inventions, designs, artworks, literature, music, software, etc. Intellectual property rights are the legal rights that grant the owner the exclusive control over the use, reproduction, distribution, and modification of their intellectual property. Intellectual property rights aim to protect the owner’s interests and incentives, and to reward them for their contribution to the society and economy.
The other options are not the primary concern of intellectual property rights, but rather the secondary or incidental benefits or aspects of them. The owner’s ability to maintain copyright is a means of enforcing intellectual property rights, but not the end goal of them. The right of the owner to enjoy their creation is a personal or moral right, but not a legal or economic one. The right of the owner to control the delivery method is a specific or technical aspect of intellectual property rights, but not a general or fundamental one.
A company whose Information Technology (IT) services are being delivered from a Tier 4 data center, is preparing a companywide Business Continuity Planning (BCP). Which of the following failures should the IT manager be concerned with?
Application
Storage
Power
Network
A company whose IT services are being delivered from a Tier 4 data center should be most concerned with application failures when preparing a companywide BCP. A BCP is a document that describes how an organization will continue its critical business functions in the event of a disruption or disaster. A BCP should include a risk assessment, a business impact analysis, a recovery strategy, and a testing and maintenance plan.
A Tier 4 data center is the highest level of data center classification, according to the Uptime Institute. A Tier 4 data center has the highest level of availability, reliability, and fault tolerance, as it has multiple and independent paths for power and cooling, and redundant and backup components for all systems. A Tier 4 data center has an uptime rating of 99.995%, which means it can only experience 0.4 hours of downtime per year. Therefore, the likelihood of a power, storage, or network failure in a Tier 4 data center is very low, and the impact of such a failure would be minimal, as the data center can quickly switch to alternative sources or routes.
However, a Tier 4 data center cannot prevent or mitigate application failures, which are caused by software bugs, configuration errors, or malicious attacks. Application failures can affect the functionality, performance, or security of the IT services, and cause data loss, corruption, or breach. Therefore, the IT manager should be most concerned with application failures when preparing a BCP, and ensure that the applications are properly designed, tested, updated, and monitored.
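The downtime figure follows directly from the availability percentage; a quick calculation (the Tier 3 figure of 99.982% is the commonly cited value and is included only for comparison):

```python
# Allowed annual downtime implied by an availability percentage.
hours_per_year = 24 * 365  # 8760

def max_downtime_hours(availability_percent: float) -> float:
    return hours_per_year * (1 - availability_percent / 100)

print(round(max_downtime_hours(99.995), 2))  # ~0.44 hours (~26 minutes) for Tier 4
print(round(max_downtime_hours(99.982), 2))  # ~1.58 hours for Tier 3 (commonly cited figure)
```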
An important principle of defense in depth is that achieving information security requires a balanced focus on which PRIMARY elements?
Development, testing, and deployment
Prevention, detection, and remediation
People, technology, and operations
Certification, accreditation, and monitoring
An important principle of defense in depth is that achieving information security requires a balanced focus on the primary elements of people, technology, and operations. People are the users, administrators, managers, and other stakeholders who are involved in the security process. They need to be aware, trained, motivated, and accountable for their security roles and responsibilities. Technology is the hardware, software, network, and other tools that are used to implement the security controls and measures. They need to be selected, configured, updated, and monitored according to the security standards and best practices. Operations are the policies, procedures, processes, and activities that are performed to achieve the security objectives and requirements. They need to be documented, reviewed, audited, and improved continuously to ensure their effectiveness and efficiency.
The other options are not the primary elements of defense in depth, but rather the phases, functions, or outcomes of the security process. Development, testing, and deployment are the phases of the security life cycle, which describes how security is integrated into the system development process. Prevention, detection, and remediation are the functions of the security management, which describes how security is maintained and improved over time. Certification, accreditation, and monitoring are the outcomes of the security evaluation, which describes how security is assessed and verified against the criteria and standards.
Users require access rights that allow them to view the average salary of groups of employees. Which control would prevent the users from obtaining an individual employee’s salary?
Limit access to predefined queries
Segregate the database into a small number of partitions each with a separate security level
Implement Role Based Access Control (RBAC)
Reduce the number of people who have access to the system for statistical purposes
Limiting access to predefined queries is the control that would prevent the users from obtaining an individual employee’s salary, if they only require access rights that allow them to view the average salary of groups of employees. A query is a request for information from a database, which can be expressed in a structured query language (SQL) or a graphical user interface (GUI). A query can specify the criteria, conditions, and operations for selecting, filtering, sorting, grouping, and aggregating the data from the database. A predefined query is a query that has been created and stored in advance by the database administrator or the data owner, and that can be executed by the authorized users without any modification. A predefined query can provide several benefits, such as consistency of results, simplicity for end users, and tight control over exactly which data is exposed (a brief illustration appears after this answer).
Limiting access to predefined queries is the control that would prevent the users from obtaining an individual employee’s salary, if they only require access rights that allow them to view the average salary of groups of employees, because it can ensure that the users can only access the data that is relevant and necessary for their tasks, and that they cannot access or manipulate the data that is beyond their scope or authority. For example, a predefined query can be created and stored that calculates and displays the average salary of groups of employees based on certain criteria, such as department, position, or experience. The users who need to view this information can execute this predefined query, but they cannot modify it or create their own queries that might reveal the individual employee’s salary or other sensitive data.
The other options would not prevent the users from obtaining an individual employee’s salary, because each serves a different purpose. Segregating the database into a small number of partitions, each with a separate security level, can improve the performance and security of the database by dividing it into smaller, manageable segments that can be accessed and processed independently; however, it would not stop users who have access to the partition containing the salary data and who can create or modify their own queries. Implementing Role Based Access Control (RBAC) enforces access rights and permissions based on users’ roles or functions within the organization rather than their identities or attributes; however, it would not stop users whose roles require access to the salary data and who can create or modify their own queries. Reducing the number of people who have access to the system for statistical purposes lowers the risk and impact of unauthorized access or disclosure by minimizing the exposure and distribution of the data; however, it would not stop the remaining users who have access and who can create or modify their own queries.
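To make the control concrete, here is a minimal Python sketch of an application that exposes only a predefined aggregate query; the table, columns, and data are illustrative assumptions.

```python
# Minimal sketch: the application exposes only a predefined aggregate query,
# so users can see group averages but never an individual salary.
# Table and column names are illustrative.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (name TEXT, department TEXT, salary REAL)")
conn.executemany(
    "INSERT INTO employees VALUES (?, ?, ?)",
    [("Alice", "Engineering", 95000), ("Bob", "Engineering", 105000),
     ("Carol", "Finance", 90000)],
)

PREDEFINED_QUERIES = {
    # Only aggregates are exposed; no stored query returns a single employee's row.
    "avg_salary_by_department":
        "SELECT department, AVG(salary) FROM employees GROUP BY department",
}

def run_predefined(query_name: str):
    sql = PREDEFINED_QUERIES.get(query_name)
    if sql is None:
        raise PermissionError("Only predefined queries may be executed")
    return conn.execute(sql).fetchall()

print(run_predefined("avg_salary_by_department"))
```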
Which of the following BEST describes an access control method utilizing cryptographic keys derived from a smart card private key that is embedded within mobile devices?
Derived credential
Temporary security credential
Mobile device credentialing service
Digest authentication
Derived credential is the best description of an access control method utilizing cryptographic keys derived from a smart card private key that is embedded within mobile devices. A smart card is a device that contains a microchip that stores a private key and a digital certificate that are used for authentication and encryption. A smart card is typically inserted into a reader that is attached to a computer or a terminal, and the user enters a personal identification number (PIN) to unlock the smart card and access the private key and the certificate. A smart card can provide a high level of security and convenience for the user, as it implements a two-factor authentication method that combines something the user has (the smart card) and something the user knows (the PIN).
However, a smart card may not be compatible or convenient for mobile devices, such as smartphones or tablets, that do not have a smart card reader or a USB port. To address this issue, a derived credential is a solution that allows the user to use a mobile device as an alternative to a smart card for authentication and encryption. A derived credential is a cryptographic key and a certificate that are derived from the smart card private key and certificate, and that are stored on the mobile device. A derived credential typically works as follows: the user first authenticates with the existing smart card credential; a new key pair is generated on or for the mobile device; a certificate authority issues a certificate for the new key on the strength of the identity proven by the smart card; and the resulting key and certificate are stored in the device’s protected key store, unlocked with a PIN or biometric.
A derived credential can provide a secure and convenient way to use a mobile device as an alternative to a smart card for authentication and encryption, as it implements a two-factor authentication method that combines something the user has (the mobile device) and something the user is (the biometric feature). A derived credential can also comply with the standards and policies for the use of smart cards, such as the Personal Identity Verification (PIV) or the Common Access Card (CAC) programs.
The other options are not the best descriptions of an access control method utilizing cryptographic keys derived from a smart card private key that is embedded within mobile devices, but rather descriptions of other methods or concepts. Temporary security credential is a method that involves issuing a short-lived credential, such as a token or a password, that can be used for a limited time or a specific purpose. Temporary security credential can provide a flexible and dynamic way to grant access to the users or entities, but it does not involve deriving a cryptographic key from a smart card private key. Mobile device credentialing service is a concept that involves providing a service that can issue, manage, or revoke credentials for mobile devices, such as certificates, tokens, or passwords. Mobile device credentialing service can provide a centralized and standardized way to control the access of mobile devices, but it does not involve deriving a cryptographic key from a smart card private key. Digest authentication is a method that involves using a hash function, such as MD5, to generate a digest or a fingerprint of the user’s credentials, such as the username and password, and sending it to the server for verification. Digest authentication can provide a more secure way to authenticate the user than the basic authentication, which sends the credentials in plain text, but it does not involve deriving a cryptographic key from a smart card private key.
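As a rough illustration of the device-side step only, the following Python sketch generates a new key pair and a certificate request for it; the smart-card authentication and CA issuance steps are omitted, the third-party cryptography package is assumed to be available, and the names are illustrative.

```python
# Minimal sketch of the device-side portion of creating a derived credential:
# generate a new key pair on the mobile device and build a certificate request
# for it. The smart-card authentication and CA issuance steps are omitted.
# Assumes the third-party "cryptography" package; names are illustrative.
from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.x509.oid import NameOID

device_key = ec.generate_private_key(ec.SECP256R1())  # new key, distinct from the card's key

csr = (
    x509.CertificateSigningRequestBuilder()
    .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "alice (derived)")]))
    .sign(device_key, hashes.SHA256())
)

# The CSR would be sent to the CA over a session authenticated with the
# original smart-card credential; the returned certificate plus device_key
# form the derived credential stored in the device's protected key store.
print(csr.public_bytes(serialization.Encoding.PEM).decode())
```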
A manufacturing organization wants to establish a Federated Identity Management (FIM) system with its 20 different supplier companies. Which of the following is the BEST solution for the manufacturing organization?
Trusted third-party certification
Lightweight Directory Access Protocol (LDAP)
Security Assertion Markup Language (SAML)
Cross-certification
Security Assertion Markup Language (SAML) is the best solution for the manufacturing organization that wants to establish a Federated Identity Management (FIM) system with its 20 different supplier companies. FIM is a process that allows the sharing and recognition of identities across different organizations that have a trust relationship. FIM enables the users of one organization to access the resources or services of another organization without having to create or maintain multiple accounts or credentials. FIM can provide several benefits, such as single sign-on across organizational boundaries, reduced administrative overhead from managing external accounts, and a more consistent user experience.
SAML is a standard protocol that supports FIM by allowing the exchange of authentication and authorization information between different parties. SAML uses XML-based messages, called assertions, to convey the identity, attributes, and entitlements of a user to a service provider. SAML defines three roles for the parties involved in FIM: the principal (typically the user), the identity provider (IdP), which authenticates the principal and issues assertions, and the service provider (SP), which consumes the assertions and grants or denies access.
In a typical web single sign-on exchange, SAML works as follows: the user requests a resource from the service provider; the service provider redirects the user to the identity provider with an authentication request; the identity provider authenticates the user and returns a signed assertion; and the service provider validates the assertion and grants or denies access to the resource.
SAML is the best solution for the manufacturing organization that wants to establish a FIM system with its 20 different supplier companies, because it can enable the seamless and secure access to the resources or services across the different organizations, without requiring the users to create or maintain multiple accounts or credentials. SAML can also provide interoperability and compatibility between different platforms and technologies, as it is based on a standard and open protocol.
The other options are not the best solutions for the manufacturing organization that wants to establish a FIM system with its 20 different supplier companies, but rather solutions that have other limitations or drawbacks. Trusted third-party certification is a process that involves a third party, such as a certificate authority (CA), that issues and verifies digital certificates that contain the public key and identity information of a user or an entity. Trusted third-party certification can provide authentication and encryption for the communication between different parties, but it does not provide authorization or entitlement information for the access to the resources or services. Lightweight Directory Access Protocol (LDAP) is a protocol that allows the access and management of directory services, such as Active Directory, that store the identity and attribute information of users and entities. LDAP can provide a centralized and standardized way to store and retrieve identity and attribute information, but it does not provide a mechanism to exchange or federate the information across different organizations. Cross-certification is a process that involves two or more CAs that establish a trust relationship and recognize each other’s certificates. Cross-certification can extend the trust and validity of the certificates across different domains or organizations, but it does not provide a mechanism to exchange or federate the identity, attribute, or entitlement information.
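For illustration, the following Python sketch builds a skeletal, non-conformant assertion-like XML document to show the kind of information an identity provider conveys; real SAML assertions also carry signatures, conditions, and validity periods, and the URLs and names shown are illustrative.

```python
# Skeletal (non-conformant) illustration of a SAML assertion: the identity
# provider states who the user is and what attributes they have, and the
# service provider consumes it. Real assertions also include signatures,
# conditions, and validity periods. Names and URLs are illustrative.
import xml.etree.ElementTree as ET

SAML_NS = "urn:oasis:names:tc:SAML:2.0:assertion"
ET.register_namespace("saml", SAML_NS)

assertion = ET.Element(f"{{{SAML_NS}}}Assertion")
ET.SubElement(assertion, f"{{{SAML_NS}}}Issuer").text = "https://idp.manufacturer.example"
subject = ET.SubElement(assertion, f"{{{SAML_NS}}}Subject")
ET.SubElement(subject, f"{{{SAML_NS}}}NameID").text = "alice@supplier01.example"
attrs = ET.SubElement(assertion, f"{{{SAML_NS}}}AttributeStatement")
role = ET.SubElement(attrs, f"{{{SAML_NS}}}Attribute", Name="role")
ET.SubElement(role, f"{{{SAML_NS}}}AttributeValue").text = "purchasing"

print(ET.tostring(assertion, encoding="unicode"))
```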
What is the BEST approach for controlling access to highly sensitive information when employees have the same level of security clearance?
Audit logs
Role-Based Access Control (RBAC)
Two-factor authentication
Application of least privilege
Applying the principle of least privilege is the best approach for controlling access to highly sensitive information when employees have the same level of security clearance. The principle of least privilege is a security concept that states that every user or process should have the minimum amount of access rights and permissions that are necessary to perform their tasks or functions, and nothing more. The principle of least privilege can provide several benefits, such as reducing the attack surface, limiting the damage that a compromised account or process can cause, and simplifying auditing and accountability.
Applying the principle of least privilege is the best approach for controlling access to highly sensitive information when employees have the same level of security clearance, because it can ensure that the employees can only access the information that is relevant and necessary for their tasks or functions, and that they cannot access or manipulate the information that is beyond their scope or authority. For example, if the highly sensitive information is related to a specific project or department, then only the employees who are involved in that project or department should have access to that information, and not the employees who have the same level of security clearance but are not involved in that project or department.
The other options are not the best approaches for controlling access to highly sensitive information when employees have the same level of security clearance, because they serve other purposes. Audit logs are records that capture the events and activities that occur within a system or network, such as access to and usage of sensitive data; they provide a detective layer of security that supports monitoring, analysis, and incident investigation, but they cannot prevent access or disclosure, only provide evidence after the fact. Role-Based Access Control (RBAC) enforces access rights and permissions based on users’ roles or functions within the organization rather than their identities or attributes, which provides a granular and manageable layer of security; however, when employees share both the same clearance and the same role, RBAC alone cannot differentiate their access and must rely on additional criteria. Two-factor authentication verifies identity by requiring two pieces of evidence, such as something the user knows (a password or PIN), something the user has (a token or smart card), or something the user is (a fingerprint or face); it is a strong preventive control against unauthorized users, but it does not limit what an authenticated, equally cleared employee can access once logged in.
Which of the following operates at the Network Layer of the Open System Interconnection (OSI) model?
Packet filtering
Port services filtering
Content filtering
Application access control
Packet filtering operates at the network layer of the Open System Interconnection (OSI) model. The OSI model is a conceptual framework that describes how data is transmitted and processed across different layers of a network. The OSI model consists of seven layers: application, presentation, session, transport, network, data link, and physical. The network layer is the third layer from the bottom of the OSI model, and it is responsible for routing and forwarding data packets between different networks or subnets. The network layer uses logical addresses, such as IP addresses, to identify the source and destination of the data packets, and it uses protocols, such as IP, ICMP, or ARP, to perform the routing and forwarding functions.
Packet filtering is a technique that controls the access to a network or a host by inspecting the incoming and outgoing data packets and applying a set of rules or policies to allow or deny them. Packet filtering can be performed by devices, such as routers, firewalls, or proxies, that operate at the network layer of the OSI model. Packet filtering typically examines the network layer header of the data packets, such as the source and destination IP addresses, the protocol type, or the fragmentation flags, and compares them with the predefined rules or policies. Packet filtering can also examine the transport layer header of the data packets, such as the source and destination port numbers, the TCP flags, or the sequence numbers, and compare them with the rules or policies. Packet filtering can provide a basic level of security and performance for a network or a host, but it also has some limitations, such as the inability to inspect the payload or the content of the data packets, the vulnerability to spoofing or fragmentation attacks, or the complexity and maintenance of the rules or policies.
The other options are not techniques that operate at the network layer of the OSI model, but rather at other layers. Port services filtering is a technique that controls the access to a network or a host by inspecting the transport layer header of the data packets and applying a set of rules or policies to allow or deny them based on the port numbers or the services. Port services filtering operates at the transport layer of the OSI model, which is the fourth layer from the bottom. Content filtering is a technique that controls the access to a network or a host by inspecting the application layer payload or the content of the data packets and applying a set of rules or policies to allow or deny them based on the keywords, URLs, file types, or other criteria. Content filtering operates at the application layer of the OSI model, which is the seventh and the topmost layer. Application access control is a technique that controls the access to a network or a host by inspecting the application layer identity or the credentials of the users or the processes and applying a set of rules or policies to allow or deny them based on the roles, permissions, or other attributes. Application access control operates at the application layer of the OSI model, which is the seventh and the topmost layer.
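To make packet filtering concrete, here is a minimal Python sketch of first-match rule evaluation over source and destination addresses; the addresses and rules are illustrative.

```python
# Minimal sketch of network-layer packet filtering: rules match on source and
# destination IP addresses (and optionally protocol), and the first matching
# rule decides whether the packet is allowed. Addresses are illustrative.
from ipaddress import ip_address, ip_network

RULES = [
    # (action, source network, destination network, protocol or None for any)
    ("deny",  ip_network("0.0.0.0/0"),      ip_network("10.0.5.0/24"), "icmp"),
    ("allow", ip_network("192.168.1.0/24"), ip_network("10.0.5.0/24"), "tcp"),
    ("deny",  ip_network("0.0.0.0/0"),      ip_network("0.0.0.0/0"),   None),  # default deny
]

def filter_packet(src: str, dst: str, protocol: str) -> str:
    for action, src_net, dst_net, proto in RULES:
        if ip_address(src) in src_net and ip_address(dst) in dst_net \
                and (proto is None or proto == protocol):
            return action
    return "deny"

print(filter_packet("192.168.1.10", "10.0.5.20", "tcp"))   # allow
print(filter_packet("192.168.1.10", "10.0.5.20", "icmp"))  # deny
```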
Which of the following is used by the Point-to-Point Protocol (PPP) to determine packet formats?
Layer 2 Tunneling Protocol (L2TP)
Link Control Protocol (LCP)
Challenge Handshake Authentication Protocol (CHAP)
Packet Transfer Protocol (PTP)
Link Control Protocol (LCP) is used by the Point-to-Point Protocol (PPP) to determine packet formats. PPP is a data link layer protocol that provides a standard method for transporting network layer packets over point-to-point links, such as serial lines, modems, or dial-up connections. PPP supports various network layer protocols, such as IP, IPX, or AppleTalk, and it can encapsulate them in a common frame format. PPP also provides features such as authentication, compression, error detection, and multilink aggregation. LCP is a subprotocol of PPP that is responsible for establishing, configuring, maintaining, and terminating the point-to-point connection. LCP negotiates and agrees on various options and parameters for the PPP link, such as the maximum transmission unit (MTU), the authentication method, the compression method, the error detection method, and the packet format. LCP uses a series of messages, such as configure-request, configure-ack, configure-nak, configure-reject, terminate-request, terminate-ack, code-reject, protocol-reject, echo-request, echo-reply, and discard-request, to communicate and exchange information between the PPP peers.
The other options are not used by PPP to determine packet formats, but rather for other purposes. Layer 2 Tunneling Protocol (L2TP) is a tunneling protocol that allows the creation of virtual private networks (VPNs) over public networks, such as the Internet. L2TP encapsulates PPP frames in IP datagrams and sends them across the tunnel between two L2TP endpoints. L2TP does not determine the packet format of PPP, but rather uses it as a payload. Challenge Handshake Authentication Protocol (CHAP) is an authentication protocol that is used by PPP to verify the identity of the remote peer before allowing access to the network. CHAP uses a challenge-response mechanism that involves a random number (nonce) and a hash function to prevent replay attacks. CHAP does not determine the packet format of PPP, but rather uses it as a transport. Packet Transfer Protocol (PTP) is not a valid option, as there is no such protocol with this name. There is a Point-to-Point Protocol over Ethernet (PPPoE), which is a protocol that encapsulates PPP frames in Ethernet frames and allows the use of PPP over Ethernet networks. PPPoE does not determine the packet format of PPP, but rather uses it as a payload.
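As a small illustration, the following Python sketch parses the fixed LCP header defined in RFC 1661 (a one-byte Code, a one-byte Identifier, and a two-byte Length); the example packet bytes are illustrative.

```python
# Minimal sketch: parse the fixed LCP header (RFC 1661) - a 1-byte Code,
# a 1-byte Identifier, and a 2-byte big-endian Length - from a raw LCP packet.
import struct

LCP_CODES = {
    1: "Configure-Request", 2: "Configure-Ack", 3: "Configure-Nak",
    4: "Configure-Reject", 5: "Terminate-Request", 6: "Terminate-Ack",
    7: "Code-Reject", 8: "Protocol-Reject", 9: "Echo-Request",
    10: "Echo-Reply", 11: "Discard-Request",
}

def parse_lcp(packet: bytes):
    code, identifier, length = struct.unpack("!BBH", packet[:4])
    return LCP_CODES.get(code, "Unknown"), identifier, length, packet[4:length]

# Configure-Request, identifier 1, length 8, followed by an MRU option of 1500
example = bytes([0x01, 0x01, 0x00, 0x08, 0x01, 0x04, 0x05, 0xDC])
print(parse_lcp(example))
```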
The restoration priorities of a Disaster Recovery Plan (DRP) are based on which of the following documents?
Service Level Agreement (SLA)
Business Continuity Plan (BCP)
Business Impact Analysis (BIA)
Crisis management plan
According to the CISSP All-in-One Exam Guide, the restoration priorities of a Disaster Recovery Plan (DRP) are based on the Business Impact Analysis (BIA). A DRP is a document that defines the procedures and actions to be taken in the event of a disaster that disrupts normal operations, and a restoration priority is the order in which the critical business processes and functions, along with their supporting resources (data, systems, personnel, and facilities), are restored after a disaster. A BIA assesses the potential impact and consequences of a disaster on those processes, functions, and resources; it identifies and prioritizes the critical ones, establishes recovery objectives and time frames, and maps their dependencies and interdependencies. The restoration priorities of a DRP are therefore based on the BIA, because the BIA provides the information and analysis needed to plan and execute the recovery strategy.

A Service Level Agreement (SLA) may influence restoration priorities, but it is not their basis. An SLA defines the expectations and requirements for the quality and performance of a service or product, such as its availability, reliability, scalability, or security; it may help justify or support restoration priorities, but it does not supply the analysis needed to set them.

A Business Continuity Plan (BCP) may be aligned or integrated with a DRP, but it is not the basis for the restoration priorities. A BCP defines the procedures for continuing essential business operations during and after a disaster; it focuses on continuity rather than recovery, and may also address prevention, mitigation, and response activities that a DRP does not cover.

A crisis management plan may likewise be aligned or integrated with a DRP, but it is not the basis for the restoration priorities. It defines how a crisis or emergency, such as a natural disaster, a cyberattack, or a pandemic, is managed and resolved, focusing on communication, coordination, and escalation rather than on the recovery of systems and data.
Which of the following disaster recovery test plans will be MOST effective while providing minimal risk?
Read-through
Parallel
Full interruption
Simulation
A disaster recovery test plan is a document that describes the methods and procedures for testing the effectiveness and readiness of a disaster recovery plan (DRP), which is a subset of a BCP focused on restoring the organization’s IT systems and data after a disruption or disaster. Testing can be performed at different types and levels, depending on the objectives, scope, and resources of the organization.

A parallel test activates the backup site and runs the critical systems and processes in parallel with the primary site, without disrupting normal operations. It is the most effective option here because it simulates a realistic disruption or disaster, and lets the organization evaluate the performance and functionality of the backup site as well as the communication and coordination between the primary and backup sites, while posing minimal risk: normal operations at the primary site are never interrupted and no switchover is required.

A read-through test only reviews the DRP document for accuracy and completeness without performing any actual actions; it is low risk but also the least effective, because it does not exercise the plan or expose gaps in its execution. A full interruption test shuts down the primary site and switches operations to the backup site; it is the most realistic test but carries the highest risk, as it affects normal operations and may cause data loss, downtime, or customer dissatisfaction. A simulation test walks through the DRP actions against a simulated disaster scenario without activating the backup site or affecting normal operations; it is moderately effective, but it does not test the performance and functionality of the backup site and may not fully reflect a real disruption. References: [Disaster Recovery Test Plan], [CISSP All-in-One Exam Guide, Eighth Edition, Chapter 7: Security Operations]
Discretionary Access Control (DAC) restricts access according to
data classification labeling.
page views within an application.
authorizations granted to the user.
management accreditation.
Discretionary Access Control (DAC) restricts access according to authorizations granted to the user. DAC is a type of access control that allows the owner or creator of a resource to decide who can access it and what level of access they can have. DAC uses access control lists (ACLs) to assign permissions to resources, and owners can grant, change, or revoke those permissions for other users at their discretion.
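A minimal Python sketch of the owner-controlled model, with illustrative object and user names:

```python
# Minimal sketch of Discretionary Access Control: each object has an owner and
# an access control list, and the owner decides who gets which permissions.
acl = {
    "report.docx": {
        "owner": "alice",
        "permissions": {"alice": {"read", "write"}, "bob": {"read"}},
    },
}

def grant(obj: str, grantor: str, user: str, right: str) -> None:
    entry = acl[obj]
    if grantor != entry["owner"]:
        raise PermissionError("Only the owner may change permissions")
    entry["permissions"].setdefault(user, set()).add(right)

def can_access(obj: str, user: str, right: str) -> bool:
    return right in acl[obj]["permissions"].get(user, set())

grant("report.docx", "alice", "carol", "read")    # the owner grants access at her discretion
print(can_access("report.docx", "carol", "read"))  # True
print(can_access("report.docx", "bob", "write"))   # False
```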
Which of the following prevents improper aggregation of privileges in Role Based Access Control (RBAC)?
Hierarchical inheritance
Dynamic separation of duties
The Clark-Wilson security model
The Bell-LaPadula security model
The method that prevents improper aggregation of privileges in role based access control (RBAC) is dynamic separation of duties. RBAC is a type of access control model that assigns permissions and privileges to users or devices based on their roles or functions within an organization, rather than their identities or attributes. RBAC can simplify and streamline the access control management, as it can reduce the complexity and redundancy of the permissions and privileges. However, RBAC can also introduce the risk of improper aggregation of privileges, which is the situation where a user or a device can accumulate more permissions or privileges than necessary or appropriate for their role or function, either by having multiple roles or by changing roles over time. Dynamic separation of duties is a method that prevents improper aggregation of privileges in RBAC, by enforcing rules or constraints that limit or restrict the roles or the permissions that a user or a device can have or use at any given time or situation.
References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 6, page 349; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 6, page 310
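A minimal Python sketch of dynamic separation of duties, in which conflicting roles cannot be activated in the same session; the role names are illustrative.

```python
# Minimal sketch of dynamic separation of duties in RBAC: a user may be
# assigned conflicting roles, but cannot activate both in the same session.
CONFLICTING_ROLE_PAIRS = {frozenset({"payment_initiator", "payment_approver"})}

class Session:
    def __init__(self, user: str, assigned_roles: set):
        self.user = user
        self.assigned_roles = assigned_roles
        self.active_roles = set()

    def activate(self, role: str) -> None:
        if role not in self.assigned_roles:
            raise PermissionError(f"{role} is not assigned to {self.user}")
        for active in self.active_roles:
            if frozenset({active, role}) in CONFLICTING_ROLE_PAIRS:
                raise PermissionError(f"{role} conflicts with active role {active}")
        self.active_roles.add(role)

s = Session("alice", {"payment_initiator", "payment_approver"})
s.activate("payment_initiator")
try:
    s.activate("payment_approver")  # blocked: conflicting roles in one session
except PermissionError as err:
    print(err)
```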
While investigating a malicious event, only six days of audit logs from the last month were available. What policy should be updated to address this problem?
Retention
Reporting
Recovery
Remediation
The policy that should be updated to address the problem of having only six days of audit logs from the last month available while investigating a malicious event is the retention policy. A retention policy defines how long, and under what conditions, an organization keeps records and data such as audit logs, backups, and archives. It should be based on the organization’s legal, regulatory, operational, and business requirements, and should balance the costs and benefits of retaining or disposing of the records or data. Having only six days of audit logs available indicates that the retention policy is inadequate or is not being enforced, since the logs needed for the investigation were not kept. The retention policy should be updated by extending or adjusting the retention period and criteria for the audit logs, and compliance with the policy should be enforced and monitored.

The other options are different policies that do not address this problem. A reporting policy specifies how information and incidents, such as audit results, security breaches, or performance metrics, are communicated or disclosed, and aims to ensure accurate, timely, and complete reporting. A recovery policy specifies the objectives and strategies for restoring normal operations after a disaster or disruption, such as recovery time and recovery point objectives and recovery methods, and is based on the business impact analysis and risk assessment. A remediation policy specifies the procedures for correcting or improving security or performance, such as vulnerability remediation, incident response, or root cause analysis, and is based on security assessment and audit findings. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 7, p. 376; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 7, p. 406.
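As a simple illustration, the following Python sketch enforces a minimum retention period before audit logs become eligible for archival or deletion; the path and retention value are illustrative, since a real policy is driven by legal and regulatory requirements.

```python
# Minimal sketch: enforce an audit-log retention period by only marking logs
# older than the policy's minimum retention as eligible for archival or
# deletion. Paths and the retention value are illustrative.
from datetime import datetime, timedelta
from pathlib import Path

MINIMUM_RETENTION = timedelta(days=365)  # keep at least one year of audit logs

def deletable_logs(log_dir: str):
    cutoff = datetime.now() - MINIMUM_RETENTION
    for path in Path(log_dir).glob("*.log"):
        if datetime.fromtimestamp(path.stat().st_mtime) < cutoff:
            yield path  # old enough to archive or delete under the policy

for old_log in deletable_logs("/var/log/audit"):
    print(f"eligible for archival: {old_log}")
```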
Which one of the following operates at the session, transport, or network layer of the Open System Interconnection (OSI) model?
Data at rest encryption
Configuration Management
Integrity checking software
Cyclic redundancy check (CRC)
Cyclic redundancy check (CRC) is the only option that operates at the session, transport, or network layer of the OSI model. CRC is a technique or algorithm that calculates a checksum or a hash value for a data block or a packet, and appends it to the data or the packet. CRC can be used to detect errors or corruption in the data or the packet during transmission or storage, by comparing the checksum or the hash value at the sender and the receiver. CRC can operate at different layers of the OSI model, such as the session layer (e.g., NetBIOS), the transport layer (e.g., TCP), or the network layer (e.g., IP).
References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 4, page 199; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 4, page 166
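To show the idea, here is a minimal Python sketch that appends a CRC-32 checksum to a payload and verifies it on receipt:

```python
# Minimal sketch: compute a CRC-32 checksum for a payload before transmission
# and verify it on receipt to detect corruption in transit.
import zlib

def with_crc(payload: bytes) -> bytes:
    return payload + zlib.crc32(payload).to_bytes(4, "big")

def verify(frame: bytes) -> bool:
    payload, received = frame[:-4], int.from_bytes(frame[-4:], "big")
    return zlib.crc32(payload) == received

frame = with_crc(b"hello, world")
print(verify(frame))                 # True
corrupted = b"jello" + frame[5:]     # simulate bytes flipped in transit
print(verify(corrupted))             # False
```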
What balance MUST be considered when web application developers determine how informative application error messages should be constructed?
Risk versus benefit
Availability versus auditability
Confidentiality versus integrity
Performance versus user satisfaction
According to the CXL blog, the balance that must be considered when web application developers determine how informative application error messages should be constructed is risk versus benefit. Application error messages are the messages displayed to users when an error occurs in the web application, such as a login failure, a form validation error, or a server error. Informative error messages benefit the user experience and the conversion rate, because they help users understand and resolve the problem and continue or complete their tasks. At the same time, they pose a risk: overly detailed messages may reveal sensitive or confidential information about the web application, such as its system architecture, database structure, or security vulnerabilities, which malicious users or hackers can exploit. Developers therefore have to weigh the benefit of helpful, informative messages against the risk of harm or disclosure, and decide how much and what kind of information to include or exclude, and how to present or format it, to achieve the optimal balance.

The other options do not describe this trade-off. Availability versus auditability concerns whether the application and its data are accessible when needed and whether actions and events can be traced through logging and recording mechanisms; neither property determines how much information an error message should reveal. Confidentiality versus integrity concerns protecting data from unauthorized disclosure and from unauthorized modification or corruption; these are properties of the data rather than the trade-off behind constructing error messages. Performance versus user satisfaction concerns how efficiently the application performs its functions and how satisfied its users are with the experience; again, this does not drive the decision about how informative error messages should be.
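As an illustration of the trade-off, the following Python sketch logs full diagnostic detail on the server while returning only a generic message (with a correlation identifier) to the user; the message text is illustrative.

```python
# Minimal sketch of the risk-versus-benefit trade-off: log full diagnostic
# detail server-side for developers, but return only a generic, actionable
# message to the user so internal details are not exposed.
import logging
import uuid

logging.basicConfig(level=logging.ERROR)
log = logging.getLogger("webapp")

def handle_error(exc: Exception) -> dict:
    incident_id = uuid.uuid4().hex[:8]
    # Full detail (stack trace, exception type) stays in the server log.
    log.error("incident %s", incident_id, exc_info=exc)
    # The user sees a helpful but non-revealing message.
    return {"error": "Something went wrong. Please try again or contact support.",
            "incident_id": incident_id}

try:
    1 / 0
except ZeroDivisionError as e:
    print(handle_error(e))
```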
Which of the following restricts the ability of an individual to carry out all the steps of a particular process?
Job rotation
Separation of duties
Least privilege
Mandatory vacations
According to CISSP For Dummies, the concept that restricts the ability of an individual to carry out all the steps of a particular process is separation of duties. Separation of duties is a security principle that divides the tasks and responsibilities of a process among different individuals or roles, so that no one person or role has complete control or authority over the process. It helps prevent or detect fraud, errors, abuse, and collusion by requiring multiple approvals, checks, or verifications for each step, and it reinforces the principle of least privilege, which states that users and processes should have only the minimum access required to perform their tasks.

The other options support separation of duties but do not themselves restrict an individual from performing every step of a process. Job rotation is a practice that periodically switches individuals between tasks and responsibilities, so that no one performs the same duty for a long period; it helps detect fraud, errors, abuse, or collusion by exposing each person’s activities to different perspectives and evaluations, and it reduces insider-threat risk by limiting familiarity and opportunity. Least privilege limits the access rights of each user and process to the minimum necessary, which limits the impact of unauthorized or malicious actions but does not split a process across multiple people. Mandatory vacations require individuals to take a leave of absence from their duties for a period of time, so that their activities and performance can be reviewed and audited by others in their absence, which helps detect fraud, errors, abuse, or collusion and disrupts the routine and plans of a would-be insider.
An organization has hired a security services firm to conduct a penetration test. Which of the following will the organization provide to the tester?
Limits and scope of the testing.
Physical location of server room and wiring closet.
Logical location of filters and concentrators.
Employee directory and organizational chart.
The organization will provide the limits and scope of the testing to the security services firm that will conduct a penetration test. The limits and scope of the testing define the boundaries, objectives, and rules of engagement for the penetration test, such as the target systems, the testing methods, the testing duration, the testing schedule, the testing team, the testing tools, the testing reporting, and the testing authorization. The limits and scope of the testing are essential for ensuring the legality, the ethics, and the effectiveness of the penetration test.
References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 7, page 427; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 7, page 381
Which of the following statements is TRUE regarding value boundary analysis as a functional software testing technique?
It is useful for testing communications protocols and graphical user interfaces.
It is characterized by the stateless behavior of a process implemented in a function.
Test inputs are obtained from the derived threshold of the given functional specifications.
An entire partition can be covered by considering only one representative value from that partition.
Value boundary analysis is a functional software testing technique that tests the behavior of a software system or component when it receives inputs that are at the boundary or edge of the expected range of values. Value boundary analysis is based on the assumption that errors are more likely to occur at the boundary values than at the normal values. Test inputs are obtained from the derived threshold of the given functional specifications, such as the minimum, maximum, or just above or below the boundary values. Value boundary analysis can help identify errors or defects in the software system or component that may cause unexpected or incorrect outputs, crashes, or failures. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 8: Software Development Security, p. 497; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Domain 8: Software Development Security, p. 1015.
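A minimal Python sketch of deriving boundary test inputs from a specification (an age field specified as 18 to 65 inclusive, an illustrative assumption):

```python
# Minimal sketch of value boundary analysis: derive test inputs from the
# thresholds in a functional specification (here, an age field specified as
# 18-65 inclusive) rather than from arbitrary values inside the range.
def accepts_age(age: int) -> bool:
    return 18 <= age <= 65  # function under test, per the specification

LOWER, UPPER = 18, 65
boundary_inputs = [LOWER - 1, LOWER, LOWER + 1, UPPER - 1, UPPER, UPPER + 1]
expected =        [False,     True,  True,      True,      True,  False]

for value, want in zip(boundary_inputs, expected):
    assert accepts_age(value) == want, f"boundary failure at {value}"
print("all boundary cases pass")
```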
Discretionary Access Control (DAC) is based on which of the following?
Information source and destination
Identification of subjects and objects
Security labels and privileges
Standards and guidelines
Discretionary Access Control (DAC) is based on the identification of subjects and objects. DAC is a type of access control model that grants or denies access to the objects based on the identity or attributes of the subjects, as well as the permissions or rules defined by the owners of the objects. Subjects are the entities that request or initiate the access, such as users, processes, or programs. Objects are the entities that are accessed, such as files, folders, databases, or devices. In DAC, the owners of the objects have the discretion or authority to determine who can access their objects and what actions they can perform on them. DAC can provide flexibility and convenience for the subjects and the owners, but it can also introduce security risks, such as unauthorized access, privilege escalation, or information leakage. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5: Identity and Access Management, p. 254; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Domain 5: Identity and Access Management, p. 633.
How does a Host Based Intrusion Detection System (HIDS) identify a potential attack?
Examines log messages or other indications on the system.
Monitors alarms sent to the system administrator
Matches traffic patterns to virus signature files
Examines the Access Control List (ACL)
According to the CISSP All-in-One Exam Guide, a Host Based Intrusion Detection System (HIDS) identifies a potential attack by examining log messages or other indications on the system. This means that a HIDS is a type of intrusion detection system that monitors the activities and events that occur on a specific host, such as a server or a workstation, and analyzes them for signs of malicious or unauthorized behavior. A HIDS can examine various sources of data on the host, such as system logs, audit trails, registry entries, file system changes, network connections, and so on. A HIDS does not identify a potential attack by monitoring alarms sent to the system administrator, as this is a function of the intrusion detection system management console, which receives and displays the alerts generated by the HIDS. A HIDS does not identify a potential attack by matching traffic patterns to virus signature files, as this is a function of an antivirus software, which scans the incoming and outgoing data for known malware signatures. A HIDS does not identify a potential attack by examining the Access Control List (ACL), as this is a mechanism that defines the permissions and restrictions for accessing a resource, not a source of intrusion detection data.
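As a rough illustration of the log-examination approach, the sketch below scans a hypothetical authentication log for repeated failed logins; the log format and alert threshold are assumptions for illustration only, not the rules of any particular HIDS product.

```python
import re

# Minimal HIDS-style sketch: scan a host log for indications of attack.
# The log format and threshold below are illustrative assumptions.
FAILED_LOGIN = re.compile(r"Failed password for (?P<user>\S+) from (?P<ip>\S+)")

def count_failed_logins(log_lines):
    counts = {}
    for line in log_lines:
        match = FAILED_LOGIN.search(line)
        if match:
            ip = match.group("ip")
            counts[ip] = counts.get(ip, 0) + 1
    return counts

if __name__ == "__main__":
    sample_log = [
        "Jan 10 10:00:01 host sshd[101]: Failed password for root from 203.0.113.5",
        "Jan 10 10:00:02 host sshd[102]: Failed password for root from 203.0.113.5",
        "Jan 10 10:00:03 host sshd[103]: Accepted password for alice from 198.51.100.7",
        "Jan 10 10:00:04 host sshd[104]: Failed password for admin from 203.0.113.5",
    ]
    for ip, failures in count_failed_logins(sample_log).items():
        if failures >= 3:  # illustrative threshold
            print(f"ALERT: {failures} failed logins from {ip}")
```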
In the Open System Interconnection (OSI) model, which layer is responsible for the transmission of binary data over a communications network?
Application Layer
Physical Layer
Data-Link Layer
Network Layer
In the Open System Interconnection (OSI) model, the layer responsible for the transmission of binary data over a communications network is the Physical Layer. The OSI model is a conceptual reference model that describes how the components of a communications network communicate with each other using a layered, modular approach. It consists of seven layers, each with a specific function and each interfacing with the adjacent layers. The Physical Layer is the lowest (first) layer of the OSI model. It converts data into electrical signals, optical signals, or radio waves and sends or receives them over the physical medium, such as copper cable, optical fiber, or air. The Physical Layer also specifies the physical characteristics of the medium, such as voltage, frequency, bandwidth, and modulation. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 4, page 101; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 4, page 158
Which of the following analyses is performed to protect information assets?
Business impact analysis
Feasibility analysis
Cost benefit analysis
Data analysis
The analysis that is performed to protect information assets is the cost benefit analysis, which is a method of comparing the costs and benefits of different security solutions or alternatives. The cost benefit analysis helps to justify the investment in security controls and measures by evaluating the trade-offs between the security costs and the security benefits. The security costs include the direct and indirect expenses of acquiring, implementing, operating, and maintaining the security controls and measures. The security benefits include the reduction of risks, losses, and liabilities, as well as the enhancement of productivity, performance, and reputation. The other options are not the analysis that is performed to protect information assets, but rather different types of analyses. A business impact analysis is a method of identifying and quantifying the potential impacts of disruptive events on the organization’s critical business functions and processes. A feasibility analysis is a method of assessing the technical, operational, and economic viability of a proposed project or solution. A data analysis is a method of processing, transforming, and modeling data to extract useful information and insights. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1, p. 28; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 1, p. 21; CISSP practice exam questions and answers, Question 10.
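A simple worked example of the comparison, using annualized loss expectancy (ALE) with entirely hypothetical figures, might look like the following sketch.

```python
# Hypothetical cost-benefit analysis of a security control using ALE.
# All figures are illustrative assumptions.

asset_value = 500_000          # value of the information asset
exposure_factor = 0.40         # fraction of value lost per incident
aro_before = 0.5               # annualized rate of occurrence without the control
aro_after = 0.1                # annualized rate of occurrence with the control
annual_control_cost = 30_000   # cost to acquire, operate, and maintain the control

sle = asset_value * exposure_factor   # single loss expectancy
ale_before = sle * aro_before         # ALE without the control
ale_after = sle * aro_after           # ALE with the control
annual_benefit = ale_before - ale_after - annual_control_cost

print(f"SLE: ${sle:,.0f}")
print(f"ALE before: ${ale_before:,.0f}, ALE after: ${ale_after:,.0f}")
print(f"Net annual benefit of the control: ${annual_benefit:,.0f}")
# A positive net benefit (here $50,000) suggests the control is cost-justified.
```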
What is the process called when impact values are assigned to the security objectives for information types?
Qualitative analysis
Quantitative analysis
Remediation
System security categorization
The process called when impact values are assigned to the security objectives for information types is system security categorization. System security categorization is a process of determining the potential impact on an organization if a system or information is compromised, based on the security objectives of confidentiality, integrity, and availability. System security categorization helps to identify the security requirements and controls for the system or information, as well as to prioritize the resources and efforts for protecting them. System security categorization can be based on the standards or guidelines provided by the organization or the relevant authorities, such as the Federal Information Processing Standards (FIPS) Publication 199 or the National Institute of Standards and Technology (NIST) Special Publication 800-60. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1: Security and Risk Management, p. 29; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Domain 1: Security and Risk Management, p. 31.
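As an illustration of assigning impact values and rolling them up, the sketch below applies a FIPS 199-style high water mark to hypothetical information types; the information types and impact levels are assumptions for illustration only.

```python
# Minimal sketch of system security categorization (FIPS 199 style).
# Impact values per security objective are hypothetical assumptions.
LEVELS = {"low": 1, "moderate": 2, "high": 3}

# Impact values assigned to the security objectives for each information type.
information_types = {
    "payroll records":    {"confidentiality": "high", "integrity": "moderate", "availability": "low"},
    "public web content": {"confidentiality": "low",  "integrity": "moderate", "availability": "moderate"},
}

def high_water_mark(values):
    """Return the highest impact level among the given values."""
    return max(values, key=lambda v: LEVELS[v])

def categorize(info_types):
    overall = {}
    for objective in ("confidentiality", "integrity", "availability"):
        overall[objective] = high_water_mark([t[objective] for t in info_types.values()])
    overall["system"] = high_water_mark(overall.values())
    return overall

print(categorize(information_types))
# e.g. {'confidentiality': 'high', 'integrity': 'moderate', 'availability': 'moderate', 'system': 'high'}
```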
Which of the following is a recommended alternative to an integrated email encryption system?
Sign emails containing sensitive data
Send sensitive data in separate emails
Encrypt sensitive data separately in attachments
Store sensitive information to be sent in encrypted drives
The recommended alternative to an integrated email encryption system is to encrypt sensitive data separately in attachments. An integrated email encryption system applies cryptographic techniques such as public key encryption, symmetric encryption, and digital signatures to protect email messages end to end, providing confidentiality, integrity, and authenticity and preventing unauthorized access, disclosure, modification, or spoofing by anyone who intercepts the messages in transit. However, integrated email encryption can present compatibility, usability, and cost challenges. The recommended alternative is therefore to encrypt only the sensitive data attached to the message, such as documents, files, or images, using a password, passphrase, or key. Encrypting attachments separately provides a comparable level of protection for the sensitive content while avoiding many of the compatibility, usability, and cost issues of an integrated system. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 4, page 116; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 4, page 173
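One possible way to encrypt an attachment separately is sketched below using the Python cryptography package's Fernet construction; the filename is hypothetical, and distributing the key to the recipient is assumed to happen out of band (for example, by phone or a separate channel).

```python
# Minimal sketch: encrypt a sensitive attachment separately before emailing it.
# Requires the third-party "cryptography" package (pip install cryptography).
# The filename is hypothetical; key distribution is out of scope here.
from cryptography.fernet import Fernet

def encrypt_attachment(path, key):
    with open(path, "rb") as f:
        plaintext = f.read()
    token = Fernet(key).encrypt(plaintext)   # authenticated symmetric encryption
    out_path = path + ".enc"
    with open(out_path, "wb") as f:
        f.write(token)
    return out_path

if __name__ == "__main__":
    # Create a throwaway file so the sketch runs end to end.
    with open("quarterly_report.xlsx", "wb") as f:
        f.write(b"sensitive figures")
    key = Fernet.generate_key()              # share with the recipient out of band
    encrypted = encrypt_attachment("quarterly_report.xlsx", key)
    print(f"Attach {encrypted} to the email; send the key separately.")
```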
Data remanence refers to which of the following?
The remaining photons left in a fiber optic cable after a secure transmission.
The retention period required by law or regulation.
The magnetic flux created when removing the network connection from a server or personal computer.
The residual information left on magnetic storage media after a deletion or erasure.
Data remanence refers to the residual information left on magnetic storage media after a deletion or erasure. Data remanence is a security risk, as it may allow unauthorized or malicious parties to recover the deleted or erased data, which may contain sensitive or confidential information. Data remanence can be caused by the physical properties of the magnetic storage media, such as hard disks, floppy disks, or tapes, which may retain some traces of the data even after it is overwritten or formatted. Data remanence can also be caused by the logical properties of the file systems or operating systems, which may not delete or erase the data completely, but only mark the space as available or remove the pointers to the data. Data remanence can be prevented or reduced by using secure deletion or erasure methods, such as cryptographic wiping, degaussing, or physical destruction. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 7: Security Operations, p. 443; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Domain 7: Security Operations, p. 855.
When designing a vulnerability test, which one of the following is likely to give the BEST indication of what components currently operate on the network?
Topology diagrams
Mapping tools
Asset register
Ping testing
According to the CISSP All-in-One Exam Guide, when designing a vulnerability test, mapping tools are likely to give the best indication of what components currently operate on the network. Mapping tools are software applications that scan and discover the network topology, devices, services, and protocols. They can provide a graphical representation of the network structure and components, as well as detailed information about each node and connection. Mapping tools can help identify potential vulnerabilities and weaknesses in the network configuration and architecture, as well as the exposure and attack surface of the network. Topology diagrams are not likely to give the best indication of what components currently operate on the network, as they may be outdated, inaccurate, or incomplete. Topology diagrams are static and abstract representations of the network layout and design, but they may not reflect the actual and dynamic state of the network. Asset register is not likely to give the best indication of what components currently operate on the network, as it may be outdated, inaccurate, or incomplete. Asset register is a document that lists and categorizes the assets owned by an organization, such as hardware, software, data, and personnel. However, it may not capture the current status, configuration, and interconnection of the assets, as well as the changes and updates that occur over time. Ping testing is not likely to give the best indication of what components currently operate on the network, as it is a simple and limited technique that only checks the availability and response time of a host. Ping testing is a network utility that sends an echo request packet to a target host and waits for an echo reply packet. It can measure the connectivity and latency of the host, but it cannot provide detailed information about the host's characteristics, services, and vulnerabilities.
Application of which of the following Institute of Electrical and Electronics Engineers (IEEE) standards will prevent an unauthorized wireless device from being attached to a network?
IEEE 802.1F
IEEE 802.1H
IEEE 802.1Q
IEEE 802.1X
IEEE 802.1X is a standard for port-based Network Access Control (PNAC). It provides an authentication mechanism to devices wishing to attach to a LAN or WLAN, preventing unauthorized devices from gaining network access.
References: CISSP For Dummies, Seventh Edition, Chapter 4, page 97; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 5, page 247
What is the PRIMARY difference between security policies and security procedures?
Policies are used to enforce violations, and procedures create penalties
Policies point to guidelines, and procedures are more contractual in nature
Policies are included in awareness training, and procedures give guidance
Policies are generic in nature, and procedures contain operational details
The primary difference between security policies and security procedures is that policies are generic in nature, and procedures contain operational details. Security policies are the high-level statements or rules that define the goals, objectives, and requirements of security for an organization. Security procedures are the low-level steps or actions that specify how to implement, enforce, and comply with the security policies.
References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1, page 17; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 1, page 13
Which of the following is the PRIMARY concern when using an Internet browser to access a cloud-based service?
Insecure implementation of Application Programming Interfaces (API)
Improper use and storage of management keys
Misconfiguration of infrastructure allowing for unauthorized access
Vulnerabilities within protocols that can expose confidential data
The primary concern when using an Internet browser to access a cloud-based service is the vulnerabilities within protocols that can expose confidential data. Protocols are the rules and formats that govern the communication and exchange of data between systems or applications. Protocols can have vulnerabilities or flaws that can be exploited by attackers to intercept, modify, or steal the data. For example, some protocols may not provide adequate encryption, authentication, or integrity for the data, or they may have weak or outdated algorithms, keys, or certificates. When using an Internet browser to access a cloud-based service, the data may be transmitted over various protocols, such as HTTP, HTTPS, SSL, TLS, etc. If any of these protocols are vulnerable, the data may be compromised, especially if the data is sensitive or confidential. Therefore, it is important to use secure and updated protocols, as well as to monitor and patch any vulnerabilities. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 6: Communication and Network Security, p. 338; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Domain 4: Communication and Network Security, p. 456.
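A quick way to see which protocol version and cipher a cloud endpoint negotiates is sketched below with Python's standard ssl module; the hostname is a placeholder.

```python
# Minimal sketch: check which TLS version and cipher a browser-accessible
# cloud endpoint negotiates. The hostname is an example placeholder.
import socket
import ssl

def inspect_tls(host, port=443):
    context = ssl.create_default_context()   # enforces certificate validation
    with socket.create_connection((host, port), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            print("Negotiated protocol:", tls.version())     # e.g. 'TLSv1.3'
            print("Cipher suite:", tls.cipher()[0])
            cert = tls.getpeercert()
            print("Certificate expires:", cert.get("notAfter"))

if __name__ == "__main__":
    inspect_tls("www.example.com")
```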
Which of the following is the PRIMARY reason to perform regular vulnerability scanning of an organization network?
Provide vulnerability reports to management.
Validate vulnerability remediation activities.
Prevent attackers from discovering vulnerabilities.
Remediate known vulnerabilities.
According to the CISSP Official (ISC)2 Practice Tests, the primary reason to perform regular vulnerability scanning of an organization's network is to remediate known vulnerabilities. Vulnerability scanning is the process of identifying and measuring the weaknesses and exposures in a system, network, or application that may be exploited by threats and cause harm to the organization or its assets. It can be performed with various tools and methods, such as automated scanners, manual tests, or penetration tests. The primary reason to scan regularly is to remediate known vulnerabilities, that is, to fix, mitigate, or eliminate the vulnerabilities that scanning discovers or reports. Remediation improves the security posture and effectiveness of the system, network, or application and reduces the overall risk to an acceptable level. Providing vulnerability reports to management is not the primary reason, although it is a useful outcome: vulnerability reports document the scope, objectives, methods, results, and recommendations of the scanning and support decision making and planning for remediation. Validating vulnerability remediation activities is not the primary reason either, although it is a step in the process: validation verifies that remediation actions such as patching, updating, configuring, or replacing components were effective and complete, and that no new or residual vulnerabilities were introduced or left behind. Preventing attackers from discovering vulnerabilities is also not the primary reason, although it can be a side benefit: hiding or obscuring vulnerabilities through encryption, obfuscation, or deception reduces the attacker's opportunity to exploit them, but it does not address their root cause or impact.
After acquiring the latest security updates, what must be done before deploying to production systems?
Use tools to detect missing system patches
Install the patches on a test system
Subscribe to notifications for vulnerabilities
Assess the severity of the situation
After acquiring the latest security updates, the best practice is to install the patches on a test system before deploying them to the production systems. This is to ensure that the patches are compatible with the system configuration and do not cause any adverse effects or conflicts with the existing applications or services. The test system should be isolated from the production environment and should have the same or similar specifications and settings as the production system.
References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 6, page 336; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 6, page 297
Which of the following is generally indicative of a replay attack when dealing with biometric authentication?
False Acceptance Rate (FAR) is greater than 1 in 100,000
False Rejection Rate (FRR) is greater than 5 in 100
Inadequately specified templates
Exact match
The indicator that is generally indicative of a replay attack when dealing with biometric authentication is an exact match. A replay attack attempts to bypass an authentication or verification mechanism by capturing legitimate authentication data, such as a credential, token, or biometric sample, and resending it to the system as if it came from the legitimate user or device, resulting in unauthorized access. Biometric authentication identifies or verifies a user based on physiological or behavioral characteristics such as a fingerprint, iris pattern, or voice, and can provide strong protection because those characteristics are difficult to impersonate, duplicate, or share. It is nevertheless vulnerable to replay: an attacker can record a biometric sample, for example with a camera, scanner, or microphone, and present the recording to the system. An exact match is the telltale sign. Genuine live biometric captures differ slightly from one presentation to the next, so a sample that is identical, bit for bit, to a previously captured or enrolled sample suggests that the sample is not a live capture but a replayed recording, and that the authentication mechanism is being attacked.
References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5, page 149; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 5, page 214
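A minimal sketch of the exact-match heuristic follows; the sample data is hypothetical, and a real biometric system would combine this check with liveness detection and proper template matching.

```python
# Minimal sketch: flag exact repeats of biometric samples as possible replays.
# Genuine live captures vary slightly between presentations, so a byte-for-byte
# duplicate of an earlier sample is suspicious. Data is hypothetical.
import hashlib

class ReplayDetector:
    def __init__(self):
        self.seen_hashes = set()

    def check(self, sample: bytes) -> str:
        digest = hashlib.sha256(sample).hexdigest()
        if digest in self.seen_hashes:
            return "possible replay: sample is an exact match of a previous capture"
        self.seen_hashes.add(digest)
        return "sample accepted for matching"

if __name__ == "__main__":
    detector = ReplayDetector()
    first_capture = b"\x01\x02\x03fingerprint-minutiae"
    print(detector.check(first_capture))   # accepted
    print(detector.check(first_capture))   # flagged as an exact-match replay
```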
What is the PRIMARY goal for using Domain Name System Security Extensions (DNSSEC) to sign records?
Integrity
Confidentiality
Accountability
Availability
The primary goal for using Domain Name System Security Extensions (DNSSEC) to sign records is integrity. DNSSEC is a set of extensions to the Domain Name System (DNS) protocol, which resolves domain names to IP addresses and vice versa. DNSSEC adds security to DNS by using digital signatures and cryptographic keys to sign and verify DNS records, such as A, AAAA, and MX records. The primary goal of signing records is integrity: DNSSEC ensures that DNS data is authentic and accurate and has not been modified, altered, or corrupted by an attacker who intercepts or manipulates DNS queries and responses on the network. DNSSEC achieves this with public key (asymmetric) cryptography, generating and validating digital signatures that accompany the DNS records and that prove the origin and validity of the data. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 4, page 113; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 4, page 170
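The sign-and-verify idea behind record integrity can be illustrated with the sketch below, which uses an Ed25519 key pair from the Python cryptography package; this is not the actual DNSSEC RRSIG/DNSKEY wire format, and the record content is illustrative.

```python
# Conceptual sketch of the integrity idea behind DNSSEC: a zone operator signs a
# record with a private key, and resolvers verify it with the published public key.
# NOT the real RRSIG/DNSKEY format; requires the "cryptography" package.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

record = b"www.example.com. 300 IN A 192.0.2.10"

zone_key = Ed25519PrivateKey.generate()   # zone signing key (kept private)
signature = zone_key.sign(record)         # analogous to an RRSIG record
public_key = zone_key.public_key()        # analogous to a published DNSKEY

# A validating resolver checks the signature; any tampering breaks verification.
try:
    public_key.verify(signature, record)
    print("record verified: integrity intact")
    public_key.verify(signature, b"www.example.com. 300 IN A 203.0.113.66")
except InvalidSignature:
    print("record rejected: data was modified or signature is invalid")
```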
Network-based logging has which advantage over host-based logging when reviewing malicious activity about a victim machine?
Addresses and protocols of network-based logs are analyzed.
Host-based system logging has files stored in multiple locations.
Properly handled network-based logs may be more reliable and valid.
Network-based systems cannot capture users logging into the console.
According to the CISSP CBK Official Study Guide, the advantage of network-based logging over host-based logging when reviewing malicious activity on a victim machine is that properly handled network-based logs may be more reliable and valid. Logging is the process of recording the events and activities that occur on a system or network, such as access, communication, and operation. Logging can be classified as host-based, collected on the individual host itself, or network-based, collected from devices that observe traffic on the network.
Properly handled network-based logs may be more reliable and valid because they can provide a more accurate, complete, and consistent record of the malicious activity, as well as more independent, objective, and verifiable evidence of it. Because they are collected and stored off the victim machine, they are less likely to have been altered or deleted by an attacker who has compromised that host.
The fact that addresses and protocols of network-based logs are analyzed is not the advantage itself, although it is a benefit of network-based logging. Examining the source and destination addresses, protocols, and ports of captured traffic, using tools and techniques such as packet capture, packet analysis, and packet filtering, can help identify the malicious activity and measure its impact, but it is not the reason network-based logging is preferable to host-based logging for reviewing activity on a victim machine. Likewise, the statement that host-based system logging has files stored in multiple locations describes a characteristic of host-based logging rather than an advantage of network-based logging, and the statement that network-based systems cannot capture users logging into the console describes a limitation of network-based logging rather than an advantage.
How does Encapsulating Security Payload (ESP) in transport mode affect the Internet Protocol (IP)?
Encrypts and optionally authenticates the IP header, but not the IP payload
Encrypts and optionally authenticates the IP payload, but not the IP header
Authenticates the IP payload and selected portions of the IP header
Encrypts and optionally authenticates the complete IP packet
Encapsulating Security Payload (ESP) in transport mode affects the Internet Protocol (IP) by encrypting and optionally authenticating the IP payload, but not the IP header. ESP is a protocol that provides confidentiality, integrity, and authentication for data transmitted over a network. ESP can operate in two modes: transport mode and tunnel mode. In transport mode, ESP only protects the data or payload of the IP packet, while leaving the IP header intact and visible. This mode is suitable for end-to-end communication between two hosts. In tunnel mode, ESP protects the entire IP packet, including the header and the payload, by encapsulating it within another IP packet. This mode is suitable for gateway-to-gateway or host-to-gateway communication. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 6: Communication and Network Security, p. 345; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Domain 4: Communication and Network Security, p. 464.
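The difference can be illustrated conceptually with the sketch below, in which only the payload is encrypted while the header stays readable; Fernet stands in for the negotiated ESP cipher, and the header fields are illustrative, so this is not real ESP packet processing.

```python
# Conceptual sketch of ESP transport mode: the IP payload is encrypted (and
# authenticated), while the original IP header remains in the clear for routing.
# Fernet stands in for the cipher negotiated by the IPsec security association.
from cryptography.fernet import Fernet

key = Fernet.generate_key()                 # stand-in for the IPsec SA keys
esp = Fernet(key)

ip_header = b"SRC=198.51.100.1 DST=203.0.113.9 PROTO=TCP"   # illustrative header fields
ip_payload = b"GET /secret HTTP/1.1"                        # the data to protect

# Transport mode: header stays readable, payload is encrypted and authenticated.
protected_packet = ip_header + b" | ESP:" + esp.encrypt(ip_payload)
print(protected_packet[:60], b"...")

# The receiving host decrypts the payload; on-path devices still see the header.
recovered = esp.decrypt(protected_packet.split(b" | ESP:", 1)[1])
assert recovered == ip_payload
print("payload recovered intact:", recovered)
```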
Data leakage of sensitive information is MOST often concealed by which of the following?
Secure Sockets Layer (SSL)
Secure Hash Algorithm (SHA)
Wired Equivalent Privacy (WEP)
Secure Post Office Protocol (POP)
Data leakage of sensitive information is most often concealed by Secure Sockets Layer (SSL), a protocol that provides encryption and authentication for data transmitted over a network. SSL prevents eavesdropping, tampering, and spoofing by encrypting the data with a symmetric key, verifying the identity of the parties with digital certificates, and protecting integrity with a message authentication code (MAC). SSL is widely used to secure web traffic, email, instant messaging, and other applications that require confidentiality and integrity. Because the traffic is encrypted, the same protection also conceals exfiltrated data from network monitoring and data loss prevention tools, which cannot inspect the content of an SSL-protected session without decrypting it. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 6: Communication and Network Security, p. 339; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Domain 4: Communication and Network Security, p. 457.
Which Radio Frequency Interference (RFI) phenomenon associated with bundled cable runs can create information leakage?
Transference
Covert channel
Bleeding
Cross-talk
Cross-talk is a type of Radio Frequency Interference (RFI) phenomenon that occurs when signals from one cable or circuit interfere with signals from another cable or circuit. Cross-talk can create information leakage by allowing an attacker to eavesdrop on or modify the transmitted data. Cross-talk can be caused by electromagnetic induction, capacitive coupling, or common impedance coupling. Cross-talk can be reduced by using shielded cables, twisted pairs, or optical fibers.
In which order, from MOST to LEAST impacted, does user awareness training reduce the occurrence of the events below?
The correct order is:
User awareness training is a process of educating and informing users about the security policies, procedures, and best practices of an organization. User awareness training can help reduce the occurrence of security events by increasing the users' knowledge, skills, and attitude towards security. User awareness training can have different impacts on different types of security events, depending on the nature and source of the events. The order of impact from most to least is as follows:
References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 7: Security Operations, p. 440; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Domain 7: Security Operations, p. 852.
What should happen when an emergency change to a system must be performed?
The change must be given priority at the next meeting of the change control board.
Testing and approvals must be performed quickly.
The change must be performed immediately and then submitted to the change board.
The change is performed and a notation is made in the system log.
In cases of emergency changes, the priority is to address the issue at hand immediately to prevent any potential impacts on the system or organization. After implementing the change, it should then be documented and submitted to the change control board for review and approval post-implementation. References: CISSP Official (ISC)2 Practice Tests, Chapter 7, page 187; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 7, page 346
Secure Sockets Layer (SSL) encryption protects
data at rest.
the source IP address.
data transmitted.
data availability.
SSL encryption is used to secure communications over computer networks by encrypting data transmitted between two systems—usually your computer and a server.
Which one of the following activities would present a significant security risk to organizations when employing a Virtual Private Network (VPN) solution?
VPN bandwidth
Simultaneous connection to other networks
Users with Internet Protocol (IP) addressing conflicts
Remote users with administrative rights
According to the CISSP For Dummies, the activity that would present a significant security risk to organizations when employing a VPN solution is simultaneous connection to other networks. A VPN creates a secure, encrypted tunnel over a public or untrusted network, such as the Internet, to connect remote users or sites to the organization's private network. It provides security and privacy for the data transmitted through the tunnel and gives remote users access to the resources and services of the private network. However, a VPN also introduces security risks and challenges, such as configuration errors, authentication issues, malware infections, and data leakage. One significant risk is simultaneous connection to other networks (often called split tunneling), which occurs when a VPN user is connected to the organization's private network and to another network at the same time, such as a home network, a public Wi-Fi network, or a malicious network. This creates a potential vulnerability or backdoor that attackers can use to reach the organization's private network by exploiting the weaker security or lower trust of the other network, so the organization should implement and enforce policies and controls that prevent or restrict simultaneous connections while the VPN is in use. VPN bandwidth is not an activity that would present a significant security risk, although it affects the performance and availability of the VPN solution: it is the amount of data that can be transmitted or received over the tunnel per unit of time, which depends on the speed and capacity of the network connection, the encryption and compression methods, the traffic load, and congestion, and it may limit the quality and efficiency of communications, but it does not directly expose the private network. Users with IP addressing conflicts are not an activity that would present a significant security risk either, although conflicts can cause errors and disruptions in the VPN solution: an IP addressing conflict occurs when two or more devices or hosts on the same network are assigned the same IP address, which can break connectivity but does not by itself give an attacker access to the private network.
A network scan found 50% of the systems with one or more critical vulnerabilities. Which of the following represents the BEST action?
Assess vulnerability risk and program effectiveness.
Assess vulnerability risk and business impact.
Disconnect all systems with critical vulnerabilities.
Disconnect systems with the most number of vulnerabilities.
The best action after finding 50% of the systems with one or more critical vulnerabilities is to assess the vulnerability risk and business impact. This means to evaluate the likelihood and severity of the vulnerabilities being exploited, as well as the potential consequences and costs for the business operations and objectives. This assessment can help prioritize the remediation efforts, allocate the resources, and justify the investments.
References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 6, page 343; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 6, page 304
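A minimal sketch of such a prioritization, combining a CVSS-style severity score with a hypothetical business-impact weight, is shown below; the systems and values are illustrative assumptions.

```python
# Minimal sketch: combine vulnerability severity (e.g., a CVSS base score) with a
# business-impact weight to prioritize remediation. All values are hypothetical.
systems = [
    {"name": "payment-api",   "cvss": 9.8, "business_impact": 5},  # impact: 1 (low) .. 5 (critical)
    {"name": "intranet-wiki", "cvss": 9.8, "business_impact": 2},
    {"name": "hr-portal",     "cvss": 6.5, "business_impact": 4},
]

def risk_score(entry):
    # Simple multiplicative model: technical severity times business impact.
    return entry["cvss"] * entry["business_impact"]

for entry in sorted(systems, key=risk_score, reverse=True):
    print(f"{entry['name']:15} risk score = {risk_score(entry):5.1f}")
# Two systems with identical CVSS scores can still rank very differently once
# business impact is considered, which is why assessment precedes disconnection.
```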
What is one way to mitigate the risk of security flaws in custom software?
Include security language in the Earned Value Management (EVM) contract
Include security assurance clauses in the Service Level Agreement (SLA)
Purchase only Commercial Off-The-Shelf (COTS) products
Purchase only software with no open source Application Programming Interfaces (APIs)
One way to mitigate the risk of security flaws in custom software is to include security assurance clauses in the Service Level Agreement (SLA) between the customer and the software developer. The SLA is a contract that defines the expectations and obligations of both parties, such as the scope, quality, performance, and security of the software. By including security assurance clauses, the customer can specify the security requirements and standards that the software must meet, and the developer can agree to provide evidence of compliance and remediation of any defects. The other options are not effective ways to mitigate the risk of security flaws in custom software. Including security language in the Earned Value Management (EVM) contract is not relevant, as EVM is a project management technique that measures the progress and performance of a project, not the security of the software. Purchasing only Commercial Off-The-Shelf (COTS) products or software with no open source Application Programming Interfaces (APIs) does not guarantee that the software is free of security flaws, as COTS and closed source software can also have vulnerabilities and may not meet the customer’s specific needs and expectations. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 21, p. 1119; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 8, p. 507.
In which of the following programs is it MOST important to include the collection of security process data?
Quarterly access reviews
Security continuous monitoring
Business continuity testing
Annual security training
Security continuous monitoring is the program in which it is most important to include the collection of security process data. Security process data is the data that reflects the performance, effectiveness, and compliance of the security processes, such as the security policies, standards, procedures, and guidelines. Security process data can include metrics, indicators, logs, reports, and assessments. Security process data can provide several benefits, such as:
Security continuous monitoring is the program in which it is most important to include the collection of security process data, because it is the program that involves maintaining the ongoing awareness of the security status, events, and activities of the system. Security continuous monitoring can enable the system to detect and respond to any security issues or incidents in a timely and effective manner, and to adjust and improve the security controls and processes accordingly. Security continuous monitoring can also help the system to comply with the security requirements and standards from the internal or external authorities or frameworks.
The other options are not the programs in which it is most important to include the collection of security process data, but rather programs that have other objectives or scopes. Quarterly access reviews are programs that involve reviewing and verifying the user accounts and access rights on a quarterly basis. Quarterly access reviews can ensure that the user accounts and access rights are valid, authorized, and up to date, and that any inactive, expired, or unauthorized accounts or rights are removed or revoked. However, quarterly access reviews are not the programs in which it is most important to include the collection of security process data, because they are not focused on the security status, events, and activities of the system, but rather on the user accounts and access rights. Business continuity testing is a program that involves testing and validating the business continuity plan (BCP) and the disaster recovery plan (DRP) of the system. Business continuity testing can ensure that the system can continue or resume its critical functions and operations in case of a disruption or disaster, and that the system can meet the recovery objectives and requirements. However, business continuity testing is not the program in which it is most important to include the collection of security process data, because it is not focused on the security status, events, and activities of the system, but rather on the continuity and recovery of the system. Annual security training is a program that involves providing and updating the security knowledge and skills of the system users and staff on an annual basis. Annual security training can increase the security awareness and competence of the system users and staff, and reduce the human errors or risks that might compromise the system security. However, annual security training is not the program in which it is most important to include the collection of security process data, because it is not focused on the security status, events, and activities of the system, but rather on the security education and training of the system users and staff.
A Virtual Machine (VM) environment has five guest Operating Systems (OS) and provides strong isolation. What MUST an administrator review to audit a user’s access to data files?
Host VM monitor audit logs
Guest OS access controls
Host VM access controls
Guest OS audit logs
Guest OS audit logs are what an administrator must review to audit a user’s access to data files in a VM environment that has five guest OS and provides strong isolation. A VM environment is a system that allows multiple virtual machines (VMs) to run on a single physical machine, each with its own OS and applications. A VM environment can provide several benefits, such as:
A guest OS is the OS that runs on a VM, which is different from the host OS that runs on the physical machine. A guest OS can have its own security controls and mechanisms, such as access controls, encryption, authentication, and audit logs. Audit logs are records that capture and store the information about the events and activities that occur within a system or a network, such as the access and usage of the data files. Audit logs can provide a reactive and detective layer of security by enabling the monitoring and analysis of the system or network behavior, and facilitating the investigation and response of the incidents.
Guest OS audit logs are what an administrator must review to audit a user’s access to data files in a VM environment that has five guest OS and provides strong isolation, because they can provide the most accurate and relevant information about the user’s actions and interactions with the data files on the VM. Guest OS audit logs can also help the administrator to identify and report any unauthorized or suspicious access or disclosure of the data files, and to recommend or implement any corrective or preventive actions.
The other options are not what an administrator must review to audit a user’s access to data files in a VM environment that has five guest OS and provides strong isolation, but rather what an administrator might review for other purposes or aspects. Host VM monitor audit logs are records that capture and store the information about the events and activities that occur on the host VM monitor, which is the software or hardware component that manages and controls the VMs on the physical machine. Host VM monitor audit logs can provide information about the performance, status, and configuration of the VMs, but they cannot provide information about the user’s access to data files on the VMs. Guest OS access controls are rules and mechanisms that regulate and restrict the access and permissions of the users and processes to the resources and services on the guest OS. Guest OS access controls can provide a proactive and preventive layer of security by enforcing the principles of least privilege, separation of duties, and need to know. However, guest OS access controls are not what an administrator must review to audit a user’s access to data files, but rather what an administrator must configure and implement to protect the data files. Host VM access controls are rules and mechanisms that regulate and restrict the access and permissions of the users and processes to the VMs on the physical machine. Host VM access controls can provide a granular and dynamic layer of security by defining and assigning the roles and permissions according to the organizational structure and policies. However, host VM access controls are not what an administrator must review to audit a user’s access to data files, but rather what an administrator must configure and implement to protect the VMs.
Which of the following is a PRIMARY benefit of using a formalized security testing report format and structure?
Executive audiences will understand the outcomes of testing and most appropriate next steps for corrective actions to be taken
Technical teams will understand the testing objectives, testing strategies applied, and business risk associated with each vulnerability
Management teams will understand the testing objectives and reputational risk to the organization
Technical and management teams will better understand the testing objectives, results of each test phase, and potential impact levels
Technical and management teams will better understand the testing objectives, results of each test phase, and potential impact levels is the primary benefit of using a formalized security testing report format and structure. Security testing is a process that involves evaluating and verifying the security posture, vulnerabilities, and threats of a system or a network, using various methods and techniques, such as vulnerability assessment, penetration testing, code review, and compliance checks. Security testing can provide several benefits, such as:
A security testing report is a document that summarizes and communicates the findings and recommendations of the security testing process to the relevant stakeholders, such as the technical and management teams. A security testing report can have various formats and structures, depending on the scope, purpose, and audience of the report. However, a formalized security testing report format and structure is one that follows a standard and consistent template, such as the one proposed by the National Institute of Standards and Technology (NIST) in the Special Publication 800-115, Technical Guide to Information Security Testing and Assessment. A formalized security testing report format and structure can have several components, such as:
Technical and management teams will better understand the testing objectives, results of each test phase, and potential impact levels is the primary benefit of using a formalized security testing report format and structure, because it can ensure that the security testing report is clear, comprehensive, and consistent, and that it provides the relevant and useful information for the technical and management teams to make informed and effective decisions and actions regarding the system or network security.
The other options are not the primary benefits of using a formalized security testing report format and structure, but rather secondary or specific benefits for different audiences or purposes. Executive audiences will understand the outcomes of testing and most appropriate next steps for corrective actions to be taken is a benefit of using a formalized security testing report format and structure, but it is not the primary benefit, because it is more relevant for the executive summary component of the report, which is a brief and high-level overview of the report, rather than the entire report. Technical teams will understand the testing objectives, testing strategies applied, and business risk associated with each vulnerability is a benefit of using a formalized security testing report format and structure, but it is not the primary benefit, because it is more relevant for the methodology and results components of the report, which are more technical and detailed parts of the report, rather than the entire report. Management teams will understand the testing objectives and reputational risk to the organization is a benefit of using a formalized security testing report format and structure, but it is not the primary benefit, because it is more relevant for the introduction and conclusion components of the report, which are more contextual and strategic parts of the report, rather than the entire report.
Which of the following is of GREATEST assistance to auditors when reviewing system configurations?
Change management processes
User administration procedures
Operating System (OS) baselines
System backup documentation
Operating System (OS) baselines are of greatest assistance to auditors when reviewing system configurations. OS baselines are standard or reference configurations that define the desired and secure state of an OS, including the settings, parameters, patches, and updates. OS baselines can provide several benefits, such as:
OS baselines are of greatest assistance to auditors when reviewing system configurations, because they can enable the auditors to evaluate and verify the current and actual state of the OS against the desired and secure state of the OS. OS baselines can also help the auditors to identify and report any gaps, issues, or risks in the OS configurations, and to recommend or implement any corrective or preventive actions.
The other options are not of greatest assistance to auditors when reviewing system configurations, but rather of assistance for other purposes or aspects. Change management processes are processes that ensure that any changes to the system configurations are planned, approved, implemented, and documented in a controlled and consistent manner. Change management processes can improve the security and reliability of the system configurations by preventing or reducing the errors, conflicts, or disruptions that might occur due to the changes. However, change management processes are not of greatest assistance to auditors when reviewing system configurations, because they do not define the desired and secure state of the system configurations, but rather the procedures and controls for managing the changes. User administration procedures are procedures that define the roles, responsibilities, and activities for creating, modifying, deleting, and managing the user accounts and access rights. User administration procedures can enhance the security and accountability of the user accounts and access rights by enforcing the principles of least privilege, separation of duties, and need to know. However, user administration procedures are not of greatest assistance to auditors when reviewing system configurations, because they do not define the desired and secure state of the system configurations, but rather the rules and tasks for administering the users. System backup documentation is documentation that records the information and details about the system backup processes, such as the backup frequency, type, location, retention, and recovery. System backup documentation can increase the availability and resilience of the system by ensuring that the system data and configurations can be restored in case of a loss or damage. However, system backup documentation is not of greatest assistance to auditors when reviewing system configurations, because it does not define the desired and secure state of the system configurations, but rather the backup and recovery of the system configurations.
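As an illustration, the sketch below compares an observed configuration against an approved baseline and reports deviations; the setting names and values are hypothetical.

```python
# Minimal sketch: compare an observed OS configuration to an approved baseline.
# Setting names and values are hypothetical.
baseline = {
    "password_min_length": "14",
    "smbv1_enabled": "false",
    "audit_logon_events": "success,failure",
    "rdp_nla_required": "true",
}

observed = {
    "password_min_length": "8",
    "smbv1_enabled": "false",
    "audit_logon_events": "success,failure",
    # rdp_nla_required is missing entirely
}

def baseline_deviations(baseline, observed):
    findings = []
    for setting, expected in baseline.items():
        actual = observed.get(setting, "<not configured>")
        if actual != expected:
            findings.append(f"{setting}: expected {expected!r}, found {actual!r}")
    return findings

for finding in baseline_deviations(baseline, observed):
    print("DEVIATION:", finding)
```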
Which of the following could cause a Denial of Service (DoS) against an authentication system?
Encryption of audit logs
No archiving of audit logs
Hashing of audit logs
Remote access audit logs
Remote access audit logs could cause a Denial of Service (DoS) against an authentication system. A DoS attack is a type of attack that aims to disrupt or degrade the availability or performance of a system or a network by overwhelming it with excessive or malicious traffic or requests. An authentication system is a system that verifies the identity and credentials of the users or entities that want to access the system or network resources or services. An authentication system can use various methods or factors to authenticate the users or entities, such as passwords, tokens, certificates, biometrics, or behavioral patterns.
Remote access audit logs are records that capture and store the information about the events and activities that occur when the users or entities access the system or network remotely, such as via the internet, VPN, or dial-up. Remote access audit logs can provide a reactive and detective layer of security by enabling the monitoring and analysis of the remote access behavior, and facilitating the investigation and response of the incidents.
Remote access audit logs could cause a DoS against an authentication system, because they could consume a large amount of disk space, memory, or bandwidth on the authentication system, especially if the remote access is frequent, intensive, or malicious. This could affect the performance or functionality of the authentication system, and prevent or delay the legitimate users or entities from accessing the system or network resources or services. For example, an attacker could launch a DoS attack against an authentication system by sending a large number of fake or invalid remote access requests, and generating a large amount of remote access audit logs that fill up the disk space or memory of the authentication system, and cause it to crash or slow down.
The other options are not the factors that could cause a DoS against an authentication system, but rather the factors that could improve or protect the authentication system. Encryption of audit logs is a technique that involves using a cryptographic algorithm and a key to transform the audit logs into an unreadable or unintelligible format, that can only be reversed or decrypted by authorized parties. Encryption of audit logs can enhance the security and confidentiality of the audit logs by preventing unauthorized access or disclosure of the sensitive information in the audit logs. However, encryption of audit logs could not cause a DoS against an authentication system, because it does not affect the availability or performance of the authentication system, but rather the integrity or privacy of the audit logs. No archiving of audit logs is a practice that involves not storing or transferring the audit logs to a separate or external storage device or location, such as a tape, disk, or cloud. No archiving of audit logs can reduce the security and availability of the audit logs by increasing the risk of loss or damage of the audit logs, and limiting the access or retrieval of the audit logs. However, no archiving of audit logs could not cause a DoS against an authentication system, because it does not affect the availability or performance of the authentication system, but rather the availability or preservation of the audit logs. Hashing of audit logs is a technique that involves using a hash function, such as MD5 or SHA, to generate a fixed-length and unique value, called a hash or a digest, that represents the audit logs. Hashing of audit logs can improve the security and integrity of the audit logs by verifying the authenticity or consistency of the audit logs, and detecting any modification or tampering of the audit logs. However, hashing of audit logs could not cause a DoS against an authentication system, because it does not affect the availability or performance of the authentication system, but rather the integrity or verification of the audit logs.
Which of the following is a critical factor for implementing a successful data classification program?
Executive sponsorship
Information security sponsorship
End-user acceptance
Internal audit acceptance
The critical factor for implementing a successful data classification program is executive sponsorship. Executive sponsorship is the support and commitment of the organization's senior management for the data classification program. It provides the necessary resources, authority, and guidance, and it ensures that the program aligns with the organization's goals, policies, and culture. Executive sponsorship can also influence and motivate the data owners, custodians, and users to participate in and comply with the program. The other options are not as critical: information security sponsorship and end-user acceptance do not carry the same level of influence or authority, and internal audit acceptance does not directly drive the data classification program. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 2, page 66; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 2, page 72.
Refer to the information below to answer the question.
A large organization uses unique identifiers and requires them at the start of every system session. Application access is based on job classification. The organization is subject to periodic independent reviews of access controls and violations. The organization uses wired and wireless networks and remote access. The organization also uses secure connections to branch offices and secure backup and recovery strategies for selected information and processes.
Which of the following BEST describes the access control methodology used?
Least privilege
Lattice Based Access Control (LBAC)
Role Based Access Control (RBAC)
Lightweight Directory Access Control (LDAP)
The access control methodology that best describes the scenario is Role Based Access Control (RBAC). RBAC is a type of access control that assigns permissions and privileges to the users or the devices based on their roles or functions in the organization, rather than their identities or attributes. RBAC can simplify and streamline the access control management, as it can reduce the complexity and redundancy of the access control policies and procedures, and it can support the principle of least privilege and the separation of duties. The scenario indicates that the application access is based on job classification, which is a characteristic of RBAC, as the job classification can define the role or the function of the user or the device in the organization. Least privilege, Lattice Based Access Control (LBAC), and Lightweight Directory Access Control (LDAP) are not the access control methodologies that best describe the scenario, as they are related to the minimum level of access required, the mathematical model of access rights, or the protocol for accessing directory services, not the role or the function of the user or the device in the organization. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5, Identity and Access Management, page 667. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 5, Identity and Access Management, page 683.
Which of the following describes the concept of a Single Sign -On (SSO) system?
Users are authenticated to one system at a time.
Users are identified to multiple systems with several credentials.
Users are authenticated to multiple systems with one login.
Only one user is using the system at a time.
Single Sign-On (SSO) is a technology that allows users to securely access multiple applications and services using just one set of credentials, such as a username and a password.
With SSO, users do not have to remember and enter multiple passwords for different applications and services, which can improve their convenience and productivity. SSO also enhances security, as users can use stronger passwords, avoid reusing passwords, and comply with password policies more easily. Moreover, SSO reduces the risk of phishing, credential theft, and password fatigue.
SSO is based on the concept of federated identity, which means that the identity of a user is shared and trusted across different systems that have established a trust relationship. SSO uses various protocols and standards, such as SAML, OAuth, OIDC, and Kerberos, to enable the exchange of identity information and authentication tokens between the systems.
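For illustration, the sketch below shows how a service in an SSO federation might validate an OIDC ID token received from the identity provider before trusting the login. This is a minimal sketch, not a reference implementation: the issuer URL, audience value, and JWKS endpoint are hypothetical, and the PyJWT library is assumed as one possible choice.
```python
# Minimal sketch: validating an OIDC ID token issued by an SSO identity provider.
# The issuer, audience, and JWKS endpoint below are hypothetical placeholders.
import jwt
from jwt import PyJWKClient

ISSUER = "https://idp.example.com"           # hypothetical identity provider
AUDIENCE = "payroll-app"                     # hypothetical client/application ID
JWKS_URL = f"{ISSUER}/.well-known/jwks.json"

def validate_id_token(id_token: str) -> dict:
    """Verify the token signature, issuer, audience, and expiry before trusting it."""
    signing_key = PyJWKClient(JWKS_URL).get_signing_key_from_jwt(id_token)
    claims = jwt.decode(
        id_token,
        signing_key.key,
        algorithms=["RS256"],
        audience=AUDIENCE,
        issuer=ISSUER,
    )
    return claims  # e.g. {"sub": "user123", "email": "...", "exp": ...}
```
Only after these checks succeed would the service create a local session for the user; the user never presents a separate password to this service.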
A thorough review of an organization's audit logs finds that a disgruntled network administrator has intercepted emails meant for the Chief Executive Officer (CEO) and changed them before forwarding them to their intended recipient. What type of attack has MOST likely occurred?
Spoofing
Eavesdropping
Man-in-the-middle
Denial of service
The type of attack that has most likely occurred when a disgruntled network administrator has intercepted emails meant for the Chief Executive Officer (CEO) and changed them before forwarding them to their intended recipient is a man-in-the-middle (MITM) attack. A MITM attack is a type of attack that involves an attacker intercepting, modifying, or redirecting the communication between two parties, without their knowledge or consent. The attacker can alter, delete, or inject data, or impersonate one of the parties, to achieve malicious goals, such as stealing information, compromising security, or disrupting service. A MITM attack can be performed on various types of networks or protocols, such as email, web, or wireless. Spoofing, eavesdropping, and denial of service are not the types of attack that have most likely occurred in this scenario, as they do not involve the modification or manipulation of the communication between the parties, but rather the falsification, observation, or prevention of the communication. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 4, Communication and Network Security, page 462. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 4, Communication and Network Security, page 478.
Refer to the information below to answer the question.
An organization experiencing a negative financial impact is forced to reduce budgets and the number of Information Technology (IT) operations staff performing basic logical access security administration functions. Security processes have been tightly integrated into normal IT operations and are not separate and distinct roles.
Which of the following will be the PRIMARY security concern as staff is released from the organization?
Inadequate IT support
Loss of data and separation of duties
Undocumented security controls
Additional responsibilities for remaining staff
The primary security concern as staff is released from the organization is the loss of data and separation of duties. The loss of data is the event or the situation where the data is deleted, corrupted, stolen, or leaked by the staff who are leaving the organization, either intentionally or unintentionally, and where the data is no longer available or recoverable by the organization. The loss of data can compromise the confidentiality, the integrity, and the availability of the data, and can cause damage or harm to the organization’s operations, reputation, or objectives. The separation of duties is the principle or the practice of dividing the tasks or the responsibilities among different staff or roles, to prevent or reduce the conflicts of interest, the collusion, the fraud, or the errors. The separation of duties can be compromised when the staff is released from the organization, as it can create the gaps or the overlaps in the tasks or the responsibilities, and it can increase the risk of the unauthorized or the malicious access or activity. Inadequate IT support, undocumented security controls, and additional responsibilities for remaining staff are not the primary security concerns as staff is released from the organization, as they are related to the quality, the transparency, or the workload of the IT operations, not the loss of data or the separation of duties. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1, Security and Risk Management, page 29. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 1, Security and Risk Management, page 44.
What is the MOST important reason to configure unique user IDs?
Supporting accountability
Reducing authentication errors
Preventing password compromise
Supporting Single Sign On (SSO)
Unique user IDs are essential for supporting accountability, which is the ability to trace actions or events to their source. Accountability is a key principle of security and helps to deter, detect, and correct unauthorized or malicious activities. Without unique user IDs, it would be difficult or impossible to identify who performed what action on a system or network. Reducing authentication errors, preventing password compromise, and supporting Single Sign On (SSO) are all possible benefits of using unique user IDs, but they are not the most important reason for configuring them. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1, Security and Risk Management, page 25. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 1, Security and Risk Management, page 38.
With data labeling, which of the following MUST be the key decision maker?
Information security
Departmental management
Data custodian
Data owner
With data labeling, the data owner must be the key decision maker. The data owner is the person or entity that has the authority and responsibility for the data, including its classification, protection, and usage. The data owner must decide how to label the data according to its sensitivity, criticality, and value, and communicate the labeling scheme to the data custodians and users. The data owner must also review and update the data labels as needed. The other options are not the key decision makers for data labeling, as they either do not have the authority or responsibility for the data (A, B, and C), or do not have the knowledge or interest in the data (B and C). References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 2, page 63; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 2, page 69.
Which of the following problems is not addressed by using OAuth (Open Standard to Authorization) 2.0 to integrate a third-party identity provider for a service?
Resource Servers are required to use passwords to authenticate end users.
Revocation of access of some users of the third party instead of all the users from the third party.
Compromise of the third party means compromise of all the users in the service.
Guest users need to authenticate with the third party identity provider.
The problem that is not addressed by using OAuth 2.0 to integrate a third-party identity provider for a service is that resource servers are required to use passwords to authenticate end users. OAuth 2.0 is a framework that enables a third-party application to obtain limited access to a protected resource on behalf of a resource owner, without exposing the resource owner’s credentials to the third-party application. OAuth 2.0 relies on an authorization server that acts as an identity provider and issues access tokens to the third-party application, based on the resource owner’s consent and the scope of the access request. OAuth 2.0 does not address the authentication of the resource owner or the end user by the resource server, which is the server that hosts the protected resource. The resource server may still require the resource owner or the end user to use passwords or other methods to authenticate themselves, before granting access to the protected resource. Revocation of access of some users of the third party instead of all the users from the third party, compromise of the third party means compromise of all the users in the service, and guest users need to authenticate with the third party identity provider are problems that are addressed by using OAuth 2.0 to integrate a third-party identity provider for a service, as they are related to the delegation, revocation, or granularity of the access control or the identity management. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5, Identity and Access Management, page 692. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 5, Identity and Access Management, page 708.
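As a hedged illustration of the delegation described above, the sketch below shows a third-party client exchanging an authorization code for an access token at the authorization server, so the resource owner's credentials are never exposed to the client. The endpoint, client ID, and client secret are hypothetical placeholders, and the requests library is assumed.
```python
# Minimal sketch: OAuth 2.0 authorization code grant, token exchange step.
# Endpoint and client credentials are illustrative, not from any real provider.
import requests

TOKEN_URL = "https://auth.example.com/oauth/token"   # hypothetical authorization server

def exchange_code_for_token(code: str, redirect_uri: str) -> dict:
    """Redeem the authorization code; the resource owner's password is never seen."""
    response = requests.post(
        TOKEN_URL,
        data={
            "grant_type": "authorization_code",
            "code": code,
            "redirect_uri": redirect_uri,
            "client_id": "example-client-id",          # hypothetical
            "client_secret": "example-client-secret",  # hypothetical
        },
        timeout=10,
    )
    response.raise_for_status()
    return response.json()  # typically contains access_token, token_type, expires_in
```
The access token returned here is scoped and revocable per client, which is what enables the granular revocation and limited-compromise properties mentioned above.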
Refer to the information below to answer the question.
In a Multilevel Security (MLS) system, the following sensitivity labels are used in increasing levels of sensitivity: restricted, confidential, secret, top secret. Table A lists the clearance levels for four users, while Table B lists the security classes of four different files.
In a Bell-LaPadula system, which user has the MOST restrictions when writing data to any of the four files?
User A
User B
User C
User D
In a Bell-LaPadula system, a user has the most restrictions when writing data to any of the four files if they have the lowest clearance level. This is because of the star property (*property) of the Bell-LaPadula model, which states that a subject with a given security clearance may write data to an object if and only if the object’s security level is greater than or equal to the subject’s security level. This rule is also known as the no write-down rule, as it prevents the leakage of information from a higher level to a lower level. In this question, User A has a Restricted clearance, which is the lowest level among the four users. Therefore, User A has the most restrictions when writing data to any of the four files, as they can only write data to File 1, which has the same security level as their clearance. User B, User C, and User D have less restrictions when writing data to any of the four files, as they can write data to more than one file, depending on their clearance levels and the security classes of the files. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 4, Communication and Network Security, page 498. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 4, Communication and Network Security, page 514.
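The read and write rules described above can be expressed compactly in code. The following is a minimal sketch, assuming the four sensitivity labels from the question in increasing order; the function names and numeric ordering are illustrative, not taken from the exam material.
```python
# Minimal sketch of the Bell-LaPadula checks described above.
# Label ordering follows the question; helper names are illustrative.
LEVELS = {"restricted": 1, "confidential": 2, "secret": 3, "top secret": 4}

def can_read(subject_clearance: str, object_label: str) -> bool:
    """Simple security property: no read up."""
    return LEVELS[subject_clearance] >= LEVELS[object_label]

def can_write(subject_clearance: str, object_label: str) -> bool:
    """*-property: no write down (write only to objects at or above the subject)."""
    return LEVELS[object_label] >= LEVELS[subject_clearance]

# Example: a subject cleared at "secret" may write to "secret" or "top secret" files,
# but not to "confidential" or "restricted" ones.
assert can_write("secret", "top secret") and not can_write("secret", "confidential")
```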
Which of the following provides the MOST protection against data theft of sensitive information when a laptop is stolen?
Set up a BIOS and operating system password
Encrypt the virtual drive where confidential files can be stored
Implement a mandatory policy in which sensitive data cannot be stored on laptops, but only on the corporate network
Encrypt the entire disk and delete contents after a set number of failed access attempts
Encrypting the entire disk and deleting the contents after a set number of failed access attempts provides the most protection against data theft of sensitive information when a laptop is stolen. This method ensures that the data is unreadable without the correct decryption key, and that the data is erased if someone tries to guess the key or bypass the encryption. Setting up a BIOS and operating system password, encrypting the virtual drive, or implementing a policy are less effective methods, as they can be circumvented by physical access, booting from another device, or copying the data to another location. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5: Identity and Access Management, p. 269; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Domain 5: Identity and Access Management (IAM), p. 521.
When using third-party software developers, which of the following is the MOST effective method of providing software development Quality Assurance (QA)?
Retain intellectual property rights through contractual wording.
Perform overlapping code reviews by both parties.
Verify that the contractors attend development planning meetings.
Create a separate contractor development environment.
When using third-party software developers, the most effective method of providing software development Quality Assurance (QA) is to perform overlapping code reviews by both parties. Code reviews are the process of examining the source code of an application for quality, functionality, security, and compliance. Overlapping code reviews by both parties means that the code is reviewed by both the third-party developers and the contracting organization, and that the reviews cover the same or similar aspects of the code. This can ensure that the code meets the requirements and specifications, that the code is free of defects or vulnerabilities, and that the code is consistent and compatible with the existing system or environment. Retaining intellectual property rights through contractual wording, verifying that the contractors attend development planning meetings, and creating a separate contractor development environment are all possible methods of providing software development QA, but they are not the most effective method of doing so. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 8, Software Development Security, page 1026. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 8, Software Development Security, page 1050.
According to best practice, which of the following is required when implementing third party software in a production environment?
Scan the application for vulnerabilities
Contract the vendor for patching
Negotiate end user application training
Escrow a copy of the software
According to best practice, one of the requirements when implementing third party software in a production environment is to scan the application for vulnerabilities. Vulnerabilities are weaknesses or flaws in the software that can be exploited by attackers to compromise the security, functionality, or performance of the system or network. Scanning the application for vulnerabilities can help to identify and mitigate the potential risks, ensure the compliance with the security policies and standards, and prevent the introduction of malicious code or backdoors. Contracting the vendor for patching, negotiating end user application training, and escrowing a copy of the software are all possible requirements when implementing third party software in a production environment, but they are not the most essential or best practice requirement of doing so. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 8, Software Development Security, page 1018. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 8, Software Development Security, page 1040.
What is the PRIMARY reason for ethics awareness and related policy implementation?
It affects the workflow of an organization.
It affects the reputation of an organization.
It affects the retention rate of employees.
It affects the morale of the employees.
The primary reason for ethics awareness and related policy implementation is to affect the reputation of an organization positively, by demonstrating its commitment to ethical principles, values, and standards in its business practices, services, and products. Ethics awareness and policy implementation can also help the organization avoid legal liabilities, fines, or sanctions for unethical conduct, and foster trust and loyalty among its customers, partners, and employees. The other options are not as important as affecting the reputation, as they either do not directly relate to ethics (A), or are secondary outcomes of ethics (C and D). References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1, page 19; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 1, page 28.
Without proper signal protection, embedded systems may be prone to which type of attack?
Brute force
Tampering
Information disclosure
Denial of Service (DoS)
The type of attack that embedded systems may be prone to without proper signal protection is information disclosure. Information disclosure is a type of attack that exposes or reveals sensitive or confidential information to unauthorized parties, such as attackers, competitors, or the public. Information disclosure can occur through various means, such as interception, leakage, or theft of the information. Embedded systems are systems that are integrated into other devices or machines, such as cars, medical devices, or industrial controllers, and perform specific functions or tasks. Embedded systems may communicate with other systems or devices through signals, such as radio frequency, infrared, or sound waves. Without proper signal protection, such as encryption, authentication, or shielding, embedded systems may be vulnerable to information disclosure, as the signals may be captured, analyzed, or modified by attackers, and the information contained in the signals may be compromised. Brute force, tampering, and denial of service are not the types of attack that embedded systems may be prone to without proper signal protection, as they are related to the guessing, alteration, or prevention of the access or functionality of the systems, not the exposure or revelation of the information. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 3, Security Architecture and Engineering, page 311. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 3, Security Architecture and Engineering, page 327.
In a web application that stores the session state in a cookie, which of the following checks can an attacker bypass?
An initialization check
An identification check
An authentication check
An authorization check
An authorization check is a component of a web application that stores the session state in a cookie that can be bypassed by an attacker. An authorization check verifies that the user has the appropriate permissions to access the requested resources or perform the desired actions. However, if the session state is stored in a cookie, an attacker can manipulate the cookie to change the user’s role or privileges, and bypass the authorization check. Therefore, it is recommended to store the session state on the server side, or use encryption and integrity protection for the cookie. References: Official (ISC)2 CISSP CBK Reference, Fifth Edition, Domain 8: Software Development Security, p. 1015; CISSP All-in-One Exam Guide, Eighth Edition, Chapter 8: Software Development Security, p. 503.
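As a minimal sketch of the integrity-protection mitigation mentioned above, the following signs the cookie payload with an HMAC so the server can detect tampering before running the authorization check. The secret key and cookie layout are illustrative only, not a production session design.
```python
# Minimal sketch: HMAC-signing a session cookie so tampering (e.g. changing
# role=user to role=admin) is detected before any authorization check runs.
import hmac, hashlib, base64, json

SECRET_KEY = b"replace-with-a-random-server-side-secret"  # illustrative

def make_cookie(session: dict) -> str:
    payload = base64.urlsafe_b64encode(json.dumps(session).encode())
    tag = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return f"{payload.decode()}.{tag}"

def read_cookie(cookie: str):
    payload, tag = cookie.rsplit(".", 1)
    expected = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(tag, expected):
        return None  # cookie was modified; reject before authorization
    return json.loads(base64.urlsafe_b64decode(payload))
```
Keeping the session state on the server side avoids the problem entirely; signing the cookie is the fallback when client-side state is unavoidable.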
Which of the following is critical for establishing an initial baseline for software components in the operation and maintenance of applications?
Application monitoring procedures
Configuration control procedures
Security audit procedures
Software patching procedures
Configuration control procedures are critical for establishing an initial baseline for software components in the operation and maintenance of applications. Configuration control procedures are the processes and activities that ensure the integrity, consistency, and traceability of the software components throughout the SDLC. Configuration control procedures include identifying, documenting, storing, reviewing, approving, and updating the software components, as well as managing the changes and versions of the components. By establishing an initial baseline, the organization can have a reference point for measuring and evaluating the performance, quality, and security of the software components, and for applying and tracking the changes and updates to the components. The other options are not as critical as configuration control procedures, as they either do not establish an initial baseline (A and C), or do not apply to all software components (D). References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 8, page 468; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 8, page 568.
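A minimal sketch of how an initial baseline might be captured and later checked is shown below; the directory layout is assumed, and this illustrates only the hashing-and-comparison idea, not a full configuration management system.
```python
# Minimal sketch: record a baseline of software component hashes, then detect
# drift during operation and maintenance. Paths and file names are illustrative.
import hashlib, json
from pathlib import Path

def sha256_of(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def record_baseline(component_dir: str, baseline_file: str = "baseline.json") -> None:
    baseline = {str(p): sha256_of(p)
                for p in Path(component_dir).rglob("*") if p.is_file()}
    Path(baseline_file).write_text(json.dumps(baseline, indent=2))

def detect_drift(component_dir: str, baseline_file: str = "baseline.json"):
    baseline = json.loads(Path(baseline_file).read_text())
    return [p for p, digest in baseline.items()
            if not Path(p).exists() or sha256_of(Path(p)) != digest]
```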
Which of the following provides effective management assurance for a Wireless Local Area Network (WLAN)?
Maintaining an inventory of authorized Access Points (AP) and connecting devices
Setting the radio frequency to the minimum range required
Establishing a Virtual Private Network (VPN) tunnel between the WLAN client device and a VPN concentrator
Verifying that all default passwords have been changed
The action that provides effective management assurance for a WLAN is establishing a VPN tunnel between the WLAN client device and a VPN concentrator. A VPN is a secure and encrypted connection that enables remote access to a private network over a public network, such as the internet. A VPN concentrator is a device that manages and authenticates the VPN connections, and provides encryption and decryption services. By establishing a VPN tunnel, the organization can protect the confidentiality, integrity, and availability of the data transmitted over the WLAN, and prevent unauthorized or malicious access to the network. The other options are not as effective as establishing a VPN tunnel, as they either do not provide sufficient security for the WLAN (A and B), or do not address the management assurance aspect (D). References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 4, page 167; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 4, page 177.
What is the MOST critical factor to achieve the goals of a security program?
Capabilities of security resources
Executive management support
Effectiveness of security management
Budget approved for security resources
The most critical factor to achieve the goals of a security program is the executive management support. The executive management is the highest level of authority or decision-making in the organization, such as the board of directors, the chief executive officer, or the chief information officer. The executive management support is the endorsement, the sponsorship, or the involvement of the executive management in the security program, such as the security planning, the security implementation, the security monitoring, or the security auditing. The executive management support is the most critical factor to achieve the goals of the security program, as it can provide the vision, the direction, or the strategy for the security program, and it can align the security program with the business needs and requirements. The executive management support can also provide the resources, the budget, or the authority for the security program, and it can foster the security culture, the security awareness, or the security governance in the organization. The executive management support can also influence the stakeholders, the customers, or the regulators, and it can demonstrate the commitment, the accountability, or the responsibility for the security program. Capabilities of security resources, effectiveness of security management, and budget approved for security resources are not the most critical factors to achieve the goals of the security program, as they are related to the skills, the performance, or the funding of the security program, not the endorsement, the sponsorship, or the involvement of the executive management in the security program. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1, Security and Risk Management, page 33. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 1, Security and Risk Management, page 48.
What is the MAIN feature that onion routing networks offer?
Non-repudiation
Traceability
Anonymity
Resilience
The main feature that onion routing networks offer is anonymity. Anonymity is the state of being unknown or unidentifiable by hiding or masking the identity or the location of the sender or the receiver of a communication. Onion routing is a technique that enables anonymous communication over a network, such as the internet, by encrypting and routing the messages through multiple layers of intermediate nodes, called onion routers. Onion routing can protect the privacy and security of the users or the data, and can prevent censorship, surveillance, or tracking by third parties. Non-repudiation, traceability, and resilience are not the main features that onion routing networks offer, as they are related to the proof, tracking, or recovery of the communication, not the anonymity of the communication. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 4, Communication and Network Security, page 467. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 4, Communication and Network Security, page 483.
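The layering idea behind onion routing can be illustrated with a short sketch: the sender wraps the message in one encryption layer per relay, and each relay removes exactly one layer, so no single relay sees both the sender and the plaintext destination. This is a conceptual illustration only, assuming symmetric Fernet keys for each relay; it is not the Tor protocol.
```python
# Minimal sketch of layered ("onion") encryption across three relays.
from cryptography.fernet import Fernet

# Each relay has its own key (in reality, keys are negotiated per circuit).
relay_keys = [Fernet.generate_key() for _ in range(3)]   # entry, middle, exit

def build_onion(message: bytes) -> bytes:
    data = message
    for key in reversed(relay_keys):   # innermost layer belongs to the exit relay
        data = Fernet(key).encrypt(data)
    return data

def peel_layers(onion: bytes) -> bytes:
    data = onion
    for key in relay_keys:             # each relay strips exactly one layer in order
        data = Fernet(key).decrypt(data)
    return data

assert peel_layers(build_onion(b"hello")) == b"hello"
```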
Which of the following is an example of two-factor authentication?
Retina scan and a palm print
Fingerprint and a smart card
Magnetic stripe card and an ID badge
Password and Completely Automated Public Turing test to tell Computers and Humans Apart (CAPTCHA)
An example of two-factor authentication is fingerprint and a smart card. Two-factor authentication is a type of authentication that requires two different factors or methods to verify the identity or the credentials of a user or a device. The factors or methods can be categorized into three types: something you know, something you have, or something you are. Something you know is a factor that relies on the knowledge of the user or the device, such as a password, a PIN, or a security question. Something you have is a factor that relies on the possession of the user or the device, such as a smart card, a token, or a certificate. Something you are is a factor that relies on the biometrics of the user or the device, such as a fingerprint, a retina scan, or a voice recognition. Fingerprint and a smart card are an example of two-factor authentication, as they combine two different factors: something you are and something you have. Retina scan and a palm print are not an example of two-factor authentication, as they are both the same factor: something you are. Magnetic stripe card and an ID badge are not an example of two-factor authentication, as they are both the same factor: something you have. Password and CAPTCHA are not an example of two-factor authentication, as they are both the same factor: something you know. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5, Identity and Access Management, page 685. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 5, Identity and Access Management, page 701.
Refer to the information below to answer the question.
Desktop computers in an organization were sanitized for re-use in an equivalent security environment. The data was destroyed in accordance with organizational policy and all marking and other external indications of the sensitivity of the data that was formerly stored on the magnetic drives were removed.
After magnetic drives were degaussed twice according to the product manufacturer's directions, what is the MOST LIKELY security issue with degaussing?
Commercial products often have serious weaknesses of the magnetic force available in the degausser product.
Degausser products may not be properly maintained and operated.
The inability to turn the drive around in the chamber for the second pass due to human error.
Inadequate record keeping when sanitizing media.
The most likely security issue with degaussing is that the degausser products may not be properly maintained and operated. Degaussing is a method of sanitizing magnetic media, such as hard disk drives, by applying a strong magnetic field that erases the data stored on the media. Degaussing can be effective in destroying the data, but it requires that the degausser products are calibrated, tested, and used according to the manufacturer’s specifications and instructions. If the degausser products are not properly maintained and operated, they may not generate a sufficient magnetic force to erase the data completely, or they may damage the media or the device. Commercial products often have serious weaknesses of the magnetic force available in the degausser product, the inability to turn the drive around in the chamber for the second pass due to human error, and inadequate record keeping when sanitizing media are not the most likely security issues with degaussing, as they are related to the quality, the technique, or the documentation of the degaussing process, not the maintenance or the operation of the degausser products. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 7, Security Operations, page 888. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 7, Security Operations, page 904.
Which of the following is the MOST effective attack against cryptographic hardware modules?
Plaintext
Brute force
Power analysis
Man-in-the-middle (MITM)
The most effective attack against cryptographic hardware modules is power analysis. Power analysis is a type of side-channel attack that exploits the physical characteristics or behavior of a cryptographic device, such as a smart card, a hardware security module, or a cryptographic processor, to extract secret information, such as keys, passwords, or algorithms. Power analysis measures the power consumption or the electromagnetic radiation of the device, and analyzes the variations or patterns that correspond to the cryptographic operations or the data being processed. Power analysis can reveal the internal state or the logic of the device, and can bypass the security mechanisms or the tamper resistance of the device. Power analysis can be performed with low-cost and widely available equipment, and can be very difficult to detect or prevent. Plaintext, brute force, and man-in-the-middle (MITM) are not the most effective attacks against cryptographic hardware modules, as they are related to the encryption or transmission of the data, not the physical properties or behavior of the device. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5, Cryptography and Symmetric Key Algorithms, page 628. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 5, Cryptography and Symmetric Key Algorithms, page 644.
A security manager has noticed an inconsistent application of server security controls resulting in vulnerabilities on critical systems. What is the MOST likely cause of this issue?
A lack of baseline standards
Improper documentation of security guidelines
A poorly designed security policy communication program
Host-based Intrusion Prevention System (HIPS) policies are ineffective
The most likely cause of the inconsistent application of server security controls resulting in vulnerabilities on critical systems is a lack of baseline standards. Baseline standards are the minimum level of security controls and measures that must be applied to the servers or other assets to ensure their protection and compliance. Baseline standards help to establish a consistent and uniform security posture across the organization, and to prevent or reduce the exposure to threats and risks. If there is a lack of baseline standards, the server security controls may vary in quality, effectiveness, or completeness, resulting in vulnerabilities on critical systems. Improper documentation of security guidelines, a poorly designed security policy communication program, and ineffective Host-based Intrusion Prevention System (HIPS) policies are not the most likely causes of this issue, as they do not directly affect the application of server security controls or the existence of baseline standards. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1, Security and Risk Management, page 35. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 1, Security and Risk Management, page 48.
Refer to the information below to answer the question.
Desktop computers in an organization were sanitized for re-use in an equivalent security environment. The data was destroyed in accordance with organizational policy and all marking and other external indications of the sensitivity of the data that was formerly stored on the magnetic drives were removed.
Organizational policy requires the deletion of user data from Personal Digital Assistant (PDA) devices before disposal. It may not be possible to delete the user data if the device is malfunctioning. Which destruction method below provides the BEST assurance that the data has been removed?
Knurling
Grinding
Shredding
Degaussing
The best destruction method that provides the assurance that the data has been removed from a malfunctioning PDA device is shredding. Shredding is a method of physically destroying the media, such as flash memory cards, by cutting or tearing them into small pieces that make the data unrecoverable. Shredding can be effective in removing the data from a PDA device that cannot be deleted by software or firmware methods, as it does not depend on the functionality of the device or the media. Shredding can also prevent the reuse or the recycling of the media or the device, as it renders them unusable. Knurling, grinding, and degaussing are not the best destruction methods that provide the assurance that the data has been removed from a malfunctioning PDA device, as they are related to the methods of altering the surface, the shape, or the magnetic field of the media, not the methods of cutting or tearing the media into small pieces. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 7, Security Operations, page 889. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 7, Security Operations, page 905.
Identify the component that MOST likely lacks digital accountability related to information access.
Click on the correct device in the image below.
Storage Area Network (SAN): SANs are designed for centralized storage, and access control mechanisms can be implemented to track users and their activities.
What physical characteristic does a retinal scan biometric device measure?
The amount of light reflected by the retina
The size, curvature, and shape of the retina
The pattern of blood vessels at the back of the eye
The pattern of light receptors at the back of the eye
A retinal scan is a biometric technique that uses unique patterns on a person’s retina blood vessels to identify them. The retina is a thin layer of tissue at the back of the eye that contains millions of light-sensitive cells and blood vessels. The retina converts the light rays that enter the eye into electrical signals that are sent to the brain for visual processing.
The pattern of blood vessels in the retina is not genetically determined and varies from person to person, even among identical twins. The retina also remains unchanged from birth until death, unless affected by some diseases or injuries. Therefore, the retina is considered to be one of the most accurate and reliable biometrics, apart from DNA.
A retinal scan is performed by projecting a low-energy infrared beam of light into a person’s eye as they look through the scanner’s eyepiece. The beam traces a standardized path on the retina, and the amount of light reflected by the blood vessels is measured. The pattern of variations in the reflection is digitized and stored in a database for comparison.
When dealing with compliance with the Payment Card Industry-Data Security Standard (PCI-DSS), an organization that shares card holder information with a service provider MUST do which of the following?
Perform a service provider PCI-DSS assessment on a yearly basis.
Validate the service provider's PCI-DSS compliance status on a regular basis.
Validate that the service providers security policies are in alignment with those of the organization.
Ensure that the service provider updates and tests its Disaster Recovery Plan (DRP) on a yearly basis.
The action that an organization that shares card holder information with a service provider must do when dealing with compliance with the Payment Card Industry-Data Security Standard (PCI-DSS) is to validate the service provider’s PCI-DSS compliance status on a regular basis. PCI-DSS is a set of security standards that applies to any organization that stores, processes, or transmits card holder data, such as credit or debit card information. PCI-DSS aims to protect the card holder data from unauthorized access, use, disclosure, or theft, and to ensure the security and integrity of the payment transactions. If an organization shares card holder data with a service provider, such as a payment processor, a hosting provider, or a cloud provider, the organization is still responsible for the security and compliance of the card holder data, and must ensure that the service provider also meets the PCI-DSS requirements. The organization must validate the service provider’s PCI-DSS compliance status on a regular basis, by obtaining and reviewing the service provider’s PCI-DSS assessment reports, such as the Self-Assessment Questionnaire (SAQ), the Report on Compliance (ROC), or the Attestation of Compliance (AOC). Performing a service provider PCI-DSS assessment on a yearly basis, validating that the service provider’s security policies are in alignment with those of the organization, and ensuring that the service provider updates and tests its Disaster Recovery Plan (DRP) on a yearly basis are not the actions that an organization that shares card holder information with a service provider must do when dealing with compliance with PCI-DSS, as they are not sufficient or relevant to verify the service provider’s PCI-DSS compliance status or to protect the card holder data. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1, Security and Risk Management, page 49. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 1, Security and Risk Management, page 64.
A system is developed so that its business users can perform business functions but not user administration functions. Application administrators can perform administration functions but not user business functions. These capabilities are BEST described as
least privilege.
rule based access controls.
Mandatory Access Control (MAC).
separation of duties.
The capabilities of the system that allow its business users to perform business functions but not user administration functions, and its application administrators to perform administration functions but not user business functions, are best described as separation of duties. Separation of duties is a security principle that divides the roles and responsibilities of different tasks or functions among different individuals or groups, so that no one person or group has complete control or authority over a critical process or asset. Separation of duties can help to prevent fraud, collusion, abuse, or errors, and to ensure accountability, oversight, and checks and balances. Least privilege, rule based access controls, and Mandatory Access Control (MAC) are not the best descriptions of the capabilities of the system, as they do not reflect the division of roles and responsibilities among different users or groups. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1, Security and Risk Management, page 32. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 1, Security and Risk Management, page 45.
If an attacker in a SYN flood attack uses someone else's valid host address as the source address, the system under attack will send a large number of Synchronize/Acknowledge (SYN/ACK) packets to the
default gateway.
attacker's address.
local interface being attacked.
specified source address.
A SYN flood attack is a type of denial-of-service attack that exploits the three-way handshake mechanism of the Transmission Control Protocol (TCP). The attacker sends a large number of TCP packets with the SYN flag set, indicating a request to establish a connection, to the target system, using a spoofed source address. The target system responds with a TCP packet with the SYN and ACK flags set, indicating an acknowledgment of the request, and waits for a final TCP packet with the ACK flag set, indicating the completion of the handshake, from the source address. However, since the source address is fake, the final ACK packet never arrives, and the target system keeps the connection half-open, consuming its resources and preventing legitimate connections. Therefore, the system under attack will send a large number of SYN/ACK packets to the specified source address, which is the spoofed address used by the attacker. The default gateway, the attacker’s address, and the local interface being attacked are not the destinations of the SYN/ACK packets in a SYN flood attack. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 4, Communication and Network Security, page 460. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 4, Communication and Network Security, page 476.
A large university needs to enable student access to university resources from their homes. Which of the following provides the BEST option for low maintenance and ease of deployment?
Provide students with Internet Protocol Security (IPSec) Virtual Private Network (VPN) client software.
Use Secure Sockets Layer (SSL) VPN technology.
Use Secure Shell (SSH) with public/private keys.
Require students to purchase home router capable of VPN.
The best option for low maintenance and ease of deployment to enable student access to university resources from their homes is to use Secure Sockets Layer (SSL) VPN technology. SSL VPN is a type of virtual private network that uses the SSL protocol to provide secure and remote access to the network resources over the internet. SSL VPN does not require the installation or configuration of any special client software or hardware on the student’s device, as it can use the web browser as the client interface. SSL VPN can also support various types of devices, operating systems, and applications, and can provide granular access control and encryption for the network traffic. Providing students with Internet Protocol Security (IPSec) VPN client software, using Secure Shell (SSH) with public/private keys, and requiring students to purchase home router capable of VPN are not the best options for low maintenance and ease of deployment, as they involve more complexity, cost, and compatibility issues for the students and the university. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 4, Communication and Network Security, page 507. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 4, Communication and Network Security, page 523.
If virus infection is suspected, which of the following is the FIRST step for the user to take?
Unplug the computer from the network.
Save the opened files and shutdown the computer.
Report the incident to service desk.
Update the antivirus to the latest version.
The first step for the user to take if virus infection is suspected is to report the incident to service desk. This will help to contain the infection, prevent further damage, and initiate the recovery process. The service desk can also provide guidance on how to handle the infected computer and what actions to take next. Unplugging the computer from the network, saving the opened files and shutting down the computer, or updating the antivirus to the latest version are possible subsequent steps, but they should not be done before reporting the incident. References:
CISSP Official (ISC)2 Practice Tests, 3rd Edition, Domain 7: Security Operations, Question 7.1.2
CISSP CBK, 5th Edition, Chapter 7: Security Operations, Section: Incident Management
If a content management system (CMS) is implemented, which one of the following would occur?
Developers would no longer have access to production systems
The applications placed into production would be secure
Patching the systems would be completed more quickly
The test and production systems would be running the same software
If a content management system (CMS) is implemented, one of the outcomes would be that the test and production systems would be running the same software. A CMS is a software application that is used to create, manage, and deliver digital content, such as web pages, blogs, or documents. A CMS typically consists of two components: the content management application (CMA) and the content delivery application (CDA). The CMA is the front-end interface that allows users to create, edit, and organize the content. The CDA is the back-end component that stores, processes, and delivers the content to the end-users. A CMS can simplify and streamline the content creation and delivery process, by providing a consistent and standardized platform for both the test and production systems. A CMS can also ensure the quality, security, and availability of the content, by providing features such as version control, access control, backup, and recovery. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 8: Software Development Security, page 410; [Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 8: Software Development Security, page 550]
Which of the following BEST ensures the integrity of transactions to intended recipients?
Public key infrastructure (PKI)
Blockchain technology
Pre-shared key (PSK)
Web of trust
The best option that ensures the integrity of transactions to intended recipients is public key infrastructure (PKI). PKI is a system that provides the services and the mechanisms for creating, managing, distributing, using, storing, and revoking the digital certificates and the public keys that are used for securing the communication and the transactions between the systems or the entities. PKI ensures the integrity of transactions to intended recipients because it can verify and authenticate the identities of the sender and the receiver, encrypt and decrypt the data in transit, and digitally sign and verify the transactions, so that any alteration is detected before the transaction is accepted.
The other options are not the best options that ensure the integrity of transactions to intended recipients. Blockchain technology is a system that provides a distributed and decentralized ledger or database that records and validates the transactions or the events that are shared and agreed upon by the participants or the nodes in the network, by using the cryptographic hashes and the consensus mechanisms. Blockchain technology can ensure the integrity of transactions to intended recipients, but it is not the best option, because it may not provide the same level of verification, authentication, encryption, decryption, signing, and verification as PKI, and it may have some limitations or challenges, such as the scalability, the performance, or the interoperability of the system. Pre-shared key (PSK) is a system that provides a symmetric encryption or decryption key that is shared or agreed upon by the systems or the entities that are involved in the communication or the transactions, and that is used for securing the communication or the transactions. PSK can ensure the integrity of transactions to intended recipients, but it is not the best option, because it may not provide the same level of verification, authentication, encryption, decryption, signing, and verification as PKI, and it may have some risks or drawbacks, such as the key distribution, the key management, or the key compromise of the system. Web of trust is a system that provides a decentralized and distributed trust model that relies on the users or the entities to create, validate, and exchange the digital certificates and the public keys that are used for securing the communication or the transactions, by using the endorsements or the ratings of the other users or the entities. Web of trust can ensure the integrity of transactions to intended recipients, but it is not the best option, because it may not provide the same level of verification, authentication, encryption, decryption, signing, and verification as PKI, and it may have some issues or problems, such as the quality, the reliability, or the consistency of the system. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5: Communication and Network Security, page 633. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 5: Communication and Network Security, page 634.
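As a hedged illustration of how a digital signature preserves transaction integrity, the sketch below signs a transaction with a private key and verifies it with the corresponding public key using the Python cryptography library. In a real PKI, the public key would be delivered inside a CA-issued X.509 certificate rather than generated inline, and the transaction contents shown are invented for the example.
```python
# Minimal sketch: sign a transaction and verify it with the matching public key.
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.exceptions import InvalidSignature

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

transaction = b"pay 100.00 to account 12345"   # illustrative payload
signature = private_key.sign(
    transaction,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)

try:
    public_key.verify(
        signature,
        transaction,   # any change to the transaction makes verification fail
        padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
        hashes.SHA256(),
    )
    print("integrity and origin verified")
except InvalidSignature:
    print("transaction was altered or not signed by the expected key")
```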
Which of the following goals represents a modern shift in risk management according to National Institute of Standards and Technology (NIST)?
Focus on operating environments that are changing, evolving, and full of emerging threats.
Secure information technology (IT) systems that store, process, or transmit organizational information.
Enable management to make well-informed risk-based decisions justifying security expenditure.
Provide an improved mission accomplishment approach.
The goal that represents a modern shift in risk management according to National Institute of Standards and Technology (NIST) is to focus on operating environments that are changing, evolving, and full of emerging threats. Risk management is the process that involves identifying, assessing, and prioritizing the risks that affect an organization or a system, and applying the appropriate strategies and actions to mitigate or reduce the risks, and to achieve the objectives and goals of the organization or the system. NIST is a federal agency that develops and publishes standards, guidelines, and best practices for various fields, including information security and risk management. NIST has proposed a modern shift in risk management, which is to focus on operating environments that are changing, evolving, and full of emerging threats, rather than on static or fixed environments that are assumed to be stable or predictable. This modern shift in risk management recognizes the dynamic and complex nature of the current and future operating environments, and the need for adaptive and resilient risk management approaches that can respond to the changing and evolving risks and opportunities. References: CISSP CBK, Fifth Edition, Chapter 1, page 19; 2024 Pass4itsure CISSP Dumps, Question 20.
Which of the following needs to be included in order for High Availability (HA) to continue operations during planned system outages?
Redundant hardware, disk spanning, and patching
Load balancing, power reserves, and disk spanning
Backups, clustering, and power reserves
Clustering, load balancing, and fault-tolerant options
High Availability (HA) is a system design goal that ensures the system or network can continue to operate and provide the expected level of service and performance during planned or unplanned outages or disruptions. To achieve HA, the system or network needs to have various components and features that enhance its reliability, availability, and resilience. Some of these components and features are clustering, load balancing, and fault-tolerant options. Clustering is the process of grouping two or more servers or devices together to act as a single system and provide redundancy and scalability. Load balancing is the process of distributing the workload or traffic among multiple servers or devices to optimize the performance and efficiency of the system or network. Fault-tolerant options are the mechanisms or techniques that enable the system or network to detect, isolate, and recover from faults or failures without affecting the service or performance. Clustering, load balancing, and fault-tolerant options can help to achieve HA by ensuring that the system or network can continue to operate and provide the expected level of service and performance during planned system outages, such as maintenance, upgrade, or backup. Redundant hardware, disk spanning, and patching, load balancing, power reserves, and disk spanning, or backups, clustering, and power reserves are not the best components or features to include in order to achieve HA during planned system outages, as they are more related to data security, data protection, or data recovery aspects of HA. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 11: Security Operations, page 678; CISSP Official (ISC)2 Practice Tests, Third Edition, Domain 7: Security Operations, Question 7.10, page 274.
What Is a risk of using commercial off-the-shelf (COTS) products?
COTS products may not map directly to an organization’s security requirements.
COTS products are typically more expensive than developing software in-house.
Cost to implement COTS products is difficult to predict.
Vendors are often hesitant to share their source code.
A risk of using commercial off-the-shelf (COTS) products is that they may not map directly to an organization’s security requirements. COTS products are software or hardware products that are ready-made and available for purchase from vendors or suppliers, without any customization or modification. COTS products can offer some advantages, such as lower cost, faster deployment, or better compatibility, but they can also pose some risks, such as failing to satisfy the organization’s specific security requirements, containing vulnerabilities or undocumented features that the organization cannot inspect or fix itself, and creating dependency on the vendor for patches and support.
The other options are not risks of using COTS products. COTS products are typically cheaper than developing software in-house, as the organization does not have to invest in the development, testing, or maintenance of the software. Cost to implement COTS products is not difficult to predict, as the organization can estimate the cost based on the product’s price, features, or specifications. Vendors are not often hesitant to share their source code, as they may offer some level of customization or integration for their COTS products, or they may use open source or standard code for their COTS products. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 4: Security Architecture and Engineering, page 439. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 4: Security Architecture and Engineering, page 440.
A retail company is looking to start a development project that will utilize open source components in its code for the first time. The development team has already acquired several open source components and utilized them in proof of concept (POC) code. The team recognizes that the legal and operational risks are outweighed by the benefits of open-source software use. What MUST the organization do next?
Mandate that all open-source components be approved by the Information Security Manager (ISM).
Scan all open-source components for security vulnerabilities.
Establish an open-source compliance policy.
Require commercial support for all open-source components.
Establishing an open-source compliance policy is the action that the organization must do next when looking to start a development project that will utilize open source components in its code for the first time. Open source components are software components, such as libraries, frameworks, or tools, that are licensed under open source licenses, which allow users to access, modify, and distribute the source code. Open source components can offer benefits such as cost savings, innovation, collaboration, or customization, but they can also pose risks such as security vulnerabilities, legal liabilities, or operational challenges. Establishing an open-source compliance policy involves defining and documenting the rules, guidelines, and procedures for the acquisition, use, and management of open source components in the development project, aligned with the legal and operational requirements that apply to them, such as license terms, attribution obligations, security controls, and quality assurance. This policy must be in place before development proceeds because it gives the team a consistent framework for vetting and tracking the components already used in the proof of concept, and it ensures that the legal and operational risks the team has identified are managed rather than merely accepted.
The security team plans on using automated account reconciliation in the corporate user access review process. Which of the following must be implemented for the BEST results with fewest errors when running the audit?
Removal of service accounts from review
Segregation of Duties (SoD)
Clear provisioning policies
Frequent audits
Clear provisioning policies are essential for the best results with fewest errors when running the audit using automated account reconciliation. Automated account reconciliation is a process that compares the user accounts and access rights in the system with the expected or authorized accounts and access rights, and identifies any discrepancies or anomalies. Clear provisioning policies define the rules and criteria for creating, modifying, deleting, and reviewing user accounts and access rights, and they provide a baseline for the automated account reconciliation process. The other options are not correct. Removal of service accounts from review may reduce the scope of the audit, but it may also introduce errors or risks if the service accounts are not properly managed or secured. Segregation of Duties (SoD) is a principle that prevents a single user from having conflicting or excessive access rights, but it does not ensure the accuracy or completeness of the automated account reconciliation process. Frequent audits may increase the frequency of the automated account reconciliation process, but they do not improve the quality or reliability of the process. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 8: Security Operations, page 1039. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 7: Security Operations, page 1040.
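To illustrate how clear provisioning policies feed an automated reconciliation, the following minimal Python sketch compares the accounts and entitlements recorded in a provisioning baseline against an export from a target application; the account names, entitlement labels, and data layout are illustrative assumptions, not output from any particular product.

    # Minimal reconciliation sketch: provisioning baseline vs. actual accounts (illustrative data)
    authorized = {                      # what the provisioning policy says should exist
        "alice": {"hr", "payroll"},
        "bob": {"hr"},
    }
    actual = {                          # what the audited application actually contains
        "alice": {"hr", "payroll"},
        "bob": {"hr", "finance"},
        "svc_backup": {"finance"},
    }

    orphan_accounts = set(actual) - set(authorized)          # accounts with no authorized owner on record
    excess_entitlements = {
        user: actual[user] - authorized.get(user, set())
        for user in actual
        if actual[user] - authorized.get(user, set())
    }

    print("Orphan accounts:", orphan_accounts)                # {'svc_backup'}
    print("Excess entitlements:", excess_entitlements)        # {'bob': {'finance'}, 'svc_backup': {'finance'}}

Without a clear baseline (the authorized mapping above), neither discrepancy could be flagged reliably, which is why clear provisioning policies matter more here than audit frequency or tooling.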
In the last 15 years a company has experienced three electrical failures. The cost associated with each failure is listed below.
Which of the following would be a reasonable annual loss expectation?
140,000
3,500
350,000
14,000
The reasonable annual loss expectation for the company is 3,500, which is calculated by dividing the total loss over 15 years by 15. The total loss over 15 years is 52,500, which is the sum of the costs associated with each failure, as shown in the image. The annual loss expectation is an estimate of the potential loss that the company may incur due to a specific threat or risk in a given year. It is calculated by multiplying the annualized rate of occurrence (ARO) of the threat or risk by the single loss expectancy (SLE) of the asset. The ARO is the frequency or probability of the threat or risk occurring in a year, and the SLE is the cost or impact of the threat or risk on the asset. In this case, the ARO is 0.2, which is the average number of electrical failures per year (3/15), and the SLE is 17,500, which is the average cost of each failure (52,500/3). Therefore, the annual loss expectation is 0.2 x 17,500 = 3,500. References: CISSP CBK, Fifth Edition, Chapter 2, page 133; 2024 Pass4itsure CISSP Dumps, Question 7.
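The arithmetic can be checked with a short Python calculation; the 52,500 total is taken from the scenario above, and the variable names are only illustrative.

    # Worked ALE calculation for the scenario above
    total_loss = 52_500              # sum of the three failure costs over 15 years
    failures = 3
    years = 15

    sle = total_loss / failures      # single loss expectancy: average cost per failure -> 17,500
    aro = failures / years           # annualized rate of occurrence -> 0.2
    ale = sle * aro                  # annual loss expectancy -> 3,500

    print(f"SLE={sle:,.0f} ARO={aro:.2f} ALE={ale:,.0f}")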
An organization is outsourcing its payroll system and is requesting to conduct a full audit on the third-party information technology (IT) systems. During the due diligence process, the third party provides a previous audit report on its IT systems.
Which of the following MUST be considered by the organization in order for the audit reports to be acceptable?
The audit assessment has been conducted by an independent assessor.
The audit reports have been signed by the third-party senior management.
The audit reports have been issued in the last six months.
The audit assessment has been conducted by an international audit firm.
The most important factor that the organization must consider in order for the audit reports to be acceptable is that the audit assessment has been conducted by an independent assessor. An independent assessor is a person or an entity that has no affiliation or interest with the third party or the organization, and that can perform the audit assessment objectively and impartially. An independent assessor can provide a credible and reliable evaluation of the third party’s information technology (IT) systems, and identify any risks, issues, or gaps that may affect the security, performance, or compliance of the outsourced payroll system. An independent assessor can also verify that the third party’s IT systems meet the organization’s requirements and expectations, and that the third party follows the best practices and standards for IT security and management. The audit reports being signed by the third-party senior management, being issued in the last six months, or being conducted by an international audit firm are not as critical as the audit assessment being conducted by an independent assessor, as they do not guarantee the quality, validity, or relevance of the audit reports, or they may not be applicable or feasible in all cases. References:
Which of the following BEST describes the purpose of the reference monitor when defining access control to enforce the security model?
Quality design principles to ensure quality by design
Policies to validate organization rules
Cyber hygiene to ensure organizations can keep systems healthy
Strong operational security to keep unit members safe
The purpose of the reference monitor when defining access control to enforce the security model is to provide strong operational security to keep unit members safe. The reference monitor is an abstract concept that represents the mechanism that mediates all access requests between subjects and objects, and enforces the security policy defined by the security model. The reference monitor should be tamper-proof, always invoked, and verifiable. The reference monitor should ensure that only authorized and legitimate access requests are granted, and that any unauthorized or malicious access requests are denied or logged. The reference monitor should also protect the confidentiality, integrity, and availability of the system and the data. Quality design principles to ensure quality by design, policies to validate organization rules, and cyber hygiene to ensure organizations can keep systems healthy are not the purpose of the reference monitor, but rather the goals or the outcomes of the security program. References: CISSP CBK Reference, 5th Edition, Chapter 5, page 265; CISSP All-in-One Exam Guide, 8th Edition, Chapter 5, page 237
A software engineer uses automated tools to review application code and search for application flaws, back doors, or other malicious code. Which of the following is the
FIRST Software Development Life Cycle (SDLC) phase where this takes place?
Design
Test
Development
Deployment
The development phase is the first Software Development Life Cycle (SDLC) phase where a software engineer uses automated tools to review application code and search for application flaws, back doors, or other malicious code. The development phase is the phase where the software engineer writes, compiles, and tests the application code, based on the design specifications and requirements. The development phase is also the phase where the software engineer performs code review and analysis, using automated tools, such as static or dynamic analysis tools, to identify and eliminate any errors, vulnerabilities, or malicious code in the application code. Code review and analysis is an important security activity in the development phase, as it can help to improve the quality, functionality, and security of the application, and to prevent or mitigate any potential attacks or exploits on the application12. References: CISSP CBK, Fifth Edition, Chapter 3, page 217; CISSP Practice Exam – FREE 20 Questions and Answers, Question 11.
How is Remote Authentication Dial-In User Service (RADIUS) authentication accomplished?
It uses clear text and firewall rules.
It relies on Virtual Private Networks (VPN).
It uses clear text and shared secret keys.
It relies on asymmetric encryption keys.
Remote Authentication Dial-In User Service (RADIUS) authentication is accomplished by using clear text and shared secret keys. RADIUS is a protocol that provides centralized authentication, authorization, and accounting for remote network access. RADIUS uses User Datagram Protocol (UDP) to communicate between the client and the server. RADIUS authentication uses clear text to send the username and password of the user, but it also uses a shared secret key to encrypt a message authentication code (MAC) that is appended to the packet. The MAC is used to verify the integrity and authenticity of the packet. The shared secret key is only known by the client and the server, and it is never transmitted over the network. The other options are not correct. RADIUS does not use firewall rules, Virtual Private Networks (VPN), or asymmetric encryption keys for authentication. References: Official (ISC)2 CISSP CBK Reference, Fifth Edition, Domain 4: Communication and Network Security, p. 503-504; CISSP All-in-One Exam Guide, Eighth Edition, Chapter 4: Communication and Network Security, p. 228-229.
Which of the following is the BEST method to validate secure coding techniques against injection and overflow attacks?
Scheduled team review of coding style and techniques for vulnerability patterns
Using automated programs to test for the latest known vulnerability patterns
The regular use of production code routines from similar applications already in use
Ensure code editing tools are updated against known vulnerability patterns
The best method to validate secure coding techniques against injection and overflow attacks is to use automated programs to test for the latest known vulnerability patterns. Secure coding techniques are the practices and methods that aim to write software that is free from vulnerabilities or defects that could compromise the security, functionality, or performance of the software or the system. Secure coding techniques include input validation, output encoding, error handling, encryption, or logging. Injection and overflow attacks are two common types of attacks that exploit the vulnerabilities or defects in the software code, by inserting or injecting malicious or unexpected data or commands into the software, or by overflowing or exceeding the memory or buffer allocated for the software. Injection and overflow attacks can result in unauthorized access, data loss or corruption, denial of service, or remote code execution. To validate secure coding techniques against injection and overflow attacks, automated programs can be used to test for the latest known vulnerability patterns, which are the patterns or signatures that indicate the presence or occurrence of the vulnerabilities or attacks. Automated programs can provide the following advantages over other methods, such as manual review, code reuse, or code editing tools:
Which layer of the Open systems Interconnection (OSI) model is being targeted in the event of a Synchronization (SYN) flood attack?
Session
Transport
Network
Presentation
A Synchronization (SYN) flood attack is a type of denial-of-service (DoS) attack that exploits the three-way handshake mechanism of the Transmission Control Protocol (TCP), which operates at the transport layer of the Open Systems Interconnection (OSI) model. In a SYN flood attack, the attacker sends a large number of SYN packets to the target server, but does not respond to the SYN-ACK packets sent by the server. This causes the server to exhaust its resources and become unable to accept legitimate requests. The session, network, and presentation layers of the OSI model are not directly involved in this attack. References:
CISSP Official (ISC)2 Practice Tests, 3rd Edition, Domain 4: Communication and Network Security, Question 4.2.1
CISSP CBK, 5th Edition, Chapter 4: Communication and Network Security, Section: Secure Network Architecture and Design
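As a rough illustration of why a SYN flood exhausts transport-layer state, the sketch below counts half-open connections from a toy packet trace; the addresses, flags, and threshold are made-up values for illustration, not output from any real capture tool.

    from collections import Counter

    # Toy trace of (source address, TCP flag of interest) tuples -- illustrative only
    packets = [
        ("203.0.113.5", "SYN"), ("203.0.113.5", "SYN"), ("203.0.113.5", "SYN"),
        ("198.51.100.7", "SYN"), ("198.51.100.7", "ACK"),   # legitimate handshake completes
    ]

    half_open = Counter()
    for src, flag in packets:
        if flag == "SYN":
            half_open[src] += 1          # server allocates state and waits for the final ACK
        elif flag == "ACK" and half_open[src] > 0:
            half_open[src] -= 1          # handshake completed, state released

    suspects = {ip: n for ip, n in half_open.items() if n >= 3}
    print(suspects)   # {'203.0.113.5': 3} -- many lingering half-open entries suggest a SYN flood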
Which of the following is the FIRST step during digital identity provisioning?
Authorizing the entity for resource access
Synchronizing directories
Issuing an initial random password
Creating the entity record with the correct attributes
The first step during digital identity provisioning is creating the entity record with the correct attributes. Digital identity provisioning is a process that involves the creation, management, or deletion of the digital identities, accounts, or credentials, of the entities, such as users, devices, or processes, that need to access or use the systems, networks, or resources, of an organization, to ensure the security, efficiency, or compliance of the access or use of the systems, networks, or resources, by the entities. Digital identity provisioning can follow various methods, models, or frameworks, such as the Identity Management Life Cycle (IMLC), the Identity and Access Management (IAM), or the Identity Governance and Administration (IGA), that can define, structure, or guide the digital identity provisioning process, by using various phases, stages, or steps, such as initialization, issuance, maintenance, or revocation. The first step during digital identity provisioning is creating the entity record with the correct attributes, which means to establish, register, or store the information, data, or details, of the entity, such as the name, role, or privilege of the entity, that are required or sufficient to identify, authenticate, or authorize the entity, to access or use the systems, networks, or resources, of the organization. Creating the entity record with the correct attributes can help to ensure the validity, accuracy, or consistency of the digital identity, account, or credential, of the entity, as well as to enable or facilitate the subsequent steps or actions, such as issuing, updating, or deleting the digital identity, account, or credential, of the entity. Authorizing the entity for resource access, synchronizing directories, or issuing an initial random password are not the first steps during digital identity provisioning, as they are either more related to the other phases, stages, or steps, such as issuance, maintenance, or revocation, that are performed or conducted after the creation of the entity record with the correct attributes, during the digital identity provisioning process, or to the other activities, tasks, or functions, such as granting, aligning, or generating the access, permissions, or credentials, of the entity, that are performed or conducted during the digital identity provisioning process, rather than to the creation of the entity record with the correct attributes, during the digital identity provisioning process. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5: Identity and Access Management, page 287; CISSP Official (ISC)2 Practice Tests, Third Edition, Domain 5: Identity and Access Management, Question 5.15, page 225.
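A minimal sketch of the "create the entity record with the correct attributes" step might look like the following; the attribute names and defaults are assumptions for illustration, not a prescribed schema.

    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class EntityRecord:
        # Core identity attributes captured at provisioning time (illustrative set)
        entity_id: str
        display_name: str
        role: str
        department: str
        created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
        enabled: bool = False    # access is authorized and enabled in later steps, not at creation

    record = EntityRecord(entity_id="u10042", display_name="Jane Doe", role="analyst", department="finance")
    print(record)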
How does security in a distributed file system using mutual authentication differ from file security in a multi-user host?
Access control can rely on the Operating System (OS), but eavesdropping is
Access control cannot rely on the Operating System (OS), and eavesdropping
Access control can rely on the Operating System (OS), and eavesdropping is
Access control cannot rely on the Operating System (OS), and eavesdropping
Security in a distributed file system using mutual authentication differs from file security in a multi-user host in that access control cannot rely on the Operating System (OS), and eavesdropping is possible. A distributed file system is a system that allows users to access files stored on remote servers over a network. Mutual authentication is a process where both the client and the server verify each other’s identity before establishing a connection. In a distributed file system, access control cannot rely on the OS, because the OS may not have the same security policies or mechanisms as the remote server. Therefore, access control must be implemented at the application layer, using protocols such as Kerberos or SSL/TLS. Eavesdropping is also possible in a distributed file system, because the network traffic may be intercepted or modified by malicious parties. Therefore, encryption and integrity checks must be used to protect the data in transit. A multi-user host is a system that allows multiple users to access files stored on a local server. In a multi-user host, access control can rely on the OS, because the OS can enforce security policies and mechanisms such as permissions, groups, and roles. Eavesdropping is less likely in a multi-user host, because the network traffic is confined to the local server. References: Official (ISC)2 CISSP CBK Reference, Fifth Edition, Domain 3: Security Architecture and Engineering, p. 373-374; CISSP All-in-One Exam Guide, Eighth Edition, Chapter 3: Security Architecture and Engineering, p. 149-150.
As a design principle, which one of the following actors is responsible for identifying and approving data security requirements in a cloud ecosystem?
Cloud broker
Cloud provider
Cloud consumer
Cloud auditor
As a design principle, the cloud consumer is the actor that is responsible for identifying and approving data security requirements in a cloud ecosystem. A cloud ecosystem is a complex system of interrelated and interdependent components that provide cloud services and solutions to the users and customers. A cloud ecosystem consists of different actors, such as cloud provider, cloud consumer, cloud broker, or cloud auditor, that have different roles and responsibilities in the cloud ecosystem. The cloud consumer is the actor that uses the cloud services or solutions provided by the cloud provider or the cloud broker. The cloud consumer is responsible for identifying and approving data security requirements in a cloud ecosystem, because:
What requirement MUST be met during internal security audits to ensure that all information provided is expressed as an objective assessment without risk of retaliation?
The auditor must be independent and report directly to the management.
The auditor must utilize automated tools to back their findings.
The auditor must work closely with both the information Technology (IT) and security sections of an organization.
The auditor must perform manual reviews of systems and processes.
The requirement that must be met during internal security audits to ensure that all information provided is expressed as an objective assessment without risk of retaliation is that the auditor must be independent and report directly to the management. An internal security audit is a process that involves the examination or evaluation of the security policies, procedures, or practices of an organization, by an internal auditor or a team of internal auditors, to identify or detect any security gaps, weaknesses, or issues, as well as to provide or recommend any security improvements, enhancements, or solutions. An internal security audit can help to ensure the security, compliance, or performance of the organization, as well as to protect the organization from various security threats or risks, such as unauthorized access, data leakage, or malware infection. However, an internal security audit can also face various challenges, difficulties, or biases, such as conflicts of interest, lack of cooperation, or resistance to change, that may affect the quality, accuracy, or reliability of the audit results or findings, as well as the implementation, acceptance, or effectiveness of the audit recommendations or suggestions. Therefore, an internal security audit should be conducted with integrity, objectivity, or professionalism, by following various security standards, guidelines, or best practices. The requirement that must be met during internal security audits to ensure that all information provided is expressed as an objective assessment without risk of retaliation is that the auditor must be independent and report directly to the management. The auditor must be independent, which means that the auditor must not have any personal, professional, or financial relationship or interest with the auditee or the subject of the audit, that may compromise or influence the auditor’s judgment, opinion, or decision. The auditor must also report directly to the management, which means that the auditor must communicate or deliver the audit results or findings to the highest level of authority or responsibility in the organization, such as the board of directors, the executive committee, or the senior management, without any interference, manipulation, or censorship from any other party or stakeholder. The auditor must be independent and report directly to the management, to ensure that all information provided is expressed as an objective assessment, which means that the information is based on facts, evidence, or data, rather than on opinions, assumptions, or emotions, and without risk of retaliation, which means that the information is provided without fear, pressure, or intimidation from any party or stakeholder, that may harm, punish, or discourage the auditor for providing the information. 
The auditor must utilize automated tools to back their findings, the auditor must work closely with both the information technology (IT) and security sections of an organization, or the auditor must perform manual reviews of systems and processes are not the requirements that must be met during internal security audits to ensure that all information provided is expressed as an objective assessment without risk of retaliation, as they are either more related to the methods, techniques, or tools that are used or applied by the auditor during the audit process, rather than the principles, standards, or practices that are followed or adhered by the auditor during the audit process, or to the relationships, interactions, or collaborations that are established or maintained by the auditor with the other parties or stakeholders during the audit process, rather than the independence, objectivity, or professionalism that are demonstrated or exhibited by the auditor during the audit process. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 7: Security Operations, page 484; CISSP Official (ISC)2 Practice Tests, Third Edition, Domain 7: Security Operations, Question 7.13, page 276.
Which of the following is a peer entity authentication method for Point-to-Point Protocol (PPP)?
Challenge Handshake Authentication Protocol (CHAP)
Message Authentication Code (MAC)
Transport Layer Security (TLS) handshake protocol
Challenge-response authentication mechanism
Point-to-Point Protocol (PPP) is a network protocol that provides a simple and secure way of establishing a direct connection between two hosts over a serial link, such as a phone line or a cable. PPP supports various methods of peer entity authentication, which is the process of verifying the identity and credential of the hosts involved in the connection. One of the peer entity authentication methods supported by PPP is Challenge Handshake Authentication Protocol (CHAP). CHAP is a challenge-response authentication mechanism that uses a shared secret, such as a password, and a one-way hash function, such as MD5, to authenticate the hosts. CHAP periodically sends a random challenge to the host, and the host responds with a hash of the challenge and the secret. The challenger then compares the hash with its own calculation and accepts or rejects the host. CHAP provides a secure and efficient way of authenticating the hosts without sending the secret over the network. Message Authentication Code (MAC), Transport Layer Security (TLS) handshake protocol, or challenge-response authentication mechanism are not peer entity authentication methods for PPP, as they are more related to data integrity, secure communication, or general authentication concepts. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 14: Access Control, page 849; CISSP Official (ISC)2 Practice Tests, Third Edition, Domain 5: Identity and Access Management, Question 5.10, page 221.
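The CHAP exchange described above can be sketched in a few lines of Python: per RFC 1994 the response is an MD5 hash over the identifier octet, the shared secret, and the challenge. The secret value and identifier below are illustrative.

    import hashlib
    import hmac
    import os

    def chap_response(identifier: int, secret: bytes, challenge: bytes) -> bytes:
        # RFC 1994: response = MD5(identifier octet || shared secret || challenge)
        return hashlib.md5(bytes([identifier]) + secret + challenge).digest()

    secret = b"shared-secret"            # known to both peers, never sent on the wire
    challenge = os.urandom(16)           # random challenge issued by the authenticator
    identifier = 1

    peer_answer = chap_response(identifier, secret, challenge)    # computed by the peer being authenticated
    expected = chap_response(identifier, secret, challenge)       # recomputed locally by the authenticator
    print("authenticated" if hmac.compare_digest(peer_answer, expected) else "rejected")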
Computer forensics require which of the following are MAIN steps?
Announce the incident to responsible sections, analyze the data, and assimilate the data for correlation
Take action to contain the damage, announce the incident to responsible sections, and analyze the data
Acquire the data without altering, authenticate the recovered data, and analyze the data
Access the data before destruction, assimilate the data for correlation, and take action to contain the damage
The main steps that computer forensics requires are to acquire the data without altering, authenticate the recovered data, and analyze the data. Computer forensics is the process of collecting, preserving, and examining digital evidence from computers or other electronic devices, such as smartphones, tablets, or cameras. Computer forensics follows a standard methodology that consists of the following steps:
The other options are not the main steps that computer forensics requires. Announce the incident to responsible sections, take action to contain the damage, and assimilate the data for correlation are steps that are more related to incident response or security operations, not computer forensics. Access the data before destruction is not a step that computer forensics requires, as it implies that the data is already compromised or lost, which may prevent the acquisition or the authentication of the data. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 8: Security Operations, page 1070. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 7: Security Operations, page 1071.
Which of the following types of hosts should be operating in the demilitarized zone (DMZ)?
Hosts intended to provide limited access to public resources
Database servers that can provide useful information to the public
Hosts that store unimportant data such as demographical information
File servers containing organizational data
A demilitarized zone (DMZ) is a network segment that is separated from both the internal and external networks by firewalls. The purpose of a DMZ is to provide limited access to public resources, such as web servers, email servers, or DNS servers, while protecting the internal network from unauthorized access. A DMZ should not contain database servers, file servers, or hosts that store sensitive or unimportant data, as these could be compromised by attackers who gain access to the DMZ. References: CISSP Official Study Guide, 9th Edition, page 414; CISSP All-in-One Exam Guide, 8th Edition, page 1130
The signer verifies that the software being loaded is the software originated by the signer.
The vendor certifies the software being loaded is free of malicious code and that it was originated by the signer.
The signer verifies that the software being loaded is free of malicious code.
Both vendor and the signer certify the software being loaded is free of malicious code and it was originated by the signer.
The statement that best describes the purpose of a digital signature is that the vendor certifies the software being loaded is free of malicious code and that it was originated by the signer. A digital signature is a type of electronic signature that uses a cryptographic technique to verify the authenticity, integrity, and non-repudiation of a digital document, such as a software, message, or transaction. A digital signature is created by applying a hash function and a private key to the digital document, and it is verified by using a public key and a certificate. A digital signature can help to certify that the software being loaded is free of malicious code and that it was originated by the signer, by ensuring that the software has not been altered or tampered with since it was signed, and that the signer is the legitimate and authorized source of the software. The signer verifies that the software being loaded is the software originated by the signer, the signer verifies that the software being loaded is free of malicious code, or both vendor and the signer certify the software being loaded is free of malicious code and it was originated by the signer are not the statements that best describe the purpose of a digital signature, as they are either inaccurate, incomplete, or redundant statements about the digital signature. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5: Cryptography and Symmetric Key Algorithms, page 259; CISSP Official (ISC)2 Practice Tests, Third Edition, Domain 3: Security Engineering, Question 3.10, page 136.
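A minimal sign-and-verify sketch is shown below, assuming the third-party Python "cryptography" package is available; the key size, padding choice, and sample payload are illustrative rather than prescriptive.

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding, rsa

    software = b"example package contents"

    # Signer: create a key pair and sign the software
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH)
    signature = private_key.sign(software, pss, hashes.SHA256())

    # Verifier: check integrity and origin using the signer's public key
    try:
        private_key.public_key().verify(signature, software, pss, hashes.SHA256())
        print("signature valid: software unmodified and attributable to the signer")
    except InvalidSignature:
        print("signature invalid: software altered or not from the signer")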
An organization is planning an IT audit of its Software as a Service (SaaS) application to demonstrate to external parties that the security controls around availability are suitably designed. The audit report must also cover a certain period of time to show the operational effectiveness of the controls. Which Service Organization Control (SOC) report would BEST fit their needs?
SOC 1 Type 1
SOC 1 Type 2
SOC 2 Type 1
SOC 2 Type 2
A SOC 2 Type 2 report would best fit the needs of the organization that wants to have an IT audit of its SaaS application to demonstrate the security controls around availability. A SOC 2 Type 2 report provides information about the design and the operating effectiveness of the controls at a service organization relevant to the availability trust service category, as well as the other trust service categories such as security, processing integrity, confidentiality, and privacy. A SOC 2 Type 2 report covers a specified period of time, usually between six and twelve months, and includes the description of the tests of controls performed by the auditor and their results. A SOC 2 Type 2 report is intended for the restricted use of the user entities and the other interested parties that need to understand the security controls of the service organization.
The other options are not the best fit for the needs of the organization. A SOC 1 report is for organizations whose internal security controls can impact a customer’s financial statements, and it is based on the SSAE 18 standard. A SOC 1 report does not cover the availability trust service category, but rather the control objectives defined by the service organization. A SOC 1 report can be either Type 1 or Type 2, depending on whether it evaluates the design of the controls at a point in time or the operating effectiveness of the controls over a period of time. A SOC 1 report is intended for the restricted use of the user entities and the other interested parties that need to understand the internal control over financial reporting of the service organization. A SOC 2 Type 1 report is similar to a SOC 2 Type 2 report, except that it evaluates the design of the controls at a point in time, and does not include the tests of controls and the results. A SOC 2 Type 1 report may not provide sufficient assurance about the operational effectiveness of the controls over a period of time. A SOC 3 report is a short form, general use report that gives users and interested parties a report about controls at a service organization related to the trust service categories. A SOC 3 report does not include the description of tests of controls and results, which limits its usability and detail.
References: SOC Report Types: Type 1 vs Type 2 SOC Reports/Audits, SOC 1 vs SOC 2 vs SOC 3: What’s the Difference? | Secureframe, A Comprehensive Guide to SOC Reports - SC&H Group, Service Organization Control (SOC) Reports Explained - Cherry Bekaert, Service Organization Controls (SOC) Reports | Rapid7
What term is commonly used to describe hardware and software assets that are stored in a configuration management database (CMDB)?
Configuration element
Asset register
Ledger item
Configuration item
A configuration item is a term commonly used to describe hardware and software assets that are stored in a configuration management database (CMDB). A configuration item is an identifiable and manageable component of a system or service that has a defined lifecycle and configuration. A CMDB is a repository that contains information about the configuration items and their relationships. A configuration element, an asset register, and a ledger item are not terms that are used to describe hardware and software assets in a CMDB. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 19: Security Operations, page 1836.
Which change management role is responsible for the overall success of the project and supporting the change throughout the organization?
Change driver
Change implementer
Program sponsor
Project manager
The change management role that is responsible for the overall success of the project and supporting the change throughout the organization is the program sponsor. The program sponsor is the senior executive or stakeholder who provides the vision, direction, and support for the change management project, and who ensures the alignment and integration of the change management project with the business goals and strategy of the organization. The program sponsor is responsible for the overall success of the project and supporting the change throughout the organization, as they can provide the leadership, guidance, and resources for the change management project, and communicate and advocate the benefits and value of the change management project to the other stakeholders, such as the management, the employees, or the customers34. References: CISSP CBK, Fifth Edition, Chapter 6, page 554; 2024 Pass4itsure CISSP Dumps, Question 20.
Which is the MOST critical aspect of computer-generated evidence?
Objectivity
Integrity
Timeliness
Relevancy
Integrity is the most critical aspect of computer-generated evidence. Integrity means that the evidence has not been altered, tampered, or corrupted in any way, and that it reflects the original state of the data or system. Integrity can be ensured by using cryptographic techniques, such as hashing, digital signatures, and encryption, as well as by following proper chain of custody and documentation procedures. Integrity is essential for the admissibility and reliability of the evidence in a court of law or an investigation. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 13: Legal, Regulations, Investigations, and Compliance, page 685. CISSP Practice Exam – FREE 20 Questions and Answers, Question 16.
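In practice, integrity is commonly demonstrated by hashing the evidence at acquisition and re-hashing it whenever it is examined; the sketch below uses SHA-256 from the Python standard library, and the file name is a placeholder.

    import hashlib

    def sha256_of_file(path: str, chunk_size: int = 1 << 20) -> str:
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                digest.update(chunk)
        return digest.hexdigest()

    # Recorded in the chain-of-custody log at acquisition, then re-checked before analysis
    acquisition_hash = sha256_of_file("evidence.img")     # "evidence.img" is an illustrative name
    recheck_hash = sha256_of_file("evidence.img")
    print("integrity preserved" if acquisition_hash == recheck_hash else "evidence altered")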
What is the benefit of using Network Admission Control (NAC)?
Operating system (OS) versions can be validated prior to allowing network access.
NAC supports validation of the endpoint's security posture prior to allowing the session to go into an authorized state.
NAC can require the use of certificates, passwords, or a combination of both before allowing network admission.
NAC only supports Windows operating systems (OS).
Network Admission Control (NAC) is a security technique that verifies the identity and compliance of the endpoints (devices or users) that attempt to access a network. NAC supports validation of the endpoint’s security posture prior to allowing the session to go into an authorized state, which means that NAC checks whether the endpoint meets the predefined security policies and requirements, such as having the latest patches, antivirus software, firewall settings, or encryption standards, before granting network access. NAC can also enforce remediation actions, such as updating, quarantining, or blocking the endpoint, if it does not comply with the security policies and requirements. NAC can prevent unauthorized, infected, or vulnerable endpoints from compromising the network security and performance. References: Official (ISC)2 CISSP CBK Reference, Fifth Edition, Domain 4: Communication and Network Security, pp. 685-686; CISSP All-in-One Exam Guide, Eighth Edition, Chapter 8: Communication and Network Security, pp. 785-786.
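Conceptually, the posture validation behaves like the sketch below: a policy baseline is compared against attributes reported by the endpoint before the session is authorized. The attribute names, values, and thresholds are made up for illustration and do not correspond to any particular NAC product.

    # Hypothetical NAC policy and endpoint posture reports (illustrative attributes)
    policy = {"min_patch_level": 202406, "antivirus_running": True, "disk_encrypted": True}

    def admission_decision(endpoint: dict) -> str:
        if endpoint.get("patch_level", 0) < policy["min_patch_level"]:
            return "quarantine: missing patches"
        if not endpoint.get("antivirus_running", False):
            return "quarantine: antivirus not running"
        if not endpoint.get("disk_encrypted", False):
            return "quarantine: disk not encrypted"
        return "authorized"

    print(admission_decision({"patch_level": 202407, "antivirus_running": True, "disk_encrypted": True}))
    print(admission_decision({"patch_level": 202301, "antivirus_running": True, "disk_encrypted": True}))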
Which security architecture strategy could be applied to secure an operating system (OS) baseline for deployment within the corporate enterprise?
Principle of Least Privilege
Principle of Separation of Duty
Principle of Secure Default
Principle of Fail Secure
The security architecture strategy that could be applied to secure an operating system (OS) baseline for deployment within the corporate enterprise is the principle of secure default. An OS baseline is a set of minimum security standards or configurations that are applied to an OS before it is deployed or installed on a device or a system within an organization. An OS baseline can include settings such as passwords, permissions, firewalls, patches, or encryption. An OS baseline can enhance the security, performance, and functionality of the OS, and can prevent or reduce the risk of vulnerabilities, attacks, or errors. A security architecture strategy is a method or a technique that guides the design, development, implementation, or evaluation of the security aspects or features of a system or a product, such as an OS. A security architecture strategy can improve the security, reliability, or usability of the system or the product, and can align with the security objectives or requirements of the organization. The principle of secure default is a security architecture strategy that states that the system or the product should have the most secure settings or options enabled by default, and that the user or the administrator should have the option to change or disable them if needed. The principle of secure default can be applied to secure an OS baseline for deployment within the corporate enterprise, as it can ensure that the OS has the highest level of security and protection from the start, and that the user or the administrator can customize or adjust the OS settings or options according to their preferences or needs. The principle of least privilege, the principle of separation of duty, and the principle of fail secure are not security architecture strategies that could be applied to secure an OS baseline for deployment within the corporate enterprise, as they are either not related to the default settings or options of the OS, or they have different purposes or functions than securing the OS baseline. References:
A company wants to implement two-factor authentication (2FA) to protect their computers from unauthorized users. Which solution provides the MOST secure means of authentication and meets the criteria they have set?
Username and personal identification number (PIN)
Fingerprint and retinal scanners
Short Message Services (SMS) and smartphone authenticator
Hardware token and password
Two-factor authentication (2FA) is a method of authentication that requires two independent factors to verify the identity of a user. The factors are usually classified into three categories: something you know (such as a password or a PIN), something you have (such as a hardware token or a smart card), and something you are (such as a fingerprint or a retinal scan). A hardware token and a password provide the most secure means of authentication among the given options, as they belong to different categories and are less susceptible to theft, duplication, or compromise. A username and a PIN are both something you know, and thus do not constitute 2FA. A fingerprint and a retinal scanner are both something you are, and thus do not constitute 2FA. A Short Message Service (SMS) and a smartphone authenticator are both something you have, and thus do not constitute 2FA. Moreover, SMS is not a secure channel for transmitting authentication codes, as it can be intercepted or spoofed by attackers.
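Many hardware tokens and authenticator apps implement the time-based one-time password algorithm; a compact standard-library sketch of RFC 6238 code generation and a combined two-factor check is shown below. The base32 seed and the assumption that the password factor has already been verified are illustrative.

    import base64
    import hashlib
    import hmac
    import struct
    import time

    def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
        # RFC 6238/4226: HMAC-SHA1 over the time-step counter, then dynamic truncation
        key = base64.b32decode(secret_b32, casefold=True)
        counter = struct.pack(">Q", int(time.time() // interval))
        mac = hmac.new(key, counter, hashlib.sha1).digest()
        offset = mac[-1] & 0x0F
        code = (int.from_bytes(mac[offset:offset + 4], "big") & 0x7FFFFFFF) % (10 ** digits)
        return str(code).zfill(digits)

    secret = "JBSWY3DPEHPK3PXP"          # illustrative base32 seed provisioned into the token
    submitted_code = totp(secret)        # in reality this is typed in by the user from the token display
    password_factor_ok = True            # outcome of the separate "something you know" check
    token_factor_ok = hmac.compare_digest(submitted_code, totp(secret))
    print("access granted" if password_factor_ok and token_factor_ok else "access denied")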
What is the BEST method if an investigator wishes to analyze a hard drive which may be used as evidence?
Leave the hard drive in place and use only verified and authenticated Operating Systems (OS) utilities ...
Log into the system and immediately make a copy of all relevant files to a Write Once, Read Many ...
Remove the hard drive from the system and make a copy of the hard drive's contents using imaging hardware.
Use a separate bootable device to make a copy of the hard drive before booting the system and analyzing the hard drive.
The best method if an investigator wishes to analyze a hard drive which may be used as evidence is to remove the hard drive from the system and make a copy of the hard drive’s contents using imaging hardware. Imaging hardware is a device that can create a bit-by-bit copy of the hard drive’s contents, including the deleted or hidden files, without altering or damaging the original hard drive. Imaging hardware can also verify the integrity of the copy by generating and comparing the hash values of the original and the copy. The copy can then be used for analysis, while the original can be preserved and stored in a secure location. This method can help to ensure the authenticity, reliability, and admissibility of the hard drive as evidence, as well as to prevent any potential tampering or contamination of the hard drive. Leaving the hard drive in place and using only verified and authenticated Operating Systems (OS) utilities, logging into the system and immediately making a copy of all relevant files to a Write Once, Read Many, or using a separate bootable device to make a copy of the hard drive before booting the system and analyzing the hard drive are not the best methods if an investigator wishes to analyze a hard drive which may be used as evidence, as they are either risky, incomplete, or inefficient methods that may compromise the integrity, validity, or quality of the hard drive as evidence. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 19: Digital Forensics, page 1049; CISSP Official (ISC)2 Practice Tests, Third Edition, Domain 8: Software Development Security, Question 8.15, page 306.
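A simplified "image, then verify" sketch is shown below: the source is read once, copied bit for bit, and the hash of the finished image is compared with the hash of the source stream. Dedicated imaging hardware with a write blocker is what the answer calls for; this sketch only illustrates the copy-and-verify idea, and the paths are placeholders.

    import hashlib

    def image_and_verify(source_path: str, image_path: str, chunk_size: int = 1 << 20) -> str:
        src_hash = hashlib.sha256()
        with open(source_path, "rb") as src, open(image_path, "wb") as dst:
            for chunk in iter(lambda: src.read(chunk_size), b""):
                src_hash.update(chunk)       # hash the source stream while copying it
                dst.write(chunk)
        img_hash = hashlib.sha256()
        with open(image_path, "rb") as img:  # re-read the finished image and re-hash it
            for chunk in iter(lambda: img.read(chunk_size), b""):
                img_hash.update(chunk)
        if src_hash.hexdigest() != img_hash.hexdigest():
            raise ValueError("image does not match the source stream")
        return src_hash.hexdigest()

    # Example call (placeholder paths): image_and_verify("/dev/sdb", "suspect_drive.img")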
A large organization’s human resources and security teams are planning on implementing technology to eliminate manual user access reviews and improve compliance. Which of the following options is MOST likely to resolve the issues associated with user access?
Implement a role-based access control (RBAC) system.
Implement identity and access management (IAM) platform.
Implement a Privileged Access Management (PAM) system.
Implement a single sign-on (SSO) platform.
Implementing an identity and access management (IAM) platform is the option that is most likely to resolve the issues associated with user access. IAM is a framework that defines and manages the identities and access rights of the users and entities in an organization, such as the employees, contractors, customers, partners, or devices. IAM can help to eliminate manual user access reviews and improve compliance by providing features and functions such as:
A financial services organization has employed a security consultant to review processes used by employees across various teams. The consultant interviewed a member of
the application development practice and found gaps in their threat model. Which of the following correctly represents a trigger for when a threat model should be revised?
A new data repository is added.
After operating system (OS) patches are applied
After a modification to the firewall rule policy
A new developer is hired into the team.
A threat model is a structured representation of the potential threats and vulnerabilities that may affect an application or system. A threat model should be revised whenever there is a significant change in the design, architecture, functionality, or environment of the application or system that may introduce new threats or vulnerabilities or alter the existing ones. A new data repository is an example of such a change, as it may affect the confidentiality, integrity, or availability of the data stored or processed by the application or system. Therefore, a new data repository is a trigger for when a threat model should be revised. The other options are not triggers for revising a threat model, as they do not affect the application or system directly or significantly. References: Official (ISC)2 CISSP CBK Reference, Fifth Edition, Domain 8: Software Development Security, pp. 1383-1384; CISSP All-in-One Exam Guide, Eighth Edition, Chapter 21: Software Development Security, pp. 2017-2018.
A client has reviewed a vulnerability assessment report and has stated it is inaccurate. The client states that the vulnerabilities listed are not valid because the host’s Operating System (OS) was not properly detected.
Where in the vulnerability assessment process did the error MOST likely occur?
Detection
Enumeration
Reporting
Discovery
Detection is the stage in the vulnerability assessment process where the error most likely occurred. Detection is the process of identifying the vulnerabilities that exist on the target system or network, using various tools and techniques, such as scanners, sniffers, or exploit frameworks. Detection relies on the accurate identification of the host’s operating system, as different operating systems may have different vulnerabilities. If the host’s operating system was not properly detected, the detection process may produce false positives or false negatives, resulting in an inaccurate vulnerability assessment report. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 6: Security Assessment and Testing, page 284; [Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 6: Security Assessment and Testing, page 410]
Which of the (ISC)2 Code of Ethics canons is MOST reflected when preserving the value of systems, applications, and entrusted information while avoiding conflicts of interest?
Act honorably, honestly, justly, responsibly, and legally.
Protect society, the commonwealth, and the infrastructure.
Provide diligent and competent service to principals.
Advance and protect the profession.
The (ISC)2 Code of Ethics is a set of principles and guidelines that govern the professional and ethical conduct of (ISC)2 certified members and associates. The Code of Ethics consists of four mandatory canons, which are: Protect society, the common good, necessary public trust and confidence, and the infrastructure. Act honorably, honestly, justly, responsibly, and legally. Provide diligent and competent service to principals. Advance and protect the profession. The canon that is most reflected when preserving the value of systems, applications, and entrusted information while avoiding conflicts of interest is the second one: act honorably, honestly, justly, responsibly, and legally. This canon requires the (ISC)2 certified members and associates to uphold the highest standards of integrity, fairness, responsibility, and lawfulness in their professional activities. This includes preserving the value of the systems, applications, and entrusted information that they work with, and avoiding any conflicts of interest that may compromise their objectivity, impartiality, or loyalty. The other canons are not as directly related to the scenario as the second one, although they may also have some relevance. The first canon: protect society, the common good, necessary public trust and confidence, and the infrastructure, requires the (ISC)2 certified members and associates to safeguard the public interest, the common welfare, and the critical infrastructure from harm or misuse. This includes protecting the confidentiality, integrity, and availability of the systems, applications, and entrusted information that they work with, and reporting any incidents or breaches that may affect them. The third canon: provide diligent and competent service to principals, requires the (ISC)2 certified members and associates to serve their clients, employers, or stakeholders with diligence and competence. This includes delivering quality work, meeting the expectations and requirements, and respecting the rights and interests of the principals. The fourth canon: advance and protect the profession, requires the (ISC)2 certified members and associates to promote and enhance the information security profession. This includes maintaining and improving their knowledge and skills, sharing their expertise and experience, and adhering to the Code of Ethics and the professional standards. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1: Security and Risk Management, p. 24-25. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Domain 1: Security and Risk Management, p. 19-20.
Match the name of access control model with its associated restriction.
Drag each access control model to its appropriate restriction access on the right.
An international medical organization with headquarters in the United States (US) and branches in France
wants to test a drug in both countries. What is the organization allowed to do with the test subject’s data?
Aggregate it into one database in the US
Process it in the US, but store the information in France
Share it with a third party
Anonymize it and process it in the US
Anonymizing the test subject’s data means removing or masking any personally identifiable information (PII) that could be used to identify or trace the individual. This can help to protect the privacy and confidentiality of the test subjects, as well as comply with the data protection laws and regulations of both countries. Processing the anonymized data in the US can also help to reduce the costs and risks of transferring the data across borders. Aggregating the data into one database in the US, processing it in the US but storing it in France, or sharing it with a third party could all pose potential privacy and security risks, as well as legal and ethical issues, for the organization and the test subjects. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1: Security and Risk Management, page 67; Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 1: Security and Risk Management, page 62.
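A small sketch of stripping direct identifiers before processing in the US is shown below; it keeps only the research fields and replaces the identifier with a keyed pseudonym, so strictly speaking this is pseudonymization unless the key is later destroyed. Field names, the key, and the sample record are all illustrative assumptions.

    import hashlib
    import hmac

    PSEUDONYM_KEY = b"held-only-by-the-french-branch"        # illustrative; never shipped with the dataset

    def anonymize(record: dict) -> dict:
        # Drop direct identifiers; keep a keyed pseudonym plus the research fields only
        pseudonym = hmac.new(PSEUDONYM_KEY, record["national_id"].encode(), hashlib.sha256).hexdigest()[:16]
        return {"subject": pseudonym, "dose_mg": record["dose_mg"], "outcome": record["outcome"]}

    sample = {"name": "Jean Dupont", "national_id": "1850759123456", "dose_mg": 20, "outcome": "improved"}
    print(anonymize(sample))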
Which of the following is a characteristic of an internal audit?
An internal audit is typically shorter in duration than an external audit.
The internal audit schedule is published to the organization well in advance.
The internal auditor reports to the Information Technology (IT) department
Management is responsible for reading and acting upon the internal audit results
A characteristic of an internal audit is that management is responsible for reading and acting upon the internal audit results. An internal audit is an independent and objective evaluation or assessment of the internal controls, processes, or activities of an organization, performed by a group of auditors or professionals who are part of the organization, such as the internal audit department or the audit committee. An internal audit can provide some benefits for security, such as enhancing the accuracy and the reliability of the operations, preventing or detecting fraud or errors, and supporting the audit and the compliance activities. An internal audit can involve various steps and roles, such as:
Management is responsible for reading and acting upon the internal audit results, because management is the primary recipient of the internal audit report and has the authority and accountability to implement the recommendations or improvements it contains, as well as to report or disclose the results to external parties such as regulators, shareholders, or customers. The other options describe practices that may accompany an internal audit but are not defining characteristics of one. An internal audit is often shorter in duration than an external audit, because internal auditors are already familiar with, and have readier access to, the organization’s controls, processes, and activities; however, duration varies with the objectives, scope, criteria, and methodology of each audit, so it is not a defining feature. Publishing the internal audit schedule to the organization well in advance is a good practice that supports transparency, accountability, and coordination among stakeholders such as management, the audit committee, and the audit team, but it is likewise not a defining characteristic. Finally, to preserve independence and objectivity, the internal auditor should report to the audit committee or senior management rather than to the Information Technology (IT) department, which is why that option is incorrect.
Which of the following provides the MOST comprehensive filtering of Peer-to-Peer (P2P) traffic?
Application proxy
Port filter
Network boundary router
Access layer switch
An application proxy provides the most comprehensive filtering of Peer-to-Peer (P2P) traffic. P2P traffic is a type of network traffic that involves direct communication and file sharing between peers, without the need for a central server. P2P traffic can be used for legitimate purposes, such as distributed computing, content delivery, or collaboration, but it can also be used for illegal or malicious purposes, such as piracy, malware distribution, or denial-of-service attacks. P2P traffic can also consume a lot of bandwidth and degrade the performance of other network applications. Therefore, it may be desirable to filter or block P2P traffic on a network. An application proxy is a type of firewall that operates at the application layer of the OSI model, and acts as an intermediary between the client and the server. An application proxy can inspect the content and the behavior of the network traffic, and apply granular filtering rules based on the specific application protocol, such as HTTP, FTP, or SMTP. An application proxy can also perform authentication, encryption, caching, and logging functions. An application proxy can provide the most comprehensive filtering of P2P traffic, as it can identify and block the P2P applications and protocols, regardless of the port number or the payload. An application proxy can also prevent P2P traffic from bypassing the firewall by using encryption or tunneling techniques. The other options are not as effective as an application proxy for filtering P2P traffic. A port filter is a type of firewall that operates at the transport layer of the OSI model, and blocks or allows traffic based on the source and destination port numbers. A port filter cannot inspect the content or the behavior of the traffic, and cannot distinguish between different applications that use the same port number. A port filter can also be easily evaded by P2P traffic that uses random or well-known port numbers, such as port 80 for HTTP. A network boundary router is a router that connects a network to another network, such as the Internet. A network boundary router can perform some basic filtering functions, such as access control lists (ACLs) or packet filtering, but it cannot inspect the content or the behavior of the traffic, and cannot apply granular filtering rules based on the specific application protocol. A network boundary router can also be easily evaded by P2P traffic that uses encryption or tunneling techniques. An access layer switch is a switch that connects end devices, such as PCs, printers, or servers, to the network. An access layer switch can perform some basic filtering functions, such as MAC address filtering or port security, but it cannot inspect the content or the behavior of the traffic, and cannot apply granular filtering rules based on the specific application protocol. An access layer switch can also be easily evaded by P2P traffic that uses encryption or tunneling techniques. References: Why and how to control peer-to-peer traffic | Network World; Detection and Management of P2P Traffic in Networks using Artificial Neural Networksa | Journal of Network and Systems Management; Blocking P2P And File Sharing - Cisco Meraki Documentation.
What Is the FIRST step in establishing an information security program?
Establish an information security policy.
Identify factors affecting information security.
Establish baseline security controls.
Identify critical security infrastructure.
The first step in establishing an information security program is to establish an information security policy. An information security policy is a document that defines the objectives, scope, principles, and responsibilities of the information security program. An information security policy provides the foundation and direction for the information security program, as well as the basis for the development and implementation of the information security standards, procedures, and guidelines. An information security policy should be approved and supported by the senior management, and communicated and enforced across the organization. Identifying factors affecting information security, establishing baseline security controls, and identifying critical security infrastructure are not the first steps in establishing an information security program, but they may be part of the subsequent steps, such as the risk assessment, risk mitigation, or risk monitoring. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1: Security and Risk Management, page 22; Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 1: Security and Risk Management, page 14.
A security practitioner is tasked with securing the organization’s Wireless Access Points (WAP). Which of these is the MOST effective way of restricting this environment to authorized users?
Enable Wi-Fi Protected Access 2 (WPA2) encryption on the wireless access point
Disable the broadcast of the Service Set Identifier (SSID) name
Change the name of the Service Set Identifier (SSID) to a random value not associated with the organization
Create Access Control Lists (ACL) based on Media Access Control (MAC) addresses
The most effective way of restricting the wireless environment to authorized users is to enable Wi-Fi Protected Access 2 (WPA2) encryption on the wireless access point. WPA2 is a security protocol that provides confidentiality, integrity, and authentication for wireless networks. WPA2 uses Advanced Encryption Standard (AES) to encrypt the data transmitted over the wireless network, and prevents unauthorized users from intercepting or modifying the traffic. WPA2 also uses a pre-shared key (PSK) or an Extensible Authentication Protocol (EAP) to authenticate the users who want to join the wireless network, and prevents unauthorized users from accessing the network resources. WPA2 is the current standard for wireless security and is widely supported by most wireless devices. The other options are not as effective as WPA2 encryption for restricting the wireless environment to authorized users. Disabling the broadcast of the SSID name is a technique that hides the name of the wireless network from being displayed on the list of available networks, but it does not prevent unauthorized users from discovering the name by using a wireless sniffer or a brute force tool. Changing the name of the SSID to a random value not associated with the organization is a technique that reduces the likelihood of being targeted by an attacker who is looking for a specific network, but it does not prevent unauthorized users from joining the network if they know the name and the password. Creating ACLs based on MAC addresses is a technique that allows or denies access to the wireless network based on the physical address of the wireless device, but it does not prevent unauthorized users from spoofing a valid MAC address or bypassing the ACL by using a wireless bridge or a repeater. References: Secure Wireless Access Points - Fortinet; Configure Wireless Security Settings on a WAP - Cisco; Best WAP of 2024 | TechRadar.
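As a brief illustration of why WPA2-Personal is only as strong as its passphrase, the following sketch derives the 256-bit Pairwise Master Key (PMK) the way WPA2-PSK does, using PBKDF2-HMAC-SHA1 over the passphrase and SSID with 4096 iterations. The SSID and passphrase shown are made up.

```python
import hashlib

# WPA2-Personal derives the 256-bit PMK from the passphrase and the SSID using
# PBKDF2-HMAC-SHA1 with 4096 iterations. The protocol is sound, but a weak or
# guessable passphrase remains the practical point of attack.
ssid = b"ExampleCorpWLAN"                      # hypothetical SSID
passphrase = b"correct horse battery staple"   # hypothetical passphrase

pmk = hashlib.pbkdf2_hmac("sha1", passphrase, ssid, 4096, dklen=32)
print(pmk.hex())
```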
Which of the following is the BEST internationally recognized standard for evaluating security products and systems?
Payment Card Industry Data Security Standards (PCI-DSS)
Common Criteria (CC)
Health Insurance Portability and Accountability Act (HIPAA)
Sarbanes-Oxley (SOX)
The best internationally recognized standard for evaluating security products and systems is Common Criteria (CC), which is a framework or a methodology that defines and describes the criteria or the guidelines for the evaluation or the assessment of the security functionality and the security assurance of information technology (IT) products and systems, such as hardware, software, firmware, or network devices. Common Criteria (CC) can provide some benefits for security, such as enhancing the confidence and the trust in the security products and systems, preventing or mitigating some types of attacks or vulnerabilities, and supporting the audit and the compliance activities. Common Criteria (CC) involves several key elements and roles, such as the Target of Evaluation (TOE), which is the product or system under evaluation; the Protection Profile (PP), which states implementation-independent security requirements for a class of products; the Security Target (ST), which states the security claims made for a specific TOE; the Evaluation Assurance Levels (EAL1 through EAL7), which express the depth and rigor of the evaluation; and the accredited evaluation laboratories and national certification bodies that perform and validate the evaluations.
Payment Card Industry Data Security Standard (PCI-DSS), Health Insurance Portability and Accountability Act (HIPAA), and Sarbanes-Oxley (SOX) are not internationally recognized standards for evaluating security products and systems, although they may be related or relevant regulations or frameworks for security. Payment Card Industry Data Security Standard (PCI-DSS) is a regulation or a framework that defines and describes the security requirements or the objectives for the protection and the management of the cardholder data or the payment card information, such as the credit card number, the expiration date, or the card verification value, and that applies to the entities or the organizations that are involved or engaged in the processing, the storage, or the transmission of the cardholder data or the payment card information, such as the merchants, the service providers, or the acquirers. Health Insurance Portability and Accountability Act (HIPAA) is a regulation or a framework that defines and describes the security requirements or the objectives for the protection and the management of the protected health information (PHI) or the personal health information, such as the medical records, the diagnosis, or the treatment, and that applies to the entities or the organizations that are involved or engaged in the provision, the payment, or the operation of the health care services or the health care plans, such as the health care providers, the health care clearinghouses, or the health plans. Sarbanes-Oxley (SOX) is a regulation or a framework that defines and describes the security requirements or the objectives for the protection and the management of the financial information or the financial reports, such as the income statement, the balance sheet, or the cash flow statement, and that applies to publicly traded companies, which must maintain accurate financial reporting and effective internal controls over that reporting.
Who is responsible for the protection of information when it is shared with or provided to other organizations?
Systems owner
Authorizing Official (AO)
Information owner
Security officer
The information owner is the person who has the authority and responsibility for the information within an Information System (IS). The information owner is responsible for the protection of information when it is shared with or provided to other organizations, such as by defining the classification, sensitivity, retention, and disposal of the information, as well as by approving or denying the access requests and periodically reviewing the access rights. The system owner, the authorizing official, and the security officer are not responsible for the protection of information when it is shared with or provided to other organizations, although they may have roles and responsibilities related to the security and operation of the IS. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1: Security and Risk Management, page 48; Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 1: Security and Risk Management, page 40.
What is the MAIN purpose of a change management policy?
To assure management that changes to the Information Technology (IT) infrastructure are necessary
To identify the changes that may be made to the Information Technology (IT) infrastructure
To verify that changes to the Information Technology (IT) infrastructure are approved
To determine the necessity of implementing modifications to the Information Technology (IT) infrastructure
The main purpose of a change management policy is to ensure that all changes made to the IT infrastructure are approved, documented, and communicated effectively across the organization. This helps to minimize the risks associated with unauthorized or poorly planned changes, such as security breaches, system failures, or compliance issues. A change management policy does not assure management that changes are necessary, identify the changes that may be made, or determine the necessity for implementing modifications, although these may be part of the change management process. References: CISSP CBK Reference
The security accreditation task of the System Development Life Cycle (SDLC) process is completed at the end of which phase?
System acquisition and development
System operations and maintenance
System initiation
System implementation
The security accreditation task of the System Development Life Cycle (SDLC) process is completed at the end of the system implementation phase. The SDLC is a framework that describes the stages and activities involved in the development, deployment, and maintenance of a system. The SDLC typically consists of the following phases: system initiation, system acquisition and development, system implementation, system operations and maintenance, and system disposal. The security accreditation task is the process of formally authorizing a system to operate in a specific environment, based on the security requirements, controls, and risks. The security accreditation task is part of the security certification and accreditation (C&A) process, which also includes the security certification task, which is the process of technically evaluating and testing the security controls and functionality of a system. The security accreditation task is completed at the end of the system implementation phase, which is the phase where the system is installed, configured, integrated, and tested in the target environment. The security accreditation task involves reviewing the security certification results and documentation, such as the security plan, the security assessment report, and the plan of action and milestones, and making a risk-based decision to grant, deny, or conditionally grant the authorization to operate (ATO) the system. The security accreditation task is usually performed by a senior official, such as the authorizing official (AO) or the designated approving authority (DAA), who has the authority and responsibility to accept the security risks and approve the system operation. The security accreditation task is not completed at the end of the system acquisition and development, system operations and maintenance, or system initiation phases. The system acquisition and development phase is the phase where the system requirements, design, and development are defined and executed, and the security controls are selected and implemented. The system operations and maintenance phase is the phase where the system is used and supported in the operational environment, and the security controls are monitored and updated. The system initiation phase is the phase where the system concept, scope, and objectives are established, and the security categorization and planning are performed.
When developing a business case for updating a security program, the security program owner MUST do
which of the following?
Identify relevant metrics
Prepare performance test reports
Obtain resources for the security program
Interview executive management
When developing a business case for updating a security program, the security program owner must identify relevant metrics that can help to measure and evaluate the performance and the effectiveness of the security program, as well as to justify and support the investment and the return of the security program. A business case is a document or a presentation that provides the rationale or the argument for initiating or continuing a project or a program, such as a security program, by analyzing and comparing the costs and the benefits, the risks and the opportunities, and the alternatives and the recommendations of the project or the program. A business case can provide some benefits for security, such as enhancing the visibility and the accountability of the security program, preventing or detecting any unauthorized or improper activities or changes, and supporting the audit and the compliance activities. A business case can involve various elements and steps, such as defining the problem or the opportunity, identifying the stakeholders and their needs, estimating the costs and the benefits, assessing the risks and the assumptions, comparing the alternatives, and presenting a recommendation with supporting evidence.
Identifying relevant metrics is a key element or step of developing a business case for updating a security program, as it can help to measure and evaluate the performance and the effectiveness of the security program, as well as to justify and support the investment and the return of the security program. Metrics are measures or indicators that can quantify or qualify the attributes or the outcomes of a process or an activity, such as the security program, and that can provide the information or the feedback that can facilitate the decision making or the improvement of the process or the activity. Metrics can provide some benefits for security, such as enhancing the accuracy and the reliability of the security program, preventing or detecting fraud or errors, and supporting the audit and the compliance activities. Identifying relevant metrics can involve various tasks or duties, such as defining what the security program is expected to achieve, selecting metrics that map directly to those objectives, establishing baselines and targets for each metric, and determining how the metric data will be collected, analyzed, and reported.
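As a small illustration of the kind of metric that might support such a business case, the sketch below computes a patch compliance rate and a mean time to remediate from entirely hypothetical data; the figures and field names are assumptions, not a prescribed set of metrics.

```python
from datetime import date

# Hypothetical remediation records: (date finding opened, date closed or None)
findings = [
    (date(2024, 1, 5), date(2024, 1, 19)),
    (date(2024, 2, 1), date(2024, 2, 8)),
    (date(2024, 3, 12), None),              # still open, excluded from the average
]
patched_hosts, total_hosts = 940, 1000       # made-up asset counts

days_to_close = [(closed - opened).days for opened, closed in findings if closed is not None]
mean_time_to_remediate = sum(days_to_close) / len(days_to_close)
patch_compliance_rate = patched_hosts / total_hosts

print(f"Patch compliance: {patch_compliance_rate:.1%}")
print(f"Mean time to remediate: {mean_time_to_remediate:.1f} days")
```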
Preparing performance test reports, obtaining resources for the security program, and interviewing executive management are not the tasks or duties that the security program owner must do when developing a business case for updating a security program, although they may be related or possible tasks or duties. Preparing performance test reports is a task or a technique that can be used by the security program owner, the security program team, or the security program auditor, to verify or validate the functionality and the quality of the security program, according to the standards and the criteria of the security program, and to detect and report any errors, bugs, or vulnerabilities in the security program. Obtaining resources for the security program is a task or a technique that can be used by the security program owner, the security program sponsor, or the security program manager, to acquire or allocate the necessary or the sufficient resources for the security program, such as the financial, human, or technical resources, and to manage or optimize the use or the distribution of the resources for the security program. Interviewing executive management is a task or a technique that can be used by the security program owner, the security program team, or the security program auditor, to collect and analyze the information and the feedback about the security program, from the executive management, who are the primary users or recipients of the security program, and who have the authority and the accountability to implement or execute the security program.
Who has the PRIMARY responsibility to ensure that security objectives are aligned with organization goals?
Senior management
Information security department
Audit committee
All users
Senior management has the primary responsibility to ensure that security objectives are aligned with organizational goals. Senior management is the highest level of authority and decision-making in an organization, and it sets the vision, mission, strategy, and objectives for the organization. Senior management is also responsible for establishing the security governance framework, which defines the roles, responsibilities, policies, standards, and procedures for security management. Senior management should ensure that the security function supports and enables the organizational goals, and that the security objectives are consistent, measurable, and achievable. Senior management should also provide adequate resources, guidance, and oversight for the security function, and communicate the security expectations and requirements to all stakeholders. The information security department, the audit committee, and all users have some roles and responsibilities in ensuring that security objectives are aligned with organizational goals, but they are not the primary ones. The information security department is responsible for implementing, maintaining, and monitoring the security controls and processes, and reporting on the security performance and incidents. The audit committee is responsible for reviewing and verifying the effectiveness and compliance of the security controls and processes, and providing recommendations for improvement. All users are responsible for following the security policies and procedures, and reporting any security issues or violations.
At a MINIMUM, audits of permissions to individual or group accounts should be scheduled
annually
to correspond with staff promotions
to correspond with terminations
continually
The minimum frequency for audits of permissions to individual or group accounts is continually. Audits of permissions are the processes of reviewing and verifying the user accounts and access rights on a system or a network, and ensuring that they are appropriate, necessary, and compliant with the policies and standards. Such audits improve the accuracy and reliability of user accounts and access rights, identify and remove excessive, obsolete, or unauthorized access rights, and support audit and compliance activities. Performing them continually, on a regular and consistent basis without interruption or delay, maintains the security and integrity of the system or network by detecting and addressing the changes that affect accounts and access rights, such as role changes, transfers, promotions, or terminations; it also keeps the audit workload manageable and provides timely, relevant feedback and results. Annually, to correspond with staff promotions, and to correspond with terminations are not the minimum frequencies for audits of permissions, although they are possible ones. Annual audits may not be adequate because accounts and access rights can change or become outdated far more often than once a year, and a yearly review concentrates a large volume of accounts and access rights into a single effort that yields late feedback. Audits triggered by staff promotions help to keep access rights aligned with the staff members' current roles and the principle of least privilege, but they miss changes caused by other events, such as transfers or terminations, and they are not performed on a regular, consistent basis. Audits triggered by terminations help to ensure that a departing employee's accounts and access rights are revoked from the system or network so that no unauthorized or improper access remains, but they likewise miss changes caused by role changes, transfers, or promotions, and they are not performed on a regular, consistent basis.
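A continual review is usually automated. The following is a minimal sketch of a recurring access-review job that compares the current state pulled from a directory against an approved role matrix and flags excess rights; all role, user, and permission names are hypothetical.

```python
# Approved role matrix (hypothetical): role -> permissions the role should carry
approved = {
    "hr_analyst": {"read_hr_records"},
    "payroll_admin": {"read_hr_records", "update_payroll"},
}

# Snapshot of current assignments (hypothetical directory export)
current = {
    "alice": {"role": "hr_analyst", "permissions": {"read_hr_records", "update_payroll"}},
    "bob":   {"role": "payroll_admin", "permissions": {"read_hr_records", "update_payroll"}},
}

def review(current, approved):
    # Flag any permission a user holds beyond what their role allows (least privilege check).
    for user, record in current.items():
        excess = record["permissions"] - approved.get(record["role"], set())
        if excess:
            print(f"REVIEW: {user} holds excess permissions: {sorted(excess)}")

review(current, approved)   # flags alice's leftover update_payroll right
```

Run on a schedule (for example daily), such a check surfaces leftover rights from promotions, transfers, and terminations as they occur instead of once a year.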
Which of the following entails identification of data and links to business processes, applications, and data
stores as well as assignment of ownership responsibilities?
Security governance
Risk management
Security portfolio management
Risk assessment
Risk assessment is the process that entails identification of data and links to business processes, applications, and data stores as well as assignment of ownership responsibilities. Risk assessment is a key component of risk management, which is the process of identifying, analyzing, and treating the risks that affect the security and objectives of an organization. Risk assessment involves the following steps: identifying the assets and their value, identifying the threats and their sources, identifying the vulnerabilities and their causes, identifying the existing controls and their effectiveness, identifying the impact and likelihood of the risk events, and identifying the risk owners and their roles. Risk assessment helps to determine the level of risk and the appropriate risk response for each asset and process. Security governance, risk management, and security portfolio management are not the same as risk assessment, although they are related or complementary concepts. Security governance is the framework that defines the roles, responsibilities, policies, standards, and procedures for security management within an organization. Security governance provides the direction, oversight, and accountability for security activities and decisions. Risk management is the process of identifying, analyzing, and treating the risks that affect the security and objectives of an organization. Risk management includes risk assessment, risk mitigation, risk monitoring, and risk communication. Security portfolio management is the process of managing the security investments and initiatives within an organization. Security portfolio management involves aligning the security projects and programs with the organizational strategy, prioritizing the security resources and budget, and measuring the security performance and value.
Proven application security principles include which of the following?
Minimizing attack surface area
Hardening the network perimeter
Accepting infrastructure security controls
Developing independent modules
Minimizing attack surface area is a proven application security principle that aims to reduce the exposure or the vulnerability of an application to potential attacks, by limiting or eliminating the unnecessary or unused features, functions, or services of the application, as well as the access or the interaction of the application with other applications, systems, or networks. Minimizing attack surface area can provide some benefits for security, such as enhancing the performance and the functionality of the application, preventing or mitigating some types of attacks or vulnerabilities, and supporting the audit and the compliance activities. Hardening the network perimeter, accepting infrastructure security controls, and developing independent modules are not proven application security principles, although they may be related or useful concepts or techniques. Hardening the network perimeter is a network security concept or technique that aims to protect the network from external or unauthorized attacks, by strengthening or enhancing the security controls or mechanisms at the boundary or the edge of the network, such as firewalls, routers, or gateways. Hardening the network perimeter can provide some benefits for security, such as enhancing the performance and the functionality of the network, preventing or mitigating some types of attacks or vulnerabilities, and supporting the audit and the compliance activities. However, hardening the network perimeter is not an application security principle, as it is not specific or applicable to the application layer, and it does not address the internal or the inherent security of the application. Accepting infrastructure security controls is a risk management concept or technique that involves accepting the residual risk of an application after applying the security controls or mechanisms provided by the underlying infrastructure, such as the hardware, the software, the network, or the cloud. Accepting infrastructure security controls can provide some benefits for security, such as reducing the cost and the complexity of the security implementation, leveraging the expertise and the resources of the infrastructure providers, and supporting the audit and the compliance activities. However, accepting infrastructure security controls is not an application security principle, as it is not a proactive or a preventive measure to enhance the security of the application, and it may introduce or increase the dependency or the vulnerability of the application on the infrastructure. Developing independent modules is a software engineering concept or technique that involves designing or creating the application as a collection or a composition of discrete or separate components or units, each with a specific function or purpose, and each with a well-defined interface or contract. Developing independent modules can provide some benefits for security, such as enhancing the usability and the maintainability of the application, preventing or isolating some types of errors or bugs, and supporting the testing and the verification activities. However, developing independent modules is not an application security principle, as it is not a direct or a deliberate measure to improve the security of the application, and it may not address or prevent some types of attacks or vulnerabilities that affect the application as a whole or the interaction between the modules.
The core component of Role Based Access Control (RBAC) must be constructed of defined data elements.
Which elements are required?
Users, permissions, operations, and protected objects
Roles, accounts, permissions, and protected objects
Users, roles, operations, and protected objects
Roles, operations, accounts, and protected objects
Role Based Access Control (RBAC) is a model of access control that assigns permissions to users based on their roles, rather than their individual identities. The core component of RBAC is the role, which is a collection of permissions that define what operations a user can perform on what protected objects. The required data elements for RBAC are users, roles, operations, and protected objects: users are assigned to roles, roles are associated with permissions, and each permission authorizes one or more operations on one or more protected objects. An access request is granted only if one of the requesting user's roles carries a permission for the requested operation on the requested object.
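These four data elements can be expressed directly as a data structure. The following is a minimal sketch of an RBAC model with hypothetical role, user, and object names.

```python
# Minimal RBAC sketch: users, roles, operations, and protected objects.
# A permission is an (operation, protected object) pair attached to a role.
roles = {
    "auditor": {("read", "finance_db"), ("read", "hr_db")},
    "dba":     {("read", "finance_db"), ("write", "finance_db")},
}
user_roles = {"carol": {"auditor"}, "dave": {"dba"}}

def is_authorized(user: str, operation: str, obj: str) -> bool:
    # Grant access only if one of the user's roles carries a permission
    # for this operation on this protected object.
    return any((operation, obj) in roles[role] for role in user_roles.get(user, set()))

print(is_authorized("carol", "read", "finance_db"))   # True
print(is_authorized("carol", "write", "finance_db"))  # False
```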
Which of the following is the BEST reason for the use of security metrics?
They ensure that the organization meets its security objectives.
They provide an appropriate framework for Information Technology (IT) governance.
They speed up the process of quantitative risk assessment.
They quantify the effectiveness of security processes.
The best reason for the use of security metrics is to quantify the effectiveness of security processes. Security metrics are measurable indicators that provide information about the performance, efficiency, and quality of security activities, controls, and outcomes. Security metrics can help to evaluate the current state of security, identify strengths and weaknesses, monitor progress and trends, and support decision making and improvement. Security metrics can also help to demonstrate the value and return on investment of security to the stakeholders, and to communicate the security objectives and expectations to the users. Security metrics can be based on various criteria, such as compliance, risk, cost, time, or customer satisfaction. Security metrics can be classified into different types, such as implementation metrics, effectiveness/efficiency metrics, and impact metrics. Security metrics can be collected, analyzed, and reported using various methods and tools, such as surveys, audits, logs, dashboards, or scorecards. Ensuring that the organization meets its security objectives, providing an appropriate framework for IT governance, and speeding up the process of quantitative risk assessment are all possible benefits or uses of security metrics, but they are not the best reason for the use of security metrics. Security metrics are not the only means to ensure that the organization meets its security objectives, as security objectives can also be influenced by other factors, such as policies, standards, procedures, or culture. Security metrics are not the only component of IT governance, as IT governance also involves other elements, such as leadership, strategy, structure, processes, or roles. Security metrics are not the only factor that can speed up the process of quantitative risk assessment, as quantitative risk assessment also depends on other inputs, such as asset value, threat frequency, vulnerability severity, or control effectiveness.
Which of the following is the MOST appropriate action when reusing media that contains sensitive data?
Erase
Sanitize
Encrypt
Degauss
The most appropriate action when reusing media that contains sensitive data is to sanitize the media. Sanitization is the process of removing or destroying all data from the media in such a way that it cannot be recovered by any means. Sanitization can be achieved by various methods, such as overwriting, degaussing, or physical destruction. Sanitization ensures that the sensitive data is not exposed or compromised when the media is reused or disposed of. Erase, encrypt, and degauss are not the most appropriate actions when reusing media that contains sensitive data, although they may be related or useful steps. Erase is the process of deleting data from the media by using the operating system or application commands or functions. Erase does not guarantee that the data is completely removed from the media, as it may leave traces or remnants that can be recovered by using special tools or techniques. Encrypt is the process of transforming data into an unreadable form by using a cryptographic algorithm and a key. Encrypt can protect the data from unauthorized access or disclosure, but it does not remove the data from the media. Encrypt also requires that the key is securely managed and stored, and that the encryption algorithm is strong and reliable. Degauss is the process of applying a strong magnetic field to the media to erase or scramble the data. Degauss can effectively sanitize magnetic media, such as hard disks or tapes, but it does not work on optical media, such as CDs or DVDs. Degauss also renders the media unusable, as it destroys the servo tracks and the firmware that are needed for the media to function properly.
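For a concrete sense of what overwriting (one sanitization method) looks like, here is an illustrative file-level sketch. It is an assumption-laden example, not a sanitization tool: it works on a single file, loads it into memory, and does not address whole-drive sanitization or SSD wear leveling, which remaps writes and can leave old data behind; the path shown is hypothetical.

```python
import os

def overwrite_and_delete(path: str, passes: int = 3) -> None:
    """Illustrative only: overwrite a file's contents with random bytes several
    times, force the writes to disk, then delete it. Sanitizing a whole drive,
    or solid-state media with wear leveling, requires dedicated tools,
    cryptographic erasure, or physical destruction."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))
            f.flush()
            os.fsync(f.fileno())
    os.remove(path)

# overwrite_and_delete("/tmp/sensitive_report.txt")   # hypothetical path
```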
From a security perspective, which of the following assumptions MUST be made about input to an
application?
It is tested
It is logged
It is verified
It is untrusted
From a security perspective, the assumption that must be made about input to an application is that it is untrusted. Untrusted input is any data or information that is provided by an external or an unknown source, such as a user, a client, a network, or a file, and that is not validated or verified by the application before being processed or used by the application. Untrusted input can pose a serious security risk for the application, as it can contain or introduce malicious or harmful content or commands, such as malware, viruses, worms, trojans, or SQL injection, that can compromise or damage the confidentiality, the integrity, or the availability of the application, or the data or the systems that are connected to the application. Therefore, from a security perspective, the assumption that must be made about input to an application is that it is untrusted, and that it should be treated with caution and suspicion, and that it should be subjected to various security controls or mechanisms, such as input validation, input sanitization, input filtering, or input encoding, before being processed or used by the application. Input validation is the process or the technique of checking or verifying that the input meets the expected or the required format, type, length, range, or value, and that it does not contain or introduce any invalid or illegal characters, symbols, or commands. Input sanitization is the process or the technique of removing or modifying any invalid or illegal characters, symbols, or commands from the input, or replacing them with valid or legal ones, to prevent or mitigate any potential attacks or vulnerabilities. Input filtering is the process or the technique of allowing or blocking the input based on a predefined or a configurable set of rules or criteria, such as a whitelist or a blacklist, to prevent or mitigate any unwanted or unauthorized input. Input encoding is the process or the technique of transforming or converting the input into a different or a standard format or representation, such as HTML, URL, or Base64, to prevent or mitigate any interpretation or execution of the input by the application or the system. It is tested, it is logged, and it is verified are not the assumptions that must be made about input to an application from a security perspective, although they may be related or possible aspects or outcomes of input to an application. It is tested is an aspect or an outcome of input to an application, as it implies that the input has been subjected to various tests or evaluations, such as unit testing, integration testing, or penetration testing, to verify or validate the functionality and the quality of the input, as well as to detect or report any errors, bugs, or vulnerabilities in the input. However, it is tested is not an assumption that must be made about input to an application from a security perspective, as it is not a precautionary or a preventive measure to protect the application from untrusted input, and it may not be true or applicable for all input to an application. It is logged is an aspect or an outcome of input to an application, as it implies that the input has been recorded or stored in a log file or a database, along with other relevant information or metadata, such as the source, the destination, the timestamp, or the status of the input, to provide a trace or a history of the input, as well as to support the audit and the compliance activities. 
However, it is logged is not an assumption that must be made about input to an application from a security perspective, as it is not a precautionary or a preventive measure to protect the application from untrusted input, and it may not be true or applicable for all input to an application. It is verified is an aspect or an outcome of input to an application, as it implies that the input has been confirmed or authenticated by the application or the system, using various security controls or mechanisms, such as digital signatures, certificates, or tokens, to ensure the integrity and the authenticity of the input, as well as to prevent or mitigate any tampering or spoofing of the input. However, it is verified is not an assumption that must be made about input to an application from a security perspective, as it is not a precautionary or a preventive measure to protect the application from untrusted input, and it may not be true or applicable for all input to an application.
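The controls described above (validation, encoding, and keeping data separate from commands) can be shown in a short sketch. The field names, patterns, and table are hypothetical; the point is that every external value is validated against a whitelist, encoded before rendering, and bound as a parameter rather than concatenated into SQL.

```python
import html
import re
import sqlite3

def handle_untrusted(username: str, comment: str) -> None:
    # 1. Input validation: accept only the expected format; reject everything else.
    if not re.fullmatch(r"[A-Za-z0-9_]{3,20}", username):
        raise ValueError("invalid username")

    # 2. Output encoding: neutralize HTML metacharacters before the comment is rendered.
    safe_comment = html.escape(comment)

    # 3. Parameterized query: the driver keeps the data separate from the SQL command,
    #    preventing SQL injection.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE comments (user TEXT, body TEXT)")
    conn.execute("INSERT INTO comments VALUES (?, ?)", (username, safe_comment))
    conn.commit()

handle_untrusted("alice_01", "<script>alert('x')</script> nice post")
```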
What is the foundation of cryptographic functions?
Encryption
Cipher
Hash
Entropy
The foundation of cryptographic functions is entropy. Entropy is a measure of the randomness or unpredictability of a system or a process. Entropy is essential for cryptographic functions, such as encryption, decryption, hashing, or key generation, as it provides the security and the strength of the cryptographic algorithms and keys. Entropy can be derived from various sources, such as physical phenomena, user input, or software applications. Entropy can also be quantified in terms of bits, where higher entropy means higher randomness and higher security. Encryption, cipher, and hash are not the foundation of cryptographic functions, although they are related or important concepts or techniques. Encryption is the process of transforming plaintext or cleartext into ciphertext or cryptogram, using a cryptographic algorithm and a key, to protect the confidentiality and the integrity of the data. Encryption can be symmetric or asymmetric, depending on whether the same or different keys are used for encryption and decryption. Cipher is another term for a cryptographic algorithm, which is a mathematical function that performs encryption or decryption. Cipher can be classified into various types, such as substitution, transposition, stream, or block, depending on how they operate on the data. Hash is the process of generating a fixed-length and unique output, called a hash or a digest, from a variable-length and arbitrary input, using a one-way function, to verify the integrity and the authenticity of the data. Hash can be used for various purposes, such as digital signatures, message authentication codes, or password storage.
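A short sketch makes the role of entropy concrete: a key drawn from the operating system's cryptographically secure random source has close to the maximum 8 bits of entropy per byte, while predictable input has far less. The Shannon estimate below is only an illustration of the idea, not a randomness test.

```python
import math
import secrets
from collections import Counter

def shannon_entropy_per_byte(data: bytes) -> float:
    # Estimated entropy in bits per byte; 8.0 is the maximum, reached by uniform random data.
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

key = secrets.token_bytes(32)        # 256-bit key from the OS CSPRNG
weak = b"password" * 4               # low-entropy, predictable input

print(shannon_entropy_per_byte(key))   # close to 8 bits per byte
print(shannon_entropy_per_byte(weak))  # far lower
```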
What is the PRIMARY role of a scrum master in agile development?
To choose the primary development language
To choose the integrated development environment
To match the software requirements to the delivery plan
To project manage the software delivery
The primary role of a scrum master in agile development is to match the software requirements to the delivery plan. A scrum master is a facilitator who helps the development team and the product owner to collaborate and deliver the software product incrementally and iteratively, following the agile principles and practices. A scrum master is responsible for ensuring that the team follows the scrum framework, which includes defining the product backlog, planning the sprints, conducting the daily stand-ups, reviewing the deliverables, and reflecting on the process. A scrum master is not responsible for choosing the primary development language, the integrated development environment, or project managing the software delivery, although they may provide guidance and support to the team on these aspects. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 8: Software Development Security, page 933; Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 7: Software Development Security, page 855.
An organization has outsourced its financial transaction processing to a Cloud Service Provider (CSP) who will provide them with Software as a Service (SaaS). If there was a data breach who is responsible for monetary losses?
The Data Protection Authority (DPA)
The Cloud Service Provider (CSP)
The application developers
The data owner
The data owner is the person who has the authority and responsibility for the data stored, processed, or transmitted by an Information System (IS). The data owner is responsible for the monetary losses if there was a data breach, as the data owner is accountable for the security, quality, and integrity of the data, as well as for defining the classification, sensitivity, retention, and disposal of the data. The Data Protection Authority (DPA) is not responsible for the monetary losses, but for the enforcement of the data protection laws and regulations. The Cloud Service Provider (CSP) is not responsible for the monetary losses, but for the provision of the cloud services and the protection of the cloud infrastructure. The application developers are not responsible for the monetary losses, but for the development and maintenance of the software applications. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1: Security and Risk Management, page 48; Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 1: Security and Risk Management, page 40.
What protocol is often used between gateway hosts on the Internet?
Exterior Gateway Protocol (EGP)
Border Gateway Protocol (BGP)
Open Shortest Path First (OSPF)
Internet Control Message Protocol (ICMP)
Border Gateway Protocol (BGP) is a protocol that is often used between gateway hosts on the Internet. A gateway host is a network device that connects two or more different networks, such as a router or a firewall. BGP is a routing protocol that exchanges routing information between autonomous systems (ASes), which are groups of networks under a single administrative control. BGP is used to determine the best path to reach a destination network on the Internet, based on path attributes such as AS path length, local preference, origin, and the multi-exit discriminator (MED), together with locally configured routing policy. BGP is also used to implement interdomain routing policies, such as traffic engineering, load balancing, and security. BGP is the de facto standard for Internet routing and is widely deployed by Internet service providers (ISPs) and large enterprises. The other options are not protocols that are often used between gateway hosts on the Internet. Exterior Gateway Protocol (EGP) is an obsolete protocol that was used to exchange routing information between ASes before BGP. Open Shortest Path First (OSPF) is a protocol that is used to exchange routing information within an AS, not between ASes. Internet Control Message Protocol (ICMP) is a protocol that is used to send error and control messages between hosts and routers, not to exchange routing information. References: Border Gateway Protocol - Wikipedia; What is Border Gateway Protocol (BGP)? - Definition from WhatIs.com; What is BGP? | How BGP Routing Works | Cloudflare.
A post-implementation review has identified that the Voice Over Internet Protocol (VoIP) system was designed
to have gratuitous Address Resolution Protocol (ARP) disabled.
Why did the network architect likely design the VoIP system with gratuitous ARP disabled?
Gratuitous ARP requires the use of Virtual Local Area Network (VLAN) 1.
Gratuitous ARP requires the use of insecure layer 3 protocols.
Gratuitous ARP increases the likelihood of a successful brute-force attack on the phone.
Gratuitous ARP increases the risk of a Man-in-the-Middle (MITM) attack.
Gratuitous ARP is a special type of ARP message that a sender device broadcasts on the network without any other device requesting it. It can be useful for updating the ARP table, changing the address of an interface, or informing the network of the sender’s own MAC address. However, it also introduces the risk of a Man-in-the-Middle (MITM) attack, where an attacker can send a spoofed gratuitous ARP message to trick other devices into associating a legitimate IP address with a malicious MAC address. This way, the attacker can intercept, modify, or redirect the traffic intended for the legitimate device. Therefore, the network architect likely designed the VoIP system with gratuitous ARP disabled to prevent such attacks and ensure the security and integrity of the voice communication. References: Gratuitous ARP – Definition and Use Cases - Practical Networking .net; Gratuitous_ARP - Wireshark
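The defensive counterpart to this design choice is monitoring for the poisoning pattern itself. Below is an illustrative detection sketch that assumes the third-party scapy package and sufficient privileges to sniff: it watches ARP replies, including gratuitous ones, and warns when an IP address suddenly maps to a different MAC address, which is the signature of ARP-based MITM poisoning.

```python
from scapy.all import ARP, sniff   # third-party package; sniffing typically requires root/admin

seen = {}  # ip address -> last MAC address observed for it

def check(pkt):
    if pkt.haslayer(ARP) and pkt[ARP].op == 2:          # op 2 = ARP reply / "is-at"
        ip, mac = pkt[ARP].psrc, pkt[ARP].hwsrc
        if ip in seen and seen[ip] != mac:
            print(f"ALERT: {ip} changed from {seen[ip]} to {mac} - possible ARP spoofing")
        seen[ip] = mac

sniff(filter="arp", prn=check, store=False)
```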
Why is planning in Disaster Recovery (DR) an interactive process?
It details off-site storage plans
It identifies omissions in the plan
It defines the objectives of the plan
It forms part of the awareness process
Planning in Disaster Recovery (DR) is an interactive process because it identifies omissions in the plan. DR planning is the process of developing and implementing procedures and processes to ensure that an organization can quickly resume its critical functions after a disaster or a disruption. DR planning involves various steps, such as conducting a risk assessment, performing a business impact analysis, defining the recovery objectives and strategies, designing and developing the DR plan, testing and validating the DR plan, and maintaining and updating the DR plan. DR planning is an interactive process because it requires constant feedback and communication among the stakeholders, such as the management, the employees, the customers, the suppliers, and the regulators. DR planning also requires regular reviews and evaluations of the plan to identify and address any gaps, errors, or changes that may affect the effectiveness or the feasibility of the plan. DR planning is not an interactive process because it details off-site storage plans, defines the objectives of the plan, or forms part of the awareness process, although these may be related or important aspects of DR planning. Detailing off-site storage plans is a technique that involves storing copies of the essential data, documents, or equipment at a secure and remote location, such as a vault, a warehouse, or a cloud service. Detailing off-site storage plans can provide some benefits for DR planning, such as enhancing the availability and the integrity of the data, documents, or equipment, preventing data loss or corruption, and facilitating the recovery and the restoration process. However, detailing off-site storage plans is not the reason why DR planning is an interactive process, as it is not a feedback or a communication mechanism, and it does not identify or address any omissions in the plan. Defining the objectives of the plan is a step that involves establishing the goals and the priorities of the DR plan, such as the recovery time objective (RTO), the recovery point objective (RPO), the maximum tolerable downtime (MTD), or the minimum operating level (MOL). Defining the objectives of the plan can provide some benefits for DR planning, such as aligning the DR plan with the business needs and expectations, setting the scope and the boundaries of the DR plan, and measuring the performance and the outcomes of the DR plan. However, defining the objectives of the plan is not the reason why DR planning is an interactive process, as it is not a feedback or a communication mechanism, and it does not identify or address any omissions in the plan. Forming part of the awareness process is a technique that involves educating and informing the stakeholders about the DR plan, such as the purpose, the scope, the roles, the responsibilities, or the procedures of the DR plan. Forming part of the awareness process can provide some benefits for DR planning, such as improving the knowledge and the skills of the stakeholders, changing the attitudes and the behaviors of the stakeholders, and empowering the stakeholders to make informed and secure decisions regarding the DR plan. However, forming part of the awareness process is not the reason why DR planning is an interactive process, as it is not a feedback or a communication mechanism, and it does not identify or address any omissions in the plan.
After following the processes defined within the change management plan, a super user has upgraded a
device within an Information system.
What step would be taken to ensure that the upgrade did NOT affect the network security posture?
Conduct an Assessment and Authorization (A&A)
Conduct a security impact analysis
Review the results of the most recent vulnerability scan
Conduct a gap analysis with the baseline configuration
A security impact analysis is a process of assessing the potential effects of a change on the security posture of a system. It helps to identify and mitigate any security risks that may arise from the change, such as new vulnerabilities, configuration errors, or compliance issues. A security impact analysis should be conducted after following the change management plan and before implementing the change in the production environment. Conducting an A&A, reviewing the results of a vulnerability scan, or conducting a gap analysis with the baseline configuration are also possible steps to ensure the security of a system, but they are not specific to the change management process. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 8: Software Development Security, page 961; Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 8: Security Operations, page 1013.
A minimal implementation of endpoint security includes which of the following?
Trusted platforms
Host-based firewalls
Token-based authentication
Wireless Access Points (AP)
A minimal implementation of endpoint security includes host-based firewalls. Endpoint security is the practice of protecting the devices that connect to a network, such as laptops, smartphones, tablets, or servers, from malicious attacks or unauthorized access. Endpoint security can involve various technologies and techniques, such as antivirus, encryption, authentication, patch management, or device control. Host-based firewalls are one of the basic and essential components of endpoint security, as they provide network-level protection for the individual devices. Host-based firewalls are software applications that monitor and filter the incoming and outgoing network traffic on a device, based on a set of rules or policies. Host-based firewalls can prevent or mitigate some types of attacks, such as denial-of-service, port scanning, or unauthorized connections, by blocking or allowing the packets that match or violate the firewall rules. Host-based firewalls can also provide some benefits for endpoint security, such as enhancing the visibility and the auditability of the network activities, enforcing the compliance and the consistency of the firewall policies, and reducing the reliance and the burden on the network-based firewalls. Trusted platforms, token-based authentication, and wireless access points (AP) are not the components that are included in a minimal implementation of endpoint security, although they may be related or useful technologies. Trusted platforms are hardware or software components that provide a secure and trustworthy environment for the execution of applications or processes on a device. Trusted platforms can involve various mechanisms, such as trusted platform modules (TPM), secure boot, or trusted execution technology (TXT). Trusted platforms can provide some benefits for endpoint security, such as enhancing the confidentiality and integrity of the data and the code, preventing unauthorized modifications or tampering, and enabling remote attestation or verification. However, trusted platforms are not a minimal or essential component of endpoint security, as they are not widely available or supported on all types of devices, and they may not be compatible or interoperable with some applications or processes. Token-based authentication is a technique that uses a physical or logical device, such as a smart card, a one-time password generator, or a mobile app, to generate or store a credential that is used to verify the identity of the user who accesses a network or a system. Token-based authentication can provide some benefits for endpoint security, such as enhancing the security and reliability of the authentication process, preventing password theft or reuse, and enabling multi-factor authentication (MFA). However, token-based authentication is not a minimal or essential component of endpoint security, as it does not provide protection for the device itself, but only for the user access credentials, and it may require additional infrastructure or support to implement and manage. Wireless access points (AP) are hardware devices that allow wireless devices, such as laptops, smartphones, or tablets, to connect to a wired network, such as the Internet or a local area network (LAN). Wireless access points (AP) can provide some benefits for endpoint security, such as extending the network coverage and accessibility, supporting the encryption and authentication mechanisms, and enabling the segmentation and isolation of the wireless network. 
However, wireless access points (AP) are not a component of endpoint security, as they are not installed or configured on the individual devices, but on the network infrastructure, and they may introduce some security risks, such as signal interception, rogue access points, or unauthorized connections.
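The essence of a host-based firewall is a default-deny decision made on the endpoint itself. The following sketch shows that logic only, with a hypothetical allow list and connection tuples; a real host firewall (Windows Defender Firewall, iptables/nftables, pf) enforces this in the operating system's network stack.

```python
# Minimal sketch of default-deny inbound filtering on a host: only explicitly
# allowed (protocol, destination port) pairs are accepted; everything else is dropped.
ALLOW_INBOUND = {("tcp", 22), ("tcp", 443)}   # e.g. SSH management and HTTPS (example policy)

def allow(protocol: str, dst_port: int) -> bool:
    return (protocol, dst_port) in ALLOW_INBOUND

for conn in [("tcp", 443), ("tcp", 3389), ("udp", 53)]:
    action = "ACCEPT" if allow(*conn) else "DROP"
    print(conn, action)
```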
An organization adopts a new firewall hardening standard. How can the security professional verify that the technical staff correctly implemented the new standard?
Perform a compliance review
Perform a penetration test
Train the technical staff
Survey the technical staff
A compliance review is a process of checking whether the systems and processes meet the established standards, policies, and regulations. A compliance review can help to verify that the technical staff has correctly implemented the new firewall hardening standard, as well as to identify and correct any deviations or violations. A penetration test, a training session, or a survey are not as effective as a compliance review, as they may not cover all the aspects of the firewall hardening standard or provide sufficient evidence of compliance. References: CISSP Exam Outline
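In practice a compliance review of this kind often reduces to comparing each device's running configuration against the hardening baseline. The sketch below assumes the baseline and running configurations have already been normalized into sets of named settings; the device names and settings are hypothetical.

```python
# Hypothetical compliance check: report where each firewall's running configuration
# deviates from the hardening baseline, instead of relying on interviews or surveys.
baseline = {"default_deny_inbound", "deny_telnet", "log_denied_packets", "ntp_authentication"}

running_configs = {
    "fw-edge-01": {"default_deny_inbound", "deny_telnet", "log_denied_packets", "ntp_authentication"},
    "fw-dc-02":   {"default_deny_inbound", "log_denied_packets", "permit_any_outbound"},
}

for device, settings in running_configs.items():
    missing = baseline - settings
    extra = settings - baseline
    status = "COMPLIANT" if not missing else f"MISSING {sorted(missing)}"
    print(f"{device}: {status}; settings outside baseline: {sorted(extra)}")
```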
Which Identity and Access Management (IAM) process can be used to maintain the principle of least
privilege?
identity provisioning
access recovery
multi-factor authentication (MFA)
user access review
The Identity and Access Management (IAM) process that can be used to maintain the principle of least privilege is user access review. User access review is the process of periodically reviewing and verifying the user accounts and access rights on a system or a network, and ensuring that they are appropriate, necessary, and compliant with the policies and standards. User access review can help to maintain the principle of least privilege by identifying and removing any excessive, obsolete, or unauthorized access rights that may pose a security risk or violate the regulations. User access review can also help to support the audit and compliance activities, as well as the identity lifecycle management activities. Identity provisioning, access recovery, and multi-factor authentication (MFA) are not the IAM processes that can be used to maintain the principle of least privilege, although they may be related or useful processes. Identity provisioning is the process of creating, modifying, or deleting the user accounts and access rights on a system or a network. Identity provisioning can help to establish the principle of least privilege by granting the user accounts and access rights that are aligned with the user roles or functions within the organization. However, identity provisioning is not sufficient to maintain the principle of least privilege, as the user accounts and access rights may change or become outdated over time, due to various factors, such as role changes, transfers, promotions, or terminations. Access recovery is the process of restoring the user accounts and access rights on a system or a network, after they have been lost, corrupted, or compromised. Access recovery can help to ensure the availability and integrity of the user accounts and access rights, as well as to mitigate the impact of a security incident or a disaster. However, access recovery is not a process that can be used to maintain the principle of least privilege, as it does not involve reviewing or verifying the appropriateness or necessity of the user accounts and access rights. Multi-factor authentication (MFA) is a technique that uses two or more factors of authentication to verify the identity of the user who accesses a system or a network. MFA can help to enhance the security and reliability of the authentication process, by requiring the user to provide something they know (e.g., password), something they have (e.g., token), or something they are (e.g., biometric). However, MFA is not a process that can be used to maintain the principle of least privilege, as it does not affect the user accounts and access rights, but only the user access credentials.
Transport Layer Security (TLS) provides which of the following capabilities for a remote access server?
Transport layer handshake compression
Application layer negotiation
Peer identity authentication
Digital certificate revocation
Transport Layer Security (TLS) provides peer identity authentication as one of its capabilities for a remote access server. TLS is a cryptographic protocol that provides secure communication over a network. It operates at the transport layer of the OSI model, between the application layer and the network layer. TLS uses asymmetric encryption to establish a secure session key between the client and the server, and then uses symmetric encryption to encrypt the data exchanged during the session. TLS also uses digital certificates to verify the identity of the client and the server, and to prevent impersonation or spoofing attacks. This process is known as peer identity authentication, and it ensures that the client and the server are communicating with the intended parties and not with an attacker. TLS also provides other capabilities for a remote access server, such as data integrity, confidentiality, and forward secrecy. References: Enable TLS 1.2 on servers - Configuration Manager; How to Secure Remote Desktop Connection with TLS 1.2. - Microsoft Q&A; Enable remote access from intranet with TLS/SSL certificate (Advanced …
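Peer identity authentication is visible in a few lines of client code using Python's standard ssl module: the default context validates the server's certificate chain against trusted CAs and checks that the certificate matches the requested hostname before any application data is exchanged. The hostname is a placeholder, and the example needs network access to run.

```python
import socket
import ssl

hostname = "example.com"                     # placeholder peer
context = ssl.create_default_context()       # enables certificate-chain and hostname verification

with socket.create_connection((hostname, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        print(tls.version())                 # negotiated protocol, e.g. TLSv1.3
        print(tls.getpeercert()["subject"])  # identity asserted by the peer's validated certificate
```

On a remote access server, the same mechanism can be made mutual by requiring a client certificate (setting the server context's verify_mode to ssl.CERT_REQUIRED), so that both peers authenticate each other.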
A security compliance manager of a large enterprise wants to reduce the time it takes to perform network,
system, and application security compliance audits while increasing quality and effectiveness of the results.
What should be implemented to BEST achieve the desired results?
Configuration Management Database (CMDB)
Source code repository
Configuration Management Plan (CMP)
System performance monitoring application
A Configuration Management Database (CMDB) is a database that stores information about configuration items (CIs) for use in change, release, incident, service request, problem, and configuration management processes. A CI is any component or resource that is part of a system or a network, such as hardware, software, documentation, or personnel. A CMDB can provide some benefits for security compliance audits, such as providing a single, authoritative inventory of the systems and components in scope, mapping the relationships and dependencies between them, and supplying current configuration data that auditors can verify against the applicable standards without repeated manual discovery, which reduces audit time while improving the quality and consistency of the results.
A source code repository, a configuration management plan (CMP), and a system performance monitoring application are not the best options to achieve the desired results of reducing the time and increasing the quality and effectiveness of network, system, and application security compliance audits, although they may be related or useful tools or techniques. A source code repository is a database or a system that stores and manages the source code of a software or an application, and that supports version control, collaboration, and documentation of the code. A source code repository can provide some benefits for security compliance audits, such as:
However, a source code repository is not the best option to achieve the desired results of reducing the time and increasing the quality and effectiveness of network, system, and application security compliance audits, as it is only applicable to the application layer, and it does not provide information about the other CIs that are part of the system or the network, such as hardware, documentation, or personnel. A configuration management plan (CMP) is a document or a policy that defines and describes the objectives, scope, roles, responsibilities, processes, and procedures of configuration management, which is the process of identifying, controlling, tracking, and auditing the changes to the CIs. A CMP can provide some benefits for security compliance audits, such as:
However, a CMP is not the best option to achieve the desired results of reducing the time and increasing the quality and effectiveness of network, system, and application security compliance audits, as it is not a database or a system that stores and provides information about the CIs, but rather a document or a policy that defines and describes the configuration management process. A system performance monitoring application is a software or a tool that collects and analyzes data and metrics about the performance and the behavior of a system or a network, such as availability, reliability, throughput, response time, or resource utilization. A system performance monitoring application can provide some benefits for security compliance audits, such as:
However, a system performance monitoring application is not the best option to achieve the desired results of reducing the time and increasing the quality and effectiveness of network, system, and application security compliance audits, as it is only applicable to the network and system layers, and it does not provide information about the other CIs that are part of the system or the network, such as software, documentation, or personnel.
Which of the following is MOST appropriate for protecting confidentiality of data stored on a hard drive?
Triple Data Encryption Standard (3DES)
Advanced Encryption Standard (AES)
Message Digest 5 (MD5)
Secure Hash Algorithm 2 (SHA-2)
The most appropriate method for protecting the confidentiality of data stored on a hard drive is to use the Advanced Encryption Standard (AES). AES is a symmetric encryption algorithm that uses the same key to encrypt and decrypt data. AES can provide strong and efficient encryption for data at rest, as it uses a block cipher that operates on fixed-size blocks of data, and it supports various key sizes, such as 128, 192, or 256 bits. AES can protect the confidentiality of data stored on a hard drive by transforming the data into an unreadable form that can only be accessed by authorized parties who possess the correct key. When used in an authenticated mode of operation such as Galois/Counter Mode (GCM), AES can also provide integrity and authentication, as modification or tampering of the encrypted data is detected at decryption. Triple Data Encryption Standard (3DES), Message Digest 5 (MD5), and Secure Hash Algorithm 2 (SHA-2) are not the most appropriate methods for protecting the confidentiality of data stored on a hard drive, although they may be related or useful cryptographic techniques. 3DES is a symmetric encryption algorithm that uses three iterations of the Data Encryption Standard (DES) algorithm with two or three different keys to encrypt and decrypt data. 3DES can provide encryption for data at rest, but it is not as strong or efficient as AES, as it uses a smaller key size (56 bits per iteration), and it is slower and more complex than AES. MD5 is a hash function that produces a fixed-length output (128 bits) from a variable-length input. MD5 does not provide encryption for data at rest, as it does not use any key to transform the data, and it cannot be reversed to recover the original data. MD5 can provide some integrity for data at rest, as it can verify if the data has been changed or corrupted, but it is not secure or reliable, as it is vulnerable to collision attacks. SHA-2 is a hash function that produces a fixed-length output (224, 256, 384, or 512 bits) from a variable-length input. SHA-2 does not provide encryption for data at rest, as it does not use any key to transform the data, and it cannot be reversed to recover the original data. SHA-2 can provide integrity for data at rest, as it can verify if the data has been changed or corrupted, and it is more secure and reliable than MD5, as it is resistant to collisions and pre-image attacks.
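As a rough illustration of AES protecting data at rest, the sketch below uses the third-party Python cryptography package (an assumption, not something the question prescribes) with AES in Galois/Counter Mode, so that tampering with the stored ciphertext is detected when decryption is attempted.

```python
# Minimal sketch using the third-party "cryptography" package (pip install cryptography).
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # 256-bit AES key
aesgcm = AESGCM(key)

plaintext = b"contents of a sensitive file on the drive"
nonce = os.urandom(12)                      # unique nonce per encryption

# AES-GCM provides confidentiality plus an integrity/authentication tag.
ciphertext = aesgcm.encrypt(nonce, plaintext, None)

# Decryption raises InvalidTag if the ciphertext was modified.
recovered = aesgcm.decrypt(nonce, ciphertext, None)
assert recovered == plaintext
```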
Which factors MUST be considered when classifying information and supporting assets for risk management, legal discovery, and compliance?
System owner roles and responsibilities, data handling standards, storage and secure development lifecycle requirements
Data stewardship roles, data handling and storage standards, data lifecycle requirements
Compliance office roles and responsibilities, classified material handling standards, storage system lifecycle requirements
System authorization roles and responsibilities, cloud computing standards, lifecycle requirements
The factors that must be considered when classifying information and supporting assets for risk management, legal discovery, and compliance are data stewardship roles, data handling and storage standards, and data lifecycle requirements. Data stewardship roles are the roles and responsibilities of the individuals or entities who are accountable for the creation, maintenance, protection, and disposal of the information and supporting assets. Data stewardship roles include data owners, data custodians, data users, and data stewards. Data handling and storage standards are the policies, procedures, and guidelines that define how the information and supporting assets should be handled and stored, based on their classification level, sensitivity, and value. Data handling and storage standards include data labeling, data encryption, data backup, data retention, and data disposal. Data lifecycle requirements are the requirements that specify the stages and processes that the information and supporting assets should go through, from their creation to their destruction. Data lifecycle requirements include data collection, data processing, data analysis, data sharing, data archiving, and data deletion. System owner roles and responsibilities, data handling standards, storage and secure development lifecycle requirements are not the factors that must be considered when classifying information and supporting assets for risk management, legal discovery, and compliance, although they may be related or relevant concepts. System owner roles and responsibilities are the roles and responsibilities of the individuals or entities who are accountable for the operation, performance, and security of the system that hosts or processes the information and supporting assets. System owner roles and responsibilities include system authorization, system configuration, system monitoring, and system maintenance. Data handling standards are the policies, procedures, and guidelines that define how the information should be handled, but not how the supporting assets should be stored. Data handling standards are a subset of data handling and storage standards. Storage and secure development lifecycle requirements are the requirements that specify the stages and processes that the storage and development systems should go through, from their inception to their decommissioning. Storage and secure development lifecycle requirements include storage design, storage implementation, storage testing, storage deployment, storage operation, storage maintenance, and storage disposal. Compliance office roles and responsibilities, classified material handling standards, storage system lifecycle requirements are not the factors that must be considered when classifying information and supporting assets for risk management, legal discovery, and compliance, although they may be related or relevant concepts. Compliance office roles and responsibilities are the roles and responsibilities of the individuals or entities who are accountable for ensuring that the organization complies with the applicable laws, regulations, standards, and policies. Compliance office roles and responsibilities include compliance planning, compliance assessment, compliance reporting, and compliance improvement. 
Classified material handling standards are the policies, procedures, and guidelines that define how the information and supporting assets that are classified by the government or military should be handled and stored, based on their security level, such as top secret, secret, or confidential. Classified material handling standards are a subset of data handling and storage standards. Storage system lifecycle requirements are the requirements that specify the stages and processes that the storage system should go through, from its inception to its decommissioning. Storage system lifecycle requirements are a subset of storage and secure development lifecycle requirements. System authorization roles and responsibilities, cloud computing standards, lifecycle requirements are not the factors that must be considered when classifying information and supporting assets for risk management, legal discovery, and compliance, although they may be related or relevant concepts. System authorization roles and responsibilities are the roles and responsibilities of the individuals or entities who are accountable for granting or denying access to the system that hosts or processes the information and supporting assets. System authorization roles and responsibilities include system identification, system authentication, system authorization, and system auditing. Cloud computing standards are the standards that define the requirements, specifications, and best practices for the delivery of computing services over the internet, such as infrastructure as a service (IaaS), platform as a service (PaaS), or software as a service (SaaS). Cloud computing standards include cloud service level agreements (SLAs), cloud interoperability, cloud portability, and cloud security. Lifecycle requirements are the requirements that specify the stages and processes that the information and supporting assets should go through, from their creation to their destruction. Lifecycle requirements are the same as data lifecycle requirements.
Within the company, desktop clients receive Internet Protocol (IP) addresses over Dynamic Host Configuration
Protocol (DHCP).
Which of the following represents a valid measure to help protect the network against unauthorized access?
Implement path management
Implement port based security through 802.1x
Implement DHCP to assign IP address to server systems
Implement change management
Port based security through 802.1x is a valid measure to help protect the network against unauthorized access. 802.1x is an IEEE standard for port-based network access control (PNAC). It provides an authentication mechanism to devices wishing to attach to a LAN or WLAN. 802.1x authentication involves three parties: a supplicant, an authenticator, and an authentication server. The supplicant is a client device that wishes to access the network. The authenticator is a network device that provides a data link between the client and the network and can allow or block network traffic between the two, such as an Ethernet switch or wireless access point. The authentication server is a trusted server that can receive and respond to requests for network access, and can tell the authenticator if the connection is to be allowed, and various settings that should apply to that client’s connection. By implementing port based security through 802.1x, the network can prevent unauthorized devices from accessing the network resources and ensure that only authenticated and authorized devices can communicate on the network. References: IEEE 802.1X - Wikipedia; What Is 802.1X Authentication? How Does 802.1x Work? - Fortinet; 802.1X: Port-Based Network Access Control - IEEE 802
A Security Operations Center (SOC) receives an incident response notification on a server with an active
intruder who has planted a backdoor. Initial notifications are sent and communications are established.
What MUST be considered or evaluated before performing the next step?
Notifying law enforcement is crucial before hashing the contents of the server hard drive
Identifying who executed the incident is more important than how the incident happened
Removing the server from the network may prevent catching the intruder
Copying the contents of the hard drive to another storage device may damage the evidence
Before performing the next step in an incident response, it must be considered or evaluated that removing the server from the network may prevent catching the intruder who has planted a backdoor. This is because the intruder may still be connected to the server or may try to reconnect later, and disconnecting the server may alert the intruder or lose the opportunity to trace the intruder’s source or identity. Therefore, it may be better to isolate the server from the network or monitor the network traffic to gather more evidence and information about the intruder. Notifying law enforcement, identifying who executed the incident, and copying the contents of the hard drive are not the factors that must be considered or evaluated before performing the next step, as they are either irrelevant or premature at this stage of the incident response. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 8: Software Development Security, page 973; Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 8: Security Operations, page 1019.
Who must approve modifications to an organization's production infrastructure configuration?
Technical management
Change control board
System operations
System users
A change control board (CCB) is a group of stakeholders who are responsible for reviewing, approving, and monitoring changes to an organization’s production infrastructure configuration. A production infrastructure configuration is the set of hardware, software, network, and environmental components that support the operation of an information system. Changes to the production infrastructure configuration can affect the security, performance, availability, and functionality of the system. Therefore, changes must be carefully planned, tested, documented, and authorized before implementation. A CCB ensures that changes are aligned with the organization’s objectives, policies, and standards, and that changes do not introduce any adverse effects or risks to the system or the organization. A CCB is not the same as technical management, system operations, or system users, who may be involved in the change management process, but do not have the authority to approve changes.
How can a forensic specialist exclude from examination a large percentage of operating system files residing on a copy of the target system?
Take another backup of the media in question then delete all irrelevant operating system files.
Create a comparison database of cryptographic hashes of the files from a system with the same operating system and patch level.
Generate a message digest (MD) or secure hash on the drive image to detect tampering of the media being examined.
Discard harmless files for the operating system, and known installed programs.
A forensic specialist can exclude from examination a large percentage of operating system files residing on a copy of the target system by creating a comparison database of cryptographic hashes of the files from a system with the same operating system and patch level. This method is also known as known file filtering or file signature analysis. It allows the forensic specialist to quickly identify and eliminate the files that are part of the standard operating system installation and focus on the files that are unique or relevant to the investigation. This makes the process of exclusion much faster and more accurate than manually deleting or discarding files. References: Computer Forensics: Forensic Techniques, Part 1 [Updated 2019]; Point Checklist: cissp book.
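A minimal Python sketch of known file filtering is shown below; the mount points are hypothetical, and a real investigation would typically use an established hash set such as the NIST NSRL rather than hashing a reference system by hand.

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Return the SHA-256 digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def build_known_file_db(reference_root: str) -> set[str]:
    """Hash every file on a clean system with the same OS and patch level."""
    return {sha256_of(p) for p in Path(reference_root).rglob("*") if p.is_file()}

def files_needing_review(evidence_root: str, known: set[str]) -> list[Path]:
    """Exclude files whose hashes match the reference database."""
    return [p for p in Path(evidence_root).rglob("*")
            if p.is_file() and sha256_of(p) not in known]

# Hypothetical mount points for the reference image and the evidence copy.
# known = build_known_file_db("/mnt/reference")
# for f in files_needing_review("/mnt/evidence", known):
#     print(f)
```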
Contingency plan exercises are intended to do which of the following?
Train personnel in roles and responsibilities
Validate service level agreements
Train maintenance personnel
Validate operation metrics
Contingency plan exercises are intended to train personnel in roles and responsibilities. Contingency plan exercises are simulated scenarios that test the preparedness and effectiveness of the contingency plan, which is a document that outlines the actions and procedures to be followed in the event of a disruption or disaster. Contingency plan exercises help to train the personnel involved in the contingency plan, such as the incident response team, the recovery team, and the business continuity team, in their roles and responsibilities, such as communication, coordination, decision making, and execution. Contingency plan exercises also help to identify and resolve any issues or gaps in the contingency plan, and to improve the skills and confidence of the personnel. References: Contingency Plan Testing: Contingency Planning Guide for Federal Information Systems.
Internet Protocol (IP) source address spoofing is used to defeat
address-based authentication.
Address Resolution Protocol (ARP).
Reverse Address Resolution Protocol (RARP).
Transmission Control Protocol (TCP) hijacking.
Internet Protocol (IP) source address spoofing is used to defeat address-based authentication, which is a method of verifying the identity of a user or a system based on their IP address. IP source address spoofing involves forging the IP header of a packet to make it appear as if it came from a trusted or authorized source, and bypassing the authentication check. IP source address spoofing can be used for various malicious purposes, such as denial-of-service attacks, man-in-the-middle attacks, or session hijacking. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5, page 527; CISSP For Dummies, 7th Edition, Chapter 5, page 153.
Which of the following actions should be performed when implementing a change to a database schema in a production system?
Test in development, determine dates, notify users, and implement in production
Apply change to production, run in parallel, finalize change in production, and develop a back-out strategy
Perform user acceptance testing in production, have users sign off, and finalize change
Change in development, perform user acceptance testing, develop a back-out strategy, and implement change
The best practice for implementing a change to a database schema in a production system is to follow a change management process that includes the following steps: Change in development, perform user acceptance testing, develop a back-out strategy, and implement change. This ensures that the change is properly tested, approved, documented, and communicated, and that there is a contingency plan in case of failure or unexpected results. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 8, page 823; CISSP For Dummies, 7th Edition, Chapter 8, page 263.
Which of the following is an essential element of a privileged identity lifecycle management?
Regularly perform account re-validation and approval
Account provisioning based on multi-factor authentication
Frequently review performed activities and request justification
Account information to be provided by supervisor or line manager
A privileged identity lifecycle management is a process of managing the access rights and activities of users who have elevated permissions to access sensitive data or resources in an organization2. An essential element of a privileged identity lifecycle management is to regularly perform account re-validation and approval, which means verifying that the privileged users still need their access rights and have them approved by the appropriate authority. This can help prevent unauthorized or excessive access, reduce the risk of insider threats, and ensure compliance with policies and regulations. Account provisioning based on multi-factor authentication, frequently review performed activities and request justification, and account information to be provided by supervisor or line manager are also important aspects of a privileged identity lifecycle management, but they are not as essential as account re-validation and approval. References: 2: Official (ISC)2 CISSP CBK Reference, 5th Edition, Chapter 5, page 283.
Which of the following is the MOST important consideration when storing and processing Personally Identifiable Information (PII)?
Encrypt and hash all PII to avoid disclosure and tampering.
Store PII for no more than one year.
Avoid storing PII in a Cloud Service Provider.
Adherence to collection limitation laws and regulations.
The most important consideration when storing and processing PII is to adhere to the collection limitation laws and regulations that apply to the jurisdiction and context of the data processing. Collection limitation is a principle that states that PII should be collected only for a specific, legitimate, and lawful purpose, and only to the extent that is necessary for that purpose1. By following this principle, the data processor can minimize the amount of PII that is stored and processed, and reduce the risk of data breaches, misuse, or unauthorized access. Encrypting and hashing all PII, storing PII for no more than one year, and avoiding storing PII in a cloud service provider are also good practices for protecting PII, but they are not as important as adhering to the collection limitation laws and regulations. References: 1: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5, page 290.
An advantage of link encryption in a communications network is that it
makes key management and distribution easier.
protects data from start to finish through the entire network.
improves the efficiency of the transmission.
encrypts all information, including headers and routing information.
An advantage of link encryption in a communications network is that it encrypts all information, including headers and routing information. Link encryption is a type of encryption that is applied at the data link layer of the OSI model, and encrypts the entire packet or frame as it travels from one node to another1. Link encryption can protect the confidentiality and integrity of the data, as well as the identity and location of the nodes. Link encryption does not make key management and distribution easier, as it requires each node to have a separate key for each link. Link encryption does not protect data from start to finish through the entire network, as it only encrypts the data while it is in transit, and decrypts it at each node. Link encryption does not improve the efficiency of the transmission, as it adds overhead and latency to the communication. References: 1: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 7, page 419.
What principle requires that changes to the plaintext affect many parts of the ciphertext?
Diffusion
Encapsulation
Obfuscation
Permutation
Diffusion is the principle that requires that changes to the plaintext affect many parts of the ciphertext. Diffusion is a property of a good encryption algorithm that aims to spread the influence of each plaintext bit over many ciphertext bits, so that a small change in the plaintext results in a large change in the ciphertext2. Diffusion can increase the security of the encryption by making it harder for an attacker to analyze the statistical patterns or correlations between the plaintext and the ciphertext. Encapsulation, obfuscation, and permutation are not principles that require that changes to the plaintext affect many parts of the ciphertext, as they are related to different aspects of encryption or security. References: 2: CISSP For Dummies, 7th Edition, Chapter 3, page 65.
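The avalanche behavior that diffusion produces can be observed directly; the short Python example below (using SHA-256 from the standard library purely for convenience, since its avalanche behavior is analogous) changes a single character of the input and counts how many output bits flip.

```python
import hashlib

def bit_diff(a: bytes, b: bytes) -> int:
    """Count the number of differing bits between two equal-length byte strings."""
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

m1 = b"transfer $1000 to account 12345"
m2 = b"transfer $9000 to account 12345"   # a single character changed

d1 = hashlib.sha256(m1).digest()
d2 = hashlib.sha256(m2).digest()

# With good diffusion, roughly half of the 256 output bits flip.
print(f"{bit_diff(d1, d2)} of 256 bits differ")
```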
What should be the INITIAL response to Intrusion Detection System/Intrusion Prevention System (IDS/IPS) alerts?
Ensure that the Incident Response Plan is available and current.
Determine the traffic's initial source and block the appropriate port.
Disable or disconnect suspected target and source systems.
Verify the threat and determine the scope of the attack.
The initial response to Intrusion Detection System/Intrusion Prevention System (IDS/IPS) alerts should be to verify the threat and determine the scope of the attack, as this will help to confirm the validity and severity of the alert, and to identify the affected systems, networks, and data. This step is essential to avoid false positives, false negatives, and overreactions, and to prepare for the appropriate mitigation and recovery actions. Ensuring that the Incident Response Plan is available and current is a preparatory step that should be done before any IDS/IPS alert occurs, not after. Determining the traffic's initial source and blocking the appropriate port, and disabling or disconnecting suspected target and source systems are possible mitigation steps that should be done after verifying the threat and determining the scope of the attack, not before. References: IDS vs IPS - What's the Difference & Which do You Need? - Comparitech; IDS vs. IPS: Definitions, Comparisons & Why You Need Both | Okta; IDS and IPS: Understanding Similarities and Differences - EC-Council.
Which of the following is ensured when hashing files during chain of custody handling?
Availability
Accountability
Integrity
Non-repudiation
Hashing files during chain of custody handling ensures integrity, which means that the files have not been altered or tampered with during the collection, preservation, or analysis of digital evidence1. Hashing is a process of applying a mathematical function to a file to generate a unique value, called a hash or a digest, that represents the file’s content. By comparing the hash values of the original and the copied files, the integrity of the files can be verified. Availability, accountability, and non-repudiation are not ensured by hashing files during chain of custody handling, as they are related to different aspects of information security. References: 1: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 10, page 633.
An organization is designing a large enterprise-wide document repository system. They plan to have several different classification level areas with increasing levels of controls. The BEST way to ensure document confidentiality in the repository is to
encrypt the contents of the repository and document any exceptions to that requirement.
utilize an Intrusion Detection System (IDS) set to drop connections if too many requests for documents are detected.
keep individuals with access to high security areas from saving those documents into lower security areas.
require individuals with access to the system to sign Non-Disclosure Agreements (NDA).
The best way to ensure document confidentiality in the repository is to encrypt the contents of the repository and document any exceptions to that requirement. Encryption is the process of transforming the information into an unreadable form using a secret key or algorithm. Encryption protects the confidentiality of the information by preventing unauthorized access or disclosure, even if the repository is compromised or breached. Encryption can also support the integrity and authenticity of the information when combined with authenticated modes of operation or digital signatures. Documenting any exceptions to the encryption requirement is also important to justify the reasons and risks for not encrypting certain information, and to apply alternative controls if needed. References: What Is a Document Repository and What Are the Benefits of Using One; What is a document repository and why you should have one.
Which of the following is the best practice for testing a Business Continuity Plan (BCP)?
Test before the IT Audit
Test when environment changes
Test after installation of security patches
Test after implementation of system patches
The best practice for testing a Business Continuity Plan (BCP) is to test it when the environment changes, such as when there are new business processes, technologies, threats, or regulations. This ensures that the BCP is updated, relevant, and effective for the current situation. Testing the BCP before the IT audit, after installation of security patches, or after implementation of system patches is not the best practice, as those triggers may not reflect the actual changes in the business environment or the potential disruptions that may occur. References: Comprehensive Guide to Business Continuity Testing; Maximizing Your BCP Testing Efforts: Best Practices.
The process of mutual authentication involves a computer system authenticating a user and authenticating the
user to the audit process.
computer system to the user.
user's access to all authorized objects.
computer system to the audit process.
Mutual authentication is the process of verifying the identity of both parties in a communication. The computer system authenticates the user by verifying their credentials, such as a username and password, biometrics, or tokens. The user authenticates the computer system by verifying its identity through means such as a digital certificate, a trusted third party, or a challenge-response mechanism. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5, page 515; CISSP For Dummies, 7th Edition, Chapter 5, page 151.
Including a Trusted Platform Module (TPM) in the design of a computer system is an example of a technique to what?
Interface with the Public Key Infrastructure (PKI)
Improve the quality of security software
Prevent Denial of Service (DoS) attacks
Establish a secure initial state
Including a Trusted Platform Module (TPM) in the design of a computer system is an example of a technique to establish a secure initial state. A TPM is a hardware device that provides cryptographic functions and secure storage for keys, certificates, passwords, and other sensitive data. A TPM can also measure and verify the integrity of the system components, such as the BIOS, boot loader, operating system, and applications, before they are executed. This process is known as trusted boot or measured boot, and it ensures that the system is in a known and trusted state before allowing access to the user or network. A TPM can also enable features such as disk encryption, remote attestation, and platform authentication. References: What is a Trusted Platform Module (TPM)?; Trusted Platform Module (TPM) Fundamentals.
Which one of the following is a threat related to the use of web-based client side input validation?
Users would be able to alter the input after validation has occurred
The web server would not be able to validate the input after transmission
The client system could receive invalid input from the web server
The web server would not be able to receive invalid input from the client
A threat related to the use of web-based client side input validation is that users would be able to alter the input after validation has occurred. Client side input validation is performed on the user's browser using JavaScript or other scripting languages. It can provide faster and more user-friendly feedback to the user, but it can also be easily bypassed or manipulated by an attacker who disables JavaScript, uses a web proxy, or modifies the source code of the web page. Therefore, client side input validation should not be relied upon as the sole or primary method of preventing malicious or malformed input from reaching the web server. Server side input validation is also necessary to ensure the security and integrity of the web application. References: Input Validation - OWASP Cheat Sheet Series; Input Validation vulnerabilities and how to fix them.
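A minimal, framework-free Python sketch of the corresponding defense is shown below: the server re-validates the submitted value against an allow-list pattern regardless of what the browser-side script claims to have checked. The field name and pattern are illustrative assumptions.

```python
import re

# Server-side re-validation: never assume the browser's JavaScript checks
# actually ran, because a user can bypass them with a proxy or by posting
# directly to the endpoint.
QUANTITY_PATTERN = re.compile(r"^[1-9][0-9]{0,2}$")   # 1-999, digits only

def handle_order(form: dict) -> str:
    quantity = form.get("quantity", "")
    if not QUANTITY_PATTERN.fullmatch(quantity):
        # Reject anything that does not match the allow-list pattern,
        # even though the client-side script "already validated" it.
        return "400 Bad Request: invalid quantity"
    return f"200 OK: ordered {int(quantity)} items"

print(handle_order({"quantity": "25"}))                       # accepted
print(handle_order({"quantity": "25; DROP TABLE orders"}))    # rejected
```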
Which of the following is a potential risk when a program runs in privileged mode?
It may serve to create unnecessary code complexity
It may not enforce job separation duties
It may create unnecessary application hardening
It may allow malicious code to be inserted
A potential risk when a program runs in privileged mode is that it may allow malicious code to be inserted. Privileged mode, also known as kernel mode or supervisor mode, is a mode of operation that grants the program full access and control over the hardware and software resources of the system, such as memory, disk, CPU, and devices. A program that runs in privileged mode can perform any action or instruction without any restriction or protection. This can be exploited by an attacker who can inject malicious code into the program, such as a rootkit, a backdoor, or a keylogger, and gain unauthorized access or control over the system. References: What is Privileged Mode?; Privilege Escalation - OWASP Cheat Sheet Series.
In Disaster Recovery (DR) and business continuity training, which BEST describes a functional drill?
A full-scale simulation of an emergency and the subsequent response functions
A specific test by response teams of individual emergency response functions
A functional evacuation of personnel
An activation of the backup site
A functional drill is a type of disaster recovery and business continuity training that involves a specific test by response teams of individual emergency response functions, such as fire suppression, medical assistance, or data backup. A functional drill is designed to evaluate the performance, coordination, and effectiveness of the response teams and the emergency procedures. A functional drill is not the same as a full-scale simulation, a functional evacuation, or an activation of the backup site. A full-scale simulation is a type of disaster recovery and business continuity training that involves a realistic and comprehensive scenario of an emergency and the subsequent response functions, involving all the stakeholders, resources, and equipment. A functional evacuation is a type of disaster recovery and business continuity training that involves the orderly and safe movement of personnel from a threatened or affected area to a safe location. An activation of the backup site is a type of disaster recovery and business continuity action that involves the switching of operations from the primary site to the secondary site in the event of a disaster or disruption.
What is the ultimate objective of information classification?
To assign responsibility for mitigating the risk to vulnerable systems
To ensure that information assets receive an appropriate level of protection
To recognize that the value of any item of information may change over time
To recognize the optimal number of classification categories and the benefits to be gained from their use
The ultimate objective of information classification is to ensure that information assets receive an appropriate level of protection in accordance with their importance and sensitivity to the organization. Information classification is the process of assigning labels or categories to information based on criteria such as confidentiality, integrity, availability, and value. Information classification helps the organization to identify the risks and threats to the information, and to apply the necessary controls and safeguards to protect it. Information classification also helps the organization to comply with the legal, regulatory, and contractual obligations related to the information. References: Information Classification - Why it matters?; ISO 27001 & Information Classification: Free 4-Step Guide.
The goal of software assurance in application development is to
enable the development of High Availability (HA) systems.
facilitate the creation of Trusted Computing Base (TCB) systems.
prevent the creation of vulnerable applications.
encourage the development of open source applications.
The goal of software assurance in application development is to prevent the creation of vulnerable applications. Software assurance is the process of ensuring that the software is designed, developed, and maintained in a secure, reliable, and trustworthy manner. Software assurance involves applying security principles, standards, and best practices throughout the software development life cycle, such as security requirements, design, coding, testing, deployment, and maintenance. Software assurance aims to prevent or reduce the introduction of vulnerabilities, defects, or errors in the software that could compromise its security, functionality, or quality. References: Software Assurance; Software Assurance - OWASP Cheat Sheet Series.
Which of the following is a method used to prevent Structured Query Language (SQL) injection attacks?
Data compression
Data classification
Data warehousing
Data validation
Data validation is a method used to prevent Structured Query Language (SQL) injection attacks, which are a type of web application attack that exploits the input fields of a web form to inject malicious SQL commands into the underlying database. Data validation involves checking the input data for any illegal or unexpected characters, such as quotes, semicolons, or keywords, and rejecting or sanitizing them before passing them to the database. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 6, page 660; CISSP For Dummies, 7th Edition, Chapter 6, page 199.
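Beyond character filtering, parameterized queries are the standard complement to data validation; the Python sketch below (using the standard library sqlite3 module and an illustrative table) shows how binding the input as a parameter keeps an injection payload from changing the query logic.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_supplied = "alice' OR '1'='1"   # classic injection payload

# Unsafe: string concatenation would let the payload alter the query logic.
# query = "SELECT role FROM users WHERE name = '" + user_supplied + "'"

# Safe: a parameterized query treats the input strictly as data.
rows = conn.execute(
    "SELECT role FROM users WHERE name = ?", (user_supplied,)
).fetchall()
print(rows)   # [] - the payload matches no user instead of bypassing the check
```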
An Intrusion Detection System (IDS) is generating alarms that a user account has over 100 failed login attempts per minute. A sniffer is placed on the network, and a variety of passwords for that user are noted. Which of the following is MOST likely occurring?
A dictionary attack
A Denial of Service (DoS) attack
A spoofing attack
A backdoor installation
A dictionary attack is a type of brute-force attack that attempts to guess a user's password by trying a large number of possible words or phrases, often derived from a dictionary or a list of commonly used passwords. A dictionary attack can be detected by an Intrusion Detection System (IDS) if it generates a high number of failed login attempts per minute, as well as a variety of passwords for the same user. A sniffer can capture the network traffic and reveal the passwords being tried by the attacker. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 6, page 657; CISSP For Dummies, 7th Edition, Chapter 6, page 197.
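A simple sliding-window counter is one way such an IDS rule can be approximated; the Python sketch below (threshold and names are illustrative) flags an account once its failed logins in the last minute exceed the rate described in the scenario.

```python
from collections import defaultdict, deque
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=1)
THRESHOLD = 100   # failed attempts per account per minute, as in the scenario

failed_attempts: dict[str, deque] = defaultdict(deque)

def record_failed_login(account: str, when: datetime) -> bool:
    """Return True if the account exceeds the failure threshold (possible dictionary attack)."""
    attempts = failed_attempts[account]
    attempts.append(when)
    # Drop attempts that fall outside the sliding one-minute window.
    while attempts and when - attempts[0] > WINDOW:
        attempts.popleft()
    return len(attempts) > THRESHOLD
```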
Which of the following statements is TRUE for point-to-point microwave transmissions?
They are not subject to interception due to encryption.
Interception only depends on signal strength.
They are too highly multiplexed for meaningful interception.
They are subject to interception by an antenna within proximity.
They are subject to interception by an antenna within proximity. Point-to-point microwave transmissions are line-of-sight media, which means that they can be intercepted by any antenna that is in the direct path of the signal. The interception does not depend on encryption, multiplexing, or signal strength, as long as the antenna is close enough to receive the signal.
Which one of the following is the MOST important in designing a biometric access system if it is essential that no one other than authorized individuals are admitted?
False Acceptance Rate (FAR)
False Rejection Rate (FRR)
Crossover Error Rate (CER)
Rejection Error Rate
The most important factor in designing a biometric access system if it is essential that no one other than authorized individuals are admitted is the False Acceptance Rate (FAR). FAR is the probability that a biometric system will incorrectly accept an unauthorized user. FAR is a measure of the security or accuracy of the biometric system, and it should be as low as possible to prevent unauthorized access. False Rejection Rate (FRR), Crossover Error Rate (CER), and Rejection Error Rate are not as important as FAR in this scenario, as they relate more to the usability or convenience of the biometric system than to its security. FRR is the probability that a biometric system will incorrectly reject an authorized user. CER is the point where FAR and FRR are equal, and it is used to compare the performance of different biometric systems. Rejection Error Rate is not a standard biometric performance metric. References: CISSP For Dummies, 7th Edition, Chapter 4, page 95.
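FAR and FRR are usually computed as simple ratios over an evaluation run; the Python sketch below uses made-up counts to show the trade-off a high-security deployment tunes for.

```python
def false_acceptance_rate(false_accepts: int, impostor_attempts: int) -> float:
    """FAR: fraction of impostor attempts that were wrongly accepted."""
    return false_accepts / impostor_attempts

def false_rejection_rate(false_rejects: int, genuine_attempts: int) -> float:
    """FRR: fraction of genuine attempts that were wrongly rejected."""
    return false_rejects / genuine_attempts

# Hypothetical evaluation run of a door-access fingerprint reader.
far = false_acceptance_rate(false_accepts=2, impostor_attempts=10_000)    # 0.0002
frr = false_rejection_rate(false_rejects=150, genuine_attempts=10_000)    # 0.015

# For a high-security door, tune the matching threshold to push FAR down,
# accepting that FRR (user inconvenience) will rise; CER is where the curves cross.
print(f"FAR={far:.4%}  FRR={frr:.4%}")
```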
The stringency of an Information Technology (IT) security assessment will be determined by the
system's past security record.
size of the system's database.
sensitivity of the system's data.
age of the system.
The stringency of an Information Technology (IT) security assessment will be determined by the sensitivity of the system's data, as this reflects the level of risk and impact that a security breach could have on the organization and its stakeholders. The more sensitive the data, the more stringent the security assessment should be, as it should cover more aspects of the system, use more rigorous methods and tools, and provide more detailed and accurate results and recommendations. The system's past security record, size of the system's database, and age of the system are not the main factors that determine the stringency of the security assessment, as they do not directly relate to the value and importance of the data that the system processes, stores, or transmits. References: Common Criteria for Information Technology Security Evaluation; Information technology security assessment - Wikipedia.
What is the MOST important purpose of testing the Disaster Recovery Plan (DRP)?
Evaluating the efficiency of the plan
Identifying the benchmark required for restoration
Validating the effectiveness of the plan
Determining the Recovery Time Objective (RTO)
The most important purpose of testing the Disaster Recovery Plan (DRP) is to validate the effectiveness of the plan. A DRP is a document that outlines the procedures and steps to be followed in the event of a disaster that disrupts the normal operations of an organization. A DRP aims to minimize the impact of the disaster, restore the critical functions and systems, and resume the normal operations as soon as possible. Testing the DRP is essential to ensure that the plan is feasible, reliable, and up-to-date. Testing the DRP can reveal any errors, gaps, or weaknesses in the plan, and provide feedback and recommendations for improvement. Testing the DRP can also increase the confidence and readiness of the staff, and ensure compliance with the regulatory and contractual requirements. References: What Is Disaster Recovery Testing and Why Is It Important?; Disaster Recovery Plan Testing in IT.
A disadvantage of an application filtering firewall is that it can lead to
a crash of the network as a result of user activities.
performance degradation due to the rules applied.
loss of packets on the network due to insufficient bandwidth.
Internet Protocol (IP) spoofing by hackers.
A disadvantage of an application filtering firewall is that it can lead to performance degradation due to the rules applied. An application filtering firewall is a type of firewall that inspects the content and context of the data packets at the application layer of the OSI model. It can block or allow traffic based on the application protocol, the source and destination addresses, the user identity, the time of day, and other criteria. An application filtering firewall provides a high level of security and control, but it also requires more processing power and memory than other types of firewalls. This can result in slower network performance and increased latency. References: Application Layer Filtering (ALF): What is it and How does it Fit into your Security Plan?; Different types of Firewalls: Their advantages and disadvantages.
Which type of control recognizes that a transaction amount is excessive in accordance with corporate policy?
Detection
Prevention
Investigation
Correction
A detection control is a type of control that identifies and reports the occurrence of an unwanted event, such as a violation of a policy or a threshold. A detection control does not prevent or correct the event, but rather alerts the appropriate personnel or system to take action. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1, page 29; CISSP For Dummies, 7th Edition, Chapter 1, page 21.
The birthday attack is MOST effective against which one of the following cipher technologies?
Chaining block encryption
Asymmetric cryptography
Cryptographic hash
Streaming cryptography
The birthday attack is most effective against cryptographic hashes. A cryptographic hash is a function that takes an input of any size and produces an output of a fixed size, called a hash or a digest, that represents the input. A cryptographic hash has several properties, such as being one-way, collision-resistant, and deterministic. A birthday attack is a type of brute-force attack that exploits the mathematical phenomenon known as the birthday paradox, which states that in a surprisingly small set of randomly chosen elements, there is a high probability that some pair of elements will have the same value. A birthday attack can be used to find collisions in a cryptographic hash, which means finding two different inputs that produce the same hash. Finding collisions can compromise the integrity or the security of the hash, as it can allow an attacker to forge or modify the input without changing the hash. Chaining block encryption, asymmetric cryptography, and streaming cryptography are not as vulnerable to the birthday attack, as they are different types of encryption algorithms that use keys and ciphers to transform the input into an output. References: Official (ISC)2 CISSP CBK Reference, 5th Edition, Chapter 3, page 133; CISSP All-in-One Exam Guide, Eighth Edition, Chapter 3, page 143.
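The birthday bound can be checked numerically; the Python sketch below uses the common approximation p ≈ 1 − e^(−k(k−1)/2N) to show that about 2^(n/2) hash computations already give a large collision probability for an n-bit digest.

```python
import math

def collision_probability(samples: int, space_size: float) -> float:
    """Approximate probability that at least two of `samples` random values collide."""
    return 1.0 - math.exp(-samples * (samples - 1) / (2.0 * space_size))

# For an n-bit hash there are 2**n possible digests; a collision becomes
# likely after only about 2**(n/2) hashes (the birthday bound).
n_bits = 128
space = 2.0 ** n_bits
work = int(2.0 ** (n_bits / 2))

print(collision_probability(work, space))       # ~0.39 after 2**64 hashes
print(collision_probability(4 * work, space))   # ~0.9997 after 2**66 hashes
```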
At a MINIMUM, a formal review of any Disaster Recovery Plan (DRP) should be conducted
monthly.
quarterly.
annually.
bi-annually.
A formal review of any Disaster Recovery Plan (DRP) should be conducted at a minimum annually, or more frequently if there are significant changes in the business environment, the IT infrastructure, the security threats, or the regulatory requirements. A formal review involves evaluating the DRP against the current business needs, objectives, and risks, and ensuring that the DRP is updated, accurate, complete, and consistent. A formal review also involves testing the DRP to verify its effectiveness and feasibility, and identifying any gaps or weaknesses that need to be addressed. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 10, page 1035; CISSP For Dummies, 7th Edition, Chapter 10, page 351.
A software scanner identifies a region within a binary image having high entropy. What does this MOST likely indicate?
Encryption routines
Random number generator
Obfuscated code
Botnet command and control
Obfuscated code is a type of code that is deliberately written or modified to make it difficult to understand or reverse engineer. Obfuscation techniques can include changing variable names, removing comments, adding irrelevant code, or encrypting parts of the code. Obfuscated code can have high entropy, which means that it has a high degree of randomness or unpredictability. A software scanner can identify a region within a binary image having high entropy as a possible indication of obfuscated code. Encryption routines, random number generators, and botnet command and control are not necessarily related to obfuscated code, and may not have high entropy. References: Official (ISC)2 CISSP CBK Reference, 5th Edition, Chapter 8, page 467; CISSP All-in-One Exam Guide, Eighth Edition, Chapter 8, page 508.
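Entropy scanning of this kind is straightforward to sketch; the Python example below computes Shannon entropy over fixed-size windows of a binary image, with the window size and threshold chosen purely for illustration.

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte (0.0 = constant data, 8.0 = uniformly random)."""
    if not data:
        return 0.0
    counts = Counter(data)
    total = len(data)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def high_entropy_regions(image: bytes, window: int = 4096, threshold: float = 7.2):
    """Yield (offset, entropy) for windows that look packed, encrypted, or obfuscated."""
    for offset in range(0, max(len(image) - window, 0) + 1, window):
        h = shannon_entropy(image[offset:offset + window])
        if h >= threshold:
            yield offset, h
```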
Copyright provides protection for which of the following?
Ideas expressed in literary works
A particular expression of an idea
New and non-obvious inventions
Discoveries of natural phenomena
Copyright is a form of intellectual property that grants the author or creator of an original work the exclusive right to reproduce, distribute, perform, display, or license the work. Copyright does not protect ideas, concepts, facts, discoveries, or methods, but only the particular expression of an idea in a tangible medium, such as a book, a song, a painting, or a software program. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 3, page 287; CISSP For Dummies, 7th Edition, Chapter 3, page 87.
Which one of the following transmission media is MOST effective in preventing data interception?
Microwave
Twisted-pair
Fiber optic
Coaxial cable
Fiber optic is the most effective transmission medium for preventing data interception, as it uses light signals to transmit data over thin glass or plastic fibers. Fiber optic cables are immune to electromagnetic interference and do not radiate signals that can be picked up by external devices, which makes them very difficult to tap or eavesdrop on without physically accessing the fiber. Fiber optic cables also have a low attenuation rate, which means that they can transmit data over long distances without losing much signal strength or quality. Microwave, twisted-pair, and coaxial cable are less effective transmission media for preventing data interception, as they use electromagnetic waves or electrical signals to transmit data over metal wires or air. These media are susceptible to interference, noise, or tapping, which can compromise the confidentiality or integrity of the data. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 7, page 406; CISSP For Dummies, 7th Edition, Chapter 4, page 85.
Which of the following is the FIRST step of a penetration test plan?
Analyzing a network diagram of the target network
Notifying the company's customers
Obtaining the approval of the company's management
Scheduling the penetration test during a period of least impact
The first step of a penetration test plan is to obtain the approval of the company's management, as well as the consent of the target network's owner or administrator. This is essential to ensure the legality, ethics, and scope of the test, as well as to define the objectives, expectations, and deliverables of the test. Without proper authorization, a penetration test could be considered an unauthorized or malicious attack, and could result in legal or reputational consequences. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 7, page 758; CISSP For Dummies, 7th Edition, Chapter 7, page 234.
The three PRIMARY requirements for a penetration test are
A defined goal, limited time period, and approval of management
A general objective, unlimited time, and approval of the network administrator
An objective statement, disclosed methodology, and fixed cost
A stated objective, liability waiver, and disclosed methodology
The three primary requirements for a penetration test are a defined goal, a limited time period, and an approval of management. A penetration test is a type of security assessment that simulates a malicious attack on an information system or network, with the permission of the owner, to identify and exploit vulnerabilities and evaluate the security posture of the system or network. A penetration test requires a defined goal, which is the specific objective or scope of the test, such as testing a particular system, network, application, or function. A penetration test also requires a limited time period, which is the duration or deadline of the test, such as a few hours, days, or weeks. A penetration test also requires an approval of management, which is the formal authorization and consent from the senior management of the organization that owns the system or network to be tested, as well as the management of the organization that conducts the test. A general objective, unlimited time, and approval of the network administrator are not the primary requirements for a penetration test, as they may not provide a clear and realistic direction, scope, and authorization for the test.
Which of the following is the BEST method to prevent malware from being introduced into a production environment?
Purchase software from a limited list of retailers
Verify the hash key or certificate key of all updates
Do not permit programs, patches, or updates from the Internet
Test all new software in a segregated environment
Testing all new software in a segregated environment is the best method to prevent malware from being introduced into a production environment. Malware is any malicious software that can harm or compromise the security, availability, integrity, or confidentiality of a system or data. Malware can be introduced into a production environment through various sources, such as software downloads, updates, patches, or installations. Testing all new software in a segregated environment involves verifying and validating the functionality and security of the software before deploying it to the production environment, using a separate system or network that is isolated and protected from the production environment. Testing all new software in a segregated environment can provide several benefits, such as detecting malicious or unexpected behavior before the software ever reaches production, containing any infection within the isolated environment, and allowing the software to be discarded or remediated without any impact on production systems.
The other options are not the best methods to prevent malware from being introduced into a production environment, but rather methods that can reduce or mitigate the risk of malware, but not eliminate it. Purchasing software from a limited list of retailers is a method that can reduce the risk of malware from being introduced into a production environment, but not prevent it. This method involves obtaining software only from trusted and reputable sources, such as official vendors or distributors, that can provide some assurance of the quality and security of the software. However, this method does not guarantee that the software is free of malware, as it may still contain hidden or embedded malware, or it may be tampered with or compromised during the delivery or installation process. Verifying the hash key or certificate key of all updates is a method that can reduce the risk of malware from being introduced into a production environment, but not prevent it. This method involves checking the authenticity and integrity of the software updates, patches, or installations, by comparing the hash key or certificate key of the software with the expected or published value, using cryptographic techniques and tools. However, this method does not guarantee that the software is free of malware, as it may still contain malware that is not detected or altered by the hash key or certificate key, or it may be subject to a man-in-the-middle attack or a replay attack that can intercept or modify the software or the key. Not permitting programs, patches, or updates from the Internet is a method that can reduce the risk of malware from being introduced into a production environment, but not prevent it. This method involves restricting or blocking the access or download of software from the Internet, which is a common and convenient source of malware, by applying and enforcing the appropriate security policies and controls, such as firewall rules, antivirus software, or web filters. However, this method does not guarantee that the software is free of malware, as it may still be obtained or infected from other sources, such as removable media, email attachments, or network shares.
A Java program is being developed to read a file from computer A and write it to computer B, using a third computer C. The program is not working as expected. What is the MOST probable security feature of Java preventing the program from operating as intended?
Least privilege
Privilege escalation
Defense in depth
Privilege bracketing
The most probable security feature of Java preventing the program from operating as intended is least privilege. Least privilege is a principle that states that a subject (such as a user, a process, or a program) should only have the minimum amount of access or permissions that are necessary to perform its function or task. Least privilege can help to reduce the attack surface and the potential damage of a system or network, by limiting the exposure and impact of a subject in case of a compromise or misuse.
Java implements the principle of least privilege through its security model, which consists of several components, such as the class loader, the bytecode verifier, the security manager and access controller, and the security policy files that grant specific permissions to code based on its source or signer.
In this question, the Java program is being developed to read a file from computer A and write it to computer B, using a third computer C. This means that the Java program needs permissions to perform file I/O and network communication operations, which the Java security model treats as sensitive or risky actions. If the Java program is running on computer C with the default or minimal security permissions, such as inside the Java security sandbox, it will not be allowed to perform these operations, and the program will not work as expected. Therefore, the most probable security feature of Java preventing the program from operating as intended is least privilege, which limits the access or permissions of the Java program based on its source, signer, or policy.
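As a minimal sketch of why the program fails under least privilege, the code below shows the two sensitive operations the program on computer C performs. On a JVM where a security manager and a restrictive policy are in force (the classic sandbox model, deprecated in recent Java releases), each call triggers a permission check and fails with a SecurityException unless the policy grants the corresponding java.io.FilePermission and java.net.SocketPermission. The host names, share path, and port are hypothetical.

```java
import java.io.FileInputStream;
import java.net.Socket;

public class FileRelay {
    public static void main(String[] args) throws Exception {
        // Opening the remote file triggers a read permission check;
        // connecting the socket triggers a connect permission check.
        try (FileInputStream in = new FileInputStream("\\\\computerA\\share\\input.dat");
             Socket out = new Socket("computerB", 9000)) {
            // Stream the file contents from computer A to computer B
            in.transferTo(out.getOutputStream());
        }
    }
}
```

Granting only the read permission for that one file and the connect permission for that one host and port, and nothing more, is least privilege applied to this program.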
The other options are not security features of Java, but rather general security concepts or techniques. Privilege escalation is a technique by which a subject gains higher or unauthorized access or permissions than it is supposed to have, by exploiting a vulnerability or flaw in a system or network; it helps an attacker perform malicious actions or reach sensitive resources by bypassing security controls or restrictions. Defense in depth is the concept that a system or network should have multiple layers of security to provide redundancy and resilience in case of a breach or attack, using different types of security measures and controls, such as physical, technical, and administrative ones. Privilege bracketing is a technique by which a subject temporarily elevates or lowers its access or permissions to perform a specific task and then returns to its normal level, minimizing the time and scope during which elevated permissions are exposed.
What is the BEST approach to addressing security issues in legacy web applications?
Debug the security issues
Migrate to newer, supported applications where possible
Conduct a security assessment
Protect the legacy application with a web application firewall
Migrating to newer, supported applications where possible is the best approach to addressing security issues in legacy web applications. Legacy web applications are web applications that are outdated, unsupported, or incompatible with current technologies and standards. Legacy web applications may have various security issues, such as unpatched or unpatchable vulnerabilities, weak or outdated encryption and authentication mechanisms, lack of vendor support and security updates, and incompatibility with modern security controls and monitoring tools.
Migrating to newer, supported applications where possible is the best approach because it addresses the root cause rather than the symptoms: the replacement applications receive vendor support and security patches, use current encryption and authentication mechanisms, and are compatible with modern security standards and controls, which eliminates the inherited weaknesses of the legacy code instead of merely containing them.
The other options can mitigate or remediate the security issues in legacy web applications, but they do not eliminate or prevent them. Debugging the security issues involves identifying and fixing errors or defects in the code or logic of the web applications, which may be difficult or impossible for legacy applications that are outdated or unsupported. Conducting a security assessment involves evaluating and testing the security effectiveness and compliance of the web applications, using techniques and tools such as audits, reviews, scans, or penetration tests, and reporting any security weaknesses or gaps; an assessment identifies the issues but does not by itself fix them. Protecting the legacy application with a web application firewall involves deploying a security device or software that monitors and filters web traffic between the application and its users and blocks or allows requests based on predefined rules or policies; this shields the application from some attacks but may be ineffective for legacy applications whose encryption or authentication mechanisms are themselves weak or outdated.
When in the Software Development Life Cycle (SDLC) MUST software security functional requirements be defined?
After the system preliminary design has been developed and the data security categorization has been performed
After the vulnerability analysis has been performed and before the system detailed design begins
After the system preliminary design has been developed and before the data security categorization begins
After the business functional analysis and the data security categorization have been performed
Software security functional requirements must be defined after the business functional analysis and the data security categorization have been performed in the Software Development Life Cycle (SDLC). The SDLC is a process that involves planning, designing, developing, testing, deploying, operating, and maintaining a system, using various models and methodologies, such as waterfall, spiral, agile, or DevSecOps. The SDLC can be divided into several phases, each with its own objectives and activities, such as system initiation, system acquisition and development, system implementation, system operations and maintenance, and system disposal.
Software security functional requirements are the specific and measurable security features and capabilities that the system must provide to meet the security objectives and requirements. Software security functional requirements are derived from the business functional analysis and the data security categorization, which are two tasks that are performed in the system initiation phase of the SDLC. The business functional analysis is the process of identifying and documenting the business functions and processes that the system must support and enable, such as the inputs, outputs, workflows, and tasks. The data security categorization is the process of determining the security level and impact of the system and its data, based on the confidentiality, integrity, and availability criteria, and applying the appropriate security controls and measures. Software security functional requirements must be defined after the business functional analysis and the data security categorization have been performed, because they can ensure that the system design and development are consistent and compliant with the security objectives and requirements, and that the system security is aligned and integrated with the business functions and processes.
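As a rough illustration of how the data security categorization feeds the security requirements, the sketch below applies the common high-water-mark approach, assuming impact levels for confidentiality, integrity, and availability have already been assessed; the impact values shown are hypothetical.

```java
import java.util.List;
import java.util.Map;

public class SecurityCategorization {
    // Impact levels ordered so that a higher index means a higher impact
    static final List<String> LEVELS = List.of("LOW", "MODERATE", "HIGH");

    public static void main(String[] args) {
        // Hypothetical per-objective impact ratings for one information type
        Map<String, String> impacts = Map.of(
                "confidentiality", "MODERATE",
                "integrity", "HIGH",
                "availability", "LOW");

        // The overall categorization is the highest (high water mark) of the three ratings
        String overall = impacts.values().stream()
                .max((a, b) -> Integer.compare(LEVELS.indexOf(a), LEVELS.indexOf(b)))
                .orElse("LOW");

        System.out.println("Overall categorization: " + overall); // prints HIGH here
    }
}
```

The resulting level then drives which security functional requirements (for example, encryption, audit logging, or access control strength) the system must specify before design begins.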
The other options are not the phases of the SDLC when the software security functional requirements must be defined, but rather phases that involve other tasks or activities related to the system design and development. After the system preliminary design has been developed and the data security categorization has been performed is not the phase when the software security functional requirements must be defined, but rather the phase when the system architecture and components are designed, based on the system scope and objectives, and the data security categorization is verified and validated. After the vulnerability analysis has been performed and before the system detailed design begins is not the phase when the software security functional requirements must be defined, but rather the phase when the system design and components are evaluated and tested for the security effectiveness and compliance, and the system detailed design is developed, based on the system architecture and components. After the system preliminary design has been developed and before the data security categorization begins is not the phase when the software security functional requirements must be defined, but rather the phase when the system architecture and components are designed, based on the system scope and objectives, and the data security categorization is initiated and planned.
Which of the following is the PRIMARY risk with using open source software in a commercial software construction?
Lack of software documentation
License agreements requiring release of modified code
Expiration of the license agreement
Costs associated with support of the software
The primary risk with using open source software in a commercial software construction is license agreements requiring release of modified code. Open source software is software whose source code is publicly available and can be viewed, modified, and distributed by anyone. Open source software has some advantages, such as being affordable and flexible, but it also has some disadvantages, such as being potentially insecure or unsupported.
One of the main disadvantages of using open source software in a commercial software construction is the license agreements that govern the use and distribution of the open source software. License agreements are legal contracts that specify the rights and obligations of the parties involved in the software, such as the original authors, the developers, and the users. License agreements can vary in terms of their terms and conditions, such as the scope, the duration, or the fees of the software.
Some of the common types of license agreements for open source software are permissive licenses, such as the MIT, BSD, and Apache licenses, which allow the code to be used, modified, and redistributed with few conditions, and copyleft licenses, such as the GPL family, which require that derivative works be distributed under the same license terms, including release of the modified source code.
The primary risk with using open source software in a commercial software construction is license agreements requiring release of modified code, which are usually associated with copyleft licenses. This means that if a commercial software construction uses or incorporates open source software that is licensed under a copyleft license, then it must also release its own source code and any modifications or derivatives of it, under the same or compatible copyleft license. This can pose a significant risk for the commercial software construction, as it may lose its competitive advantage, intellectual property, or revenue, by disclosing its source code and allowing others to use, modify, or distribute it.
The other options are secondary or minor risks with using open source software in a commercial software construction, and they may or may not apply to a given open source product. Lack of software documentation is a secondary risk, as it may affect the quality, usability, or maintainability of the open source software, but it does not affect the rights or obligations of the commercial software construction. Expiration of the license agreement is a minor risk, as it may affect the availability or continuity of the open source software, but it is unlikely to occur because most open source licenses are perpetual or indefinite. The costs associated with support of the software are a secondary risk, as they may affect the reliability, security, or performance of the open source software, but they can be mitigated or avoided by choosing software with adequate or alternative support options.
Which of the following is a web application control that should be put into place to prevent exploitation of Operating System (OS) bugs?
Check arguments in function calls
Test for the security patch level of the environment
Include logging functions
Digitally sign each application module
Testing for the security patch level of the environment is the web application control that should be put into place to prevent exploitation of Operating System (OS) bugs. OS bugs are errors or defects in the code or logic of the OS that can cause it to malfunction or behave unexpectedly, and they can be exploited by attackers to gain unauthorized access, disrupt business operations, or steal or leak sensitive data. Testing for the security patch level of the environment verifies that the OS hosting the web application has the current patches that fix known bugs, identifies any missing or outdated patches before an attacker can exploit them, and confirms that the environment meets the organization's security baseline before and during operation of the application.
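The following is a minimal sketch of what such a check might look like from inside a Java application; the minimum required OS version string is hypothetical, and a production check would normally query the platform's patch management tooling rather than a system property.

```java
public class PatchLevelCheck {
    public static void main(String[] args) {
        // Report the OS the application is actually running on
        String osName = System.getProperty("os.name");
        String osVersion = System.getProperty("os.version");
        System.out.println("Running on " + osName + " " + osVersion);

        // Hypothetical minimum version that contains the fixes the application depends on
        String requiredVersion = "10.0.19045";

        // A naive string comparison; a real check would parse and compare version components
        if (osVersion.compareTo(requiredVersion) < 0) {
            System.out.println("Environment is below the required patch level; refuse to deploy.");
        } else {
            System.out.println("Environment meets the required patch level.");
        }
    }
}
```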
The other options are not the web application controls that should be put into place to prevent exploitation of OS bugs, but rather web application controls that can prevent or mitigate other types of web application attacks or issues. Checking arguments in function calls is a web application control that can prevent or mitigate buffer overflow attacks, which are attacks that exploit the vulnerability of the web application code that does not properly check the size or length of the input data that is passed to a function or a variable, and overwrite the adjacent memory locations with malicious code or data. Including logging functions is a web application control that can prevent or mitigate unauthorized access or modification attacks, which are attacks that exploit the lack of or weak authentication or authorization mechanisms of the web applications, and access or modify the web application data or functionality without proper permission or verification. Digitally signing each application module is a web application control that can prevent or mitigate code injection or tampering attacks, which are attacks that exploit the vulnerability of the web application code that does not properly validate or sanitize the input data that is executed or interpreted by the web application, and inject or modify the web application code with malicious code or data.
The configuration management and control task of the certification and accreditation process is incorporated in which phase of the System Development Life Cycle (SDLC)?
System acquisition and development
System operations and maintenance
System initiation
System implementation
The configuration management and control task of the certification and accreditation process is incorporated in the system acquisition and development phase of the System Development Life Cycle (SDLC). The SDLC is a process that involves planning, designing, developing, testing, deploying, operating, and maintaining a system, using various models and methodologies, such as waterfall, spiral, agile, or DevSecOps. The SDLC can be divided into several phases, each with its own objectives and activities, such as system initiation, system acquisition and development, system implementation, system operations and maintenance, and system disposal.
The certification and accreditation process is a process that involves assessing and verifying the security and compliance of a system, and authorizing and approving the system operation and maintenance, using various standards and frameworks, such as NIST SP 800-37 or ISO/IEC 27001. The certification and accreditation process can be divided into several tasks, each with its own objectives and activities, such as security categorization, security planning, security assessment, security authorization, and continuous security monitoring.
The configuration management and control task of the certification and accreditation process is incorporated in the system acquisition and development phase of the SDLC, because it ensures that the system design and development are consistent and compliant with the security objectives and requirements, and that system changes are controlled and documented. Configuration management and control is a process that involves establishing and maintaining the baseline and inventory of the system components and resources, such as hardware, software, data, or documentation, and tracking and recording any modifications or updates to them, using techniques and tools such as version control, change control, or configuration audits. This provides a known and trusted baseline for the system, prevents unauthorized or undocumented changes, and supports troubleshooting, auditing, and rollback when a change introduces a problem.
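A minimal sketch of a configuration audit follows, assuming a recorded baseline of component versions from the last accreditation; the component names and version numbers are hypothetical.

```java
import java.util.Map;

public class ConfigurationAudit {
    public static void main(String[] args) {
        // The approved baseline recorded when the system was last certified (hypothetical entries)
        Map<String, String> baseline = Map.of(
                "web-server", "2.4.58",
                "database", "15.3",
                "app-module", "1.7.0");

        // The versions actually found on the system during the audit (hypothetical entries)
        Map<String, String> current = Map.of(
                "web-server", "2.4.58",
                "database", "15.4",   // changed outside of change control
                "app-module", "1.7.0");

        // Flag every component whose installed version no longer matches the baseline
        baseline.forEach((component, version) -> {
            String installed = current.get(component);
            if (!version.equals(installed)) {
                System.out.println("Unauthorized change detected: " + component
                        + " baseline=" + version + " installed=" + installed);
            }
        });
    }
}
```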
The other options are not the phases of the SDLC that incorporate the configuration management and control task of the certification and accreditation process, but rather phases that involve other tasks of the certification and accreditation process. System operations and maintenance is a phase of the SDLC that incorporates the security monitoring task of the certification and accreditation process, because it can ensure that the system operation and maintenance are consistent and compliant with the security objectives and requirements, and that the system security is updated and improved. System initiation is a phase of the SDLC that incorporates the security categorization and security planning tasks of the certification and accreditation process, because it can ensure that the system scope and objectives are defined and aligned with the security objectives and requirements, and that the security plan and policy are developed and documented. System implementation is a phase of the SDLC that incorporates the security assessment and security authorization tasks of the certification and accreditation process, because it can ensure that the system deployment and installation are evaluated and verified for the security effectiveness and compliance, and that the system operation and maintenance are authorized and approved based on the risk and impact analysis and the security objectives and requirements.
When implementing a data classification program, why is it important to avoid too much granularity?
The process will require too many resources
It will be difficult to apply to both hardware and software
It will be difficult to assign ownership to the data
The process will be perceived as having value
When implementing a data classification program, it is important to avoid too much granularity, because the process will require too many resources. Data classification is the process of assigning a level of sensitivity or criticality to data based on its value, impact, and legal requirements. Data classification helps to determine the appropriate security controls and handling procedures for the data. However, data classification is not a simple or straightforward process, as it involves many factors, such as the nature, context, and scope of the data, the stakeholders, the regulations, and the standards. If the data classification program has too many levels or categories of data, it will increase the complexity, cost, and time of the process, and reduce the efficiency and effectiveness of the data protection. Therefore, data classification should be done with a balance between granularity and simplicity, and follow the principle of proportionality, which means that the level of protection should be proportional to the level of risk.
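To make the resource argument concrete, the sketch below counts how many label-to-control mappings an organization has to define, train on, and audit as the number of classification levels grows; the number of control areas is hypothetical.

```java
public class ClassificationGranularity {
    public static void main(String[] args) {
        // Hypothetical number of control areas that need a rule per classification level
        // (for example storage, transmission, retention, access, labeling, disposal)
        int controlAreas = 6;

        // Every additional level adds another full set of rules to maintain
        for (int levels = 3; levels <= 10; levels++) {
            System.out.println(levels + " classification levels -> "
                    + (levels * controlAreas) + " handling rules to maintain");
        }
    }
}
```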
The other options are not the main reasons to avoid too much granularity in data classification, but rather the potential challenges or benefits of data classification. It will be difficult to apply to both hardware and software is a challenge of data classification, as it requires consistent and compatible methods and tools for labeling and protecting data across different types of media and devices. It will be difficult to assign ownership to the data is a challenge of data classification, as it requires clear and accountable roles and responsibilities for the creation, collection, processing, and disposal of data. The process will be perceived as having value is a benefit of data classification, as it demonstrates the commitment and awareness of the organization to protect its data assets and comply with its obligations.
Which of the following is an effective control in preventing electronic cloning of Radio Frequency Identification (RFID) based access cards?
Personal Identity Verification (PIV)
Cardholder Unique Identifier (CHUID) authentication
Physical Access Control System (PACS) repeated attempt detection
Asymmetric Card Authentication Key (CAK) challenge-response
Asymmetric Card Authentication Key (CAK) challenge-response is an effective control in preventing electronic cloning of RFID based access cards. RFID based access cards are contactless cards that use radio frequency identification (RFID) technology to communicate with a reader and grant access to a physical or logical resource. RFID based access cards are vulnerable to electronic cloning, which is the process of copying the data and identity of a legitimate card to a counterfeit card, and using it to impersonate the original cardholder and gain unauthorized access. Asymmetric CAK challenge-response is a cryptographic technique that prevents electronic cloning by using public key cryptography and digital signatures to verify the authenticity and integrity of the card and the reader. In outline, the reader generates a fresh random challenge and sends it to the card; the card signs the challenge with the private key of its Card Authentication Key pair, which never leaves the card; and the reader verifies the signature using the public key contained in the card's certificate, granting access only if the signature is valid.
Asymmetric CAK challenge-response prevents electronic cloning because the private keys of the card and the reader are never transmitted or exposed, and the signatures are unique and non-reusable for each transaction. Therefore, a cloned card cannot produce a valid signature without knowing the private key of the original card, and a rogue reader cannot impersonate a legitimate reader without knowing its private key.
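A minimal sketch of the challenge-response exchange, using Java's standard java.security API with an EC key pair standing in for the card's Card Authentication Key; in a real deployment the private key is generated and held inside the card's secure element, and the public key is delivered to the reader in a CA-signed certificate.

```java
import java.security.*;

public class CakChallengeResponse {
    public static void main(String[] args) throws Exception {
        // The card's asymmetric key pair; on a real card the private key never leaves the chip
        KeyPair cardKeys = KeyPairGenerator.getInstance("EC").generateKeyPair();

        // Reader: generate a fresh random challenge, used for this transaction only
        byte[] challenge = new byte[32];
        SecureRandom.getInstanceStrong().nextBytes(challenge);

        // Card: sign the challenge with its private Card Authentication Key
        Signature signer = Signature.getInstance("SHA256withECDSA");
        signer.initSign(cardKeys.getPrivate());
        signer.update(challenge);
        byte[] response = signer.sign();

        // Reader: verify the response with the public key from the card's certificate
        Signature verifier = Signature.getInstance("SHA256withECDSA");
        verifier.initVerify(cardKeys.getPublic());
        verifier.update(challenge);
        System.out.println("Card authenticated: " + verifier.verify(response));
    }
}
```

Because each challenge is random and each signature is valid only for that challenge, a recorded exchange cannot be replayed, and a cloned card that lacks the private key cannot produce a valid response.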
The other options are not as effective as asymmetric CAK challenge-response in preventing electronic cloning of RFID based access cards. Personal Identity Verification (PIV) is a standard for federal employees and contractors to use smart cards for physical and logical access, but it does not specify the cryptographic technique for RFID based access cards. Cardholder Unique Identifier (CHUID) authentication is a technique that uses a unique number and a digital certificate to identify the card and the cardholder, but it does not prevent replay attacks or verify the reader’s identity. Physical Access Control System (PACS) repeated attempt detection is a technique that monitors and alerts on multiple failed or suspicious attempts to access a resource, but it does not prevent the cloning of the card or the impersonation of the reader.
Which of the following is MOST important when assigning ownership of an asset to a department?
The department should report to the business owner
Ownership of the asset should be periodically reviewed
Individual accountability should be ensured
All members should be trained on their responsibilities
When assigning ownership of an asset to a department, the most important factor is to ensure individual accountability for the asset. Individual accountability means that each person who has access to or uses the asset is responsible for its protection and proper handling. Individual accountability also implies that each person who causes or contributes to a security breach or incident involving the asset can be identified and held liable. Individual accountability can be achieved by implementing security controls such as authentication, authorization, auditing, and logging.
The other options are not as important as ensuring individual accountability, as they do not directly address the security risks associated with the asset. The department should report to the business owner is a management issue, not a security issue. Ownership of the asset should be periodically reviewed is a good practice, but it does not prevent misuse or abuse of the asset. All members should be trained on their responsibilities is a preventive measure, but it does not guarantee compliance or enforcement of the responsibilities.
In a data classification scheme, the data is owned by the
system security managers
business managers
Information Technology (IT) managers
end users
In a data classification scheme, the data is owned by the business managers. Business managers are the persons or entities that have the authority and accountability for the creation, collection, processing, and disposal of a set of data. Business managers are also responsible for defining the purpose, value, and classification of the data, as well as the security requirements and controls for the data. Business managers should be able to determine the impact the information has on the mission of the organization, which means assessing the potential consequences of losing, compromising, or disclosing the data. The impact of the information on the mission of the organization is one of the main criteria for data classification, which helps to establish the appropriate level of protection and handling for the data.
The other options are not the data owners in a data classification scheme, but rather the other roles or functions related to data management. System security managers are the persons or entities that oversee the security of the information systems and networks that store, process, and transmit the data. They are responsible for implementing and maintaining the technical and physical security of the data, as well as monitoring and auditing the security performance and incidents. Information Technology (IT) managers are the persons or entities that manage the IT resources and services that support the business processes and functions that use the data. They are responsible for ensuring the availability, reliability, and scalability of the IT infrastructure and applications, as well as providing technical support and guidance to the users and stakeholders. End users are the persons or entities that access and use the data for their legitimate purposes and needs. They are responsible for complying with the security policies and procedures for the data, as well as reporting any security issues or violations.
Which of the following BEST describes the responsibilities of a data owner?
Ensuring quality and validation through periodic audits for ongoing data integrity
Maintaining fundamental data availability, including data storage and archiving
Ensuring accessibility to appropriate users, maintaining appropriate levels of data security
Determining the impact the information has on the mission of the organization
The best description of the responsibilities of a data owner is determining the impact the information has on the mission of the organization. A data owner is a person or entity that has the authority and accountability for the creation, collection, processing, and disposal of a set of data. A data owner is also responsible for defining the purpose, value, and classification of the data, as well as the security requirements and controls for the data. A data owner should be able to determine the impact the information has on the mission of the organization, which means assessing the potential consequences of losing, compromising, or disclosing the data. The impact of the information on the mission of the organization is one of the main criteria for data classification, which helps to establish the appropriate level of protection and handling for the data.
The other options are not the best descriptions of the responsibilities of a data owner, but rather the responsibilities of other roles or functions related to data management. Ensuring quality and validation through periodic audits for ongoing data integrity is a responsibility of a data steward, who is a person or entity that oversees the quality, consistency, and usability of the data. Maintaining fundamental data availability, including data storage and archiving is a responsibility of a data custodian, who is a person or entity that implements and maintains the technical and physical security of the data. Ensuring accessibility to appropriate users, maintaining appropriate levels of data security is a responsibility of a data controller, who is a person or entity that determines the purposes and means of processing the data.
Which one of the following affects the classification of data?
Assigned security label
Multilevel Security (MLS) architecture
Minimum query size
Passage of time
The passage of time is one of the factors that affects the classification of data. Data classification is the process of assigning a level of sensitivity or criticality to data based on its value, impact, and legal requirements. Data classification helps to determine the appropriate security controls and handling procedures for the data. However, data classification is not static, but dynamic, meaning that it can change over time depending on various factors. One of these factors is the passage of time, which can affect the relevance, usefulness, or sensitivity of the data. For example, data that is classified as confidential or secret at one point in time may become obsolete, outdated, or declassified at a later point in time, and thus require a lower level of protection. Conversely, data that is classified as public or unclassified at one point in time may become more valuable, sensitive, or regulated at a later point in time, and thus require a higher level of protection. Therefore, data classification should be reviewed and updated periodically to reflect the changes in the data over time.
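As a small illustration of how the passage of time can change a classification, the sketch below downgrades an item once its scheduled declassification date has passed; the record type, labels, and date are hypothetical, and a real program would follow the organization's review procedure rather than an automatic rule.

```java
import java.time.LocalDate;

public class ClassificationReview {
    // A hypothetical record tying a classification level to a scheduled declassification date
    record ClassifiedItem(String name, String level, LocalDate declassifyOn) {}

    static String effectiveLevel(ClassifiedItem item, LocalDate today) {
        // Once the declassification date has passed, the item no longer needs its original label
        return today.isBefore(item.declassifyOn()) ? item.level() : "UNCLASSIFIED";
    }

    public static void main(String[] args) {
        ClassifiedItem memo = new ClassifiedItem("budget-memo", "CONFIDENTIAL", LocalDate.of(2020, 1, 1));
        System.out.println(effectiveLevel(memo, LocalDate.now()));
    }
}
```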
The other options are not factors that affect the classification of data, but rather the outcomes or components of data classification. Assigned security label is the result of data classification, which indicates the level of sensitivity or criticality of the data. Multilevel Security (MLS) architecture is a system that supports data classification, which allows different levels of access to data based on the clearance and need-to-know of the users. Minimum query size is a parameter that can be used to enforce data classification, which limits the amount of data that can be retrieved or displayed at a time.
An organization has doubled in size due to a rapid market share increase. The size of the Information Technology (IT) staff has maintained pace with this growth. The organization hires several contractors whose onsite time is limited. The IT department has pushed its limits building servers and rolling out workstations and has a backlog of account management requests.
Which contract is BEST in offloading the task from the IT staff?
Platform as a Service (PaaS)
Identity as a Service (IDaaS)
Desktop as a Service (DaaS)
Software as a Service (SaaS)
Identity as a Service (IDaaS) is the best contract in offloading the task of account management from the IT staff. IDaaS is a cloud-based service that provides identity and access management (IAM) functions, such as user authentication, authorization, provisioning, deprovisioning, password management, single sign-on (SSO), and multifactor authentication (MFA). IDaaS can help the organization to streamline and automate the account management process, reduce the workload and costs of the IT staff, and improve the security and compliance of the user accounts. IDaaS can also support the contractors who have limited onsite time, as they can access the organization’s resources remotely and securely through the IDaaS provider.
The other options are not as effective as IDaaS in offloading the task of account management from the IT staff, as they do not provide IAM functions. Platform as a Service (PaaS) is a cloud-based service that provides a platform for developing, testing, and deploying applications, but it does not manage the user accounts for the applications. Desktop as a Service (DaaS) is a cloud-based service that provides virtual desktops for users to access applications and data, but it does not manage the user accounts for the virtual desktops. Software as a Service (SaaS) is a cloud-based service that provides software applications for users to use, but it does not manage the user accounts for the software applications.
Which of the following is an initial consideration when developing an information security management system?
Identify the contractual security obligations that apply to the organizations
Understand the value of the information assets
Identify the level of residual risk that is tolerable to management
Identify relevant legislative and regulatory compliance requirements
When developing an information security management system (ISMS), an initial consideration is to understand the value of the information assets that the organization owns or processes. An information asset is any data, information, or knowledge that has value to the organization and supports its mission, objectives, and operations. Understanding the value of the information assets helps to determine the appropriate level of protection and investment for them, as well as the potential impact and consequences of losing, compromising, or disclosing them. Understanding the value of the information assets also helps to identify the stakeholders, owners, and custodians of the information assets, and their roles and responsibilities in the ISMS.
The other options are not initial considerations, but rather subsequent or concurrent considerations when developing an ISMS. Identifying the contractual security obligations that apply to the organizations is a consideration that depends on the nature, scope, and context of the information assets, as well as the relationships and agreements with the external parties. Identifying the level of residual risk that is tolerable to management is a consideration that depends on the risk appetite and tolerance of the organization, as well as the risk assessment and analysis of the information assets. Identifying relevant legislative and regulatory compliance requirements is a consideration that depends on the legal and ethical obligations and expectations of the organization, as well as the jurisdiction and industry of the information assets.