ISACA Advanced in AI Audit (AAIA) Exam Practice Test

Page: 1 / 9
Total 90 questions

ISACA Advanced in AI Audit (AAIA) Questions and Answers

Question 1

Which of the following is the MOST important task when gathering data during the AI system development process?

Options:

A. Stratifying the data
B. Isolating the system
C. Cleaning the data
D. Training the system

Question 2

While evaluating a complex machine learning (ML) model used for regulatory compliance in a financial institution, which of the following should the IS auditor do to BEST ensure transparency?

Options:

A. Document sources and data processes.
B. Create dashboards to show outputs.
C. Provide periodic model audit reports.
D. Use tools that explain model decisions.

Question 3

An organization is using information gathered from customer accounts to train its AI chatbot. Which of the following is the GREATEST risk associated with this practice?

Options:

A. Disclosure of personal information
B. AI bias
C. Transparency
D. AI model hallucinations

Question 4

For a sales promotion, an AI system sorts customer attributes into several categories by analyzing transaction history. Verifying which of the following would BEST validate the effectiveness of this process?

Options:

A. Stress tests are regularly conducted to maintain consistent AI performance.
B. The applied methodology adequately reflects business objectives.
C. Sensitive attributes are converted to other data types prior to input.
D. Sampling of AI output is conducted to identify unusual decisions.

Question 5

From a data appropriateness and bias perspective, which of the following should be of GREATEST concern when reviewing an AI model used in a credit scoring system?

Options:

A. The model incorporates the applicant's loan history to assess spending habits.
B. The model utilizes historical credit data to predict future credit behavior.
C. The model considers the applicant's income level as a key factor in the credit decision.
D. The model uses postal codes as a primary factor in determining creditworthiness.

Question 6

Which of the following is the PRIMARY purpose of an AI acceptable use policy?

Options:

A. Establishing guidance on the ethical use of AI
B. Outlining AI usage monitoring procedures
C. Educating employees on where to find and how to use AI tools
D. Explaining the distinction between different types of AI

Question 7

Which of the following is an IS auditor's MOST important course of action when determining whether source data should be entered into approved generative AI tools to assist with an audit?

Options:

A. Validate that the tool is leveraging the latest model.
B. Validate that the tool provides a privacy notice.
C. Determine whether any AI model hallucinations have occurred.
D. Determine whether the information is reliable.

Question 8

When auditing a research agency's use of generative AI models for analyzing scientific data, which of the following is MOST critical to evaluate in order to prevent hallucinatory results and ensure the accuracy of outputs?

Options:

A. The effectiveness of data anonymization processes that help preserve data quality
B. The algorithms for generative AI models designed to detect and correct data bias before processing
C. The frequency of data audits verifying the integrity and accuracy of inputs
D. The measures in place to ensure the appropriateness and relevance of input data for generative AI models

Question 9

Which of the following is an IS auditor MOST likely to use in order to ensure an AI model has the ability to make correct predictions?

Options:

A. Adversarial testing
B. Group analysis
C. Latency testing
D. Confusion matrix
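
Concept note: a confusion matrix tabulates a model's predictions against actual outcomes, which is what makes it useful for checking whether predictions are correct. The sketch below is illustrative only; the data and the use of scikit-learn are assumptions, not part of the exam material.

```python
# Illustrative only: a confusion matrix built from made-up labels.
from sklearn.metrics import confusion_matrix

y_true = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]  # actual outcomes (1 = positive class)
y_pred = [1, 0, 0, 1, 0, 1, 1, 0, 1, 0]  # model predictions

# With labels [0, 1], rows are actual and columns are predicted:
# [[TN, FP],
#  [FN, TP]]
print(confusion_matrix(y_true, y_pred))
```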

Question 10

Which of the following is the GREATEST challenge facing IS auditors evaluating the explainability of generative AI models?

Options:

A. Differences of opinion regarding model types
B. Difficulties in preventing the input of biased data
C. Performance issues due to excessive computation
D. Algorithms changing as AI continues to learn

Question 11

An IS auditor is testing an AI-based fraud detection system that flags suspicious transactions and finds that the system has a high false positive rate. Which of the following testing methods should be prioritized to BEST optimize the detection rate?

Options:

A. Regression testing
B. Cross-validation testing
C. Substantive testing
D. Benford's Law analysis
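
Concept note: cross-validation repeatedly trains and tests a model on different splits of the data to estimate how well its detection performance generalizes. A minimal sketch, assuming synthetic imbalanced data and a scikit-learn classifier (both are illustrative choices):

```python
# Sketch of k-fold cross-validation for a fraud-detection-style classifier.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Imbalanced synthetic data (roughly 10% positive class), standing in for fraud data
X, y = make_classification(n_samples=1000, n_features=10, weights=[0.9, 0.1],
                           random_state=0)

model = LogisticRegression(max_iter=1000)
# 5-fold cross-validation scored on precision, which penalizes false positives
scores = cross_val_score(model, X, y, cv=5, scoring="precision")
print(scores.mean(), scores.std())
```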

Question 12

Which of the following is the PRIMARY objective of AI governance?

Options:

A. Implementing compliance and ethics controls for AI initiatives
B. Defining clear roles and responsibilities for AI development, use, and oversight
C. Ensuring controls over AI are designed well and operate effectively
D. Promoting a positive return on investment (ROI) from AI projects

Question 13

The PRIMARY purpose of maintaining an audit trail in AI systems is to:

Options:

A. Facilitate transparency and traceability of decisions.
B. Analyze model accuracy and fairness.
C. Measure computational efficiency.
D. Ensure compliance with regulatory standards for AI.

Question 14

When auditing the transparency of an AI system, which of the following would be the MOST effective way to understand the model's decision-making process?

Options:

A. Evaluating the diversity of the training data set
B. Analyzing the complexity of the algorithms used
C. Assessing the computational cost of the model
D. Reviewing the explainability of AI outputs

Question 15

When converting data categories before training an AI model, which of the following scenarios represents the GREATEST risk?

Options:

A. One-hot encoding the data attribute car colors for the options red, blue, green, black, white
B. Creating dummy variables for the data attribute dog breed for the options labrador, terrier, beagle
C. One-hot encoding the data attribute customer rewards category for the options economy, business, first class
D. Creating dummy variables for the data attribute product flavor for the options vanilla, chocolate, strawberry, banana
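
Concept note: one-hot encoding and dummy variables suit nominal attributes with no inherent order, but they discard the ranking in an ordered attribute such as a rewards tier. A minimal illustration using pandas; the attribute values come from the options above, and the encoding choices shown are assumptions for demonstration only.

```python
# Contrast between encoding a nominal attribute and an ordered (ordinal) attribute.
import pandas as pd

df = pd.DataFrame({
    "car_color": ["red", "blue", "green", "black", "white"],
    "rewards_category": ["economy", "business", "first class",
                         "economy", "business"],
})

# Nominal attribute: one-hot encoding is appropriate because colors have no order.
print(pd.get_dummies(df["car_color"], prefix="color"))

# Ordered attribute: one-hot encoding would discard the ranking;
# an explicit ordinal mapping preserves it.
order = {"economy": 0, "business": 1, "first class": 2}
print(df["rewards_category"].map(order))
```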

Question 16

The PRIMARY objective of auditing AI systems is to:

Options:

A. Identify biases and decision transparency.
B. Maximize system efficiency and throughput.
C. Optimize user experience and interface satisfaction.
D. Minimize algorithm latency and information storage impacts.

Question 17

Which of the following is the BEST way to support the development and design of high-risk AI systems?

Options:

A. Regularly back up the AI system's data to a secure, offsite location.
B. Conduct regular training sessions for users on data privacy.
C. Ensure the availability of trustworthy data sets.
D. Implement multi-factor authentication (MFA) for all users accessing the AI system.

Question 18

When utilizing a machine learning (ML) model to predict whether a wind turbine electricity generator will fail, which model evaluation metric should be the PRIMARY focus?

Options:

A. Precision
B. Specificity
C. Accuracy
D. Recall

Question 19

When auditing a machine learning (ML) solution, false positives can BEST be assessed by examining the level of:

Options:

A. Precision
B. Completeness
C. Accuracy
D. Recall
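
Concept note: the two questions above turn on the difference between precision (the share of flagged cases that are truly positive, i.e., how few false positives occur) and recall (the share of actual positives that are caught, i.e., how few failures are missed). Worked arithmetic with assumed, illustrative confusion-matrix counts:

```python
# Worked example of the metrics referenced above, using made-up counts.
tp, fp, fn, tn = 40, 10, 5, 945

precision = tp / (tp + fp)            # how many flagged items were truly positive
recall    = tp / (tp + fn)            # how many true positives were caught
accuracy  = (tp + tn) / (tp + fp + fn + tn)

print(f"precision={precision:.2f} recall={recall:.2f} accuracy={accuracy:.3f}")
# precision=0.80 recall=0.89 accuracy=0.985
```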

Question 20

Which of the following is the MOST effective way an IS auditor could use generative AI to plan an audit of a new database storing transactional data?

Options:

A. Identifying separation of duties conflicts for database data changes
B. Developing architecture diagrams
C. Identifying technology-specific risks and considerations
D. Summarizing meeting transcripts from interviews with database administrators (DBAs)

Question 21

A generative AI system has a validation control in place to reject inappropriate questions by checking them against built-in ethical standards. Which of the following enables malicious actors to circumvent this control through prompt engineering?

Options:

A. Submitting the same questions in a foreign language translated by another AI-based system
B. Presenting theoretical situations to justify the reason for asking the questions
C. Asking the same questions later when the algorithm has changed after further learning
D. Randomly placing keywords unrelated to the main topic

Question 22

Which of the following controls would MOST effectively mitigate worst-case service disruption scenarios affecting an AI-based application system?

Options:

A. Performing periodic tabletop exercises
B. Implementing a kill chain process in the event of disruption
C. Updating key risk indicators (KRIs) regularly
D. Including a range of AI disruption scenarios in the disaster recovery plan (DRP)

Question 23

Which of the following controls MOST effectively helps to ensure an AI model is resilient against external threats?

Options:

A. AI data set anonymization
B. Monitoring of AI model developers
C. Monitoring of AI access logs
D. AI model configuration testing

Question 24

An IS auditor notes that an AI model achieved significantly better results on training data than on test data. Which of the following problems with the model has the IS auditor identified?

Options:

A. Underfitting
B. Overfitting
C. Generalization
D. Bias
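
Concept note: the pattern this question describes, markedly better performance on training data than on unseen test data, is how overfitting typically shows up in practice. A minimal sketch, assuming synthetic noisy data and an unconstrained decision tree (both are illustrative choices):

```python
# Sketch of how a train/test accuracy gap reveals overfitting.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic data with deliberately noisy labels (flip_y adds label noise)
X, y = make_classification(n_samples=500, n_features=20, n_informative=5,
                           flip_y=0.2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)  # no depth limit
print("train accuracy:", model.score(X_train, y_train))  # fits the noise, near 1.0
print("test accuracy: ", model.score(X_test, y_test))    # noticeably lower
```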

Question 25

An organization's system development process has been enhanced with AI. Which of the following features presents the GREATEST risk?

Options:

A. The AI allocates resources for new system development projects.
B. Non-technical users are validating AI results.
C. The AI personalizes applications for the user.
D. All code is generated by AI without human oversight.

Question 26

An IS auditor reviewing documentation for an AI model notes that the modeler utilized a K-means clustering algorithm, which clusters data into categories for correlations and analysis. Which of the following is the MOST important risk for the auditor to consider?

Options:

A. K-means clustering is not a common data clustering method due to its complexity and difficulty categorizing data correctly.
B. K-means clustering requires the modeler to supervise the learning analysis, which can introduce bias.
C. K-means clustering algorithms are significantly sensitive to outliers and dependent on the similarity of units of measure.
D. K-means clustering determines the number of clusters for the modeler without supervision.
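
Concept note: because K-means groups points by Euclidean distance, a feature measured on a much larger scale (or a single extreme outlier) can dominate the distance calculation, which is why comparable units of measure matter. A minimal sketch, assuming made-up data and scikit-learn's KMeans:

```python
# Sketch of K-means sensitivity to units of measure: clustering the same data
# before and after standardization. Data values are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Feature 1 in dollars (large scale), feature 2 as a ratio (small scale)
X = np.array([[100_000, 0.1], [98_000, 0.9], [5_000, 0.2],
              [4_500, 0.8], [101_000, 0.85], [4_800, 0.15]])

# Without scaling, Euclidean distance is dominated by the dollar feature
raw_labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# After standardization, both features contribute to the distance
scaled_labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(
    StandardScaler().fit_transform(X))

print("raw-scale labels:   ", raw_labels)
print("standardized labels:", scaled_labels)
```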

Question 27

When an IS auditor is reviewing results from an AI system, which of the following would cause the GREATEST risk?

Options:

A. Inability to identify where an AI system is housed
B. System output not being checked for inconsistencies
C. Cascading failures of AI system outputs
D. Difficulty of documenting AI algorithm processes
