
IBM C1000-173 IBM Cloud Pak for Data V4.7 Architect Exam Practice Test

Page: 1 / 6
Total 63 questions

IBM Cloud Pak for Data V4.7 Architect Questions and Answers

Question 1

What is the purpose of configuring access to a Git repository associated with a project in Cloud Pak for Data?

Options:

A. To collaborate with others, manage file versions, and enable branching.
B. To manage the deployment of a model to a project space.
C. To enhance data visualization in JupyterLab or RStudio.
D. To delete the repository and create a new one.

Question 2

Which component must be enabled in order to render business lineage when installing IBM Knowledge Catalog?

Options:

A. Automated metadata lineage
B. Knowledge graph
C. Data graph
D. Data quality

Question 3

How are Knowledge Accelerators deployed?

Options:

A. Deployed as part of the sample assets.
B. Deployed from the Cloud Pak for Data marketplace.
C. Deployed by IBM support upon request.
D. Deployed from the IBM Knowledge Accelerator API.

Question 4

A customer wants to manage Cloud Pak for Data secrets via an existing supported vault system. What is needed to integrate any supported vault system into Cloud Pak for Data?

Options:

A. Username and password of the vault.
B. Fully qualified URL of the vault.
C. Client certificate to authenticate to the vault.
D. Client key in .key or .pem format to authenticate to the vault.
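
For context on these options: integrating an external vault means Cloud Pak for Data has to reach the vault over HTTPS at its fully qualified URL. The sketch below is illustrative only; it uses the public HashiCorp Vault KV v2 REST API (not a Cloud Pak for Data API), with a placeholder URL, token, and secret path, to show why a client needs the vault's fully qualified URL plus a credential in order to read a secret.

    # Illustrative only: reading a secret from a HashiCorp Vault KV v2 engine.
    # The URL, token, and secret path are placeholders, not Cloud Pak for Data settings.
    import requests

    VAULT_URL = "https://vault.example.com:8200"   # fully qualified URL of the vault
    VAULT_TOKEN = "s.placeholder-token"            # credential; the auth method varies by vault

    resp = requests.get(
        f"{VAULT_URL}/v1/secret/data/cp4d/db-credentials",
        headers={"X-Vault-Token": VAULT_TOKEN},
        timeout=10,
    )
    resp.raise_for_status()
    secret = resp.json()["data"]["data"]           # KV v2 nests the payload under data.data
    print(sorted(secret.keys()))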

Question 5

Which plug-in is used by the Cloud Pak for Data Audit Logging service to forward audit records to a SIEM system?

Options:

A. OSS/J
B. Logstash output
C. Fluentd output
D. Apache Kafka output

Question 6

Which two of the following can be used with Watson Pipelines?

Options:

A. Postgres
B. Notebooks
C. PowerShell
D. Bash scripts
E. Db2 Big SQL

Question 7

When upgrading to Cloud Pak for Data v4.7, why must an export/import of governance data be performed?

Options:

A. Containers are ephemeral and cannot persist the data.
B. The underlying Db2 repository has been altered.
C. Components of Information Server have been removed from the product.
D. Catalog data must be relocated to persistent storage.

Question 8

When creating a Db2 Big SQL service instance, which two service resource items should be taken into account when sizing the cluster?

Options:

A. Number of physical cores
B. Amount of memory
C. Number of virtual cores
D. Number of workers
E. Maximum expected throughput

Question 9

What is a Data Refinery flow in Cloud Pak for Data?

Options:

A. A data storage location.
B. A machine learning model.
C. An ordered set of data operations.
D. A visualization tool.

Question 10

After importing IBM Knowledge Accelerator assets using an API endpoint, what change must be made before the assets can be used by the appropriate users?

Options:

A. Ensure a shared credential has been configured.
B. Add users to the Knowledge Accelerators security group.
C. Add collaborators to the Knowledge Accelerators categories.
D. Ensure the user permission role is active.
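
A rough sketch of the kind of API-driven import Question 10 describes. The endpoint path, archive name, and token below are hypothetical placeholders, not the documented IBM Knowledge Accelerators API; the point is only that the import itself completes before any collaborators are added to the imported categories, which is the follow-up step the question asks about.

    # Hypothetical sketch of an API-driven asset import into Cloud Pak for Data.
    # The endpoint path, token, and archive name are placeholders for illustration,
    # not the real Knowledge Accelerators import API.
    import requests

    CPD_URL = "https://cpd.example.com"
    TOKEN = "placeholder-bearer-token"   # obtained from the platform authorization API

    with open("knowledge-accelerator-archive.zip", "rb") as archive:
        resp = requests.post(
            f"{CPD_URL}/api/hypothetical/import",          # placeholder endpoint
            headers={"Authorization": f"Bearer {TOKEN}"},
            files={"file": archive},
            timeout=300,
        )
    resp.raise_for_status()
    # Even after a successful import, collaborators still have to be added to the
    # imported categories before the intended users can work with the assets.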

Question 11

Which Db2 Big SQL component uses system resources efficiently to maximize throughput and minimize response time?

Options:

A. Hive
B. Scheduler
C. Analyzer
D. StreamThrough

Question 12

What capability does the Watson OpenScale API provide?

Options:

A. Build conversational interfaces.
B. Manage data-related assets in Watson Studio.
C. Extract answers from business documents.
D. Measure AI model outcomes and ensure fairness.

Question 13

What endpoint will an application use to interact with Db2 Big SQL?

Options:

A. Representational State Transfer (REST) Endpoint
B. System Local Efficient Endpoint Pathways (SLEEP)
C. Simple Normalized Optimum Representative Endpoint (SNORE)
D. Dynamic Representative Endpoint Activation Mobility (DREAM)
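
As a point of reference for option A: a REST endpoint is simply an HTTP(S) URL that the application calls. The snippet below is a generic illustration of submitting a SQL statement to such an endpoint; the host, path, and payload shape are placeholders, not the documented Db2 Big SQL REST API.

    # Generic illustration of an application calling a REST endpoint.
    # Host, path, and payload are placeholders, not the actual Db2 Big SQL API.
    import requests

    resp = requests.post(
        "https://bigsql.example.com:9443/sql/jobs",        # placeholder REST endpoint
        headers={
            "Authorization": "Bearer placeholder-token",
            "Content-Type": "application/json",
        },
        json={"statement": "SELECT COUNT(*) FROM sales.transactions"},
        timeout=60,
    )
    resp.raise_for_status()
    print(resp.json())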

Question 14

How does the IBM Data Virtualization service virtualize files in shared directories?

Options:

A. Users add shared file systems in the UI.
B. It scans its network for open file shares.
C. Administrators configure FTP connections to the file source.
D. A remote connector is installed and run on the source server.

Question 15

An enterprise architect at a financial institution is deciding on the deployment option for Cloud Pak for Data on their existing OpenShift Container Platform cluster.

They have decided to use an automated deployment option and install Cloud Pak for Data from the cloud provider's marketplace. What limitations might they face with this decision?

Options:

A. Cloud Pak for Data cannot be installed on an existing cluster.
B. Automatic installation cannot be done for any of the Cloud Pak for Data services.
C. Cloud Pak for Data operators cannot be co-located with the IBM Cloud Pak foundational services operators.
D. A partial installation of Cloud Pak for Data must be done manually for a first-time installation.

Question 16

Which Watson Pipeline component puts a value in columns so it can be consumed by DataStage?

Options:

A. Instantiate User Columns
B. Prepare User Parameters
C. Set User Variables
D. Initialize User Values

Question 17

When granting a user the Data Engineer role, which two permissions will the user be associated with as part of this role?

Options:

A. Monitor project workload
B. Create projects
C. Manage data protection rules
D. Manage platform
E. Manage workflows

Question 18

What are two considerations when choosing the type of storage for Cloud Pak for Data?

Options:

A. The storage network must support a minimum transmission speed of 4 Gbps.
B. The storage has a minimum throughput of 16 MB/s per disk.
C. The storage supports the services that will be installed.
D. The storage provides sufficient I/O performance.
E. NFS storage supports all Cloud Pak for Data services.
