
Oracle 1z0-1110-23 Oracle Cloud Infrastructure Data Science 2023 Professional Exam Practice Test

Page: 1 / 8
Total 80 questions

Oracle Cloud Infrastructure Data Science 2023 Professional Questions and Answers

Question 1

As you are working in your notebook session, you find that your notebook session does not have

enough compute CPU and memory for your workload.

How would you scale up your notebook session without losing your work?

Options:

A.

Create a temporary bucket on Object Storage, write all your files and data to Object Storage,

delete your notebook session, provision a new notebook session on a larger compute shape,


and copy your files and data from your temporary bucket onto your new notebook session.

B.

Ensure your files and environments are written to the block volume storage under the

/home/datascience directory, deactivate the notebook session, and activate the notebook

session with a larger compute shape selected.

C.

Download all your files and data to your local machine, delete your notebook session,

provision a new notebook session on a larger compute shape, and upload your files from

your local machine to the new notebook session.

D.

Deactivate your notebook session, provision a new notebook session on a larger compute

shape and re-create all of your file changes.

Question 2

As a data scientist, you are tasked with creating a model training job that is expected to take

different hyperparameter values on every run. What is the most efficient way to set those

parameters with Oracle Data Science Jobs?

Options:

A.

Create a new job every time you need to run your code and pass the parameters as

environment variables.

B.

Create a new job by setting the required parameters in your code and create a new job for

every code change.

C.

Create your code to expect different parameters either as environment variables or as

command line arguments, which are set on every job run with different values.

D.

Create your code to expect different parameters as command line arguments and create a

new job every time you run the code.
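The pattern described in option C can be sketched with the Python standard library alone. The flag name --learning-rate, the environment variable LEARNING_RATE, and the default value below are illustrative choices, not part of the OCI Jobs API:

```python
import argparse
import os

def resolve_learning_rate(argv):
    """Prefer a value injected as an environment variable on the job run;
    fall back to a command line argument, then to a default."""
    parser = argparse.ArgumentParser()
    parser.add_argument("--learning-rate", type=float, default=0.01)
    args = parser.parse_args(argv)
    return float(os.environ.get("LEARNING_RATE", args.learning_rate))

# Simulate two job runs with different parameter values:
os.environ["LEARNING_RATE"] = "0.1"        # run 1: env variable set on the job run
print(resolve_learning_rate([]))           # 0.1
del os.environ["LEARNING_RATE"]
print(resolve_learning_rate(["--learning-rate", "0.05"]))  # run 2: CLI argument
```

The same job definition is reused; only the per-run values change, which is what makes this approach efficient.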

Question 3

As a data scientist, you use the Oracle Cloud Infrastructure (OCI) Language service to train custom

models. Which types of custom models can be trained?

Options:

A.

Image classification, Named Entity Recognition (NER)

B.

Text classification, Named Entity Recognition (NER)

C.

Sentiment Analysis, Named Entity Recognition (NER)

D.

Object detection, Text classification

Question 4

You have just received a new data set from a colleague. You want to quickly find out summary

information about the data set, such as the types of features, the total number of observations, and

distributions of the data. Which Accelerated Data Science (ADS) SDK method from the ADSDataset

class would you use?

Options:

A.

show_corr()

B.

to_xgb()

C.

compute()

D.

show_in_notebook()

Question 5

You want to make your model more frugal to reduce the cost of collecting and processing data.

You plan to do this by removing features that are highly correlated. You would like to create a heat

map that displays the correlation so that you can identify candidate features to remove.

Which Accelerated Data Science (ADS) SDK method is appropriate to display the correlation

between continuous and categorical features?

Options:

A.

pearson_plot()

B.

cramersv_plot()

C.

correlation_ratio_plot()

D.

corr()
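For context on option C: the correlation ratio (eta) is the standard measure of association between a categorical and a continuous variable. A minimal pure-Python sketch of the statistic itself (not the ADS plotting call), assuming no ties to any library API:

```python
from statistics import mean

def correlation_ratio(categories, values):
    """Eta: sqrt of between-group variance over total variance, in [0, 1]."""
    overall = mean(values)
    groups = {}
    for c, v in zip(categories, values):
        groups.setdefault(c, []).append(v)
    between = sum(len(g) * (mean(g) - overall) ** 2 for g in groups.values())
    total = sum((v - overall) ** 2 for v in values)
    return (between / total) ** 0.5 if total else 0.0

# A category that fully determines the value yields eta = 1.0
print(correlation_ratio(["a", "a", "b", "b"], [1.0, 1.0, 5.0, 5.0]))  # 1.0
```

Values near 1 flag a categorical feature that largely explains a continuous one, which is exactly the kind of redundancy the heat map is meant to surface.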

Question 6

Using Oracle AutoML, you are tuning hyperparameters on a supported model class and have

specified a time budget. AutoML terminates computation once the time budget is exhausted. What

would you expect AutoML to return in case the time budget is exhausted before hyperparameter

tuning is completed?

Options:

A.

The current best-known hyperparameter configuration is returned.

B.

A random hyperparameter configuration is returned.

C.

A hyperparameter configuration with a minimum learning rate is returned.

D.

The last generated hyperparameter configuration is returned.

Question 7

You have trained a machine learning model on Oracle Cloud Infrastructure (OCI) Data Science,

and you want to save the code and associated pickle file in a Git repository. To do this, you have to

create a new SSH key pair to use for authentication. Which SSH command would you use to create

the public/private algorithm key pair in the notebook session?

Options:

A.

ssh-agent

B.

ssh-copy-id

C.

ssh-add

D.

ssh-keygen
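For reference, a typical invocation of the command in option D. The output path and comment string below are illustrative, not prescribed by OCI:

```shell
# Remove any leftover demo keys, then generate an Ed25519 public/private
# key pair with no passphrase (-N "") at an explicit path (-f).
rm -f /tmp/demo_git_key /tmp/demo_git_key.pub
ssh-keygen -t ed25519 -N "" -f /tmp/demo_git_key -C "oci-notebook-git"

# The public half is what you register with your Git host:
cat /tmp/demo_git_key.pub
```

ssh-agent, ssh-add, and ssh-copy-id manage or distribute existing keys; only ssh-keygen creates a new pair.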

Question 8

You have just completed analyzing a set of images by using Oracle Cloud Infrastructure (OCI) Data

Labelling, and you want to export the annotated data. Which two formats are supported?

Options:

A.

CONLL V2003

B.

COCO

C.

Data Labelling Service Proprietary JSON

D.

spaCy

Question 9

What preparation steps are required to access an Oracle AI service SDK from a Data Science

notebook session?

Options:

A.

Create and upload score.py and runtime.yaml.

B.

Create and upload the API signing key and config file.

C.

Import the REST API.

D.

Call the ADS command to enable AI integration.

Question 10

During a job run, you receive an error message that no space is left on your disk device. To solve the problem, you must increase the size of the job storage. What would be the most efficient way to do this with Data Science Jobs?

Options:

A.

On the job run, set the environment variable that helps increase the size of the storage.

B.

Your code is using too much disk space. Refactor the code to identify the problem.

C.

Edit the job, change the size of the storage of your job, and start a new job run.

D.

Create a new job with increased storage size and then run the job.

Question 11

You are a data scientist building a pipeline in the Oracle Cloud Infrastructure (OCI) Data Science

service for your machine learning project. You want to optimize the pipeline completion time by

running some steps in parallel. Which statement is true about running pipeline steps in parallel?

Options:

A.

Steps in a pipeline can be run only sequentially.

B.

Pipeline steps can be run in sequence or in parallel, as long as they create a directed acyclic

graph (DAG).

C.

All pipeline steps are always run in parallel.

D.

Parallel steps cannot be run if they are completely independent of each other.
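The DAG constraint in option B can be illustrated with Python's standard-library graphlib; the step names below are made up, and this is a scheduling sketch, not the OCI pipeline API:

```python
from graphlib import TopologicalSorter

# Each step maps to the set of steps it depends on.
pipeline = {
    "ingest": set(),
    "clean": {"ingest"},
    "train_a": {"clean"},
    "train_b": {"clean"},      # train_a and train_b are independent
    "evaluate": {"train_a", "train_b"},
}

ts = TopologicalSorter(pipeline)
ts.prepare()
while ts.is_active():
    ready = sorted(ts.get_ready())  # steps in one batch may run in parallel
    print(ready)
    ts.done(*ready)
```

Printing the batches shows train_a and train_b becoming ready together: independent steps run in parallel, while the dependency edges keep the whole graph acyclic.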

Question 12

You want to write a program that performs document analysis tasks such as extracting text and

tables from a document. Which Oracle AI service would you use?

Options:

A.

OCI Language

B.

Oracle Digital Assistant

C.

OCI Speech

D.

OCI Vision

Question 13

You want to write a Python script to create a collection of different projects for your data science team. Which Oracle Cloud Infrastructure (OCI) Data Science Interface would you use?

Options:

A.

Programming Language Software Development Kit (SDK)

B.

Mobile App

C.

Command Line Interface (CLI)

D.

OCI Console

Question 14

When preparing your model artifact to save it to the Oracle Cloud Infrastructure (OCI) Data Science model catalog, you create a score.py file. What is the purpose of the score.py file?

Options:

A.

Define the compute scaling strategy.

B.

Configure the deployment infrastructure.

C.

Define the inference server dependencies.

D.

Execute the inference logic code.

Question 15

While reviewing your data, you discover that your data set has a class imbalance. You are aware that the Accelerated Data Science (ADS) SDK provides multiple built-in automatic transformation tools for data set transformation. Which would be the right tool to correct any imbalance between the classes?

Options:

A.

sample()

B.

suggest_recommendations()

C.

auto_transform()

D.

visualize_transforms()
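To make the imbalance idea concrete, here is a plain-Python sketch of random upsampling of the minority class, independent of the ADS sample() API (the toy records and labels are invented for illustration):

```python
import random

random.seed(0)
majority = [("x", 0)] * 8   # 8 records with label 0
minority = [("y", 1)] * 2   # only 2 records with label 1

# Upsample the minority class with replacement until the classes are balanced.
extra = random.choices(minority, k=len(majority) - len(minority))
balanced = majority + minority + extra

print(sum(1 for _, label in balanced if label == 1))  # 8
```

After resampling, both classes contribute equally, which is the effect a balancing transformation aims for.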

Question 16

You have just received a new data set from a colleague. You want to quickly find out summary information about the data set, such as the types of features, total number of observations, and data distributions. Which Accelerated Data Science (ADS) SDK method from the ADSDataset class would you use?

Options:

A.

show_in_notebook()

B.

to_xgb()

C.

compute()

D.

show_corr()

Question 17

Which Oracle Accelerated Data Science (ADS) classes can be used for easy access to data sets from

reference libraries and index websites such as scikit-learn?

Options:

A.

DataLabeling

B.

DatasetBrowser

C.

SecretKeeper

D.

ADSTuner

Question 18

Select two reasons why it is important to rotate encryption keys when using Oracle Cloud

Infrastructure (OCI) Vault to store credentials or other secrets.

Options:

A.

Key rotation allows you to encrypt no more than five keys at a time.

B.

Key rotation improves encryption efficiency.

C.

Periodically rotating keys makes it easier to reuse keys.

D.

Key rotation reduces risk if a key is ever compromised.

E.

Periodically rotating keys limits the amount of data encrypted by one key version.

Question 19

You are a data scientist leveraging Oracle Cloud Infrastructure (OCI) Data Science to create a

model and need some additional Python libraries for processing genome sequencing data. Which of

the following THREE statements are correct with respect to installing additional Python libraries to

process the data?

Options:

A.

You can only install libraries using yum and pip as a normal user.

B.

You can install private or custom libraries from your own internal repositories.

C.

OCI Data Science allows root privileges in notebook sessions.

D.

You can install any open source package available on a publicly accessible Python Package

Index (PyPI) repository.

E.

You cannot install a library that's not preinstalled in the provided image.

Question 20

You are working as a data scientist for a healthcare company. They decide to analyze the data to

find patterns in a large volume of electronic medical records. You are asked to build a PySpark

solution to analyze these records in a JupyterLab notebook. What is the order of recommended

steps to develop a PySpark application in Oracle Cloud Infrastructure (OCI) Data Science?

Options:

A.

Launch a notebook session. Install a PySpark conda environment. Configure core-site.xml.

Develop your PySpark application. Create a Data Flow application with the Accelerated Data

Science (ADS) SDK.

B.

Install a Spark conda environment. Configure core-site.xml. Launch a notebook session.

Create a Data Flow application with the Accelerated Data Science (ADS) SDK. Develop your

PySpark application.

C.

Configure core-site.xml. Install a PySpark conda environment. Create a Data Flow application

with the Accelerated Data Science (ADS) SDK. Develop your PySpark application. Launch a

notebook session.

D.

Launch a notebook session. Configure core-site.xml. Install a PySpark conda environment.

Develop your PySpark application. Create a Data Flow application with the Accelerated Data

Science (ADS) SDK.

Question 21

The Oracle AutoML pipeline automates hyperparameter tuning by training the model with different parameters in parallel. You have created an instance of Oracle AutoML as oracle_automl and now you want an output with all the different trials performed by Oracle AutoML. Which of the following commands gives you the results of all the trials?

Options:

A.

oracle_automl.visualize_algorithm_selection_trials()

B.

oracle_automl.visualize_adaptive_sampling_trials()

C.

oracle_automl.print_trials()

D.

oracle_automl.visualize_tuning_trials()

Question 22

Where do calls to stdout and stderr from score.py go in a model deployment?

Options:

A.

The predict log in the Oracle Cloud Infrastructure (OCI) Logging service as defined in the deployment.

B.

The OCI Cloud Shell, which can be accessed from the console.

C.

The file that was defined for them on the Virtual Machine (VM).

D.

The OCI console.

Question 23

You want to use ADSTuner to tune the hyperparameters of a supported model you recently

trained. You have just started your search and want to reduce the computational cost as well as

assess the quality of the model class that you are using.

What is the most appropriate search space strategy to choose?

Options:

A.

Detailed

B.

ADSTuner doesn't need a search space to tune the hyperparameters.

C.

Perfunctory

D.

Pass a dictionary that defines a search space

Question 24

You have created a Data Science project in a compartment called Development and shared it

with a group of collaborators. You now need to move the project to a different compartment called

Production after completing the current development iteration.

Which statement is correct?

Options:

A.

Moving a project to a different compartment also moves its associated notebook sessions

and models to the new compartment.

B.

Moving a project to a different compartment requires deleting all its associated notebook

sessions and models first.

C.

You cannot move a project to a different compartment after it has been created.

D.

You can move a project to a different compartment without affecting its associated

notebook sessions and models.
