Oracle 1z0-1127-25 Oracle Cloud Infrastructure 2025 Generative AI Professional Exam Practice Test

Total 88 questions

Oracle Cloud Infrastructure 2025 Generative AI Professional Questions and Answers

Question 1

What differentiates semantic search from traditional keyword search?

Options:

A. It relies solely on matching exact keywords in the content.
B. It depends on the number of times keywords appear in the content.
C. It involves understanding the intent and context of the search.
D. It is based on the date and author of the content.

Question 2

How does the use of T-Few transformer layers contribute to the efficiency of the fine-tuning process?

Options:

A. By incorporating additional layers into the base model
B. By allowing updates across all layers of the model
C. By excluding transformer layers from the fine-tuning process entirely
D. By restricting updates to only a specific group of transformer layers

Question 3

What is the primary purpose of LangSmith Tracing?

Options:

A. To generate test cases for language models
B. To analyze the reasoning process of language models
C. To debug issues in language model outputs
D. To monitor the performance of language models
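
For context, LangSmith tracing is usually switched on through environment variables before the application runs; every subsequent chain and LLM call is then recorded as a trace that can be inspected step by step. A minimal sketch (variable names as documented for LangSmith; the API key is a placeholder):

import os

# Enable LangSmith tracing for a LangChain application (assumed setup).
os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_API_KEY"] = "<your-langsmith-api-key>"  # placeholder

# Any chains run after this point are traced, exposing each intermediate
# step (prompts, tool calls, model outputs) for analysis and debugging.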

Question 4

What does in-context learning in Large Language Models involve?

Options:

A. Pretraining the model on a specific domain
B. Training the model using reinforcement learning
C. Conditioning the model with task-specific instructions or demonstrations
D. Adding more layers to the model
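
For context, in-context learning means conditioning the model entirely inside the prompt, with no weight updates. A minimal sketch with made-up demonstration pairs:

# Few-shot prompt: the task is "taught" via demonstrations in the prompt
# itself; the model's parameters are never modified.
few_shot_prompt = """Classify the sentiment of each review.

Review: The battery lasts all day.
Sentiment: positive

Review: The screen cracked within a week.
Sentiment: negative

Review: Shipping was fast and the fit is perfect.
Sentiment:"""

# Sent to any LLM, this prompt conditions it to continue the pattern.
print(few_shot_prompt)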

Question 5

Which statement describes the difference between "Top k" and "Top p" in selecting the next token in the OCI Generative AI Generation models?

Options:

A. "Top k" and "Top p" are identical in their approach to token selection but differ in their application of penalties to tokens.
B. "Top k" selects the next token based on its position in the list of probable tokens, whereas "Top p" selects based on the cumulative probability of the top tokens.
C. "Top k" considers the sum of probabilities of the top tokens, whereas "Top p" selects from the "Top k" tokens sorted by probability.
D. "Top k" and "Top p" both select from the same set of tokens but use different methods to prioritize them based on frequency.

Question 6

Which is NOT a built-in memory type in LangChain?

Options:

A. ConversationImageMemory
B. ConversationBufferMemory
C. ConversationSummaryMemory
D. ConversationTokenBufferMemory
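
For reference, the three genuine memory classes can be imported directly (import paths for the classic langchain package; they may move between versions). There is no ConversationImageMemory class.

from langchain.memory import (
    ConversationBufferMemory,       # keeps the raw message history
    ConversationSummaryMemory,      # keeps a running LLM-written summary
    ConversationTokenBufferMemory,  # buffer trimmed to a token limit
)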

Question 7

When does a chain typically interact with memory in a run within the LangChain framework?

Options:

A. Only after the output has been generated
B. Before user input and after chain execution
C. After user input but before chain execution, and again after core logic but before output
D. Continuously throughout the entire chain execution process
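
In practice, a chain reads memory after receiving user input but before its core logic runs, and writes back after the logic finishes but before returning the output. A minimal sketch:

from langchain.memory import ConversationBufferMemory

memory = ConversationBufferMemory()

# 1) After user input, before core logic: load existing state.
context = memory.load_memory_variables({})

# 2) Core chain logic (the LLM call) would run here; stubbed for brevity.
user_input, output = "Hi, I'm Ana.", "Hello Ana!"

# 3) After core logic, before returning output: persist the new turn.
memory.save_context({"input": user_input}, {"output": output})
print(memory.load_memory_variables({}))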

Question 8

What is LangChain?

Options:

A. A JavaScript library for natural language processing
B. A Python library for building applications with Large Language Models
C. A Java library for text summarization
D. A Ruby library for text generation

Question 9

What does a cosine distance of 0 indicate about the relationship between two embeddings?

Options:

A. They are completely dissimilar
B. They are unrelated
C. They are similar in direction
D. They have the same magnitude
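
A quick numeric check (cosine distance = 1 - cosine similarity), using two vectors that point the same way but differ in magnitude:

import numpy as np

def cosine_distance(a, b):
    # 1 - cosine similarity; 0 means the vectors share a direction.
    return 1 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

a = np.array([1.0, 2.0, 3.0])
b = np.array([2.0, 4.0, 6.0])  # same direction, twice the magnitude

print(cosine_distance(a, b))   # ~0.0: similar in direction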

Question 10

How does the concept of "Groundedness" differ from "Answer Relevance" in the context of Retrieval-Augmented Generation (RAG)?

Options:

A. Groundedness pertains to factual correctness, whereas Answer Relevance concerns query relevance.
B. Groundedness refers to contextual alignment, whereas Answer Relevance deals with syntactic accuracy.
C. Groundedness measures relevance to the user query, whereas Answer Relevance evaluates data integrity.
D. Groundedness focuses on data integrity, whereas Answer Relevance emphasizes lexical diversity.

Question 11

What is the role of temperature in the decoding process of a Large Language Model (LLM)?

Options:

A. To increase the accuracy of the most likely word in the vocabulary
B. To determine the number of words to generate in a single decoding step
C. To decide to which part of speech the next word should belong
D. To adjust the sharpness of the probability distribution over the vocabulary when selecting the next word
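
A short sketch of what temperature actually does to the softmax over the vocabulary (toy logits, illustrative only):

import numpy as np

def softmax_with_temperature(logits, temperature):
    # Lower temperature sharpens the distribution; higher flattens it.
    scaled = np.asarray(logits) / temperature
    exp = np.exp(scaled - scaled.max())  # subtract max for stability
    return exp / exp.sum()

logits = [2.0, 1.0, 0.5]
print(softmax_with_temperature(logits, 0.5))  # sharper: mass piles on top token
print(softmax_with_temperature(logits, 2.0))  # flatter: near-uniform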

Question 12

How does a presence penalty function in language model generation?

Options:

A. It penalizes all tokens equally, regardless of how often they have appeared.
B. It penalizes only tokens that have never appeared in the text before.
C. It applies a penalty only if the token has appeared more than twice.
D. It penalizes a token each time it appears after the first occurrence.
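
One common formulation (OpenAI-style; a provider's exact formula may differ) contrasts a flat presence penalty with a count-scaled frequency penalty:

def apply_penalties(logits, counts, presence_penalty, frequency_penalty):
    adjusted = {}
    for token, logit in logits.items():
        count = counts.get(token, 0)
        if count > 0:
            logit -= presence_penalty           # flat hit: 1 use or 100 uses
            logit -= frequency_penalty * count  # grows with each repetition
        adjusted[token] = logit
    return adjusted

logits = {"the": 2.0, "cat": 1.5, "sat": 1.0}
counts = {"the": 5, "cat": 1}  # tokens already generated so far
print(apply_penalties(logits, counts, 0.5, 0.1))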

Question 13

How are chains traditionally created in LangChain?

Options:

A. By using machine learning algorithms
B. Declaratively, with no coding required
C. Using Python classes, such as LLMChain and others
D. Exclusively through third-party software integrations
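
The traditional, class-based style the question refers to looks roughly like this (LLMChain is now considered legacy in favor of LCEL; FakeListLLM stands in for a real model so the sketch runs offline):

from langchain.chains import LLMChain
from langchain.prompts import PromptTemplate
from langchain_community.llms.fake import FakeListLLM

prompt = PromptTemplate.from_template("Summarize: {text}")
llm = FakeListLLM(responses=["A short summary."])  # stand-in for a real LLM

chain = LLMChain(llm=llm, prompt=prompt)  # the classic class-based chain
print(chain.run(text="LangChain lets you compose LLM applications."))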

Question 14

How does the structure of vector databases differ from traditional relational databases?

Options:

A. A vector database stores data in a linear or tabular format.
B. It is not optimized for high-dimensional spaces.
C. It is based on distances and similarities in a vector space.
D. It uses simple row-based data storage.
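
The core operation a vector database optimizes, reduced to a toy nearest-neighbor search over made-up embeddings:

import numpy as np

# Toy "database" of document embeddings; a real vector DB would index
# these for fast approximate nearest-neighbor search instead of a scan.
docs = ["intro to whales", "GPU pricing", "transformer models"]
vectors = np.array([[0.9, 0.1], [0.1, 0.9], [0.7, 0.6]])

query = np.array([0.8, 0.5])  # embedding of the user's query

# Rank by cosine similarity in vector space, not by matching rows.
sims = vectors @ query / (np.linalg.norm(vectors, axis=1) * np.linalg.norm(query))
print(docs[int(np.argmax(sims))])  # closest document in embedding space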

Question 15

How are fine-tuned customer models stored to enable strong data privacy and security in the OCI Generative AI service?

Options:

A. Shared among multiple customers for efficiency
B. Stored in Object Storage, encrypted by default
C. Stored in an unencrypted form in Object Storage
D. Stored in Key Management service

Question 16

Which is a key advantage of using T-Few over Vanilla fine-tuning in the OCI Generative AI service?

Options:

A. Reduced model complexity
B. Enhanced generalization to unseen data
C. Increased model interpretability
D. Faster training time and lower cost

Question 17

What happens if a period (.) is used as a stop sequence in text generation?

Options:

A. The model ignores periods and continues generating text until it reaches the token limit.
B. The model generates additional sentences to complete the paragraph.
C. The model stops generating text after it reaches the end of the current paragraph.
D. The model stops generating text after it reaches the end of the first sentence, even if the token limit is much higher.
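
The behavior is easy to simulate in plain Python: generation halts the first time the stop sequence appears, no matter how many tokens remain in the budget (a toy function, not any particular SDK; real APIs may include or omit the stop sequence itself):

def truncate_at_stop(generated_text, stop_sequence="."):
    # Cut generation off at the first occurrence of the stop sequence.
    index = generated_text.find(stop_sequence)
    if index == -1:
        return generated_text  # stop sequence never produced
    return generated_text[:index + len(stop_sequence)]

text = "The sky is blue. It is also vast. Clouds drift by."
print(truncate_at_stop(text))  # -> "The sky is blue."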

Question 18

How does the integration of a vector database into Retrieval-Augmented Generation (RAG)-based Large Language Models (LLMs) fundamentally alter their responses?

Options:

A. It transforms their architecture from a neural network to a traditional database system.
B. It shifts the basis of their responses from pretrained internal knowledge to real-time data retrieval.
C. It enables them to bypass the need for pretraining on large text corpora.
D. It limits their ability to understand and generate natural language.
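
The essential RAG loop in miniature: retrieve the most relevant passage, then ground the prompt in it. Everything below (corpus, retriever) is a hypothetical stand-in for a vector database lookup:

# Minimal RAG sketch: retrieved text is injected into the prompt, so the
# answer rests on retrieved data rather than pretrained weights alone.
corpus = {
    "doc1": "The warranty covers parts and labor for two years.",
    "doc2": "Returns are accepted within 30 days of purchase.",
}

def retrieve(query):
    # Stand-in for a vector-database similarity search.
    return corpus["doc2"] if "return" in query.lower() else corpus["doc1"]

question = "What is the return policy?"
context = retrieve(question)
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
print(prompt)  # this augmented prompt is what the LLM actually sees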

Question 19

Given the following code block:

# Import paths as of langchain / langchain-community 0.x releases.
from langchain_community.chat_message_histories import StreamlitChatMessageHistory
from langchain.memory import ConversationBufferMemory

history = StreamlitChatMessageHistory(key="chat_messages")
memory = ConversationBufferMemory(chat_memory=history)

Which statement is NOT true about StreamlitChatMessageHistory?

Options:

A. StreamlitChatMessageHistory will store messages in Streamlit session state at the specified key.
B. A given StreamlitChatMessageHistory will NOT be persisted.
C. A given StreamlitChatMessageHistory will not be shared across user sessions.
D. StreamlitChatMessageHistory can be used in any type of LLM application.

Question 20

Given the following code:

chain = prompt | llm

Which statement is true about LangChain Expression Language (LCEL)?

Options:

A. LCEL is a programming language used to write documentation for LangChain.
B. LCEL is a legacy method for creating chains in LangChain.
C. LCEL is a declarative and preferred way to compose chains together.
D. LCEL is an older Python library for building Large Language Models.
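
A runnable LCEL sketch: the pipe operator composes a prompt and a model declaratively (FakeListLLM stands in for a real model so the example runs offline):

from langchain_core.prompts import PromptTemplate
from langchain_community.llms.fake import FakeListLLM

prompt = PromptTemplate.from_template("Tell me a fact about {topic}.")
llm = FakeListLLM(responses=["Honey never spoils."])  # stand-in model

chain = prompt | llm                     # declarative LCEL composition
print(chain.invoke({"topic": "honey"}))  # -> "Honey never spoils."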

Question 21

Which statement best describes the role of encoder and decoder models in natural language processing?

Options:

A. Encoder models and decoder models both convert sequences of words into vector representations without generating new text.
B. Encoder models take a sequence of words and predict the next word in the sequence, whereas decoder models convert a sequence of words into a numerical representation.
C. Encoder models convert a sequence of words into a vector representation, and decoder models take this vector representation to generate a sequence of words.
D. Encoder models are used only for numerical calculations, whereas decoder models are used to interpret the calculated numerical values back into text.

Question 22

Which role does a "model endpoint" serve in the inference workflow of the OCI Generative AI service?

Options:

A. Updates the weights of the base model during the fine-tuning process
B. Serves as a designated point for user requests and model responses
C. Evaluates the performance metrics of the custom models
D. Hosts the training data for fine-tuning custom models

Question 23

How are prompt templates typically designed for language models?

Options:

A. As complex algorithms that require manual compilation
B. As predefined recipes that guide the generation of language model prompts
C. To be used without any modification or customization
D. To work only with numerical data instead of textual content
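
A prompt template is exactly such a predefined recipe: fixed scaffolding with variables filled in at run time. A minimal sketch:

from langchain_core.prompts import PromptTemplate

# The "recipe": fixed instructions plus placeholders for runtime values.
template = PromptTemplate.from_template(
    "You are a {role}. Answer in one sentence: {question}"
)

print(template.format(role="geographer", question="What is a fjord?"))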

Question 24

An LLM emits intermediate reasoning steps as part of its responses. Which technique is being used?

Options:

A. In-context Learning
B. Step-Back Prompting
C. Least-to-Most Prompting
D. Chain-of-Thought
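
A chain-of-thought prompt elicits exactly those intermediate steps; the classic trigger phrase is shown below (illustrative prompt and response):

cot_prompt = """Q: A store had 23 apples, sold 9, then received 14 more.
How many apples are there now?
A: Let's think step by step."""

# A chain-of-thought response surfaces the intermediate reasoning, e.g.:
# "23 - 9 = 14 apples remain. 14 + 14 = 28. The answer is 28."
print(cot_prompt)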

Question 25

Which is a distinguishing feature of "Parameter-Efficient Fine-Tuning (PEFT)" as opposed to classic "Fine-tuning" in Large Language Model training?

Options:

A. PEFT involves only a few or new parameters and uses labeled, task-specific data.
B. PEFT modifies all parameters and is typically used when no training data exists.
C. PEFT does not modify any parameters but uses soft prompting with unlabeled data.
D. PEFT modifies all parameters and uses unlabeled, task-agnostic data.
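
The heart of PEFT in a few lines of PyTorch: freeze the pretrained weights and train only a small number of new parameters on labeled task data (a generic adapter-style sketch, not T-Few specifically):

import torch.nn as nn

base_model = nn.Linear(768, 768)  # stand-in for a pretrained LLM layer
adapter = nn.Linear(768, 768)     # the few NEW parameters PEFT trains

# Classic fine-tuning would update everything; PEFT freezes the base...
for param in base_model.parameters():
    param.requires_grad = False

# ...and trains only the added parameters on labeled, task-specific data.
trainable = sum(p.numel() for p in adapter.parameters() if p.requires_grad)
print(trainable, "trainable parameters")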

Question 26

In which scenario is soft prompting especially appropriate compared to other training styles?

Options:

A. When there is a significant amount of labeled, task-specific data available.
B. When the model needs to be adapted to perform well in a different domain it was not originally trained on.
C. When there is a need to add learnable parameters to a Large Language Model (LLM) without task-specific training.
D. When the model requires continued pre-training on unlabeled data.
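
Soft prompting prepends learnable vectors to the input embeddings while the model's own weights stay frozen. A generic PyTorch sketch:

import torch
import torch.nn as nn

embed_dim, prompt_len = 768, 10

# The "soft prompt": learnable vectors, not human-readable tokens.
soft_prompt = nn.Parameter(torch.randn(prompt_len, embed_dim))

# Frozen input embeddings for a 5-token input sequence.
token_embeddings = torch.randn(5, embed_dim)

# Prepend the learnable prompt; only soft_prompt would receive gradients.
model_input = torch.cat([soft_prompt, token_embeddings], dim=0)
print(model_input.shape)  # torch.Size([15, 768])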
