What differentiates semantic search from traditional keyword search?
How does the utilization of T-Few transformer layers contribute to the efficiency of the fine-tuning process?
What is the primary purpose of LangSmith Tracing?
What does in-context learning in Large Language Models involve?
Which statement describes the difference between "Top k" and "Top p" in selecting the next token in the OCI Generative AI Generation models?
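For context on the question above, here is a minimal, hedged sketch (not OCI SDK code) of how "Top k" and "Top p" each narrow the candidate pool before the next token is sampled: Top k keeps a fixed number of the most probable tokens, while Top p keeps the smallest set whose cumulative probability reaches the threshold p. The token probabilities below are invented for illustration.

```python
# Invented next-token probability distribution for the example.
probs = {"the": 0.40, "a": 0.25, "cat": 0.15, "dog": 0.12, "sky": 0.08}

def top_k(probs, k):
    """Keep only the k most probable tokens (fixed-size pool)."""
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    return dict(ranked[:k])

def top_p(probs, p):
    """Keep the smallest set of tokens whose cumulative probability >= p
    (variable-size pool that adapts to the shape of the distribution)."""
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    kept, total = {}, 0.0
    for token, prob in ranked:
        kept[token] = prob
        total += prob
        if total >= p:
            break
    return kept

print(top_k(probs, 2))    # {'the': 0.40, 'a': 0.25}
print(top_p(probs, 0.7))  # {'the': 0.40, 'a': 0.25, 'cat': 0.15}
```

Note that Top p returned three tokens here because the top two only sum to 0.65; with a peakier distribution it might return just one.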
Which is NOT a built-in memory type in LangChain?
When does a chain typically interact with memory in a run within the LangChain framework?
What is LangChain?
What does a cosine distance of 0 indicate about the relationship between two embeddings?
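As background for the cosine-distance question: cosine distance is 1 minus cosine similarity, so a distance of 0 means the two embedding vectors point in exactly the same direction (maximal similarity), regardless of their magnitudes. A small stdlib-only sketch:

```python
import math

def cosine_distance(a, b):
    """1 - cosine similarity; 0 means the vectors share a direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (norm_a * norm_b)

v1 = [1.0, 2.0, 3.0]
v2 = [2.0, 4.0, 6.0]  # same direction, twice the magnitude
print(cosine_distance(v1, v2))  # 0.0
```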
How does "Groundedness" differ from "Answer Relevance" in the context of Retrieval-Augmented Generation (RAG)?
What is the role of temperature in the decoding process of a Large Language Model (LLM)?
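For the temperature question, a minimal sketch of the underlying mechanism: temperature rescales the logits before the softmax, so a low temperature sharpens the distribution (more deterministic output) and a high temperature flattens it (more random output). The logit values are invented for the example.

```python
import math

def softmax_with_temperature(logits, temperature):
    """Softmax over logits divided by temperature."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]
cold = softmax_with_temperature(logits, 0.5)  # sharper: top token dominates
hot = softmax_with_temperature(logits, 2.0)   # flatter: probabilities closer
print(cold[0], hot[0])
```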
How does a presence penalty function in language model generation?
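As background for the presence-penalty question, a hedged sketch of the idea: a presence penalty subtracts a fixed amount from the logit of every token that has already appeared in the output, discouraging repetition. (Unlike a frequency penalty, the amount does not scale with how many times the token appeared.) The values below are invented for illustration.

```python
def apply_presence_penalty(logits, generated_tokens, penalty):
    """Subtract a flat penalty from tokens already present in the output."""
    return {
        token: logit - penalty if token in generated_tokens else logit
        for token, logit in logits.items()
    }

logits = {"rain": 3.0, "sun": 2.5, "wind": 1.0}
# "rain" was already generated, so its logit is reduced.
penalized = apply_presence_penalty(logits, {"rain"}, penalty=0.5)
print(penalized)  # {'rain': 2.5, 'sun': 2.5, 'wind': 1.0}
```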
How are chains traditionally created in LangChain?
How does the structure of vector databases differ from traditional relational databases?
How are fine-tuned customer models stored to enable strong data privacy and security in the OCI Generative AI service?
Which is a key advantage of using T-Few over Vanilla fine-tuning in the OCI Generative AI service?
What happens if a period (.) is used as a stop sequence in text generation?
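For the stop-sequence question, a minimal sketch of the behavior: generation halts as soon as the stop sequence is produced, so with a period (".") as the stop sequence the model returns at most one sentence. (Whether the stop text itself is included in the output can vary by provider; this sketch excludes it.)

```python
def truncate_at_stop(text, stop_sequence):
    """Cut generated text at the first occurrence of the stop sequence."""
    index = text.find(stop_sequence)
    return text if index == -1 else text[:index]

generated = "The model stops here. Everything after is never returned."
print(truncate_at_stop(generated, "."))  # "The model stops here"
```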
How does the integration of a vector database into Retrieval-Augmented Generation (RAG)-based Large Language Models (LLMs) fundamentally alter their responses?
Given the following code block:
from langchain_community.chat_message_histories import StreamlitChatMessageHistory
from langchain.memory import ConversationBufferMemory

history = StreamlitChatMessageHistory(key="chat_messages")
memory = ConversationBufferMemory(chat_memory=history)
Which statement is NOT true about StreamlitChatMessageHistory?
Given the following code:
chain = prompt | llm
Which statement is true about LangChain Expression Language (LCEL)?
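To illustrate what `chain = prompt | llm` expresses, here is a toy sketch of the pipe-composition idea behind LCEL. This is not the real LangChain `Runnable` implementation; the `Runnable` class, `prompt`, and `llm` below are simplified stand-ins showing how `|` chains components so each one's output feeds the next.

```python
class Runnable:
    """Toy stand-in for an LCEL runnable: a callable step in a pipeline."""

    def __init__(self, func):
        self.func = func

    def __or__(self, other):
        # The `|` operator composes steps: run self, pipe result into other.
        return Runnable(lambda x: other.func(self.func(x)))

    def invoke(self, x):
        return self.func(x)

prompt = Runnable(lambda topic: f"Tell me a joke about {topic}")
llm = Runnable(lambda text: f"[model response to: {text}]")

chain = prompt | llm
print(chain.invoke("cats"))  # [model response to: Tell me a joke about cats]
```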
Which statement best describes the role of encoder and decoder models in natural language processing?
What role does a "model endpoint" serve in the inference workflow of the OCI Generative AI service?
How are prompt templates typically designed for language models?
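For the prompt-template question, a minimal sketch of the design: a template is fixed text with named placeholders that are filled in with input variables at runtime (LangChain's `PromptTemplate` works the same way; here plain `str.format` stands in for it).

```python
# Fixed text with named placeholders, filled with variables at runtime.
template = "Translate the following {language} sentence to English:\n{sentence}"
prompt = template.format(language="French", sentence="Bonjour le monde")
print(prompt)
```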
An LLM emits intermediate reasoning steps as part of its responses. Which prompting technique is being used?
Which is a distinguishing feature of "Parameter-Efficient Fine-Tuning (PEFT)" as opposed to classic "Fine-tuning" in Large Language Model training?
In which scenario is soft prompting especially appropriate compared to other training styles?