
Huawei H13-321_V2.5 HCIP-AI EI Developer V2.5 Exam Practice Test

Total 60 questions

HCIP-AI EI Developer V2.5 Exam Questions and Answers

Question 1

Maximum likelihood estimation (MLE) requires knowledge of the sample data's distribution type.

Options:

A. TRUE

B. FALSE
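Note: MLE maximizes the likelihood of the observed data under an assumed distribution family, so the family must be specified up front. A minimal Python sketch (the Gaussian assumption and all names here are illustrative, not from the exam):

```python
import numpy as np

# Assume (our assumption, not given) that the samples are Gaussian.
# Under that family, the MLE solution has a closed form:
# the sample mean and the biased (divide-by-n) sample variance.
rng = np.random.default_rng(0)
data = rng.normal(loc=5.0, scale=2.0, size=1_000)

mu_hat = data.mean()                      # MLE of the mean
var_hat = ((data - mu_hat) ** 2).mean()   # MLE of the variance
print(f"mu_hat={mu_hat:.3f}, var_hat={var_hat:.3f}")
```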

Question 2

In NLP, Transformer models perform well across multiple tasks owing to their self-attention mechanism and parallel computing capability. Which of the following statements about Transformer models are true?

Options:

A. Transformer models outperform RNN and CNN in processing long texts because they can effectively capture global dependencies.

B. Multi-head attention is the core component of a transformer model. It computes multiple attention heads in parallel to capture semantic information in different subspaces.

C. A transformer model directly captures the dependency between different positions in the input sequence through the self-attention mechanism, without using the recurrent neural network (RNN) or convolutional neural network (CNN).

D. Positional encoding is optional in a transformer model because the self-attention mechanism can naturally process the order information of sequences.
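Note: plain self-attention is permutation-invariant, so order information must be injected explicitly. A minimal sketch of the sinusoidal positional encoding from "Attention Is All You Need" (array sizes are illustrative):

```python
import numpy as np

def sinusoidal_positional_encoding(seq_len: int, d_model: int) -> np.ndarray:
    """Sine on even dimensions, cosine on odd dimensions."""
    pos = np.arange(seq_len)[:, None]          # (seq_len, 1)
    dim = np.arange(d_model)[None, :]          # (1, d_model)
    angle = pos / np.power(10000.0, (2 * (dim // 2)) / d_model)
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angle[:, 0::2])
    pe[:, 1::2] = np.cos(angle[:, 1::2])
    return pe

print(sinusoidal_positional_encoding(50, 16).shape)  # (50, 16)
```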

Question 3

Which of the following statements are true about the differences between using convolutional neural networks (CNNs) in text tasks and image tasks?

Options:

A. Color image input is multi-channel, whereas text input is single-channel.

B. When a CNN is used for text tasks, the kernel width must be the same as the number of word vector dimensions. This constraint, however, does not apply to image tasks.

C. For CNNs, there is no difference between handling text tasks and image tasks.

D. CNNs are suitable for image tasks, but they perform poorly in text tasks.
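Note: for text, the input is a (sequence length x embedding dimension) matrix, and the convolution kernel spans the full embedding dimension, sliding only along the sequence axis. A TextCNN-style sketch with illustrative sizes:

```python
import numpy as np

seq_len, embed_dim, window = 10, 8, 3
sentence = np.random.rand(seq_len, embed_dim)   # one word vector per row
kernel = np.random.rand(window, embed_dim)      # kernel width == embed_dim

# Slide over word positions only; each window yields a single scalar,
# producing a 1-D feature map (unlike the 2-D maps of image convolutions).
features = np.array([
    (sentence[i:i + window] * kernel).sum()
    for i in range(seq_len - window + 1)
])
print(features.shape)  # (8,)
```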

Question 4

In the deep neural network (DNN)–hidden Markov model (HMM), the DNN is mainly used for feature processing, while the HMM is mainly used for sequence modeling.

Options:

A. TRUE

B. FALSE

Question 5

If OpenCV is used to read an image and save it to variable "img" during image preprocessing, (h, w) = img.shape[:2] can be used to obtain the image size.

Options:

A. TRUE

B. FALSE
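Note: a quick check of the statement, assuming an image file "sample.jpg" exists locally. cv2.imread returns a NumPy array with shape (height, width, channels) for color images, so shape[:2] is (h, w).

```python
import cv2

img = cv2.imread("sample.jpg")          # returns None if the read fails
assert img is not None, "image could not be read"

(h, w) = img.shape[:2]                  # image size as (height, width)
print(f"height={h}, width={w}")
```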

Question 6

What are the adjacency relationships between two pixels whose coordinates are (21,13) and (22,12)?

Options:

A. 8-adjacency

B. No adjacency relationship

C. 4-adjacency

D. Diagonal adjacency
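Note: the two coordinates differ by 1 in both the row and the column, so the pixels are diagonal neighbors. A quick check of the geometric conditions only (the full definition also requires the pixel values to belong to a chosen gray-level set V):

```python
p, q = (21, 13), (22, 12)
dr, dc = abs(p[0] - q[0]), abs(p[1] - q[1])

is_4_adjacent = dr + dc == 1                        # share an edge
is_diagonal = dr == 1 and dc == 1                   # share only a corner
is_8_adjacent = max(dr, dc) == 1 and (dr, dc) != (0, 0)
print(is_4_adjacent, is_diagonal, is_8_adjacent)    # False True True
```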

Question 7

Which of the following statements about the standard normal distribution are true?

Options:

A. The variance is 0.

B. The mean is 1.

C. The variance is 1.

D. The mean is 0.
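Note: the standard normal distribution is N(0, 1), i.e., mean 0 and variance 1. A quick empirical check with NumPy (sample size is arbitrary):

```python
import numpy as np

samples = np.random.default_rng(42).standard_normal(1_000_000)
print(round(samples.mean(), 3))  # approximately 0
print(round(samples.var(), 3))   # approximately 1
```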

Question 8

The attention mechanism in foundation model architectures allows the model to focus on specific parts of the input data. Which of the following steps are key components of a standard attention mechanism?

Options:

A. Calculate the dot product similarity between the query and key vectors to obtain attention scores.

B. Compute the weighted sum of the value vectors using the attention weights.

C. Apply a non-linear mapping to the result obtained after the weighted summation.

D. Normalize the attention scores to obtain attention weights.
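Note: a minimal NumPy sketch of scaled dot-product attention with illustrative shapes, mapping to the steps above: dot-product scores (A), softmax normalization into weights (D), and a weighted sum of the values (B). A non-linear mapping after the weighted sum (C) is not part of the standard attention computation itself.

```python
import numpy as np

def softmax(x: np.ndarray) -> np.ndarray:
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)   # query-key dot-product similarity
    weights = softmax(scores)         # normalize scores into weights
    return weights @ V                # weighted sum of the value vectors

Q = np.random.rand(4, 8)   # 4 query positions, dimension 8
K = np.random.rand(6, 8)   # 6 key/value positions
V = np.random.rand(6, 8)
print(scaled_dot_product_attention(Q, K, V).shape)  # (4, 8)
```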

Question 9

The development of large models should comply with ethical principles to ensure the legal, fair, and transparent use of data.

Options:

A. TRUE

B. FALSE

Question 10

The jieba.------() method can be used for word segmentation. (Fill in the blank.)

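Note: for reference, jieba's commonly used segmentation calls look like this (the example sentence, "Natural language processing is interesting", is arbitrary):

```python
import jieba

text = "自然语言处理很有趣"

print(list(jieba.cut(text)))   # cut() returns a generator of tokens
print(jieba.lcut(text))        # lcut() returns the tokens as a list
```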

Question 11

The deep neural network (DNN)–hidden Markov model (HMM) does not require the HMM–Gaussian mixture model (GMM) as an auxiliary.

Options:

A. TRUE

B. FALSE

Question 12

In 2017, the Google machine translation team proposed the Transformer in their paper "Attention Is All You Need". The Transformer consists of an encoder and a(n) --------. (Fill in the blank.)


Question 13

Transformer models outperform LSTMs when analyzing and processing long-distance dependencies, making them more effective for sequence data processing.

Options:

A. TRUE

B. FALSE

Question 14

Which of the following is a learning algorithm used for hidden Markov models (HMMs)?

Options:

A. Baum-Welch algorithm

B. Viterbi algorithm

C. Exhaustive search

D. Forward-backward algorithm

Question 15

The basic operations of morphological processing include dilation and erosion. These operations can be combined to implement practical algorithms such as opening and closing.

Options:

A. TRUE

B. FALSE
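Note: a minimal OpenCV sketch, assuming a binary image "mask.png" and an illustrative 5x5 kernel; opening is erosion followed by dilation, and closing is dilation followed by erosion.

```python
import cv2
import numpy as np

img = cv2.imread("mask.png", cv2.IMREAD_GRAYSCALE)
assert img is not None, "image could not be read"
kernel = np.ones((5, 5), np.uint8)

eroded = cv2.erode(img, kernel)                          # shrink foreground
dilated = cv2.dilate(img, kernel)                        # grow foreground
opened = cv2.morphologyEx(img, cv2.MORPH_OPEN, kernel)   # erosion then dilation
closed = cv2.morphologyEx(img, cv2.MORPH_CLOSE, kernel)  # dilation then erosion
```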

Question 16

Huawei Cloud ModelArts is a one-stop AI development platform that supports multiple AI scenarios. Which of the following scenarios are supported by ModelArts?

Options:

A. Image classification

B. Object detection

C. Speech recognition

D. Video analytics

Question 17

In 2017, the Google machine translation team proposed the Transformer in their paper "Attention Is All You Need". In a Transformer model, there is a customized LSTM with CNN layers.

Options:

A. TRUE

B. FALSE

Question 18

------- is a model that uses a convolutional neural network (CNN) to classify texts. (Fill in the blank.)

