Maximum likelihood estimation (MLE) requires knowledge of the sample data's distribution type.
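For reference, MLE presupposes a parametric family p(x; θ) of the assumed distribution type; the estimate maximizes the likelihood over that family:

    \hat{\theta}_{\mathrm{MLE}} = \arg\max_{\theta} \prod_{i=1}^{n} p(x_i \mid \theta) = \arg\max_{\theta} \sum_{i=1}^{n} \log p(x_i \mid \theta)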
Transformer models perform well across multiple NLP tasks thanks to their self-attention mechanism and their ability to process sequences in parallel. Which of the following statements about Transformer models are true?
Which of the following statements are true about the differences between using convolutional neural networks (CNNs) in text tasks and image tasks?
In a deep neural network (DNN)–hidden Markov model (HMM) system, the DNN is mainly used for feature processing, while the HMM is mainly used for sequence modeling.
If OpenCV is used to read an image and store it in the variable img during image preprocessing, (h, w) = img.shape[:2] can be used to obtain the image's height and width.
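For reference, a minimal sketch of this usage; the file name example.jpg is an illustrative assumption:

    import cv2

    # cv2.imread returns a NumPy array shaped (height, width, channels)
    # for a color image (or None if the file cannot be read).
    img = cv2.imread("example.jpg")

    # The first two elements of img.shape are the height and width.
    (h, w) = img.shape[:2]
    print("height:", h, "width:", w)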
What are the adjacency relationships between two pixels whose coordinates are (21,13) and (22,12)?
Which of the following statements about the standard normal distribution are true?
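For reference, the standard normal distribution is the normal distribution with mean 0 and variance 1, whose probability density function is:

    f(x) = \frac{1}{\sqrt{2\pi}} \, e^{-x^{2}/2}, \qquad x \in \mathbb{R}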
The attention mechanism in foundation model architectures allows the model to focus on specific parts of the input data. Which of the following steps are key components of a standard attention mechanism?
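For reference, a minimal NumPy sketch of scaled dot-product attention, the core computation behind the standard attention mechanism; the matrix shapes and random inputs are illustrative assumptions:

    import numpy as np

    def scaled_dot_product_attention(Q, K, V):
        """Q, K: (seq_len, d_k); V: (seq_len, d_v)."""
        d_k = Q.shape[-1]
        # 1. Similarity scores between queries and keys, scaled by sqrt(d_k).
        scores = Q @ K.T / np.sqrt(d_k)
        # 2. Softmax turns the scores into attention weights that sum to 1.
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)
        # 3. Each output is a weighted sum of the value vectors.
        return weights @ V

    # Toy usage with random queries, keys, and values.
    rng = np.random.default_rng(0)
    Q = rng.standard_normal((4, 8))
    K = rng.standard_normal((4, 8))
    V = rng.standard_normal((4, 8))
    out = scaled_dot_product_attention(Q, K, V)
    print(out.shape)  # (4, 8)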
The development of large models should comply with ethical principles to ensure the legal, fair, and transparent use of data.
The jieba ------() method can be used for word segmentation. (Fill in the blank.)
The deep neural network (DNN)–hidden Markov model (HMM) does not require the HMM–Gaussian mixture model (GMM) as an auxiliary model.
In 2017, the Google machine translation team proposed the Transformer in their paper Attention Is All You Need. The Transformer consists of an encoder and a(n) --------. (Fill in the blank.)
Transformer models outperform LSTM models when analyzing and processing long-distance dependencies, making them more effective for sequence data processing.
Which of the following is a learning algorithm used for Markov chains?
The basic operations of morphological processing include dilation and erosion. These operations can be combined to implement practical algorithms such as the opening and closing operations.
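For reference, a minimal OpenCV sketch of these operations; the input file binary.png and the 3x3 kernel are illustrative assumptions:

    import cv2
    import numpy as np

    # Load a binary (black-and-white) image in grayscale mode.
    img = cv2.imread("binary.png", cv2.IMREAD_GRAYSCALE)

    # 3x3 structuring element shared by all operations below.
    kernel = np.ones((3, 3), np.uint8)

    eroded  = cv2.erode(img, kernel)                          # erosion: shrinks foreground regions
    dilated = cv2.dilate(img, kernel)                         # dilation: grows foreground regions
    opened  = cv2.morphologyEx(img, cv2.MORPH_OPEN, kernel)   # opening: erosion followed by dilation
    closed  = cv2.morphologyEx(img, cv2.MORPH_CLOSE, kernel)  # closing: dilation followed by erosion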
Huawei Cloud ModelArts is a one-stop AI development platform that supports multiple AI scenarios. Which of the following scenarios are supported by ModelArts?
In 2017, the Google machine translation team proposed the Transformer in their paper Attention Is All You Need. In a Transformer model, there is a customized LSTM with CNN layers.
------- is a model that uses a convolutional neural network (CNN) to classify text. (Fill in the blank.)