1.
Suppose we have the image and filter shown alongside. Assuming a stride of 2 and VALID padding, draw the output matrix.
Correct Answer
B. Option2
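The image and filter referenced in the question are not reproduced here, so as a sketch of the mechanics only, the following NumPy snippet uses a hypothetical 5x5 image and 3x3 filter to show how a VALID, stride-2 convolution produces its output matrix:

```python
import numpy as np

def conv2d_valid(image, kernel, stride=2):
    """2D cross-correlation with VALID padding (no padding) and a given stride."""
    kh, kw = kernel.shape
    oh = (image.shape[0] - kh) // stride + 1
    ow = (image.shape[1] - kw) // stride + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # Slide the kernel in steps of `stride`; windows never leave the image.
            patch = image[i * stride:i * stride + kh, j * stride:j * stride + kw]
            out[i, j] = np.sum(patch * kernel)
    return out

# Hypothetical inputs (NOT the ones from the question's figure):
image = np.arange(25, dtype=float).reshape(5, 5)
kernel = np.ones((3, 3))
result = conv2d_valid(image, kernel, stride=2)
print(result.shape)  # (2, 2): (5 - 3) // 2 + 1 = 2 in each dimension
```

With stride 2 and no padding, a 5x5 input and 3x3 filter give a 2x2 output, following the usual formula (n - f) / s + 1.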
2.
The avg pooling layer has fewer parameters than max pool layers.
Correct Answer
B. False
Explanation
False. A pooling layer has no learnable parameters at all.
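This can be seen directly: pooling is a fixed function of its input, with no weights to learn. A small NumPy sketch of 2x2 max and average pooling (both parameter-free) on a made-up array:

```python
import numpy as np

x = np.array([[1., 3., 2., 4.],
              [5., 7., 6., 8.],
              [9., 2., 1., 0.],
              [3., 4., 5., 6.]])

def pool2x2(x, op):
    """Apply `op` (e.g. np.max or np.mean) over non-overlapping 2x2 windows."""
    h, w = x.shape
    return np.array([[op(x[i:i + 2, j:j + 2]) for j in range(0, w, 2)]
                     for i in range(0, h, 2)])

print(pool2x2(x, np.max))   # max pooling:     [[7. 8.] [9. 6.]]
print(pool2x2(x, np.mean))  # average pooling: [[4. 5.] [4.5 3.]]
```

Both operations are pure reductions over the input; neither max nor average pooling adds a single trainable parameter.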
3.
In the KNN algorithm, a larger value of K always increases accuracy, but it reduces the algorithm's execution speed.
Correct Answer
B. False
Explanation
The statement is incorrect. In the KNN algorithm, increasing K does not always improve accuracy: a larger K can reduce the impact of noise in the data, but it can also over-smooth the decision boundary and lose important detail. Increasing K does raise the computational cost of prediction, but since the accuracy claim fails, the statement as a whole is false.
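A minimal 1D sketch, using a made-up dataset with one deliberately mislabeled point, illustrates both effects: a very small K is fooled by noise, a moderate K corrects it, and a K spanning the whole dataset over-smooths:

```python
import numpy as np

def knn_predict(X_train, y_train, x, k):
    """Classify x by majority vote among its k nearest training points (binary 0/1 labels)."""
    dists = np.abs(X_train - x)
    nearest = np.argsort(dists)[:k]
    return int(np.round(y_train[nearest].mean()))

# Class 0 clusters near 0, class 1 near 10; the point at 2 is noisy (mislabeled as 1).
X = np.array([0., 1., 2., 8., 9., 10.])
y = np.array([0,  0,  1,  1,  1,  1])

# Query near the class-0 cluster:
print([knn_predict(X, y, 2.2, k) for k in (1, 3, 6)])  # [1, 0, 1]
```

K=1 copies the noisy label (wrong), K=3 votes it down (right), and K=6 averages over everything and predicts the globally dominant class (wrong again), so accuracy is not monotone in K.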
4.
If we examine the neurons of a deep convolutional neural network after training it, some neurons are interpretable and others are not. However, these neurons are equally important, and none takes priority over another.
Correct Answer
A. True
Explanation
More information:
https://t.me/class_vision/159
5.
Typically, as we move deeper through the hierarchy of a convolutional neural network, the height and width of each layer's activation decrease, while its depth (the number of channels) increases.
Correct Answer
A. True
Explanation
As we progress through the layers of a convolutional neural network, the length and width of the activation of each layer usually decrease, while the depth or number of channels of each layer increases. This is because the convolutional layers apply filters to the input data, which reduces its spatial dimensions but increases its depth or number of channels. This allows the network to extract more complex and abstract features from the input data as we go deeper into the network. Therefore, the given answer is correct.
6.
Suppose the input is a 300x300 color (RGB) image.
(a) If we do not use convolutional networks, the first hidden layer has 100 neurons, and each neuron is fully connected to the input layer, how many parameters will this layer have (including biases)?
Correct Answer
A. 27000100
Explanation
(300 x 300 x 3) x 100 + 100 = 27,000,100
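The arithmetic can be checked directly; each neuron gets one weight per input value plus one bias:

```python
# Fully-connected layer on a flattened 300x300 RGB image.
inputs = 300 * 300 * 3        # 270,000 input values per neuron
neurons = 100
params = inputs * neurons + neurons  # weights + one bias per neuron
print(params)  # 27000100
```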
7.
Suppose the input is a 300x300 color (RGB) image.
If we use a convolutional neural network with 100 filters of size 5x5, how many parameters will this hidden layer have?
Correct Answer
B. 7600
Explanation
(5 x 5 x 3) x 100 + 100 = 7,600
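The same check for the convolutional case; note the count depends only on the filter size and channel count, not on the 300x300 image size:

```python
# Conv layer: each filter has kernel_h * kernel_w * in_channels weights plus one bias.
filters = 100
weights_per_filter = 5 * 5 * 3
params = (weights_per_filter + 1) * filters
print(params)  # 7600
```

This weight sharing is why the convolutional layer needs ~7.6K parameters where the fully-connected layer above needed ~27M.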
8.
Suppose the input volume to a layer is 16x63x63. If it is convolved with 32 filters of size 7x7, a stride of 2, and no padding, what will the output volume be?
Correct Answer
C. 29x29x32
Explanation
(63 – 7) / 2 + 1 = 29 => 29 x 29 x 32
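The formula above generalizes to any input size, filter size, stride, and padding; a small helper makes the computation explicit:

```python
def conv_output_size(n, f, stride=1, padding=0):
    """Spatial output size of a convolution: floor((n + 2p - f) / s) + 1."""
    return (n + 2 * padding - f) // stride + 1

print(conv_output_size(63, 7, stride=2))  # 29  -> output volume 29x29x32
```

The output depth is simply the number of filters (32 here), independent of the spatial arithmetic.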
9.
Suppose the input volume to a layer is 16x32x32. If max pooling with a stride of 2 and a filter size of 2 is applied to it, what will the output volume be?
Correct Answer
D. 16x16x16
Explanation
When a max pool operation is applied with a stride of 2 and a filter size of 2 on a 16x32x32 input volume, the output volume will have a size of 16x16x16. This is because the filter moves across the input volume in steps of 2, taking the maximum value within each 2x2 region and creating a new output element. The resulting output volume will have a reduced spatial dimension of 16x16, while the depth remains the same at 16.
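This shape change can be verified in NumPy; assuming a channels-first layout (16 channels, 32x32 spatial), a 2x2/stride-2 max pool reduces only the spatial dimensions:

```python
import numpy as np

x = np.random.rand(16, 32, 32)  # 16 channels, 32x32 spatial

# Reshape each 32-long spatial axis into (16 windows x 2 elements),
# then take the max over both window axes: a 2x2, stride-2 max pool.
pooled = x.reshape(16, 16, 2, 16, 2).max(axis=(2, 4))
print(pooled.shape)  # (16, 16, 16)
```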
10.
Which of the following is true about convolutional neural networks (CNNs)?
Correct Answer
B. CNN can be applied on ANY 2D and 3D array of data.
Explanation
CNNs can be applied on any 2D and 3D array of data, not just limited to image and text data. This is because CNNs are designed to effectively capture spatial and temporal dependencies in data, making them suitable for various applications such as computer vision, natural language processing, and speech recognition. By using convolutional layers, pooling layers, and fully connected layers, CNNs can learn and extract meaningful features from different types of data, enabling them to perform tasks like image classification, object detection, and sentiment analysis.
11.
Which of the following units can be part of a convolutional neural network?
Correct Answer
E. All of the above
12.
Why do we use activation functions (e.g., ReLU or sigmoid)?
Correct Answer
B. To introduce non-linearity
Explanation
Activation functions like ReLU or sigmoid are used to introduce non-linearity in neural networks. Without activation functions, the neural network would only be able to learn and represent linear relationships between the input and output. However, in many real-world problems, the relationships are non-linear. Activation functions allow the neural network to learn and represent complex non-linear relationships, making it more powerful and capable of solving a wider range of problems.
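The need for non-linearity can be shown concretely: without an activation, stacked linear layers collapse into one linear map. A sketch with small made-up weight matrices:

```python
import numpy as np

W1 = np.array([[1., -1.],
               [2.,  0.]])
W2 = np.array([[1., 1.]])
x = np.array([1., 2.])

# Without an activation, two "layers" are exactly ONE linear layer:
two_layers = W2 @ (W1 @ x)   # [1.]
collapsed = (W2 @ W1) @ x    # [1.]  -- identical

# Inserting a non-linearity (ReLU) breaks that collapse:
relu = lambda z: np.maximum(z, 0.0)
with_relu = W2 @ relu(W1 @ x)  # [2.]  -- no single linear layer can reproduce this
```

However many linear layers are stacked, their composition is a single matrix product; only the activation function lets the network represent non-linear relationships.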
13.
Which function cannot be used as an activation function?
Correct Answer
C. Sin()
Explanation
Among the given options, sin() is unsuitable as an activation function. Although it is non-linear and bounded, it is periodic and non-monotonic: infinitely many different inputs map to the same output, and its gradient repeatedly changes sign. This makes optimization unstable and the learned mapping ambiguous, which is why standard activation functions such as ReLU and sigmoid are non-periodic.
14.
What is an Activation Function
Correct Answer
B. A function that triggers a neuron and generates the outputs
Explanation
An activation function is a mathematical function that determines the output of a neuron in a neural network. It takes the weighted sum of the inputs and applies a non-linear transformation to it, generating the output of the neuron. This output is then used as input for the next layer of the network. Therefore, the correct answer is "A function that triggers a neuron and generates the outputs".
15.
کدام مورد در مورد شبکه های کانولوشنالی غلط است؟
Correct Answer
A. Fully connects to all neurons in all the layers
Explanation
Convolutional neural networks (CNNs) do not fully connect to all neurons in all the layers. Instead, they connect only to neurons in the local region (kernel size) of the input image. This local connectivity allows CNNs to efficiently extract features from images. Additionally, CNNs build feature maps hierarchically in every layer, meaning that they learn and extract increasingly complex features as they go deeper into the network. CNNs are inspired by the human visual system, which also processes visual information in a hierarchical and localized manner.
16.
What does "Strides" in Maxpooling Mean
Correct Answer
B. The number of pixels the kernel should be moved by.
Explanation
In maxpooling, "strides" refers to the number of pixels that the kernel should be moved by. This means that the kernel will move a certain number of pixels horizontally and vertically to cover the input image or feature map. By adjusting the stride value, we can control the amount of overlap between the kernel's receptive fields and the amount of downsampling that occurs in the output.
17.
What is TRUE about "Padding" in Convolution
Correct Answer
A. Size of Input Image is reduced for "VALID" padding.
Explanation
The statement is true in the sense that with "VALID" padding the output is smaller than the input. Padding is a technique used in convolutional neural networks to preserve the spatial dimensions of the input; with "VALID" padding, no padding is added, so the convolution only considers kernel positions that lie entirely within the input, without extending beyond its boundaries. As a result, the output size is reduced relative to the input size.
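The two padding modes can be compared numerically; the formulas below follow the common TensorFlow-style convention (an assumption for illustration), where VALID uses floor((n - f) / s) + 1 and SAME uses ceil(n / s):

```python
import math

def out_size(n, f, stride, padding):
    """Output size under TensorFlow-style 'VALID' / 'SAME' padding conventions."""
    if padding == "VALID":
        return math.floor((n - f) / stride) + 1   # no padding: output shrinks
    if padding == "SAME":
        return math.ceil(n / stride)              # padded so stride alone sets the size
    raise ValueError(padding)

print(out_size(6, 3, 1, "VALID"))  # 4  -- smaller than the 6-wide input
print(out_size(6, 3, 1, "SAME"))   # 6  -- input size preserved at stride 1
```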