Deep Learning With Keras - Part I

Reviewed by Editorial Team
By Alireza Akhavan, Community Contributor | Attempts: 1,440 | Questions: 17

1. Which function cannot be used as an activation function?

Explanation

Among the given options, sin() is not used as an activation function in practice. Although it is nonlinear and its range is bounded (between -1 and 1), it is periodic and non-monotonic: infinitely many different inputs map to the same output, and its gradient repeatedly changes sign. This oscillating, many-to-one behavior makes optimization unstable, whereas standard activation functions such as ReLU and sigmoid are monotonic and non-periodic.

2. Usually, as we move up the hierarchy of a convolutional neural network, the height and width of each layer's activation decrease, while the depth (number of channels) of each layer increases.

Explanation

As we progress through the layers of a convolutional neural network, the height and width of each layer's activation typically decrease, while the depth (number of channels) increases. The spatial dimensions shrink because of strided convolutions, pooling, and unpadded ("valid") convolutions, while the channel count grows because deeper layers usually apply more filters. This allows the network to extract increasingly complex and abstract features as we go deeper. Therefore, the statement is correct.

3. Why do we use activation functions (e.g., ReLU or sigmoid)?

Explanation

Activation functions like ReLU or sigmoid are used to introduce non-linearity in neural networks. Without activation functions, the neural network would only be able to learn and represent linear relationships between the input and output. However, in many real-world problems, the relationships are non-linear. Activation functions allow the neural network to learn and represent complex non-linear relationships, making it more powerful and capable of solving a wider range of problems.

4. An average pooling layer has fewer parameters than a max pooling layer.

Explanation

False: pooling layers have no trainable parameters.

5. Which of the following units can be part of a convolutional neural network?

Explanation

No explanation is available for this question.

6. Suppose the input volume of a layer is 16x63x63. If it is convolved with 32 filters of size 7x7, a stride of 2, and no padding, what will the output volume be?

Explanation

Output side = (n − f) / stride + 1 = (63 − 7) / 2 + 1 = 29, and the output depth equals the number of filters, so the output volume is 29 x 29 x 32.
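The arithmetic above can be sketched with the standard output-size formula for a convolution; the helper name below is my own, not from the quiz:

```python
def conv_output_size(n, f, stride, padding=0):
    """Spatial output side of a convolution: floor((n + 2p - f) / s) + 1."""
    return (n + 2 * padding - f) // stride + 1

side = conv_output_size(63, 7, stride=2)  # (63 - 7) // 2 + 1
print(side)  # 29 -> with 32 filters, the output volume is 29 x 29 x 32
```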

7. Suppose we have the image and filter shown alongside. Assuming a stride of 2 and 'valid' padding, draw the output matrix.

Explanation

No explanation is available for this question.

8. What does "strides" in max pooling mean?

Explanation

In maxpooling, "strides" refers to the number of pixels that the kernel should be moved by. This means that the kernel will move a certain number of pixels horizontally and vertically to cover the input image or feature map. By adjusting the stride value, we can control the amount of overlap between the kernel's receptive fields and the amount of downsampling that occurs in the output.

9. Which of the following is true about convolutional neural networks (CNNs)?

Explanation

CNNs can be applied to any 2D or 3D array of data, not just image and text data. This is because CNNs are designed to capture spatial and temporal dependencies in data, making them suitable for various applications such as computer vision, natural language processing, and speech recognition. By using convolutional layers, pooling layers, and fully connected layers, CNNs can learn and extract meaningful features from different types of data, enabling them to perform tasks like image classification, object detection, and sentiment analysis.

10. Suppose the input volume of a layer is 16x32x32. If max pooling with a stride of 2 and a filter size of 2 is applied to it, what will the output volume be?

Explanation

When a max pool operation is applied with a stride of 2 and a filter size of 2 on a 16x32x32 input volume, the output volume will have a size of 16x16x16. This is because the filter moves across the input volume in steps of 2, taking the maximum value within each 2x2 region and creating a new output element. The resulting output volume will have a reduced spatial dimension of 16x16, while the depth remains the same at 16.
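The pooling arithmetic follows the same output-size formula as an unpadded convolution; the helper name below is my own, not from the quiz:

```python
def pool_output_size(n, f, stride):
    """Spatial output side of pooling: floor((n - f) / s) + 1; depth is unchanged."""
    return (n - f) // stride + 1

side = pool_output_size(32, f=2, stride=2)  # (32 - 2) // 2 + 1
print(side)  # 16 -> the depth stays 16, so the output volume is 16 x 16 x 16
```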

11. Suppose the input is a 300x300 color (RGB) image. (a) If we do not use a convolutional network, the first hidden layer has 100 neurons, and each neuron is fully connected to the input layer, how many parameters will this layer have (including biases)?

Explanation

(300 x 300 x 3) x 100 + 100 = 27,000,100
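As a quick check of the count above (one weight per input value per neuron, plus one bias per neuron):

```python
inputs = 300 * 300 * 3               # flattened RGB image: 270,000 values
neurons = 100
params = inputs * neurons + neurons  # weights + one bias per neuron
print(params)  # 27000100
```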

12. Which of the following is false about convolutional networks?

Explanation

Convolutional neural networks (CNNs) do not fully connect to all neurons in all the layers. Instead, they connect only to neurons in the local region (kernel size) of the input image. This local connectivity allows CNNs to efficiently extract features from images. Additionally, CNNs build feature maps hierarchically in every layer, meaning that they learn and extract increasingly complex features as they go deeper into the network. CNNs are inspired by the human visual system, which also processes visual information in a hierarchical and localized manner.

13. Suppose the input is a 300x300 color (RGB) image. If we use a convolutional neural network with 100 filters of size 5x5, how many parameters will this hidden layer have?

Explanation

(5 x 5 x 3) x 100 + 100 = 7,600
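Checking this count: each filter spans 5x5 spatial positions across 3 input channels plus one bias, and thanks to parameter sharing the total does not depend on the 300x300 image size at all:

```python
filters = 100
params = (5 * 5 * 3 + 1) * filters  # each filter: 5x5x3 weights + 1 bias
print(params)  # 7600
```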

14. In the KNN algorithm, a larger value of K always increases accuracy, but it reduces the execution speed of the algorithm.

Explanation

The given statement is incorrect. In the KNN algorithm, increasing the value of K does not always result in an increase in accuracy. While a higher value of K may reduce the impact of noise in the data, it can also lead to over-smoothing and loss of important details. Additionally, increasing K can also increase the computational complexity and slow down the execution speed of the algorithm. Therefore, the statement that increasing K always improves accuracy but reduces the algorithm's execution speed is incorrect.

15. What is TRUE about "padding" in convolution?

Explanation

The statement "size of Input Image is reduced for 'VALID' padding" is true. Padding is a technique used in convolutional neural networks to preserve the spatial dimensions of the input image. In 'VALID' padding, no padding is added to the input image, resulting in a smaller output size compared to the input size. This is because the convolution operation only considers the valid positions of the kernel within the input image, without extending beyond the boundaries. Therefore, the output size is reduced when using 'VALID' padding.

16. If we inspect the neurons of a deep convolutional neural network after training, some of them are interpretable and some are not. However, these neurons are equally important, and none takes priority over another.

Explanation

More information:
https://t.me/class_vision/159

17. What is an activation function?

Explanation

An activation function is a mathematical function that determines the output of a neuron in a neural network. It takes the weighted sum of the inputs and applies a non-linear transformation to it, generating the output of the neuron. This output is then used as input for the next layer of the neural network. Therefore, the correct answer is "A function that triggers a neuron and generate the outputs".


Quiz Review Timeline (Updated): Jun 2, 2025


  • Current Version
  • Jun 02, 2025
    Quiz Edited by
    ProProfs Editorial Team
  • Feb 11, 2019
    Quiz Created by
    Alireza Akhavan