Neural Networks Questions and Answers – Random Initialization

This set of Neural Networks Multiple Choice Questions & Answers (MCQs) focuses on “Neural Networks – Random Initialization”.

1. Arrange the following steps in the sequence used in the course of training a neural network after initializing the weights and biases.
I. Backward propagation
II. Forward propagation
III. Compute the loss function
IV. Repeat the steps for n iterations until we have minimized the loss function, without overfitting the training data
a) I → II → III → IV
b) II → III → I → IV
c) III → II → I → IV
d) II → I → III → IV

Answer: b
Explanation: After initializing the weights and biases, training a neural network follows this sequence of steps (a minimal sketch of the loop follows below):
II. Forward propagation
III. Compute the loss function
I. Backward propagation
IV. Repeat the steps for n iterations until we have minimized the loss function, without overfitting the training data
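
A minimal sketch of this loop, assuming a single sigmoid output unit, a mean-squared-error loss, and made-up toy data (the learning rate and iteration count are likewise illustrative):

```python
import numpy as np

# Toy data: 4 samples, 3 features, binary targets (all values made up).
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 3))
y = np.array([[0.0], [1.0], [1.0], [0.0]])

# Initialize weights and biases (a single sigmoid output unit for brevity).
W = rng.normal(size=(3, 1))
b = np.zeros(1)

lr = 0.1
for _ in range(100):                                  # IV. repeat for n iterations
    z = X @ W + b                                     # II. forward propagation
    a = 1.0 / (1.0 + np.exp(-z))
    loss = np.mean((a - y) ** 2)                      # III. compute the loss (MSE)
    grad_z = 2.0 * (a - y) * a * (1.0 - a) / len(X)   # I. backward propagation
    W -= lr * (X.T @ grad_z)
    b -= lr * grad_z.sum(axis=0)

print(f"final loss: {loss:.4f}")
```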

2. Which of the following statements is not true about random initialization?
a) Weights are randomly distributed with a mean of zero
b) Weights are randomly distributed with a standard deviation of one
c) To break symmetry the weights are initialized randomly
d) Following random initialization, each neuron can then proceed to learn the same function of its input

Answer: d
Explanation: In random initialization, the weights are randomly distributed with a mean of zero and a standard deviation of one. The weights are initialized randomly in order to break symmetry; following random initialization, each neuron can then proceed to learn a different function of its inputs.
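
A short sketch of what options a), b), and c) describe, assuming NumPy and arbitrary layer dimensions:

```python
import numpy as np

rng = np.random.default_rng(42)

# Random initialization: weights drawn from a distribution with a mean of
# zero and a standard deviation of one (the 3 -> 4 layer sizes are arbitrary).
W1 = rng.normal(loc=0.0, scale=1.0, size=(3, 4))

# Every column (one neuron's incoming weights) is now distinct, so the
# symmetry is broken and each neuron can learn a different function of
# its inputs during training.
print(W1.round(2))
```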

3. Which of the following initialization methods is often recommended for weight initialization?
a) Zero initialization method
b) Random initialization method
c) Xavier initialization method
d) He-et-al initialization method

Answer: d
Explanation: The He-et-al initialization method has the highest average accuracy and performs better than the zero, random, and Xavier initialization methods in most test runs, and hence it is the method often recommended for weight initialization.
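
A sketch of He-et-al initialization under its usual formulation (zero-mean normal weights scaled by sqrt(2 / fan_in)); the layer sizes here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
fan_in, fan_out = 256, 128   # illustrative layer sizes

# He-et-al initialization: zero-mean normal weights scaled by
# sqrt(2 / fan_in), which keeps the activation variance roughly
# constant across ReLU layers.
W = rng.normal(size=(fan_in, fan_out)) * np.sqrt(2.0 / fan_in)
print(W.std())   # ~ sqrt(2 / 256) ~ 0.088
```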

4. Which of the following pairs of distributions can be used to attain random initialization?
a) Uniform distribution and Normal distribution
b) Standard distribution and Normal distribution
c) Uniform distribution and Zero distribution
d) Zero distribution and Normal distribution

Answer: a
Explanation: Random initialization initializes the weights randomly. There are two distributions through which we can attain it: the uniform distribution and the normal distribution. There are also issues we face with zero initialization and with high-value initialization, so we follow the random initialization method to overcome them. Both variants are sketched below.
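
Both variants in a short sketch, assuming NumPy; the layer shape and the uniform bound are illustrative choices rather than prescribed values:

```python
import numpy as np

rng = np.random.default_rng(1)
shape = (3, 4)   # illustrative layer shape

# Normal-distribution variant: zero mean, unit standard deviation.
W_normal = rng.normal(loc=0.0, scale=1.0, size=shape)

# Uniform-distribution variant: symmetric around zero; the bound here
# is a common but arbitrary choice.
bound = 0.5
W_uniform = rng.uniform(low=-bound, high=bound, size=shape)
```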

5. What will be the result if we use an identical set of weights whenever we train the network?
a) The learning algorithm would work fine
b) The learning algorithm would make changes to the network weights
c) The learning algorithm would fail to make changes to the network weights, and the model will be stuck
d) The learning algorithm would make changes to the network weights, and the model will be stuck

Answer: c
Explanation: If an identical set of weights is assigned whenever we train the network, the learning algorithm fails to make useful changes to the network weights and the model gets stuck: neurons that start with identical weights receive identical gradients, so they remain identical to one another after every update, as the sketch below demonstrates.
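
A small demonstration of this symmetry trap, assuming a two-unit tanh hidden layer, a squared-error loss, and made-up data:

```python
import numpy as np

# Made-up inputs and targets for two samples with two features.
X = np.array([[1.0, 2.0], [3.0, 4.0]])
y = np.array([[1.0], [0.0]])

# Identical initialization: every hidden neuron gets the same weights.
W1 = np.full((2, 2), 0.5)   # hidden layer, both columns identical
W2 = np.full((2, 1), 0.5)   # output layer

h = np.tanh(X @ W1)         # both hidden units compute the same activation
out = h @ W2
grad_out = out - y          # squared-error gradient (constant factors dropped)

# Backpropagated gradient for the hidden weights.
grad_W1 = X.T @ ((grad_out @ W2.T) * (1.0 - h ** 2))

# Both columns are identical, so after any number of updates the two
# hidden neurons remain copies of each other: the model is stuck.
print(grad_W1)
```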

6. A too-large weight initialized on the neural network can lead to vanishing gradient whereas a too-small weight initialized on the neural network can lead to exploding gradient.
a) True
b) False

Answer: b
Explanation: When the gradients of the cost with respect to the parameters are too big, they lead to the exploding gradient problem; when they are too small, they lead to the vanishing gradient problem. The statement reverses the two: a too-large weight initialization leads to exploding gradients, while a too-small one leads to vanishing gradients.
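
A back-of-the-envelope sketch of the corrected statement; the depth and per-layer scales are illustrative:

```python
# Backpropagation multiplies the gradient by the layer weights at every
# layer, so across 50 layers a gradient scales roughly like w ** 50 per
# path (the depth and weight scales here are illustrative).
for w in (0.5, 1.5):
    grad = 1.0
    for _ in range(50):
        grad *= w
    print(f"per-layer scale {w}: gradient after 50 layers ~ {grad:.3e}")
# 0.5 -> ~8.9e-16 (vanishing); 1.5 -> ~6.4e+08 (exploding)
```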

7. Initialization is particularly important in neural networks.
a) True
b) False

Answer: a
Explanation: Initialization is particularly important in neural networks because of the stability issues associated with neural network training. In neural networks, weights represent the strength of the connections between units in adjacent layers, and initializing the weights improperly can lead to vanishing or exploding gradient problems.
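
A sketch of why the initialization scale matters, assuming a stack of tanh layers whose width, depth, and scales are all illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(size=(1000, 512))   # illustrative batch of activations

for scale in (0.01, 1.0):
    h = x
    for _ in range(20):            # 20 tanh layers of width 512 (illustrative)
        W = rng.normal(size=(512, 512)) * scale
        h = np.tanh(h @ W)
    print(f"init std {scale}: final activation std = {h.std():.4f}")

# A tiny scale drives the activations (and hence the gradients) toward
# zero, while a unit scale saturates tanh at +/-1; both destabilize
# training, which is why careful initialization matters.
```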

Sanfoundry Global Education & Learning Series – Neural Networks.

To practice all areas of Neural Networks, here is a complete set of 1000+ Multiple Choice Questions and Answers.
