# Neural Network Questions and Answers – Backpropagation Algorithm

This set of Neural Networks Multiple Choice Questions & Answers (MCQs) focuses on “Backpropagation Algorithm”.

1. What is the objective of the backpropagation algorithm?
a) to develop learning algorithm for multilayer feedforward neural network
b) to develop learning algorithm for single layer feedforward neural network
c) to develop learning algorithm for multilayer feedforward neural network, so that network can be trained to capture the mapping implicitly
d) none of the mentioned

Answer: c

Explanation: The objective of the backpropagation algorithm is to develop a learning algorithm for multilayer feedforward neural networks, so that the network can be trained to capture the mapping implicitly.
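As a concrete illustration of such a network (a minimal sketch with assumed layer sizes and random weights, not part of the question set), a two-layer feedforward pass in NumPy looks like:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# A multilayer feedforward network: signals flow strictly forward,
# input -> hidden -> output.  Backpropagation would adjust W1 and W2
# so the network captures the input-output mapping implicitly.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(2, 3))    # input -> hidden weights
W2 = rng.normal(size=(3, 1))    # hidden -> output weights

x = np.array([[0.5, -0.2]])     # one input pattern
hidden = sigmoid(x @ W1)        # hidden-layer activations
output = sigmoid(hidden @ W2)   # network output, shape (1, 1)
```
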

2. Is it true that the backpropagation law is also known as the generalized delta rule?
a) yes
b) no

Answer: a

Explanation: Yes, because it fulfils the basic conditions of the delta rule.

3. Which of the following is true regarding the backpropagation rule?
a) it is also called generalized delta rule
b) error in output is propagated backwards only to determine weight updates
c) there is no feedback of signal at any stage
d) all of the mentioned

Answer: d

Explanation: All of these statements describe the backpropagation algorithm.

4. Is there feedback in the final stage of the backpropagation algorithm?
a) yes
b) no

Answer: b

Explanation: No feedback of signals is involved at any stage, since it is a feedforward neural network; only the error is propagated backwards to determine weight updates.

5. Which of the following is true regarding the backpropagation rule?
a) it is a feedback neural network
b) actual output is determined by computing the outputs of units for each hidden layer
c) the hidden layers’ outputs are not important; they are only meant to support the input and output layers
d) none of the mentioned

Answer: b

Explanation: In the backpropagation rule, the actual output is determined by computing the outputs of the units in each hidden layer.

6. What is meant by “generalized” in the statement “backpropagation is a generalized delta rule”?
a) because delta rule can be extended to hidden layer units
b) because delta is applied only to the input and output layers, making it simpler and more generalized
c) it has no significance
d) none of the mentioned

Answer: a

Explanation: The term “generalized” is used because the delta rule can be extended to hidden layer units.
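A sketch of this extension (shapes, names, and sigmoid units are illustrative assumptions): the output-layer deltas follow the plain delta rule, and the “generalized” step propagates them back through the output weights to obtain hidden-layer deltas:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(1)
x = rng.normal(size=(1, 2))              # one input pattern
t = np.array([[1.0]])                    # target output
W1 = rng.normal(size=(2, 3))             # input -> hidden weights
W2 = rng.normal(size=(3, 1))             # hidden -> output weights

h = sigmoid(x @ W1)                      # hidden activations
y = sigmoid(h @ W2)                      # network output

# Plain delta rule at the output layer (squared error, sigmoid units):
delta_out = (y - t) * y * (1 - y)
# The "generalized" step: propagate deltas back through W2 to the
# hidden units, scaled by the hidden units' activation derivative.
delta_hidden = (delta_out @ W2.T) * h * (1 - h)

grad_W2 = h.T @ delta_out                # gradient for each weight layer
grad_W1 = x.T @ delta_hidden
```
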

7. What are the general limitations of the backpropagation rule?
a) local minima problem
b) slow convergence
c) scaling
d) all of the mentioned

Answer: d

Explanation: All of these are general limitations of the backpropagation algorithm.

8. What are the general tasks that are performed with the backpropagation algorithm?
a) pattern mapping
b) function approximation
c) prediction
d) all of the mentioned

Answer: d

Explanation: All of these tasks can be performed with the backpropagation algorithm in general.

9. Is backpropagation learning based on gradient descent along the error surface?
a) yes
b) no
c) cannot be said
d) it depends on gradient descent but not error surface

Answer: a

Explanation: The weight adjustment is proportional to the negative gradient of the error with respect to the weight.
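A minimal sketch of this update rule for a single weight (the learning rate and training pair are illustrative values, not from the text):

```python
# Single weight, single training pair: the model is y = w * x, the
# error is E = 0.5 * (y - t) ** 2, and each update moves the weight
# against the gradient: delta_w = -lr * dE/dw.
lr = 0.1
w = 0.0
x, t = 2.0, 4.0                # the mapping to learn is w = 2.0

for _ in range(100):
    y = w * x                  # forward pass
    grad = (y - t) * x         # dE/dw
    w -= lr * grad             # step along the negative gradient

print(round(w, 3))             # -> 2.0
```

Repeated steps along the negative gradient descend the error surface until the weight reproduces the target mapping.
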

10. How can the learning process be stopped in the backpropagation rule?
a) there is convergence involved
b) no heuristic criteria exist
c) on basis of average gradient value
d) none of the mentioned

Answer: c

Explanation: If the average gradient value falls below a preset threshold, the process may be stopped.
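A sketch of such a stopping criterion, reusing a single-weight least-squares example (the threshold, learning rate, and training pair are assumed values):

```python
# Training stops once the gradient magnitude (for one weight, this is
# also the average gradient) drops below a preset threshold.
threshold = 1e-3               # assumed preset value
lr = 0.1
w, x, t = 0.0, 2.0, 4.0

steps = 0
while True:
    grad = (w * x - t) * x     # dE/dw for E = 0.5 * (w * x - t) ** 2
    if abs(grad) < threshold:  # gradient-based stopping criterion
        break
    w -= lr * grad
    steps += 1
```
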

Sanfoundry Global Education & Learning Series – Neural Networks.