This set of Neural Networks Multiple Choice Questions & Answers (MCQs) focuses on "Learning – 1".
1. On what parameters can the change in the weight vector depend?
a) learning parameters
b) input vector
c) learning signal
d) all of the mentioned
Answer: d
Explanation: The change in the weight vector corresponding to the jth input at time (t+1) depends on all of these parameters.
2. If the change in the weight vector is represented by ∆wij, what does it mean?
a) describes the change in the weight vector for the ith processing unit, taking the jth input into account
b) describes the change in the weight vector for the jth processing unit, taking the ith input into account
c) describes the change in the weight vector for both the jth & ith processing units
d) none of the mentioned
Answer: a
Explanation: ∆wij= µf(wi a)aj, where a is the input vector and wi is the weight vector of the ith processing unit.
3. What is the learning signal in the equation ∆wij= µf(wi a)aj?
a) µ
b) wi a
c) aj
d) f(wi a)
Answer: d
Explanation: f(wi a) is the nonlinear representation of the output of the unit and acts as the learning signal.
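The update in questions 2 and 3 can be made concrete. Below is a minimal NumPy sketch (not from the source) of ∆wij= µf(wi a)aj, assuming tanh as the output function f and made-up values for µ, wi and a; it shows that f(wi a) is the quantity acting as the learning signal.
```python
import numpy as np

# Illustrative values only; tanh stands in for the output function f.
mu = 0.1                              # learning-rate parameter
a = np.array([0.5, -0.2, 1.0])        # input vector a
w_i = np.array([0.3, 0.8, -0.5])      # weight vector of the ith unit

learning_signal = np.tanh(w_i @ a)    # f(wi a): the learning signal
delta_w_i = mu * learning_signal * a  # delta wij = mu * f(wi a) * aj for every j
w_i = w_i + delta_w_i                 # weight vector at time (t+1)
```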
4. Is Hebb's law a supervised or an unsupervised learning law?
a) supervised
b) unsupervised
c) either supervised or unsupervised
d) can be both supervised & unsupervised
Answer: b
Explanation: No desired output is required for its implementation.
5. Which equation can represent Hebb’s law?
a) ∆wij= µf(wi a)aj
b) ∆wij= µ(si) aj, where (si) is the output signal of the ith unit
c) both of the mentioned
d) none of the mentioned
Answer: c
Explanation: In Hebb’s law, (si)= f(wi a), so both equations are equivalent.
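As a check on the equivalence claimed above, here is a hedged sketch of Hebb's law (again assuming tanh as f and arbitrary example values): both forms of the equation produce the same ∆wij, and no desired output appears anywhere, which is why the law is unsupervised.
```python
import numpy as np

mu = 0.1
a = np.array([0.5, -0.2, 1.0])
w_i = np.array([0.3, 0.8, -0.5])

s_i = np.tanh(w_i @ a)                          # output signal si = f(wi a)
delta_form_a = mu * np.tanh(w_i @ a) * a        # delta wij = mu * f(wi a) * aj
delta_form_b = mu * s_i * a                     # delta wij = mu * si * aj
assert np.allclose(delta_form_a, delta_form_b)  # both forms give the same update
```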
6. Which of the following statements hold for the perceptron learning law?
a) it is supervised type of learning law
b) it requires desired output for each input
c) ∆wij= µ(bi – si) aj
d) all of the mentioned
Answer: d
Explanation: All statements follow from ∆wij= µ(bi – si) aj, where bi is the target output; hence it is a supervised learning law.
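For illustration, a minimal sketch of the perceptron learning law with made-up values; the signum function is assumed as the output function, and the presence of the target bi is what makes the law supervised.
```python
import numpy as np

mu = 0.1
a = np.array([0.5, -0.2, 1.0])        # input vector
w_i = np.array([0.3, 0.8, -0.5])      # weights of the ith unit
b_i = 1.0                             # desired (target) output -> supervised

s_i = np.sign(w_i @ a)                # actual output: signum of the activation
delta_w_i = mu * (b_i - s_i) * a      # delta wij = mu * (bi - si) * aj
w_i = w_i + delta_w_i                 # no change when the output is already correct
```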
7. Is delta learning of the unsupervised type?
a) yes
b) no
Answer: b
Explanation: No; the change in weight is based on the error between the desired & the actual output values for a given input, so a desired output is required and the law is supervised.
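A hedged sketch of the delta learning law, assuming f = tanh (so f′(x) = 1 − tanh(x)²) and example values: the update is driven by the error between the desired and actual outputs, which is why the law is supervised.
```python
import numpy as np

mu = 0.1
a = np.array([0.5, -0.2, 1.0])
w_i = np.array([0.3, 0.8, -0.5])
b_i = 0.8                                    # desired output for this input

x_i = w_i @ a                                # activation of the ith unit
s_i = np.tanh(x_i)                           # actual (nonlinear) output
f_prime = 1.0 - np.tanh(x_i) ** 2            # derivative of the output function
delta_w_i = mu * (b_i - s_i) * f_prime * a   # delta wij = mu * (bi - si) * f'(xi) * aj
w_i = w_i + delta_w_i
```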
8. The Widrow & Hoff learning law is a special case of?
a) hebb learning law
b) perceptron learning law
c) delta learning law
d) none of the mentioned
Answer: c
Explanation: The output function in this law is assumed to be linear; everything else is the same as in the delta learning law.
9. What is the other name of the Widrow & Hoff learning law?
a) Hebb
b) LMS
c) MMS
d) None of the mentioned
Answer: b
Explanation: LMS stands for least mean square: the change in weight is made proportional to the negative gradient of the error, which takes this simple form because of the linearity of the output function.
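A minimal sketch (with assumed values) of the Widrow & Hoff / LMS update: the output is linear, si = wi a, and the weight change is proportional to the negative gradient of the squared error ½(bi − wi a)².
```python
import numpy as np

mu = 0.1
a = np.array([0.5, -0.2, 1.0])
w_i = np.array([0.3, 0.8, -0.5])
b_i = 0.8

error = b_i - w_i @ a          # linear output: si = wi a
delta_w_i = mu * error * a     # delta wij = mu * (bi - wi a) * aj, i.e. -mu * dE/dwij
w_i = w_i + delta_w_i
```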
10. Which of the following equations represents the perceptron learning law?
a) ∆wij= µ(si) aj
b) ∆wij= µ(bi – si) aj
c) ∆wij= µ(bi – si) f′(xi) aj, where f′(xi) is the derivative of the output function at the activation xi
d) ∆wij= µ(bi – (wi a)) aj
Answer: b
Explanation: The perceptron learning law is a supervised law with a nonlinear (signum) output function, so si = f(wi a) with f the signum function, giving ∆wij= µ(bi – si) aj.