Neural Network Questions and Answers – Learning – 1

This set of Neural Networks Multiple Choice Questions & Answers (MCQs) focuses on "Learning – 1".

1. On what parameters can the change in the weight vector depend?
a) learning parameters
b) input vector
c) learning signal
d) all of the mentioned

Answer: d
Explanation: The change in the weight vector corresponding to the jth input at time (t+1) depends on all of these parameters: the learning rate, the input vector, and the learning signal.

2. If the change in weight vector is represented by ∆wij, what does it mean?
a) describes the change in the weight vector for the ith processing unit, taking the jth component of the input vector into account
b) describes the change in the weight vector for the jth processing unit, taking the ith component of the input vector into account
c) describes the change in the weight vector for both the jth & ith processing units
d) none of the mentioned

Answer: a
Explanation: ∆wij = µ f(wi a) aj, where a is the input vector and µ is the learning rate.

3. What is the learning signal in the equation ∆wij = µ f(wi a) aj?
a) µ
b) wi a
c) aj
d) f(wi a)

Answer: d
Explanation: f(wi a) is the nonlinear output of the ith unit, and it serves as the learning signal.

4. Is Hebb's law a supervised or an unsupervised type of learning?
a) supervised
b) unsupervised
c) either supervised or unsupervised
d) can be both supervised & unsupervised

Answer: b
Explanation: No desired output is required for its implementation.

5. Hebb's law can be represented by which equation?
a) ∆wij= µf(wi a)aj
b) ∆wij= µ(si) aj, where (si) is the output signal of the ith unit
c) both way
d) none of the mentioned

Answer: c
Explanation: In Hebb's law, (si) = f(wi a), so the two forms are equivalent.
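As a rough illustration of the Hebbian update ∆wij = µ (si) aj above (the function name and the choice of tanh as the activation f are my own, not from the source):

```python
import numpy as np

def hebb_update(w_i, a, mu=0.1, f=np.tanh):
    """Weight change for the ith unit under Hebb's law.

    No desired output appears anywhere: the rule is unsupervised.
    """
    s_i = f(np.dot(w_i, a))   # output signal s_i = f(wi . a)
    return mu * s_i * a       # delta_w[i][j] = mu * s_i * a_j

w = np.array([0.5, -0.2])
a = np.array([1.0, 2.0])
dw = hebb_update(w, a)        # one array entry per input component a_j
```

Note that the whole update is built from the unit's own output and its input, which is why no target value is needed.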

6. Which of the following statements hold for the perceptron learning law?
a) it is supervised type of learning law
b) it requires desired output for each input
c) ∆wij= µ(bi – si) aj
d) all of the mentioned

Answer: d
Explanation: All statements follow from ∆wij = µ(bi – si) aj, where bi is the target output; since a desired output is required, the learning is supervised.
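A minimal sketch of the perceptron learning law ∆wij = µ(bi – si) aj (illustrative names; the sign function as the hard-limiting output is an assumption consistent with the usual statement of the law):

```python
import numpy as np

def perceptron_update(w_i, a, b_i, mu=0.1):
    """Weight change for the ith unit under the perceptron learning law."""
    s_i = np.sign(np.dot(w_i, a))   # hard-limiting (sign) output of the unit
    return mu * (b_i - s_i) * a     # b_i is the desired output: supervised

w = np.array([0.5, -0.2])
a = np.array([1.0, 1.0])
dw_correct = perceptron_update(w, a, b_i=1.0)    # output already matches b_i
dw_wrong = perceptron_update(w, a, b_i=-1.0)     # misclassified input
```

When the unit's output already equals the target, the error term (bi – si) vanishes and the weights do not move; only misclassified inputs cause an update.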

7. Is delta learning of unsupervised type?
a) yes
b) no

Answer: b
Explanation: The change in weight is based on the error between the desired & the actual output values for a given input, so it is supervised.
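The error-driven update described here can be sketched as the delta rule ∆wij = µ(bi – si) f′(xi) aj (names are illustrative, and the differentiable tanh activation is my assumption):

```python
import numpy as np

def delta_update(w_i, a, b_i, mu=0.1):
    """Weight change for the ith unit under the delta learning law."""
    x_i = np.dot(w_i, a)
    s_i = np.tanh(x_i)          # differentiable (nonlinear) output
    f_prime = 1.0 - s_i ** 2    # derivative of tanh at x_i
    return mu * (b_i - s_i) * f_prime * a

dw = delta_update(np.zeros(2), np.array([1.0, 2.0]), b_i=1.0)
```

The factor (bi – si) is exactly the error between desired and actual output, which is why a target value bi must be supplied for every input.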

8. The Widrow & Hoff learning law is a special case of?
a) hebb learning law
b) perceptron learning law
c) delta learning law
d) none of the mentioned

Answer: c
Explanation: The output function in this law is assumed to be linear; everything else is the same as in the delta learning law.

9. What is the other name of the Widrow & Hoff learning law?
a) Hebb
b) LMS
c) MMS
d) None of the mentioned

Answer: b
Explanation: LMS stands for least mean square. The change in weight is made proportional to the negative gradient of the error, which takes a simple form because the output function is linear.
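Since the output function is linear, f(x) = x and its derivative is 1, so the delta rule collapses to the LMS update (a sketch with illustrative names):

```python
import numpy as np

def lms_update(w_i, a, b_i, mu=0.1):
    """Widrow-Hoff (LMS) weight change: the delta rule with a linear output."""
    s_i = np.dot(w_i, a)            # linear output, so f'(x_i) = 1
    return mu * (b_i - s_i) * a     # proportional to the negative error gradient

dw = lms_update(np.zeros(2), np.array([1.0, 2.0]), b_i=1.0)
```

Compared with the delta rule, the only change is that the derivative factor disappears, which is what "special case with linear output" means here.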

10. Which of the following equations represents the perceptron learning law?
a) ∆wij= µ(si) aj
b) ∆wij= µ(bi – si) aj
c) ∆wij= µ(bi – si) f′(xi) aj, where f′(xi) is the derivative of the activation at xi
d) ∆wij= µ(bi – (wi a)) aj

Answer: b
Explanation: The perceptron learning law is a supervised, nonlinear type of learning; option (c) is the delta learning law, which includes the derivative of the activation.

Sanfoundry Global Education & Learning Series – Neural Networks.

