Neural Network Questions and Answers – Learning – 2

This set of Neural Networks Multiple Choice Questions and Answers for freshers focuses on “Learning – 2”.

1. Correlation learning law is a special case of?
a) Hebb learning law
b) Perceptron learning law
c) Delta learning law
d) LMS learning law

Answer: a
Explanation: The correlation law is Hebb's law with the actual output si replaced by the target output bi.

2. Correlation learning law is what type of learning?
a) supervised
b) unsupervised
c) either supervised or unsupervised
d) both supervised and unsupervised

Answer: a
Explanation: Supervised, since the update depends on the target output.

3. Correlation learning law can be represented by which equation?
a) ∆wij= µ(si) aj
b) ∆wij= µ(bi – si) aj
c) ∆wij= µ(bi – si) aj f′(xi), where f′(xi) is the derivative of the activation function at xi
d) ∆wij= µ bi aj

Answer: d
Explanation: The correlation learning law depends on the target output bi: ∆wij = µ bi aj.
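As a rough illustration (not part of the original question set), the correlation update ∆wij = µ bi aj can be sketched in plain Python; the names `mu`, `a`, and `b` simply mirror the symbols µ, aj, and bi above:

```python
# Sketch of the correlation learning rule: delta_w[i][j] = mu * b[i] * a[j].
# mu is the learning rate, b the target output vector, a the input vector.
def correlation_update(w, a, b, mu=0.1):
    """Return a new weight matrix after one correlation-law step."""
    return [[w[i][j] + mu * b[i] * a[j] for j in range(len(a))]
            for i in range(len(b))]

w = [[0.0, 0.0], [0.0, 0.0]]
a = [1.0, 0.5]   # input activations a_j
b = [1.0, -1.0]  # target outputs b_i
w = correlation_update(w, a, b, mu=0.1)
# w[0][0] = 0.1 * 1.0 * 1.0 = 0.1
```

Note that, unlike Hebb's rule, the actual output si never appears in the update, which is why this law is supervised.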

4. What is the other name for the instar learning law?
a) loser take it all
b) winner take it all
c) winner give it all
d) loser give it all

Answer: b
Explanation: The weight is adjusted only for the unit that gives the maximum output (winner-take-all).

5. The instar learning law can be represented by which equation?
a) ∆wij= µ(si) aj
b) ∆wij= µ(bi – si) aj
c) ∆wij= µ(bi – si) aj f′(xi), where f′(xi) is the derivative of the activation function at xi
d) ∆wk= µ (a – wk), where unit k with maximum output is identified

Answer: d
Explanation: Follows from the basic definition of the instar learning law.
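A minimal Python sketch of the instar rule may make the winner-take-all step concrete (the function name and example values are illustrative, not from the source):

```python
# Instar (winner-take-all) rule: only the unit k with the largest output
# s_k has its weight vector moved toward the input a:
#   delta_w_k = mu * (a - w_k)
def instar_update(w, a, mu=0.5):
    """w is a list of weight vectors, one per unit; a is the input vector."""
    outputs = [sum(wi * ai for wi, ai in zip(row, a)) for row in w]
    k = max(range(len(w)), key=lambda i: outputs[i])  # winning unit
    w[k] = [wk + mu * (ai - wk) for wk, ai in zip(w[k], a)]
    return k, w

w = [[0.9, 0.1], [0.2, 0.8]]
a = [1.0, 0.0]
k, w = instar_update(w, a, mu=0.5)
# unit 0 wins (output 0.9 > 0.2) and moves halfway toward a: [0.95, 0.05]
```

Only the winning row changes; the other units' weights are untouched, which matches the explanation above.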

6. Is instar a case of supervised learning?
a) yes
b) no

Answer: b
Explanation: Since the weight adjustment doesn't depend on the target output, it is unsupervised learning.

7. The outstar learning law can be represented by which equation?
a) ∆wjk= µ(bj – wjk), where the kth unit is the only active unit in the input layer
b) ∆wij= µ(bi – si) aj
c) ∆wij= µ(bi – si) aj f′(xi), where f′(xi) is the derivative of the activation function at xi
d) ∆wij= µ(si) aj

Answer: a
Explanation: Follows from the basic definition of the outstar learning law.
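The outstar update ∆wjk = µ(bj – wjk) can likewise be sketched in Python; here the index convention and example values are assumptions for illustration only:

```python
# Outstar rule: when input unit k is the only active one, the outgoing
# weights from k move toward the target vector b:
#   delta_w[j][k] = mu * (b[j] - w[j][k])
def outstar_update(w, b, k, mu=0.25):
    """w[j][k] is the weight from input unit k to output unit j."""
    for j in range(len(b)):
        w[j][k] += mu * (b[j] - w[j][k])
    return w

w = [[0.0, 0.0], [0.0, 0.0]]
b = [1.0, 0.5]   # target outputs b_j
w = outstar_update(w, b, k=0, mu=0.25)
# column 0 moves a quarter of the way toward b: w[0][0]=0.25, w[1][0]=0.125
```

Because the target vector b drives the update, outstar is supervised, in contrast to instar above.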

8. Is outstar a case of supervised learning?
a) yes
b) no

Answer: a
Explanation: Since the weight adjustment depends on the target output, it is supervised learning.

9. Which of the following learning laws belong to the same category of learning?
a) hebbian, perceptron
b) perceptron, delta
c) hebbian, widrow-hoff
d) instar, outstar

Answer: b
Explanation: Both belong to supervised learning.

10. In Hebbian learning, how are the initial weights set?
a) random
b) near to zero
c) near to target value
d) near to target value

Answer: b
Explanation: Hebb's law leads to a sum of correlations between input and output; to achieve this, the starting weight values must be small (near zero).
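To illustrate why small initial weights matter, here is a hedged Python sketch of plain Hebbian updates ∆wij = µ si aj; the learning rate, sample data, and noise range are all made up for the example:

```python
import random

# Plain Hebbian updates, delta_w[i][j] = mu * s[i] * a[j], starting from
# small near-zero weights so the final weights approximate a pure sum of
# input-output correlations rather than being dominated by the start values.
random.seed(0)
mu = 0.01
w = [[random.uniform(-0.01, 0.01) for _ in range(2)] for _ in range(2)]
samples = [([1.0, 0.0], [1.0, -1.0]),   # (input a, output s) pairs
           ([0.0, 1.0], [-1.0, 1.0])]
for a, s in samples:
    for i in range(len(s)):
        for j in range(len(a)):
            w[i][j] += mu * s[i] * a[j]
# each w[i][j] is now (tiny initial noise) + mu * sum over samples of s_i*a_j
```

Had the weights started large, the correlation sum accumulated by the updates would be swamped by the arbitrary initial values.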

Sanfoundry Global Education & Learning Series – Neural Networks.

To practice all areas of Neural Networks for Freshers, here is complete set on 1000+ Multiple Choice Questions and Answers.


Manish Bhojasia - Founder & CTO at Sanfoundry
Manish Bhojasia, a technology veteran with 20+ years @ Cisco & Wipro, is Founder and CTO at Sanfoundry. He lives in Bangalore and focuses on development of Linux Kernel, SAN Technologies, Advanced C, and Data Structures & Algorithms.
