Hadoop Questions and Answers – Chukwa with Hadoop – 1


This set of Hadoop Multiple Choice Questions & Answers (MCQs) focuses on “Chukwa with Hadoop – 1”.

1. ________ includes a flexible and powerful toolkit for displaying, monitoring and analyzing results.
a) Impala
b) Chukwa
c) BigTop
d) Oozie

Answer: b
Explanation: Chukwa is built on top of the Hadoop distributed filesystem (HDFS) and MapReduce framework and inherits Hadoop’s scalability and robustness.

2. Point out the correct statement.
a) Log processing was one of the original purposes of MapReduce
b) Chukwa is a Hadoop subproject devoted to bridging the gap between log processing and the Hadoop ecosystem
c) HICC stands for Hadoop Infrastructure Care Center
d) None of the mentioned

Answer: b
Explanation: Chukwa is a scalable distributed monitoring and analysis system, particularly suited to processing logs from Hadoop and other large systems.

3. The items stored on _______ are organized in a hierarchy of widget categories.
a) HICE
b) HICC
c) HIEC
d) All of the mentioned

Answer: b
Explanation: HICC stands for Hadoop Infrastructure Care Center. It is the central dashboard for visualizing and monitoring the metrics collected by Chukwa.

4. HICC, the Chukwa visualization interface, requires HBase version _____________
a) 0.90.5+.
b) 0.10.4+.
c) 0.90.4+.
d) None of the mentioned

Answer: c
Explanation: HICC requires HBase 0.90.4 or later, which it uses to store and retrieve the collected metrics it displays.

5. Point out the wrong statement.
a) Using Hadoop for MapReduce processing of logs is easy
b) Chukwa should work on any POSIX platform
c) Chukwa is a system for large-scale reliable log collection and processing with Hadoop
d) All of the mentioned

Answer: a
Explanation: Logs are generated incrementally across many machines, but Hadoop MapReduce works best on a small number of large files.

6. __________ are the Chukwa processes that actually produce data.
a) Collectors
b) Agents
c) HBase Table
d) HCatalog

Answer: b
Explanation: Agents are the Chukwa processes that run on each monitored machine and produce the data. Setting the option chukwaAgent.control.remote will disallow remote connections to the agent control socket.
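As a hedged sketch, the agent control socket is configured in chukwa-agent-conf.xml. Apart from chukwaAgent.control.remote, which the explanation names, the property name and port below are illustrative assumptions in Hadoop's standard XML configuration style:

```xml
<!-- chukwa-agent-conf.xml (sketch; chukwaAgent.control.port and its
     value are assumptions, not confirmed by this article) -->
<configuration>
  <property>
    <name>chukwaAgent.control.port</name>
    <value>9093</value>
    <description>Port the agent control socket listens on</description>
  </property>
  <property>
    <name>chukwaAgent.control.remote</name>
    <value>false</value>
    <description>Setting this option disallows remote connections
    to the agent control socket</description>
  </property>
</configuration>
```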


7. Chukwa ___________ are responsible for accepting incoming data from Agents, and storing the data.
a) HBase Table
b) Agents
c) Collectors
d) None of the mentioned

Answer: c
Explanation: Most commonly, collectors simply write all received data to HBase or HDFS.

8. To enable streaming data to _________, the Chukwa collector writer class can be configured in chukwa-collector-conf.xml.
a) HCatalog
b) HBase
c) Hive
d) All of the mentioned

Answer: b
Explanation: In this mode, the filesystem to write to is determined by the option writer.hdfs.filesystem in chukwa-collector-conf.xml.
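A minimal sketch of chukwa-collector-conf.xml covering both write paths mentioned above. writer.hdfs.filesystem is the option named in the explanation; the writer class property name, the HBaseWriter class path, and the example filesystem URI are assumptions:

```xml
<!-- chukwa-collector-conf.xml (sketch; class path and the namenode
     URI are illustrative assumptions) -->
<configuration>
  <!-- Stream incoming data to HBase -->
  <property>
    <name>chukwaCollector.writerClass</name>
    <value>org.apache.hadoop.chukwa.datacollection.writer.hbase.HBaseWriter</value>
  </property>
  <!-- In HDFS mode, this option determines the filesystem written to -->
  <property>
    <name>writer.hdfs.filesystem</name>
    <value>hdfs://namenode:8020/</value>
  </property>
</configuration>
```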

9. By default, collectors listen on port _________
a) 8008
b) 8070
c) 8080
d) None of the mentioned

Answer: c
Explanation: The port number can be configured in chukwa-collector-conf.xml.
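A hedged sketch of overriding the default port in the collector configuration file; the property name chukwaCollector.http.port is an assumption based on Chukwa's naming conventions, not confirmed here:

```xml
<!-- chukwa-collector-conf.xml (sketch; property name is assumed) -->
<configuration>
  <property>
    <name>chukwaCollector.http.port</name>
    <value>8080</value>
    <description>Port the collector's HTTP interface listens on</description>
  </property>
</configuration>
```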

10. _________ class allows other programs to get incoming chunks fed to them over a socket by the collector.
a) PipelineStageWriter
b) PipelineWriter
c) SocketTeeWriter
d) None of the mentioned

Answer: c
Explanation: SocketTeeWriter duplicates incoming chunks and feeds a copy over a socket to any other program that connects to the collector. PipelineStageWriter, by contrast, lets you string together a series of PipelineableWriters for pre-processing or post-processing incoming data.
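As a hedged illustration, SocketTeeWriter is typically run as one stage in a writer pipeline so that chunks are both teed to socket clients and persisted. The class paths and the pipeline property name below are assumptions, not confirmed by this article:

```xml
<!-- chukwa-collector-conf.xml (sketch; class paths and the
     chukwaCollector.pipeline property name are assumptions) -->
<configuration>
  <!-- Use the pipeline writer as the top-level writer class -->
  <property>
    <name>chukwaCollector.writerClass</name>
    <value>org.apache.hadoop.chukwa.datacollection.writer.PipelineStageWriter</value>
  </property>
  <!-- Tee chunks to socket clients, then write them to sequence files -->
  <property>
    <name>chukwaCollector.pipeline</name>
    <value>org.apache.hadoop.chukwa.datacollection.writer.SocketTeeWriter,org.apache.hadoop.chukwa.datacollection.writer.SeqFileWriter</value>
  </property>
</configuration>
```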

Sanfoundry Global Education & Learning Series – Hadoop.

Manish Bhojasia - Founder & CTO at Sanfoundry
Manish Bhojasia, a technology veteran with 20+ years @ Cisco & Wipro, is Founder and CTO at Sanfoundry. He lives in Bangalore, and focuses on development of Linux Kernel, SAN Technologies, Advanced C, Data Structures & Algorithms. Stay connected with him at LinkedIn.
