This set of Hadoop Multiple Choice Questions & Answers (MCQs) focuses on “Flume with Hadoop”.
1. Apache Flume 1.3.0 is the fourth release, under the auspices of Apache, of the so-called ________ codeline.
a) NG
b) ND
c) NF
d) NR
Answer: a
Explanation: Flume 1.3.0 has been put through many stress and regression tests, is stable, production-ready software, and is backwards-compatible with Flume 1.2.0.
2. Point out the correct statement.
a) Flume is a distributed, reliable, and available service
b) Version 1.5.2 is the eighth Flume release as an Apache top-level project
c) Flume 1.5.2 is production-ready software for integration with Hadoop
d) All of the mentioned
Answer: a
Explanation: Flume is used for efficiently collecting, aggregating, and moving large amounts of streaming event data.
3. ___________ was created to allow you to flow data from a source into your Hadoop environment.
a) Impala
b) Oozie
c) Flume
d) All of the mentioned
Answer: c
Explanation: In Flume, the entities you work with are called sources, decorators, and sinks.
4. A ____________ is an operation on the stream that can transform the stream.
a) Decorator
b) Source
c) Sink
d) All of the mentioned
Answer: a
Explanation: A decorator is an operation applied to the stream that can transform events in flight; by contrast, a source can be any data source, and Flume has many predefined source adapters.
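The decorator terminology comes from the original Flume (OG) codeline; in the current Flume NG codeline, the same transform role is played by interceptors attached to a source. A minimal sketch, assuming a hypothetical agent a1 with source r1 (the built-in timestamp interceptor stamps each event with a timestamp header):

    # attach one interceptor to source r1 (names are illustrative)
    a1.sources.r1.interceptors = i1
    a1.sources.r1.interceptors.i1.type = timestamp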
5. Point out the wrong statement.
a) Version 1.4.0 is the fourth Flume release as an Apache top-level project
b) Apache Flume 1.5.2 is a security and maintenance release that disables SSLv3 on all components in Flume that support SSL/TLS
c) Flume is backwards-compatible with previous versions of the Flume 1.x codeline
d) None of the mentioned
Answer: d
Explanation: Apache Flume 1.3.1 is a maintenance release for the 1.3.0 release, and includes several bug fixes and performance enhancements.
6. A number of ____________ source adapters give you the granular control to grab a specific file.
a) multimedia file
b) text file
c) image file
d) none of the mentioned
Answer: b
Explanation: A number of predefined source adapters are built into Flume.
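As an illustration, Flume NG's spooling-directory source is one such adapter: it watches a directory and ingests each file dropped into it. A minimal sketch (agent, source, and channel names and the directory are hypothetical):

    # ingest files placed in a watched directory
    a1.sources.r1.type = spooldir
    a1.sources.r1.spoolDir = /var/log/incoming
    a1.sources.r1.channels = c1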
7. ____________ is used when you want the sink to be the input source for another operation.
a) Collector Tier Event
b) Agent Tier Event
c) Basic
d) All of the mentioned
Answer: b
Explanation: All agents in a specific tier can be given the same name and share one configuration file. Clients send events to agents, and each agent hosts a number of Flume components.
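In Flume NG terms, one agent's sink feeding another agent's source is usually wired up with an Avro sink/source pair. A minimal sketch (hostnames, ports, and component names are hypothetical):

    # upstream agent: its sink is the input for the next hop
    agent1.sinks.k1.type = avro
    agent1.sinks.k1.hostname = collector.example.com
    agent1.sinks.k1.port = 4141

    # downstream agent: listens for events from upstream
    agent2.sources.r1.type = avro
    agent2.sources.r1.bind = 0.0.0.0
    agent2.sources.r1.port = 4141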
8. ___________ is where you would land a flow (or possibly multiple flows joined together) into an HDFS-formatted file system.
a) Collector Tier Event
b) Agent Tier Event
c) Basic
d) All of the mentioned
Answer: a
Explanation: A number of other predefined source adapters, as well as a command exec source, allow you to use any executable command to feed the flow of data.
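In Flume NG, landing a flow in HDFS is handled by the hdfs sink type. A minimal sketch (agent, sink, and channel names and the path are hypothetical; the escape sequences expand from event timestamps):

    # write events into date-partitioned HDFS directories
    agent1.sinks.k1.type = hdfs
    agent1.sinks.k1.hdfs.path = hdfs://namenode/flume/events/%Y-%m-%d
    agent1.sinks.k1.channel = c1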
9. ____________ sink can be a text file, the console display, a simple HDFS path, or a null bucket where the data is simply deleted.
a) Collector Tier Event
b) Agent Tier Event
c) Basic
d) None of the mentioned
Answer: c
Explanation: Flume will also ensure the integrity of the flow by sending back acknowledgments that data has actually arrived at the sink.
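Flume NG's stock counterparts to these basic sinks include file_roll (local text files), logger (console/log output), and null (the data is simply discarded). A minimal sketch, with hypothetical names:

    # roll events into local files under a directory
    a1.sinks.k1.type = file_roll
    a1.sinks.k1.sink.directory = /var/log/flume-out
    a1.sinks.k1.channel = c1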
10. Flume deploys as one or more agents, each contained within its own instance of _________
a) JVM
b) Channels
c) Chunks
d) None of the mentioned
Answer: a
Explanation: An agent must have at least one source, one channel, and one sink in order to run.
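Putting the pieces together, a minimal single-agent configuration runs one source, one channel, and one sink inside one JVM. All names and paths below are hypothetical:

    # example.conf: one source, one channel, one sink
    a1.sources = r1
    a1.channels = c1
    a1.sinks = k1

    # tail a log file as the event source
    a1.sources.r1.type = exec
    a1.sources.r1.command = tail -F /var/log/syslog
    a1.sources.r1.channels = c1

    # buffer events in memory
    a1.channels.c1.type = memory

    # print events to the agent's log for inspection
    a1.sinks.k1.type = logger
    a1.sinks.k1.channel = c1

The agent is then started with the stock launcher, e.g. flume-ng agent --conf conf --conf-file example.conf --name a1.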
Sanfoundry Global Education & Learning Series – Hadoop.