Hadoop Online Exam - US Data Technologies

By Exam US | Community Contributor | Quizzes Created: 1
Questions: 15 | Attempts: 226

* Kindly fill in valid information; your result will be sent to your registered email ID.


Questions and Answers
  • 1. 

What does commodity hardware mean in the Hadoop world?

    • A.

      Very cheap hardware

    • B.

      Industry standard hardware

    • C.

      Discarded hardware

    • D.

      Low specifications Industry grade hardware

    Correct Answer
    B. Industry standard hardware
    Explanation
    Commodity hardware in the Hadoop world refers to industry standard hardware: affordable, readily available servers rather than high-end or specialized machines. It does not mean the very cheapest or discarded equipment; nodes still need to be reliable enough for production use. Building clusters from such hardware makes horizontal scaling and fault tolerance cost-effective, since failed nodes can be replaced or new nodes added without significant financial investment.

  • 2. 

Which of these workloads is a good fit for Hadoop?

    • A.

      Parsing 5 MB XML file every 5 minutes

    • B.

      Processing IPL tweet sentiments

    • C.

      Processing online bank transactions

    • D.

      both (a) and (c)

    Correct Answer
    B. Processing IPL tweet sentiments
    Explanation
    Sentiment analysis over IPL tweets is a classic Hadoop workload: a large volume of unstructured data processed in batch. Parsing a 5 MB XML file every 5 minutes is a small-file task that a single machine handles better (HDFS and MapReduce are inefficient with many small files), and online bank transactions need low-latency OLTP-style processing, which Hadoop does not provide.

  • 3. 

     What is HBase used as?

    • A.

      Tool for Random and Fast Read/Write operations in Hadoop

    • B.

      Faster Read only query engine in Hadoop

    • C.

      MapReduce alternative in Hadoop

    • D.

      Fast MapReduce layer in Hadoop

    Correct Answer
    A. Tool for Random and Fast Read/Write operations in Hadoop
    Explanation
    HBase is used as a tool for random and fast read/write operations in Hadoop. It provides a distributed, scalable, and consistent database for storing and retrieving large amounts of structured and semi-structured data. HBase is designed to handle high volumes of data with low-latency access, making it suitable for applications that require real-time access to data. It is often used for use cases such as real-time analytics, log processing, and recommendation systems.
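
    The explanation above can be made concrete with a minimal sketch of HBase's random read/write API from Java. It assumes a reachable cluster with hbase-site.xml on the classpath and a pre-created table "users" with column family "info" (all of these are assumptions, not part of the question):

      import org.apache.hadoop.hbase.HBaseConfiguration;
      import org.apache.hadoop.hbase.TableName;
      import org.apache.hadoop.hbase.client.*;
      import org.apache.hadoop.hbase.util.Bytes;

      public class HBaseReadWrite {
          public static void main(String[] args) throws Exception {
              try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
                   Table table = conn.getTable(TableName.valueOf("users"))) {
                  // Random write: put one cell into row "row1" (hypothetical key)
                  Put put = new Put(Bytes.toBytes("row1"));
                  put.addColumn(Bytes.toBytes("info"), Bytes.toBytes("name"), Bytes.toBytes("Alice"));
                  table.put(put);
                  // Random read: fetch that row directly by key, no table scan needed
                  Result result = table.get(new Get(Bytes.toBytes("row1")));
                  System.out.println(Bytes.toString(
                      result.getValue(Bytes.toBytes("info"), Bytes.toBytes("name"))));
              }
          }
      }

    Direct get/put by row key, with no full-table scan, is exactly the low-latency random access the answer refers to.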

  • 4. 

    What is Hive used as?

    • A.

      Hadoop query engine

    • B.

      MapReduce wrapper

    • C.

      Hadoop SQL interface

    • D.

      All of the above

    Correct Answer
    D. All of the above
    Explanation
    All three descriptions apply to Hive. It is a Hadoop query engine with a SQL-like interface (HiveQL), and classically it acts as a MapReduce wrapper, translating queries into MapReduce jobs so users can analyze data in Hadoop without writing Java code. Hive also applies schema-on-read, imposing structure on stored data at query time, which makes that data easier to query and analyze. Therefore, the correct answer is "All of the above".
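
    As a concrete illustration of the SQL-interface role, here is a minimal, hypothetical sketch of querying Hive from Java over JDBC. It assumes a HiveServer2 instance at localhost:10000, the hive-jdbc driver on the classpath, and a table named logs (all assumptions):

      import java.sql.Connection;
      import java.sql.DriverManager;
      import java.sql.ResultSet;
      import java.sql.Statement;

      public class HiveQueryExample {
          public static void main(String[] args) throws Exception {
              // Hive compiles this SQL-like query into distributed jobs behind the scenes
              try (Connection conn = DriverManager.getConnection(
                       "jdbc:hive2://localhost:10000/default", "", "");
                   Statement stmt = conn.createStatement();
                   ResultSet rs = stmt.executeQuery(
                       "SELECT level, COUNT(*) FROM logs GROUP BY level")) {
                  while (rs.next()) {
                      System.out.println(rs.getString(1) + "\t" + rs.getLong(2));
                  }
              }
          }
      }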

  • 5. 

     Which of the following are NOT true for Hadoop?

    • A.

      It’s a tool for OLTP

    • B.

      It supports structured and unstructured data analysis

    • C.

      It aims for vertical scaling out/in scenarios

    • D.

      Both (a) and (c)

    Correct Answer
    D. Both (a) and (c)
    Explanation
    Two of the statements are false. Hadoop is not a tool for OLTP (Online Transaction Processing); it is designed for batch processing of large datasets. It also aims for horizontal scaling (adding or removing commodity nodes), not vertical scaling. It does support analysis of both structured and unstructured data, so that statement is true. Hence both (a) and (c) are NOT true for Hadoop.

  • 6. 

    Hadoop is open source.

    • A.

      ALWAYS True

    • B.

      True only for Apache Hadoop

    • C.

      True only for IBM

    • D.

      ALWAYS False

    Correct Answer
    B. True only for Apache Hadoop
    Explanation
    Apache Hadoop is open source under the Apache License 2.0, so its source code is freely available to view, modify, and distribute. However, commercial distributions built on top of it (for example, from IBM or Cloudera) may bundle proprietary components, so the statement is not ALWAYS true; it holds for Apache Hadoop itself.

  • 7. 

    What is the default HDFS block size?

    • A.

      32 MB

    • B.

      45 MB

    • C.

      128 MB

    • D.

      128 KB

    Correct Answer
    C. 128 MB
    Explanation
    The default HDFS block size is 128 MB in Hadoop 2.x and later (it was 64 MB in Hadoop 1.x). When a file is stored in HDFS, it is divided into blocks of this size, and the value is configurable per cluster or per file via the dfs.blocksize property. A larger block size reduces the metadata and scheduling overhead of managing many blocks; unlike a traditional file system, a partially filled block does not waste disk space, but every block adds metadata that the NameNode must hold in memory, which is why very small blocks (and many small files) are costly.
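
    A small sketch of how a client can override the default when writing a file, using the Hadoop 2.x property name dfs.blocksize; the 256 MB value and the path are illustrative assumptions:

      import org.apache.hadoop.conf.Configuration;
      import org.apache.hadoop.fs.FSDataOutputStream;
      import org.apache.hadoop.fs.FileSystem;
      import org.apache.hadoop.fs.Path;

      public class BlockSizeExample {
          public static void main(String[] args) throws Exception {
              Configuration conf = new Configuration();
              // Override the 128 MB default for files created by this client
              conf.setLong("dfs.blocksize", 256L * 1024 * 1024);
              try (FileSystem fs = FileSystem.get(conf);
                   FSDataOutputStream out = fs.create(new Path("/tmp/example.dat"))) {
                  out.writeUTF("written with a 256 MB block size");
              }
          }
      }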

  • 8. 

Which of the following classes is responsible for converting inputs into the key-value pairs consumed by MapReduce?

    • A.

      FileInputFormat

    • B.

      InputSplit

    • C.

      RecordReader

    • D.

      Mapper

    Correct Answer
    C. RecordReader
    Explanation
    RecordReader is the class that converts input data into key-value pairs. FileInputFormat splits the input files into InputSplits and creates a RecordReader for each split; the RecordReader then reads the records within its split and presents them as key-value pairs to the Mapper. For example, TextInputFormat's LineRecordReader emits each line's byte offset as the key and the line's text as the value.
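
    To make the hand-off concrete, here is a minimal sketch of a Mapper consuming what TextInputFormat's LineRecordReader produces: the byte offset of each line as the key and the line text as the value. The class name LineLengthMapper is hypothetical:

      import java.io.IOException;
      import org.apache.hadoop.io.LongWritable;
      import org.apache.hadoop.io.Text;
      import org.apache.hadoop.mapreduce.Mapper;

      public class LineLengthMapper
              extends Mapper<LongWritable, Text, LongWritable, LongWritable> {
          @Override
          protected void map(LongWritable offset, Text line, Context context)
                  throws IOException, InterruptedException {
              // offset and line were assembled by the RecordReader, not by this Mapper
              context.write(offset, new LongWritable(line.getLength()));
          }
      }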

  • 9. 

    NameNodes are usually high storage machines in the clusters

    • A.

      True

    • B.

      False

    Correct Answer
    B. False
    Explanation
    The NameNode stores only metadata (the file system namespace and block locations), and it keeps that metadata in RAM for fast access. It therefore needs to be a reliable machine with plenty of memory, not a high storage one. It is the DataNodes, which hold the actual data blocks, that need large disks. So the statement is false.

  • 10. 

    SaaS stands for:

    • A.

      Secret alternative accounting standards

    • B.

      Short alert activation supplement

    • C.

      Software as a service

    • D.

      None of these

    Correct Answer
    C. Software as a service
    Explanation
    SaaS stands for Software as a Service: a software delivery model in which applications are hosted centrally (typically in the cloud) and accessed by users over the internet, usually on a subscription basis.

  • 11. 

Which HDFS command is used to move (cut) a file within HDFS?

    • A.

      cut

    • B.

      -mv

    • C.

      move

    • D.

      MV

    Correct Answer
    B. -mv
    Explanation
    There is no "cut" command in the HDFS shell. Moving (cut-pasting) a file within HDFS is done with hdfs dfs -mv <src> <dst>, which renames the file in the file system namespace. Because only NameNode metadata changes, no data blocks are copied, so the operation is fast regardless of file size. Note that -mv cannot move files across different file systems.
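
    The same move can be done programmatically; here is a sketch using the Hadoop FileSystem API, whose rename() is essentially what the shell's -mv invokes (the paths are hypothetical):

      import org.apache.hadoop.conf.Configuration;
      import org.apache.hadoop.fs.FileSystem;
      import org.apache.hadoop.fs.Path;

      public class HdfsMoveExample {
          public static void main(String[] args) throws Exception {
              FileSystem fs = FileSystem.get(new Configuration());
              // rename() moves the file within the namespace; no data blocks are copied
              boolean moved = fs.rename(new Path("/src/file.txt"), new Path("/dst/file.txt"));
              System.out.println(moved ? "moved" : "move failed");
          }
      }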

  • 12. 

    The HDFS command to create the copy of a file from a local system is which of the following?

    • A.

      copyFromLocal

    • B.

      CopyFromLocal

    • C.

      copyfromlocal

    • D.

      copylocal

    Correct Answer
    A. copyFromLocal
    Explanation
    The correct HDFS command is "copyFromLocal", used as hdfs dfs -copyFromLocal <localsrc> <dst> to copy a file or directory from the local file system into HDFS. HDFS shell commands are case-sensitive, so the other spellings are invalid.
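
    For comparison, a sketch of the programmatic equivalent of hdfs dfs -copyFromLocal /local/data.csv /user/hadoop/data.csv via the FileSystem API; both paths are hypothetical:

      import org.apache.hadoop.conf.Configuration;
      import org.apache.hadoop.fs.FileSystem;
      import org.apache.hadoop.fs.Path;

      public class CopyFromLocalExample {
          public static void main(String[] args) throws Exception {
              FileSystem fs = FileSystem.get(new Configuration());
              // copyFromLocalFile(delSrc=false, src, dst): the local file is left in place
              fs.copyFromLocalFile(false,
                  new Path("/local/data.csv"),
                  new Path("/user/hadoop/data.csv"));
          }
      }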

  • 13. 

Hive also supports custom extensions written in:

    • A.

      C

    • B.

      C++

    • C.

      Java

    • D.

      C#

    Correct Answer
    C. Java
    Explanation
    Hive itself is implemented in Java, and its custom extensions, such as user-defined functions (UDFs), aggregates (UDAFs), table functions (UDTFs), and SerDes, are written in Java, packaged into a jar, and registered in a Hive session.
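
    A minimal sketch of such an extension, a user-defined function built on the classic org.apache.hadoop.hive.ql.exec.UDF base class; the class name Lowercase is hypothetical, and after packaging into a jar it would be registered with ADD JAR and CREATE TEMPORARY FUNCTION:

      import org.apache.hadoop.hive.ql.exec.UDF;
      import org.apache.hadoop.io.Text;

      // Usable in HiveQL as: SELECT my_lower(name) FROM users;
      public final class Lowercase extends UDF {
          public Text evaluate(Text input) {
              if (input == null) return null;
              return new Text(input.toString().toLowerCase());
          }
      }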

  • 14. 

     Which of the following are true for Hadoop Pseudo Distributed Mode? 

    • A.

      It runs on multiple machines

    • B.

      Runs on multiple machines without  any daemons

    • C.

      Runs on Single Machine with all daemons

    • D.

      Runs on Single Machine without all daemons

    Correct Answer
    C. Runs on Single Machine with all daemons
    Explanation
    In pseudo-distributed mode, all Hadoop daemons (NameNode, DataNode, ResourceManager, and NodeManager) run on a single machine, each in its own JVM process. This simulates a real cluster on one node and is commonly used for development and testing.
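
    A sketch of the client-side view of such a setup: because every daemon lives on one machine, fs.defaultFS points at localhost. The port 9000 is the conventional pseudo-distributed choice, an assumption about the local install:

      import org.apache.hadoop.conf.Configuration;
      import org.apache.hadoop.fs.FileSystem;

      public class PseudoDistributedClient {
          public static void main(String[] args) throws Exception {
              Configuration conf = new Configuration();
              conf.set("fs.defaultFS", "hdfs://localhost:9000"); // single-node HDFS
              try (FileSystem fs = FileSystem.get(conf)) {
                  System.out.println("Connected to " + fs.getUri());
              }
          }
      }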

  • 15. 

What was Hadoop named after?

    • A.

      Creator Doug Cutting's favorite circus act

    • B.

      Cutting's high school rock band

    • C.

      The toy elephant of Cutting's son

    • D.

      A sound Cutting's laptop made during Hadoop's development

    Correct Answer
    C. The toy elephant of Cutting's son
    Explanation
    Doug Cutting named Hadoop after his young son's yellow stuffed toy elephant. He chose the name because it was short, easy to spell and pronounce, and not used elsewhere.

Quiz Review Timeline

  • Current Version
  • Mar 22, 2023
    Quiz Edited by
    ProProfs Editorial Team
  • Aug 11, 2020
    Quiz Created by
    Exam US