Metadata And Data Quality

By Arafatkazi
Questions: 63 | Attempts: 357


Metadata is data that provides information about other data. Three distinct types of metadata exist: descriptive metadata, structural metadata, and administrative metadata. Metadata should be correct, complete, and relevant. Take the quiz to revise what you know about metadata and how to ensure it is of good quality. All the best!


Questions and Answers
  • 1. 

    Metadata can be classified based on which factors?

    • A.

      Mutability

    • B.

      Logical Function

    • C.

      Content

    • D.

      All the options

    Correct Answer
    D. All the options
    Explanation
    Metadata can be classified based on various factors, including mutability, logical function, and content. Mutability refers to whether the metadata can be modified or not. Logical function refers to the purpose or role of the metadata within a system. Content classification involves categorizing metadata based on the type of information it represents. Therefore, all the given options are valid factors for classifying metadata.


  • 2. 

    The layers of metadata are

    • A.

      Symbolic Layer

    • B.

      Logical layer

    Correct Answer(s)
    A. Symbolic Layer
    B. Logical layer
    Explanation
    The layers of metadata are the symbolic layer and the logical layer. The symbolic layer refers to the metadata that is represented through symbols or codes, such as file names or identifiers. This layer helps in identifying and locating specific data or resources. On the other hand, the logical layer refers to the metadata that provides a logical structure or organization to the data, such as data relationships, data definitions, or data models. These layers work together to ensure efficient data management and retrieval.


  • 3. 

    Different types of DW (Data Warehouse) metadata are

    • A.

      Frontroom

    • B.

      Backroom

    • C.

      RDBMS

    • D.

      Source System

    • E.

      Data staging

    Correct Answer(s)
    A. Frontroom
    B. Backroom
    C. RDBMS
    D. Source System
    E. Data staging
    Explanation
    The given answer lists different types of DW metadata, which include Frontroom, Backroom, RDBMS, Source System, and Data staging. These types of metadata are commonly used in data warehousing. Frontroom and Backroom refer to different areas or layers of the data warehouse architecture. RDBMS stands for Relational Database Management System, which is a software used to manage and store data in a structured format. Source System refers to the system or application from where the data is extracted for the data warehouse. Data staging refers to the process of preparing and transforming the data before loading it into the data warehouse.


  • 4. 

    What are the uses of frontroom metadata?

    • A.

      Label Screen and Reporting

    • B.

      Act on data present on DW

    • C.

      Used in ETL

    • D.

      Bring OLTP data in DW

    Correct Answer(s)
    A. Label Screen and Reporting
    B. Act on data present on DW
    Explanation
    Frontroom metadata supports the user-facing side of the warehouse: it supplies the labels used on screens and in reports, and it describes how to act on the data already present in the DW. Use in ETL and bringing OLTP data into the DW are functions of backroom metadata, which is why options C and D are incorrect.


  • 5. 

    When metadata is hierarchically structured, it is called an ontology.

    • A.

      True

    • B.

      False

    Correct Answer
    A. True
    Explanation
    When metadata is hierarchically structured, it means that the metadata is organized in a hierarchical manner, with different levels of information and relationships between them. This type of structure is commonly found in ontologies, which are formal representations of knowledge that capture concepts, relationships, and properties within a specific domain. Therefore, when metadata is hierarchically structured, it can be considered as an ontology.


  • 6. 

    BI metadata are

    • A.

      Data Mining Metadata

    • B.

      OLAP Metadata

    • C.

      Reporting Metadata

    • D.

      OLTP metadata

    Correct Answer(s)
    A. Data Mining Metadata
    B. OLAP Metadata
    C. Reporting Metadata
    Explanation
    BI metadata refers to the metadata used in Business Intelligence systems. It includes data mining metadata, which is used to support the process of discovering patterns and relationships in large datasets. It also includes OLAP metadata, which is used to define the structure and relationships of multidimensional data cubes. Additionally, it includes reporting metadata, which is used to define the structure and layout of reports generated from the BI system. These three types of metadata are essential components of a comprehensive BI system, providing the necessary information and structure for data analysis and reporting.


  • 7. 

    OLAP metadata depends on

    • A.

      DW

    • B.

      RDBMS

    • C.

      Both DW and RDBMS

    • D.

      Does not depend on DW or RDBMS

    Correct Answer
    C. Both DW and RDBMS
    Explanation
    OLAP metadata depends on both Data Warehouses (DW) and Relational Database Management Systems (RDBMS). Data Warehouses are specifically designed to store and manage large amounts of data for OLAP purposes. They provide a structured and optimized environment for storing and querying data. On the other hand, RDBMS is responsible for managing the structured data within the Data Warehouse and providing efficient data retrieval and manipulation capabilities. Therefore, both DW and RDBMS play crucial roles in supporting and managing the metadata required for OLAP operations.


  • 8. 

    RDBMS metadata is referred to as a catalog.

    • A.

      True

    • B.

      False

    Correct Answer
    A. True
    Explanation
    In a relational database management system (RDBMS), metadata refers to the information about the database structure, such as the names and types of tables, columns, indexes, and constraints. This metadata is commonly referred to as a catalog. Therefore, the statement that RDBMS metadata is referred to as a catalog is true.


  • 9. 

    Select correct options for RDBMS metadata

    • A.

      Exists as a part of the database

    • B.

      Referred to as a catalog

    • C.

      Provides information about foreign keys and primary keys

    • D.

      Provides information about views and indexes.

    Correct Answer(s)
    A. Exists as a part of the database
    B. Referred to as a catalog
    C. Provides information about foreign keys and primary keys
    D. Provides information about views and indexes.
    Explanation
    RDBMS metadata exists as a part of the database and is referred to as a catalog. It provides information about foreign keys and primary keys, as well as information about views and indexes.

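The catalog idea from the last two questions can be seen directly in SQLite, whose `sqlite_master` table stores the database's own metadata. This is a minimal sketch; other RDBMSs expose richer catalogs such as `information_schema`.

```python
import sqlite3

# In-memory database with a table and an index, so the catalog
# has something to describe.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("CREATE INDEX idx_customer_name ON customer(name)")

# sqlite_master is SQLite's catalog: it lists tables, indexes, and
# the DDL that created them -- metadata stored inside the database.
rows = conn.execute(
    "SELECT type, name FROM sqlite_master ORDER BY type, name"
).fetchall()
for kind, name in rows:
    print(kind, name)
```

Querying the catalog is how tools discover primary keys, indexes, and table structure without inspecting the data itself.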

  • 10. 

    Source system metadata is acquired during source system analysis.

    • A.

      True

    • B.

      False

    Correct Answer
    A. True
    Explanation
    During the source system analysis, the process involves gathering metadata from the source system. This metadata includes information about the structure, format, and content of the data in the source system. It helps in understanding the data sources, their relationships, and the quality of the data. Therefore, it is true that source system metadata is acquired during the source system analysis.


  • 11. 

    Select correct options about backroom metadata

    • A.

      It is used in ETL

    • B.

      Brings OLTP data in DW

    • C.

      Used for Label Creation

    • D.

      Used for Reporting

    Correct Answer(s)
    A. It is used in ETL
    B. Brings OLTP data in DW
    Explanation
    Backroom metadata is used in ETL (Extract, Transform, Load) processes to facilitate the extraction and transformation of data from various sources, and it brings operational (OLTP) data into the data warehouse. Label creation and reporting, by contrast, are functions of frontroom metadata, which is why options C and D are incorrect.


  • 12. 

    Select Metadata creation tools

    • A.

      Templates

    • B.

      Mark-Up tool

    • C.

      Extraction

    • D.

      Conversion

    Correct Answer(s)
    A. Templates
    B. Mark-Up tool
    C. Extraction
    D. Conversion
    Explanation
    The correct answer includes four options: templates, mark-up tool, extraction, and conversion. These are all tools used in metadata creation. Templates provide a pre-designed structure for organizing metadata. A mark-up tool is used to add tags or labels to specific elements of a document or data. Extraction involves extracting relevant information from a source and converting it into metadata. Conversion refers to the process of transforming data into a different format or structure. These tools are essential for creating and organizing metadata effectively.


  • 13. 

    Which tool is used to convert the format of metadata from one format to another?

    • A.

      Mark-up tool

    • B.

      Conversion Tool

    • C.

      Templates Tool

    • D.

      Extraction Tool

    Correct Answer
    B. Conversion Tool
    Explanation
    A conversion tool is used to convert the format of metadata from one format to another. This tool allows for the seamless transformation of metadata, ensuring compatibility and consistency across different systems and platforms. It simplifies the process of migrating data from one system to another by automatically converting the metadata format, saving time and effort. This tool is essential in maintaining data integrity and ensuring that metadata can be effectively utilized and understood in different environments.


  • 14. 

    Which tool is used to extract metadata from a textual source?

    • A.

      Extraction tool

    • B.

      Mark-Up tool

    • C.

      Conversion tool

    • D.

      Templates

    Correct Answer
    A. Extraction tool
    Explanation
    An extraction tool is used to extract metadata from textual sources. This tool is specifically designed to identify and extract relevant information from unstructured or semi-structured data. It helps in organizing and categorizing the extracted metadata, making it easier to analyze and utilize for various purposes such as data integration, data mining, or information retrieval.


  • 15. 

    Which tool is used to generate XML, SGML, and DTDs?

    • A.

      Markup tool

    • B.

      Conversion Tool

    • C.

      Extraction Tool

    • D.

      Templates Tool

    Correct Answer
    A. Markup tool
    Explanation
    A markup tool is used to generate XML, SGML, and DTD. Markup refers to the process of adding tags or annotations to a document to define its structure and format. XML (Extensible Markup Language), SGML (Standard Generalized Markup Language), and DTD (Document Type Definition) are all markup languages used to define the structure and format of documents. Therefore, a markup tool is the correct tool to use for generating XML, SGML, and DTD.

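As an illustration of the conversion tool discussed above, the sketch below converts a flat JSON record into XML using only the Python standard library. The `record` root tag and flat key-value structure are assumptions for the example; real conversion tools also handle nesting, namespaces, attributes, and schema validation.

```python
import json
from xml.etree.ElementTree import Element, SubElement, tostring

def json_to_xml(json_text: str, root_tag: str = "record") -> str:
    """Toy conversion tool: flat JSON object -> XML document."""
    record = json.loads(json_text)
    root = Element(root_tag)
    for key, value in record.items():
        # Each JSON key becomes a child element with the value as text.
        child = SubElement(root, key)
        child.text = str(value)
    return tostring(root, encoding="unicode")

print(json_to_xml('{"title": "Metadata 101", "creator": "A. Kazi"}'))
# -> <record><title>Metadata 101</title><creator>A. Kazi</creator></record>
```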

  • 16. 

    Select correct options for crosswalk

    • A.

      Crosswalks are labor-intensive to maintain.

    • B.

      Allows metadata created by one community to be used by others.

    • C.

      A crosswalk is a mapping of elements, semantics, and syntax from one metadata scheme to another.

    • D.

      Mapping from low granularity to high granularity is very complex.

    • E.

      Crosswalks are important for virtual collections.

    • F.

      It is a metadata creation tool

    Correct Answer(s)
    A. Crosswalks are labor-intensive to maintain.
    B. Allows metadata created by one community to be used by others.
    C. A crosswalk is a mapping of elements, semantics, and syntax from one metadata scheme to another.
    D. Mapping from low granularity to high granularity is very complex.
    E. Crosswalks are important for virtual collections.
    Explanation
    The correct options collectively describe the purpose, function, and challenges of crosswalks: they are labor-intensive to maintain, they allow metadata created by one community to be used by others, and they map elements, semantics, and syntax from one metadata scheme to another. Mapping from low granularity to high granularity is very complex, and crosswalks are important for virtual collections. A crosswalk is not itself a metadata creation tool, so option F is incorrect.


  • 17. 

    The success of a crosswalk depends on ______________________.

    • A.

      Similarity between Metadata

    • B.

      Granularity of Elements

    • C.

      Compatibility of the Contents

    • D.

      All the Options

    Correct Answer
    D. All the Options
    Explanation
    The success factor of Crosswalk depends on all the options provided, which include similarity between metadata, granularity of elements, and compatibility of the contents. This means that for Crosswalk to be successful, it is important for the metadata to be similar, the elements to have the appropriate level of granularity, and for the contents to be compatible. All of these factors contribute to the effectiveness and efficiency of Crosswalk, ensuring that it can accurately and seamlessly connect different systems or platforms.

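A crosswalk can be sketched as a simple mapping table. The local field names and the Dublin Core-style targets below are illustrative, not a published standard mapping; real crosswalks also reconcile semantics and syntax, not just element names.

```python
# A crosswalk maps element names (and, in practice, semantics and
# syntax) from one metadata scheme to another.
CROSSWALK = {
    "book_title": "dc:title",    # local scheme -> Dublin Core-style element
    "written_by": "dc:creator",
    "year": "dc:date",
}

def apply_crosswalk(record: dict, crosswalk: dict) -> dict:
    """Translate a record's keys into the target scheme.

    Unmapped fields are dropped -- one reason low-to-high granularity
    mappings are lossy and complex."""
    return {crosswalk[k]: v for k, v in record.items() if k in crosswalk}

local = {"book_title": "Data Quality", "written_by": "J. Olson", "shelf": "B2"}
print(apply_crosswalk(local, CROSSWALK))
# -> {'dc:title': 'Data Quality', 'dc:creator': 'J. Olson'}
```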

  • 18. 

    Which standards are defined for metadata repositories?

    • A.

      ISO 11179

    • B.

      ANSI X3.285

    • C.

      Both ANSI X3.285 and ISO 11179

    • D.

      None

    Correct Answer
    C. Both ANSI X3.285 and ISO 11179
    Explanation
    Both ANSI X3.285 and ISO 11179 are standards that define the requirements and guidelines for a metadata repository. A metadata repository is a centralized system that stores and manages metadata, which provides information about data and its attributes. These standards ensure that the metadata repository follows a consistent structure, format, and naming conventions, making it easier for organizations to share, exchange, and understand metadata across different systems and platforms. By adhering to these standards, organizations can improve data quality, data integration, and data governance processes.


  • 19. 

    Vendors of metadata repositories are

    • A.

      Oracle Enterprise Metadata Manager (EMM)

    • B.

      SAS Metadata Repositories

    • C.

      Masai technologies M:Scan and M:Grid

    • D.

      InfoLibrarian Metadata Integration Framework

    • E.

      Data Foundation Metadata Registry

    Correct Answer(s)
    A. Oracle Enterprise Metadata Manager (EMM)
    B. SAS Metadata Repositories
    C. Masai technologies M:Scan and M:Grid
    D. InfoLibrarian Metadata Integration Framework
    E. Data Foundation Metadata Registry
    Explanation
    The answer lists various vendors of metadata repositories, including Oracle Enterprise Metadata manager (EMM), SAS Metadata Repositories, Masai technologies M:Scan and M:Grid, InfoLibrarian Metadata Integration Framework, and Data Foundation Metadata Registry. These vendors offer different solutions for managing metadata, which is essential for organizing and understanding data within an organization. These repositories provide a centralized location for storing and accessing metadata, allowing users to easily search, analyze, and govern their data assets. By using these repositories, organizations can improve data quality, enhance data governance, and enable better decision-making processes.


  • 20. 

    Metadata is stored in a repository.

    • A.

      True

    • B.

      False

    Correct Answer
    A. True
    Explanation
    Metadata is indeed stored in a repository. A repository is a centralized location where data and information are stored and managed. In the context of metadata, a repository is used to store and organize information about data, such as its structure, format, source, and other relevant details. This allows for easy access and retrieval of metadata, which is crucial for effective data management and analysis.


  • 21. 

    Metadata should be merged if two sources are merged together.

    • A.

      True

    • B.

      False

    Correct Answer
    A. True
    Explanation
    When two sources are merged together, it is important to merge their metadata as well. Metadata provides information about the data, such as its source, format, and other relevant details. By merging the metadata, the combined dataset will have a comprehensive and accurate representation of the merged sources. This ensures that the merged data is properly organized, searchable, and can be effectively used for analysis or other purposes. Therefore, merging metadata is necessary to maintain data integrity and maximize the usefulness of the merged dataset.


  • 22. 

    Select correct options for metadata

    • A.

      Metadata must be adapted if the base resource has changed

    • B.

      Metadata should be merged if two sources are merged together

    • C.

      Metadata creators should be trained

    • D.

      It is not necessary to merge metadata if sources are merged

    Correct Answer(s)
    A. Metadata must be adapted if the base resource has changed
    B. Metadata should be merged if two sources are merged together
    C. Metadata creators should be trained
    Explanation
    Metadata must be adapted if the base resource has changed because metadata describes the resource; if the resource changes, the metadata must reflect those changes. Metadata should be merged if two sources are merged together, so that the combined dataset has a comprehensive view. Metadata creators should be trained because creating accurate and relevant metadata requires knowledge and expertise. The option stating that it is not necessary to merge metadata is incorrect, because merging sources requires merging their corresponding metadata as well.


  • 23. 

    Different groups working in the same project must follow 

    • A.

      Agreed standards for manipulating the same metadata.

    • B.

      Compatible methods of collecting Metadata.

    Correct Answer(s)
    A. Agreed standards for manipulating the same metadata.
    B. Compatible methods of collecting metadata.
    Explanation
    Different groups working on the same project must follow agreed standards for manipulating the same metadata in order to ensure consistency and interoperability: all groups apply the same set of rules and procedures throughout the project. They must also use compatible methods of collecting metadata, so that the metadata gathered by different groups can be easily shared and integrated, promoting collaboration and efficiency.


  • 24. 

    Metadata history should be maintained for changes even if the base source is deleted.

    • A.

      True

    • B.

      False

    Correct Answer
    A. True
    Explanation
    Maintaining metadata history for changes even if the base source is deleted is important because it allows for traceability and accountability. Even if the original source is no longer available, having a record of the metadata history can help in understanding the context and rationale behind the changes made. This can be useful for auditing purposes, data governance, and ensuring data integrity. Additionally, it provides a historical reference that can be valuable for future analysis or decision-making.


  • 25. 

    Metadata storage formats are 

    • A.

      Human-readable (XML)

    • B.

      Non-human-readable (binary)

    • C.

      Both XML and Binary

    • D.

      None

    Correct Answer
    C. Both XML and Binary
    Explanation
    The correct answer is Both XML and Binary. This means that metadata storage formats can be both in human-readable form (such as XML) and in non-human-readable form (such as binary). XML allows for easy interpretation and editing by humans, while binary formats are more efficient for storing large amounts of data and are not easily readable by humans.

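The two storage formats can be contrasted in a few lines. Here `pickle` stands in for a binary format purely as an illustration; real systems use formats like EXIF blocks or database pages.

```python
import pickle
from xml.etree.ElementTree import Element, SubElement, tostring

meta = {"format": "JPEG", "width": "800"}

# Human-readable storage: XML text, easy to inspect and edit.
root = Element("metadata")
for key, value in meta.items():
    SubElement(root, key).text = value
xml_bytes = tostring(root)

# Non-human-readable storage: a binary serialization, compact and
# efficient but opaque to a person reading the raw bytes.
binary_bytes = pickle.dumps(meta)

print(xml_bytes.decode())  # -> <metadata><format>JPEG</format><width>800</width></metadata>
print(binary_bytes[:8])    # opaque bytes
assert pickle.loads(binary_bytes) == meta  # still losslessly machine-readable
```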

  • 26. 

    Metadata storage types are

    • A.

      Internal Storage

    • B.

      External Storage

    • C.

      External Storage & Internal Storage

    • D.

      None

    Correct Answer
    C. External Storage & Internal Storage
    Explanation
    The correct answer is "External Storage & Internal Storage" because metadata storage can be done in both internal and external storage. Internal storage refers to the storage space within the device itself, such as the device's memory or hard drive. External storage, on the other hand, refers to storage devices that are connected to the device externally, such as USB drives or SD cards. Therefore, metadata can be stored in either the internal storage or external storage depending on the specific requirements and preferences of the system or application.


  • 27. 

    Data when processed becomes information.

    • A.

      False

    • B.

      True

    Correct Answer
    B. True
    Explanation
    When data is processed, it undergoes a series of operations such as sorting, organizing, and analyzing, which transforms it into meaningful and useful information. Raw data on its own may not hold any significance or convey any message, but once processed, it becomes valuable and provides insights that can be used for decision-making and understanding various phenomena. Therefore, the statement "Data when processed becomes information" is true.


  • 28. 

    Evaluate data quality 

    • A.

      Before building a full-fledged data warehouse.

    • B.

      While building a full-fledged data warehouse.

    • C.

      After Building Data Warehouse

    • D.

      Any Time

    Correct Answer
    A. Before building a full-fledged data warehouse.
    Explanation
    The correct answer is "Before building a full-fledged data warehouse" because evaluating data quality up front ensures that the data being used is accurate, complete, and reliable. This evaluation identifies inconsistencies and errors in the data, allowing corrections and improvements to be made before the warehouse is built. The data warehouse is then built on a solid foundation, resulting in better decision-making and analysis later.


  • 29. 

    The best practice in data quality is to

    • A.

      Fix data quality in source system.

    • B.

      Fix data quality in Data Warehouse

    • C.

      Both Fix data quality in source system and Fix data quality in Data Warehouse

    • D.

      None of the Options

    Correct Answer
    A. Fix data quality in source system.
    Explanation
    The best practice in data quality is to fix data quality issues in the source system. This means addressing and resolving any inaccuracies, inconsistencies, or errors at the point of data entry or creation. By fixing data quality in the source system, it ensures that the data being captured is accurate, reliable, and consistent from the beginning. This approach helps to prevent downstream issues and ensures that the data stored in the data warehouse or other systems is of high quality.


  • 30. 

    Data quality does not refer to

    • A.

      Accuracy

    • B.

      Consistency

    • C.

      Integrity

    • D.

      Uniqueness

    • E.

      Volume

    Correct Answer
    E. Volume
    Explanation
    Data quality refers to the accuracy, consistency, integrity, and uniqueness of data. Volume, on the other hand, refers to the amount of data that is being stored or processed. While volume is an important aspect of data management, it is not directly related to data quality. Therefore, volume does not fall under the category of factors that data quality refers to.

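Three of the data quality dimensions above (accuracy, consistency, uniqueness) can be checked with toy rules like the ones below; integrity (e.g. foreign-key references) would be checked similarly. The records and rule thresholds are illustrative.

```python
records = [
    {"id": 1, "country": "US", "age": 34},
    {"id": 2, "country": "US", "age": -5},   # accuracy violation: impossible age
    {"id": 2, "country": "UK", "age": 40},   # uniqueness violation: duplicate id
]
valid_countries = {"US", "UK", "IN"}  # illustrative reference set

def quality_report(rows):
    """Return (dimension, record id) pairs for each rule violation."""
    issues, seen_ids = [], set()
    for row in rows:
        if not (0 <= row["age"] <= 130):          # accuracy: plausible values
            issues.append(("accuracy", row["id"]))
        if row["country"] not in valid_countries:  # consistency: agreed codes
            issues.append(("consistency", row["id"]))
        if row["id"] in seen_ids:                  # uniqueness: no duplicate keys
            issues.append(("uniqueness", row["id"]))
        seen_ids.add(row["id"])
    return issues

print(quality_report(records))
# -> [('accuracy', 2), ('uniqueness', 2)]
```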

  • 31. 

    What will poor data quality affect?

    • A.

      Will affect the performance of the ETL

    • B.

      Will affect the performance of the Reporting

    Correct Answer
    A. Will affect the performance of the ETL
    Explanation
    Poor data quality can have a significant impact on the performance of the ETL (Extract, Transform, Load) process. When the data being extracted is of poor quality, it may contain errors, inconsistencies, or missing values, which can lead to issues during the transformation and loading stages. This can result in delays, errors, and inefficiencies in the ETL process, ultimately affecting its overall performance.


  • 32. 

    Which of the following is correct?

    • A.

      Only US has a detailed address level

    • B.

      Some countries have detailed address level

    • C.

      No countries have detailed address level

    • D.

      All countries have detailed address level

    Correct Answer
    B. Some countries have detailed address level
    Explanation
    The correct answer is "Some countries have detailed address level." This means that not all countries have a detailed address level, but there are some countries that do. This suggests that the level of detail in addresses varies from country to country, with some having more detailed information than others.


  • 33. 

    Which of the following is not an IBM product?

    • A.

      MetaStage

    • B.

      QualityStage

    • C.

      ProfileStage

    • D.

      Analysis stage

    Correct Answer
    D. Analysis stage
    Explanation
    MetaStage, QualityStage, and ProfileStage are IBM products (originally developed by Ascential Software, which IBM acquired). IBM does not offer a product called "Analysis stage", so it is the option that is not an IBM product.


  • 34. 

    Reasons for data quality issues are

    • A.

      Inaccurate entry of data

    • B.

      Lack of proper validation

    • C.

      Lack of MDM strategy

    Correct Answer(s)
    A. Inaccurate entry of data
    B. Lack of proper validation
    C. Lack of MDM strategy
    Explanation
    The answer states that the reasons for data quality issues are inaccurate entry of data, lack of proper validation, and lack of MDM (Master Data Management) strategy. This means that the data quality issues can occur when data is entered incorrectly or with errors, when there is insufficient validation to ensure the accuracy and integrity of the data, and when there is no proper strategy in place to manage and maintain the master data. These factors can lead to inconsistencies, errors, and unreliable data, affecting the overall quality of the data.


  • 35. 

    Master Data Management (MDM) is implemented

    • A.

      To track the duplicate value of records.

    • B.

      To store data

    • C.

      To validate DW

    • D.

      None

    Correct Answer
    A. To track the duplicate value of records.
    Explanation
    Master Data Management (MDM) is implemented to track the duplicate value of records. MDM helps in identifying and eliminating duplicate data entries in a database, ensuring data accuracy and consistency. By tracking duplicate records, MDM enables organizations to maintain a single, reliable version of data across different systems and applications. This helps in improving data quality, reducing errors, and enhancing overall operational efficiency.


  • 36. 

    Different data cleansing operations are

    • A.

      Removing invalid character

    • B.

      Correcting data format

    • C.

      Identifying and removing duplicate records

    • D.

      Building data quality and reprocessing feedback with source system.

    Correct Answer(s)
    A. Removing invalid character
    B. Correcting data format
    C. Identifying and removing duplicate records
    D. Building data quality and reprocessing feedback with source system.
    Explanation
    The correct answer includes four different data cleansing operations. The first operation is removing invalid characters from the data. This is important because invalid characters can cause issues in data processing and analysis. The second operation is correcting data format, which involves ensuring that the data is in the correct format and structure for analysis. The third operation is identifying and removing duplicate records, as duplicates can skew the results and cause inaccuracies. Finally, building data quality and reprocessing feedback with the source system is crucial for maintaining and improving the overall data quality and integrity.

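The first three cleansing operations can be sketched as follows. The `DD/MM/YYYY` input format and the duplicate key are assumptions for the example; feedback to the source system would happen outside this function.

```python
import re

def cleanse(rows):
    """Strip invalid characters, normalize the date format,
    and drop duplicate records."""
    cleaned, seen = [], set()
    for row in rows:
        # 1. Remove invalid characters (keep letters and spaces).
        name = re.sub(r"[^A-Za-z ]", "", row["name"]).strip()
        # 2. Correct the data format: DD/MM/YYYY -> ISO YYYY-MM-DD.
        d, m, y = row["date"].split("/")
        date = f"{y}-{m}-{d}"
        # 3. Identify and remove duplicate records.
        key = (name.lower(), date)
        if key not in seen:
            seen.add(key)
            cleaned.append({"name": name, "date": date})
    return cleaned

rows = [
    {"name": "Al#ice ", "date": "01/02/2020"},
    {"name": "Alice",   "date": "01/02/2020"},   # duplicate after cleansing
]
print(cleanse(rows))
# -> [{'name': 'Alice', 'date': '2020-02-01'}]
```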

  • 37. 

    Household matching and individual matching are part of

    • A.

      Customer matching

    • B.

      Name and Address Standardization

    • C.

      Customer Survivorship Identification

    • D.

      Customer Merging

    Correct Answer
    A. Customer matching
    Explanation
    Household matching and individual matching are techniques used in the process of customer matching. Customer matching refers to the process of identifying and merging duplicate customer records within a database. It involves comparing and matching various attributes such as names, addresses, contact information, and other relevant data to identify potential matches. Household matching focuses on identifying and grouping together individuals who share the same address or household, while individual matching focuses on identifying and merging duplicate records for individual customers. These techniques are essential for maintaining data accuracy, eliminating duplicates, and improving overall data quality in customer databases.

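The difference between household and individual matching can be sketched with exact-key grouping. Real customer-matching tools use fuzzy, probabilistic matching on standardized names and addresses rather than the exact keys assumed here.

```python
from collections import defaultdict

customers = [
    {"name": "John Smith", "address": "12 Oak St"},
    {"name": "Jane Smith", "address": "12 Oak St"},
    {"name": "JOHN SMITH", "address": "12 Oak St"},  # same person, noisy case
]

def household_match(rows):
    """Group customers sharing the same (normalized) address."""
    households = defaultdict(list)
    for row in rows:
        households[row["address"].lower()].append(row["name"])
    return dict(households)

def individual_match(rows):
    """Count records that resolve to the same individual."""
    individuals = defaultdict(int)
    for row in rows:
        individuals[(row["name"].lower(), row["address"].lower())] += 1
    return dict(individuals)

print(len(household_match(customers)))                            # 1 household
print(individual_match(customers)[("john smith", "12 oak st")])   # 2 records for one person
```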

  • 38. 

    Which is not a data quality tool?

    • A.

      Quality stage

    • B.

      Trillium

    • C.

      DataStage

    • D.

      DfPower

    Correct Answer
    C. DataStage
    Explanation
    DataStage is a data integration (ETL) tool, not a data quality tool. It is used for extracting, transforming, and loading data from various sources into a data warehouse or data mart. While DataStage can support data quality work through cleansing and transformation steps, it is not purpose-built for data quality the way QualityStage, Trillium, and dfPower are, so DataStage is the correct answer.


  • 39. 

    Rule repository contains 

    • A.

      Database or Flat File

    • B.

      Database or Excel

    • C.

      Database only

    • D.

      Excel and Flat File

    Correct Answer
    A. Database or Flat File
    Explanation
    The rule repository can contain either a database or a flat file. This means that the rules can be stored and accessed from either a traditional database system or a flat file system. This allows for flexibility in choosing the storage method based on the specific requirements and preferences of the organization.
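    As a sketch of the idea, rules can be loaded from either backing store. The table name, column names, and CSV layout below are assumptions for illustration only:

```python
import csv
import sqlite3

def load_rules_from_flat_file(path):
    """Read validation rules from a CSV flat file with a header row."""
    with open(path, newline="") as fh:
        return list(csv.DictReader(fh))

def load_rules_from_database(conn):
    """Read the same rules from a 'rules' table (hypothetical schema)."""
    cur = conn.execute("SELECT rule_id, column_name, check_expr FROM rules")
    return [dict(zip(("rule_id", "column_name", "check_expr"), row)) for row in cur]

# Demo with an in-memory database standing in for the repository.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE rules (rule_id TEXT, column_name TEXT, check_expr TEXT)")
conn.execute("INSERT INTO rules VALUES ('R1', 'email', 'NOT NULL')")
print(load_rules_from_database(conn))
```

    Because both loaders return the same list-of-dicts shape, the rest of the pipeline does not need to know which storage method the organization chose.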


  • 40. 

    Different data cleansing operations are

    • A.

      Removing invalid characters

    • B.

      Correcting data format

    • C.

      Identifying and removing duplicate records

    • D.

      Building data quality and reprocessing feedback with the source system.

    Correct Answer(s)
    A. Removing invalid characters
    B. Correcting data format
    C. Identifying and removing duplicate records
    D. Building data quality and reprocessing feedback with the source system.
    Explanation
    The correct answer includes four different data cleansing operations. The first operation is removing invalid characters from the data. This is important because invalid characters can cause errors or inconsistencies in the data. The second operation is correcting the data format. This involves ensuring that the data is in the correct format, such as converting dates to a standardized format. The third operation is identifying and removing duplicate records. Duplicate records can skew analysis and cause inaccuracies in the data. The fourth operation is building data quality and reprocessing feedback with the source system. This involves continuously improving the data quality and providing feedback to the source system for further improvement.
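    A minimal Python sketch of the first three operations (the record layout and the input date format are assumed for illustration; the fourth operation, feeding quality metrics back to the source system, is an organizational process rather than a code step):

```python
import re
from datetime import datetime

def cleanse(records):
    """Apply three cleansing operations to a list of customer dicts."""
    seen = set()
    cleaned = []
    for rec in records:
        # 1. Remove invalid (non-printable) characters from the name.
        name = re.sub(r"[^\x20-\x7E]", "", rec["name"]).strip()
        # 2. Correct the data format: normalize dates to ISO 8601 (YYYY-MM-DD).
        dob = datetime.strptime(rec["dob"], "%d/%m/%Y").date().isoformat()
        # 3. Identify and remove duplicate records via a (name, dob) key.
        key = (name.lower(), dob)
        if key in seen:
            continue
        seen.add(key)
        cleaned.append({"name": name, "dob": dob})
    return cleaned

rows = [
    {"name": "Alice\x00 Smith", "dob": "01/02/1990"},
    {"name": "alice smith", "dob": "01/02/1990"},   # duplicate of the first row
    {"name": "Bob Jones", "dob": "15/07/1985"},
]
print(cleanse(rows))   # two records survive
```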


  • 41. 

    Select correct options for MDM

    • A.

      SOA (Service-Oriented Architecture) based solution

    • B.

      Central database for storing master data.

    • C.

      Avoid overheads of maintaining data quality

    • D.

      No need to de-duplicate master data and its related transactions

    Correct Answer(s)
    A. SOA (Service-Oriented Architecture) based solution
    B. Central database for storing master data.
    C. Avoid overheads of maintaining data quality
    D. No need to de-duplicate master data and its related transactions
    Explanation
    The correct answer options for MDM are:
    1. SOA(Service Oriented Architecture) based solution: This means that the MDM solution is built on a service-oriented architecture, which allows for the integration of different applications and systems.
    2. Central database for storing master data: This means that all the master data is stored in a centralized database, ensuring consistency and easy access.
    3. Avoid overheads of maintaining data quality: This means that the MDM solution helps in maintaining data quality by reducing the need for manual data entry and ensuring data consistency.
    4. No need to de-duplicate master data and its related transactions: This means that the MDM solution automatically takes care of removing duplicate master data and its related transactions, saving time and effort.


  • 42. 

    Customer name matching is done by fuzzy and intelligent logic 

    • A.

      True

    • B.

      False

    Correct Answer
    A. True
    Explanation
    Customer name matching is a process that involves comparing and matching customer names based on their similarity or closeness. This is typically done using fuzzy and intelligent logic algorithms that take into account various factors such as spelling variations, abbreviations, phonetic similarity, and other patterns. These algorithms are designed to intelligently identify and match customer names even if there are slight differences or variations in the way they are written or recorded. Therefore, the statement "Customer name matching is done by fuzzy and intelligent logic" is true.
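    One simple stand-in for such fuzzy logic is the standard library's `difflib`; the normalization step and the 0.8 threshold below are hypothetical choices, not a standard:

```python
from difflib import SequenceMatcher

def name_similarity(a, b):
    """Fuzzy similarity in [0, 1], tolerant of case and token order."""
    norm = lambda s: " ".join(sorted(s.lower().split()))
    return SequenceMatcher(None, norm(a), norm(b)).ratio()

pairs = [
    ("John A. Smith", "Smith, John"),   # reordered tokens
    ("Jon Smith", "John Smith"),        # spelling variation
    ("John Smith", "Mary Jones"),       # different people
]
for a, b in pairs:
    match = name_similarity(a, b) >= 0.8   # hypothetical match threshold
    print(f"{a!r} vs {b!r}: match={match}")
```

    Production matchers add phonetic encodings (e.g. Soundex) and abbreviation dictionaries on top of this kind of string-similarity score.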


  • 43. 

    Household matching refers to

    • A.

      Customer belongs to same family or same house

    • B.

      Customer’s duplicate record.

    • C.

      Both customer’s duplicate record and customer belongs to same family or same house

    • D.

      None

    Correct Answer
    A. Customer belongs to same family or same house
    Explanation
    The correct answer is "customer belongs to same family or same house." This means that household matching refers to identifying customers who are part of the same family or live in the same house. This can be useful for various purposes, such as identifying potential duplicate records or analyzing customer behavior within a household.
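    A minimal sketch of household matching, grouping records on a normalized address key (the field names and the crude normalization are assumptions for illustration):

```python
from collections import defaultdict

def household_groups(customers):
    """Group customer records that share the same normalized address."""
    norm = lambda addr: " ".join(addr.lower().replace(",", " ").split())
    groups = defaultdict(list)
    for cust in customers:
        groups[norm(cust["address"])].append(cust["name"])
    return dict(groups)

customers = [
    {"name": "Alice Smith", "address": "12 Oak St, Springfield"},
    {"name": "Bob Smith",   "address": "12 oak st springfield"},
    {"name": "Carol Jones", "address": "7 Elm Ave, Shelbyville"},
]
print(household_groups(customers))   # Alice and Bob fall into one household
```

    Real systems run name and address standardization first, so that "St" vs "Street" variants also land on the same key.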


  • 44. 

    Individual Matching

    • A.

      Identifies customer’s duplicate record.

    • B.

      Indicates customer belongs to same family or same house.

    • C.

      Both Indicates customer belongs to same family or same house and Identifies customer’s duplicate record.

    • D.

      None

    Correct Answer
    A. Identifies customer’s duplicate record.
    Explanation
    The correct answer is "Identifies customer's duplicate record." This means that the individual matching process is used to identify if a customer's record already exists in the system, indicating a potential duplicate entry. It does not necessarily indicate if the customer belongs to the same family or house, as mentioned in the other option.


  • 45. 

    Data quality audit provides traceability between original and corrected values

    • A.

      True

    • B.

      False

    Correct Answer
    A. True
    Explanation
    Data quality audit involves examining and evaluating the accuracy and reliability of data. One aspect of this process is to ensure traceability between the original and corrected values. This means that any changes or corrections made to the data can be traced back to their original values, allowing for transparency and accountability. Therefore, the statement that data quality audit provides traceability between original and corrected values is true.
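    Such traceability can be sketched as an audit trail recorded alongside each correction (the record shape and the example fixer are assumptions for illustration):

```python
def correct_with_audit(records, fixer):
    """Apply a correction while recording original vs corrected values."""
    audit = []
    corrected = []
    for i, value in enumerate(records):
        new = fixer(value)
        if new != value:
            # Every change is traceable back to its original value.
            audit.append({"row": i, "original": value, "corrected": new})
        corrected.append(new)
    return corrected, audit

values = [" alice ", "BOB", "carol"]
fixed, trail = correct_with_audit(values, lambda v: v.strip().title())
print(fixed)   # the cleansed values
print(trail)   # the audit trail: original and corrected side by side
```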


  • 46. 

    Customer survivorship is part of

    • A.

      Data Cleansing

    • B.

      Data profiling

    • C.

      Enrichment

    • D.

      Data de-duplication

    Correct Answer
    D. Data de-duplication
    Explanation
    Data de-duplication identifies and removes duplicate records from a dataset. Customer survivorship is part of this process: once duplicate customer records are matched, survivorship rules decide which record, and which of its attribute values, survives as the single golden record. Removing duplicates in this way improves data quality, reduces storage costs, and makes overall data management more efficient.


  • 47. 

    Customer merging is merging the best attributes from the duplicate records into the surviving record.

    • A.

      True

    • B.

      False

    Correct Answer
    A. True
    Explanation
    Customer merging is the process of combining duplicate records by selecting the best attribute from each duplicate and merging it into the surviving record. This helps to eliminate duplicate data and ensure that the most accurate and complete information is retained in the system. Therefore, the given statement is true.
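    A toy sketch of the merge step, using one possible survivorship rule: per attribute, the most recently updated non-empty value wins (the field names and the rule itself are assumptions for illustration):

```python
def merge_duplicates(records):
    """Merge duplicates into one surviving record: for each attribute,
    the most recently updated non-empty value wins."""
    survivor = {}
    # ISO-8601 date strings sort chronologically, so later records overwrite.
    for rec in sorted(records, key=lambda r: r["updated"]):
        for field, value in rec.items():
            if value:                      # empty values never displace data
                survivor[field] = value
    return survivor

duplicates = [
    {"name": "A. Smith", "email": "a@old.example", "phone": "", "updated": "2020-01-01"},
    {"name": "Alice Smith", "email": "", "phone": "555-0101", "updated": "2021-06-15"},
]
print(merge_duplicates(duplicates))
```

    Note how the survivor keeps the newer name and phone but retains the older, non-empty email: each attribute is chosen independently.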


  • 48. 

    Data masking and mask pattern analysis are used in substituting string patterns

    • A.

      True

    • B.

      False

    Correct Answer
    A. True
    Explanation
    Data masking and mask pattern analysis are indeed used in substituting string patterns. Data masking is a technique used to protect sensitive data by replacing it with fictitious data or masking characters. This helps in preventing unauthorized access to sensitive information. Mask pattern analysis, on the other hand, involves the identification and analysis of specific patterns within the data that need to be masked. By using these techniques, organizations can ensure the privacy and security of their data while still being able to use it for various purposes.
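    A minimal sketch of pattern-based substitution with regular expressions; the two patterns below (16-digit card numbers, email local parts) are illustrative choices, not a standard masking policy:

```python
import re

def mask_patterns(text):
    """Substitute sensitive string patterns with masked equivalents."""
    # Mask all but the last 4 digits of 16-digit card numbers.
    text = re.sub(r"\b\d{12}(\d{4})\b", r"************\1", text)
    # Mask the local part of email addresses.
    text = re.sub(r"\b[\w.]+@", "****@", text)
    return text

print(mask_patterns("Card 4111111111111111, mail alice@example.com"))
```

    Mask pattern analysis is the complementary step: scanning the data first to find which of these patterns actually occur and therefore need masking.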


  • 49. 

    MDM solution consists of

    • A.

      Master Data Store

    • B.

      Service for managing master data

    • C.

      Enterprise Service Bus

    Correct Answer(s)
    A. Master Data Store
    B. Service for managing master data
    C. Enterprise Service Bus
    Explanation
    The MDM solution consists of three components: a Master Data Store, a Service for managing master data, and an Enterprise Service Bus. The Master Data Store is where the master data is stored and managed. The Service for managing master data is responsible for handling the processes and operations related to the management of the master data. The Enterprise Service Bus is a middleware that facilitates communication and integration between different systems and applications within an enterprise. These three components work together to provide a comprehensive MDM solution.


  • 50. 

    In MDM, which of the following components holds the master data?

    • A.

      Master Data Store

    • B.

      Service for managing master data

    • C.

      Enterprise Service Bus

    • D.

      None of the Options

    Correct Answer
    A. Master Data Store
    Explanation
    The correct answer is Master Data Store. In MDM, the master data is held in the Master Data Store, a central repository for key entities such as customers, products, employees, and suppliers. Centralizing the data ensures consistency and accuracy across the different systems and applications of an organization, and enables efficient data management, access, and sharing for better decision-making.


Quiz Review Timeline +

Our quizzes are rigorously reviewed, monitored and continuously updated by our expert board to maintain accuracy, relevance, and timeliness.

  • Current Version
  • Jan 04, 2024
    Quiz Edited by
    ProProfs Editorial Team
  • Feb 10, 2017
    Quiz Created by
    Arafatkazi