1.
What are the main features of HANA?
Correct Answer(s)
A. In memory
C. Parallelization
D. Column Store
Explanation
The main features of HANA include in-memory processing, parallelization, and column store. In-memory processing allows for faster data access and retrieval by storing data in RAM instead of traditional disk storage. Parallelization enables HANA to process multiple tasks simultaneously, increasing overall performance. The column store feature organizes data by columns rather than rows, optimizing data compression and query performance. These features combined enhance HANA's ability to handle large volumes of data and provide efficient data processing and analysis capabilities.
2.
When to use a column store table and what are the advantages?
Correct Answer(s)
A. Less distinct values
B. Compression
D. Read optimized
G. Aggregation
Explanation
A column store table is used when the data has few distinct values per column. Columnar databases store data column-wise, which allows for better compression: repeated values within a column compress efficiently (for example, via dictionary encoding), reducing storage requirements and improving query performance. Column store tables are also read optimized and well suited to aggregation queries, where calculations are performed over large sets of rows. Conversely, when there are many distinct values, or when the requirement is to select all columns, a row store table may be more suitable; row store tables are optimized for write operations such as insert and update.
3.
When to use a row store table and what are the advantages?
Correct Answer(s)
B. Select * (All columns)
D. Insert and update
E. Write optimized
G. Many distinct values
Explanation
A row store table is suitable when there are many distinct values, as it allows for efficient retrieval of specific rows based on their unique values. It is also advantageous when the query involves selecting all columns (select *), as it eliminates the need to access multiple columns from different storage locations. Additionally, row store tables are ideal for scenarios involving frequent insert and update operations, as they provide better performance for modifying individual rows. Lastly, row store tables are optimized for write operations, ensuring efficient data modification.
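As a sketch (hypothetical schema and table names), HANA DDL lets you choose the store per table: COLUMN for read- and aggregation-heavy data, ROW for write-heavy tables.

```sql
-- Column store: read/aggregation-optimized; columns with few distinct
-- values (e.g. REGION) compress well via dictionary encoding.
CREATE COLUMN TABLE sales_facts (
    order_id INTEGER,
    region   NVARCHAR(20),
    amount   DECIMAL(15,2)
);

-- Row store: write-optimized; suits frequent single-row INSERT/UPDATE
-- and SELECT * access on tables with many distinct values.
CREATE ROW TABLE session_log (
    session_id NVARCHAR(64),
    payload    NVARCHAR(2000)
);
```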
4.
What is delta merge + basic operation?
Correct Answer
C. Merge from delta to main memory only for column store tables
Explanation
The correct answer is "Merge from delta to main memory only for column store tables." Delta merge is a process in which the changed data from the delta storage is merged into the main memory of a database. This process is specifically applicable to column store tables, which store data in a column-wise format rather than a row-wise format. By merging the delta data into the main memory, the column store tables can be updated and optimized for query performance. This process does not involve merging data from main memory to disk or merging at the transaction or statement level.
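A delta merge can also be triggered manually in SQL, and the delta/main sizes inspected before and after (hypothetical schema and table names; in normal operation the merge is scheduled automatically):

```sql
-- Force a delta merge for one column table:
MERGE DELTA OF "MYSCHEMA"."SALES_FACTS";

-- Compare main and delta memory footprints:
SELECT table_name, memory_size_in_main, memory_size_in_delta
FROM M_CS_TABLES
WHERE schema_name = 'MYSCHEMA' AND table_name = 'SALES_FACTS';
```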
5.
What is savepoint + basic operation?
Correct Answer
D. A save point is where changed data is pushed from memory to disk
Explanation
A save point is a point in a transaction where all the changes made to the data are permanently saved from memory to disk. This ensures that the data is not lost in case of any system failure or error. By saving the data to disk, it becomes persistent and can be retrieved even after a system restart. This is an important mechanism to maintain data integrity and consistency in a database system.
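Savepoints are written automatically on a configurable interval (300 seconds by default); as a sketch, the interval is a persistence parameter in global.ini:

```sql
-- Set the savepoint interval (in seconds) in the persistence
-- section of global.ini:
ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM')
  SET ('persistence', 'savepoint_interval_s') = '300'
  WITH RECONFIGURE;
```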
6.
What is true of a transaction level snapshot?
Correct Answer
A. All statements of a transaction see the same snapshot (Isolation level repeatable read)
Explanation
In a transaction level snapshot with the isolation level set to repeatable read, all statements within a transaction see the same snapshot of the database. This means that any changes made by other transactions after the start of the current transaction are not visible to the statements within that transaction. The snapshot remains consistent throughout the transaction, ensuring that the data seen by each statement remains the same, regardless of any concurrent modifications made by other transactions.
7.
What is true of a statement level snapshot?
Correct Answer(s)
A. Transaction level snapshot is the default isolation level (Read Committed)
C. Different statements in a transaction may see different snapshots (Isolation level read committed)
D. Each statement sees the changes that were committed when the execution of the statement started.
Explanation
In a statement level snapshot, different statements in a transaction may see different snapshots: each statement sees the changes that were committed when its own execution started. Read Committed, the default isolation level, works with statement-level snapshots in this way.
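Both isolation levels can be requested per transaction in SQL; a minimal sketch:

```sql
-- Statement-level snapshots (the default):
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;

-- Transaction-level snapshot, as in question 6:
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
```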
8.
Where can you check the load status of a table?
Correct Answer(s)
A. Hana Studio (Runtime Tab)
D. M_CS_TABLES view
Explanation
You can check the load status of a table in Hana Studio (Runtime Tab) or by using the M_CS_TABLES view. These options provide information about the current status of table loads, allowing you to monitor and track the progress of data loading operations. The Solution Manager Table Viewer and DBACockpit are not specifically designed for checking the load status of tables.
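For example (hypothetical schema and table names), the LOADED column of M_CS_TABLES reports TRUE, FALSE, or PARTIALLY, and tables can also be loaded or unloaded explicitly:

```sql
SELECT table_name, loaded
FROM M_CS_TABLES
WHERE schema_name = 'MYSCHEMA';

LOAD "MYSCHEMA"."SALES_FACTS" ALL;   -- load all columns into memory
UNLOAD "MYSCHEMA"."SALES_FACTS";     -- displace the table from memory
```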
9.
What is a query execution plan?
Correct Answer
C. An attempt by a Query Optimizer to compute the most efficient way to fulfill a SQL request.
Explanation
A query execution plan refers to the attempt made by a Query Optimizer to compute the most efficient way to fulfill a SQL request. The Query Optimizer analyzes various factors such as available indexes, table statistics, and join methods to determine the optimal sequence of operations and access paths for retrieving the required data. By generating an effective execution plan, the Query Optimizer aims to minimize the query's execution time and resource utilization, ultimately improving the overall performance of the SQL request.
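In HANA, an execution plan can be requested with EXPLAIN PLAN and read back from the EXPLAIN_PLAN_TABLE view; a sketch with a hypothetical query:

```sql
EXPLAIN PLAN FOR
  SELECT region, SUM(amount)
  FROM sales_facts
  GROUP BY region;

-- Inspect the operators the optimizer chose:
SELECT operator_name, operator_details, table_name
FROM explain_plan_table;
```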
10.
Which of the following things happen when a query is submitted through the Query Optimizer?
Correct Answer(s)
B. It is parsed, pre-compiled and bound before being processed by the optimizer
C. The Plan is passed to the database object containing required information.
D. It is processed here (DBI) or native SQL
H. An Execution plan with a Timestamp is generated
Explanation
When a query is submitted through the Query Optimizer, it goes through several steps. First, it is parsed, pre-compiled, and bound, which involves checking the syntax, validating the query, and creating an execution plan. This execution plan includes a timestamp to track the query's progress. The plan is then passed to the database object that contains the necessary information for processing the query. The query is then processed either through the database interface (DBI) or using native SQL. The generation of a column store or row store table, reading the index, and generating table statistics are not mentioned in relation to the query optimization process.
11.
What are the four types of SQL commands?
Correct Answer(s)
B. DDL – Data Definition Language (mainly for developers)
C. TCL – Transaction Control Language (mainly for developers)
E. DML – Data Manipulation Language (mainly for database users)
F. DCL – Data Control Language (mainly for system administrators)
Explanation
The four types of SQL commands are DML (Data Manipulation Language), DDL (Data Definition Language), TCL (Transaction Control Language), and DCL (Data Control Language). DML is mainly used by database users to manipulate data in the database. DDL is used by developers to define the structure and schema of the database. TCL is used by developers to control transactions in the database. DCL is used by system administrators to control access and permissions to the database.
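One statement from each category, as a sketch (hypothetical table and user names):

```sql
CREATE TABLE t (id INTEGER);        -- DDL: define structure
INSERT INTO t VALUES (1);           -- DML: manipulate data
COMMIT;                             -- TCL: control the transaction
GRANT SELECT ON t TO report_user;   -- DCL: control access
```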
12.
What are the main components of a database system?
Correct Answer(s)
A. Query processor
B. Data files
D. Config Files
Explanation
The main components of a database system are the query processor, data files, and config files. The query processor is responsible for interpreting and executing database queries. Data files store the actual data in the database. Config files contain configuration settings for the database system, such as access permissions and storage settings. These components work together to ensure efficient and effective management of the database system.
13.
SQL commands consist of the following:
Correct Answer(s)
B. Additional instructions
C. Semicolon
E. Object name (Database Table, Field Name)
G. Command name
Explanation
This answer is correct because SQL commands typically consist of a command name, followed by the object name (which can be a database table or a field name), additional instructions to specify the desired operation, and a semicolon to indicate the end of the command. These components are essential for constructing valid SQL statements. The other listed items (data files, configuration files, clients, and user store) are not directly related to the structure of SQL commands.
14.
Types of SAP HANA privileges?
Correct Answer(s)
A. Analytical Privileges
C. Package Privileges
D. Application Privileges
E. Privileges on user
G. System Privileges
H. Object/ SQL Privileges
Explanation
The types of SAP HANA privileges include System Privileges, Object/SQL Privileges, Analytical Privileges, Package Privileges, Application Privileges, and Privileges on user. System Privileges refer to the privileges granted to perform administrative tasks on the system. Object/SQL Privileges are permissions granted on specific database objects or SQL statements. Analytical Privileges allow users to access and analyze specific data. Package Privileges grant access to specific packages or procedures. Application Privileges provide authorization for specific application functions. Privileges on user refer to the privileges granted to manage user accounts and their privileges.
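A few representative GRANT statements (hypothetical user and privilege names):

```sql
GRANT CATALOG READ TO monitor_user;                  -- system privilege
GRANT SELECT ON SCHEMA myschema TO report_user;      -- object/SQL privilege
GRANT STRUCTURED PRIVILEGE my_analytic_priv
  TO report_user;                                    -- analytic privilege
```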
15.
Which components are updated by SPS update?
Correct Answer(s)
A. Database
B. Studio
C. Clients
E. Agents
G. AFL
Explanation
SPS update refers to Support Package Stack update, which is a collection of software patches and updates provided by SAP to enhance functionality and fix issues in their software. The components updated by an SPS update include the Database, Studio, Clients, Agents, and AFL (Application Function Library). These updates ensure that the various components of the SAP software are up to date and functioning properly.
16.
How is a configuration backup initiated?
Correct Answer
B. Manually
Explanation
A configuration backup is initiated manually, meaning that it is done by a user or administrator intentionally triggering the backup process. This could involve selecting the backup option from a menu or interface, or executing a specific command or script to initiate the backup. This allows for greater control and flexibility, as the backup can be performed at the desired time and according to specific requirements or circumstances.
17.
Where can you start and stop the HANA database?
Correct Answer(s)
B. Via HANA Studio
C. HDB start and HDB stop from the OS
Explanation
The HANA database can be started and stopped using two methods. One is via HANA Studio, which is a graphical user interface tool for managing HANA databases. The other method is using the commands "HDB start" and "HDB stop" from the operating system. These commands allow the user to start and stop the HANA database directly from the command line interface. The other options mentioned, Solution Manager Technical Operations Workcenter and DBACockpit, are not valid methods for starting and stopping the HANA database.
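From the OS, the commands are run as the <sid>adm user; a sketch (instance number 00 assumed):

```shell
HDB stop                                     # stop the HANA database
HDB start                                    # start it again
HDB info                                     # list running HANA processes
sapcontrol -nr 00 -function GetProcessList   # status via sapstartsrv
```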
18.
What are the customer specific .ini files?
Correct Answer(s)
A. Profile.ini
B. Daemon.ini
D. Nameserver.ini
Explanation
The customer specific .ini files are Profile.ini, Daemon.ini, and Nameserver.ini. These files are used to store specific configurations and settings for individual customers. Profile.ini contains profile-related settings, Daemon.ini contains settings for the daemon process, and Nameserver.ini contains settings for the nameserver. These files allow customization and personalization of the software for each customer's specific needs and preferences.
19.
What tools are available for data backup of a HANA database?
Correct Answer(s)
A. HANA Studio
B. DBACockpit
C. HDB Console
Explanation
The tools available for data backup of a HANA database are HANA Studio, DBACockpit, and HDB Console. These tools provide different functionalities and options for backing up the HANA database. HANA Studio is an integrated development environment that allows administrators to perform various tasks, including data backup. DBACockpit is a web-based tool that provides a graphical interface for database administration tasks, including backup and recovery. HDB Console is a command-line tool that allows administrators to perform advanced database administration tasks, including backup and restore operations. These tools provide flexibility and options for ensuring the data backup of a HANA database.
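A file-based full data backup can also be triggered directly in SQL (hypothetical destination prefix), and the backup catalog queried afterwards:

```sql
BACKUP DATA USING FILE ('/backup/data/FULL_MONDAY');

SELECT entry_type_name, utc_start_time, state_name
FROM M_BACKUP_CATALOG
ORDER BY utc_start_time DESC;
```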
20.
What is the name of the secure store for user and password data?
Correct Answer
D. HDB User Store
Explanation
The HDB User Store is the secure store for user and password data. It is specifically designed to store and manage user credentials securely. This store ensures that sensitive information such as passwords are protected and can only be accessed by authorized users. It is an essential component for maintaining the security and integrity of user data in the system.
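The store is maintained with the hdbuserstore command-line tool; a sketch with hypothetical key, host, port, and user:

```shell
hdbuserstore SET MYKEY myhost:30015 MYUSER MyS3cret   # store credentials
hdbuserstore LIST                                     # keys only, no passwords
hdbsql -U MYKEY "SELECT CURRENT_USER FROM DUMMY"      # connect via the key
```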
21.
What types of data backup are available?
Correct Answer(s)
A. Full backup
C. Online backup
Explanation
There are multiple types of data backup available, including full backup and online backup. A full backup creates a complete copy of all data, which can be time-consuming and requires a large amount of storage space. An online backup is taken while the database is up and running, so normal operation can continue during the backup. The other types mentioned in the question, such as incremental backup and delta backup, are not among the correct answers.
22.
What information is displayed in a HANA license?
Correct Answer
C. Licensed Memory
Explanation
A HANA license displays the amount of memory that is licensed for use. This information is important because it determines the maximum amount of memory that can be utilized by the HANA system. By specifying the licensed memory, the license ensures that the system does not exceed the allocated memory limit, which can result in performance issues or violation of licensing agreements.
23.
What service must be running to see HANA diagnostic files?
Correct Answer
A. SAPSTARTSRV
Explanation
To see HANA diagnostic files, the SAPSTARTSRV service must be running. This service is responsible for starting and stopping SAP systems, including HANA. It provides the necessary infrastructure for managing and monitoring HANA databases. Without the SAPSTARTSRV service running, it would not be possible to access and view the diagnostic files, which contain important information for troubleshooting and analyzing the performance of the HANA system.
24.
Where can you monitor several HDBs in HANA Studio?
Correct Answer
B. System Monitor
Explanation
In HANA Studio, the System Monitor is the location where you can monitor several HDBs. This monitor provides an overview of the system landscape, allowing you to view and analyze the performance and status of multiple HDBs at once. It provides detailed information about the system's resources, such as CPU and memory usage, as well as monitoring the overall health and performance of the HDBs. Therefore, the correct answer is System Monitor.
25.
What steps can you take to identify expensive statements?
Correct Answer
A. Activate the expensive statements trace which is deactivated by default
Explanation
To identify expensive statements, one should activate the expensive statements trace, which is deactivated by default. By activating this trace, it allows for the monitoring and tracking of statements that may be causing performance issues or consuming excessive resources. This trace feature helps in identifying and analyzing the statements that are taking longer execution time or utilizing more system resources, allowing for optimization and improvement of overall system performance.
26.
What is the default threshold for identifying expensive statements using the expensive statements trace?
Correct Answer
B. 1 second
Explanation
The default threshold for identifying expensive statements using the expensive statements trace is 1 second. This means that any statement that takes longer than 1 second to execute will be considered expensive and flagged by the trace. This threshold is set as a default value to help identify and optimize slow-performing statements in order to improve overall performance and efficiency.
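The trace is switched on and the threshold (specified in microseconds, so 1 second = 1000000) adjusted via configuration parameters; a sketch:

```sql
ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM')
  SET ('expensive_statement', 'enable') = 'true',
      ('expensive_statement', 'threshold_duration') = '1000000'  -- 1 second
  WITH RECONFIGURE;

-- Review the captured statements:
SELECT statement_string, duration_microsec
FROM M_EXPENSIVE_STATEMENTS
ORDER BY duration_microsec DESC;
```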
27.
What tool provides the most detailed trace information?
Correct Answer
B. The performance trace using the debug option
Explanation
The performance trace using the debug option provides the most detailed trace information. Debugging allows for more detailed and granular information to be captured during the performance trace, which can be helpful in identifying and troubleshooting issues. By enabling the debug option, the trace will capture additional information that may not be available at lower verbosity levels or with other trace options such as the expensive statements trace.
28.
Which tool is used to read a performance trace?
Correct Answer
A. HDBadmin
Explanation
HDBadmin is the correct answer because it is a tool specifically designed for reading performance traces in SAP HANA. It provides various functionalities to analyze and troubleshoot performance issues, such as capturing and analyzing performance traces, monitoring system performance, and identifying performance bottlenecks. HDBadmin allows users to access detailed information about query execution, memory consumption, disk I/O, and other performance-related metrics, making it an essential tool for performance tuning and optimization in SAP HANA environments.
29.
Where can you find the query execution time?
Correct Answer
D. SQL Trace
Explanation
The query execution time can be found in the SQL trace. The SQL trace is a feature in SAP HANA that captures detailed information about the execution of a query, including its duration. By analyzing the SQL trace, developers and database administrators can identify performance issues and optimize query execution. The other options, such as Query Execution Trace, Expensive Statement Trace, and Solution Manager Database Monitoring, are not the answer expected here.
30.
What type of connection is used for SLT logging tables?
Correct Answer
D. Database Connection
Explanation
SLT logging tables require a database connection to store the logged data. This type of connection allows SLT to directly access and interact with the database where the logging tables are located. By using a database connection, SLT can efficiently and securely write the logged data into the appropriate tables, ensuring data integrity and consistency. Additionally, a database connection allows for easy retrieval and analysis of the logged data, making it a suitable choice for SLT logging tables.
31.
What agents are used for supportability?
Correct Answer(s)
B. Diagnostic Agent
D. Host Agent
Explanation
The correct answer is Host Agent and Diagnostic Agent. Host Agent is used for supporting the host system and managing its resources, while Diagnostic Agent is used for troubleshooting and diagnosing issues in the system. These agents work together to ensure the supportability and smooth functioning of the system. Monitoring Agent and Database Agent, although important in their respective roles, are not specifically mentioned as agents used for supportability in the given options.
32.
What HANA components are covered by Solution Manager monitoring?
Correct Answer(s)
B. SLT
C. Database
D. Data Services
Explanation
The Solution Manager monitoring covers the SLT, Database, and Data Services components of HANA. SLT (SAP Landscape Transformation) is a real-time data replication tool, Database refers to the HANA database itself, and Data Services is a data integration and transformation tool. These components are important for monitoring and managing the performance and functionality of HANA.
33.
What is the recommended log mode setting?
Correct Answer
B. Normal
Explanation
The recommended log mode setting is "Normal" because it strikes a balance between providing enough information for troubleshooting purposes and not overwhelming the system with excessive logging. This setting allows for a sufficient level of detail in the logs without causing performance issues or filling up storage space too quickly.
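The current value can be checked and set via the persistence section of global.ini; a sketch (note that a log_mode change generally requires a restart and a new full data backup to take effect):

```sql
SELECT value
FROM M_INIFILE_CONTENTS
WHERE file_name = 'global.ini'
  AND section   = 'persistence'
  AND key       = 'log_mode';

ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM')
  SET ('persistence', 'log_mode') = 'normal'
  WITH RECONFIGURE;
```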
34.
What is the default compression method of a HANA database?
Correct Answer
C. Dictionary Compression
Explanation
The default compression method of a HANA database is Dictionary Compression. This method is used to reduce the storage space required for data by creating a dictionary of unique values and replacing the actual values with references to the dictionary. This helps in reducing the size of the data and improving the overall performance of the database.
35.
What information is provided by the Top Tables View?
Correct Answer
D. The Top 20 tables based on memory consumption
Explanation
The Top Tables View provides information on the top 20 tables based on memory consumption. This means that it shows the tables that are using the most memory in the database. It does not provide information on the number of records in the tables, only on the amount of memory they are consuming.
36.
Which operating systems are supported by HANA?
Correct Answer(s)
A. RHEL 8
D. SLES 11
Explanation
HANA, which is a database management system developed by SAP, supports the SLES 11 (SUSE Linux Enterprise Server 11) and RHEL 8 (Red Hat Enterprise Linux 8) operating systems. CentOS 7 and Fedora 18 are not mentioned as supported operating systems for HANA.
37.
What are the primary factors to consider when sizing a HANA system?
Correct Answer(s)
B. Memory
C. CPU
F. Disk
Explanation
When sizing a HANA system, the primary factors to consider are memory, CPU, and disk. Memory is crucial as HANA relies heavily on in-memory processing for faster data retrieval and analysis. CPU determines the system's processing power and affects its performance. Disk space is necessary to store data and log files. Other factors like the number of concurrent users, available network bandwidth, and the number of systems using the HANA database may also impact system sizing, but memory, CPU, and disk are the main considerations.
38.
Attribute views typically join ____ tables to each other.
Correct Answer
A. Master Data
Explanation
Attribute views typically join Master Data tables to each other. Master Data tables contain information about key entities in a business, such as customers, products, or employees. Attribute views are used to combine and analyze data from these tables, providing a comprehensive view of the master data. By joining Master Data tables, attribute views enable users to gain insights and make informed decisions based on the relationships between different entities in the business.
39.
Analytic Views are represented using the
Correct Answer
A. Star Schema
Explanation
Analytic Views are represented using the Star Schema. In a Star Schema, the central fact table is connected to multiple dimension tables, forming a star-like structure. This schema is commonly used in data warehousing and allows for efficient and fast querying of large amounts of data. Analytic Views are used to provide a multidimensional view of the data, allowing for complex analysis and reporting. By using the Star Schema, Analytic Views can easily aggregate and summarize data across different dimensions, providing valuable insights into the data.
40.
Calculation views are created using the following:
Correct Answer(s)
A. Database Tables
B. Attribute Views
C. Analytic Views
D. Calculation Views
Explanation
Calculation views are created using database tables, attribute views, analytic views, and other calculation views. These components define the structure and logic of the calculation view: database tables provide the raw data, attribute views define the attributes and hierarchies, analytic views provide aggregated data, and calculation views combine all of these elements to perform complex calculations and aggregations. The join types listed among the options (inner, outer, left) are operations applied within a view, not building blocks from which calculation views are created.
41.
A logical join is the same as a
Correct Answer
A. Left Outer Join
Explanation
A logical join is the same as a Left Outer Join. In a Left Outer Join, all the records from the left table are included in the result set, regardless of whether they have a matching record in the right table. If a matching record exists in the right table, it is included in the result set, otherwise, NULL values are used. This type of join is useful when you want to retrieve all the records from the left table and any matching records from the right table.
42.
When is a model executed?
Correct Answer
A. On user query
Explanation
A model is executed on user query, meaning that it is executed when a user requests information or performs an action that requires the model to process and provide a response. This suggests that the model is not continuously running or executing in the background, but rather waits for user input before performing its operations.
43.
You must grant _____ privileges on your schema to ______.
Correct Answer
A. Select, _SYS_REPO
Explanation
To grant privileges on a schema to a user or role, the correct syntax is "GRANT [privileges] ON [schema] TO [user/role]". In this case, the question is asking which privileges should be granted on the schema to the user or role. The correct answer is "Select" privileges, which allows the user or role to retrieve data from the schema. The user or role that should be granted these privileges is "_SYS_REPO".
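For a hypothetical schema, the statement would look like this; WITH GRANT OPTION lets _SYS_REPO pass access on to the objects it activates:

```sql
GRANT SELECT ON SCHEMA myschema TO _SYS_REPO WITH GRANT OPTION;
```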
44.
When using calculation views, use _____ whenever possible.
Correct Answer
A. Graphical view
Explanation
When using calculation views, it is recommended to use graphical view whenever possible. Graphical view provides a visual interface that allows users to easily define complex calculations and transformations using drag-and-drop features. It offers a more intuitive and user-friendly approach compared to SQLScript, which requires writing code. Analytic view and attribute view are also types of views in SAP HANA, but they are not specifically recommended for calculation purposes. Therefore, the most suitable option for creating calculation views is the graphical view.
45.
Measures are used in which views?
Correct Answer(s)
A. Analytic
B. Calculation
Explanation
Measures are used in both the Analytic and Calculation views. Analytic views are used to perform complex calculations and aggregations on large datasets, while Calculation views are used to create custom calculations and transformations on data. Measures are essential in both types of views as they represent the numerical values that are being analyzed, calculated, or aggregated. Therefore, the correct answer is Analytic, Calculation.
46.
What connects data foundation and attribute views?
Correct Answer
A. Logical Join
Explanation
A logical join connects data foundation and attribute views. A data foundation view represents a set of tables from a database, while an attribute view represents a subset of columns from those tables. A logical join is used to combine the data from these views based on a common attribute or key. It allows for the creation of a unified view that includes relevant data from both the data foundation and attribute views, providing a comprehensive and meaningful representation of the data.
47.
What type of privilege restricts data access for users to specific periods?
Correct Answer
A. Analytic
Explanation
Analytic privilege restricts data access for users to specific periods. This type of privilege allows users to only view and analyze data within a specific time frame, restricting access to data outside of that period. This ensures that users can only access and analyze data that is relevant and valid for a particular time period, maintaining data integrity and security.
48.
Where are SLT logging tables located?
Correct Answer
A. On the source system
Explanation
The SLT logging tables are located on the source system. This means that the tables are created and stored on the system from which the data is being replicated. The source system is responsible for capturing the changes made to the data and storing them in the logging tables. This allows for real-time replication of data from the source system to the target system.
49.
Which of the following statements are true about the Direct Extractor Connection (DXC)?
Correct Answer(s)
A. Simple, low TCO data acquisition for SAP HANA leveraging existing delivered data models
B. Utilizes Data Source extractors existing in SAP Business Suite systems
C. Included in BW since Netweaver 7.0
D. Delta processing works the same for DXC as it would if BW were the receiving system
Explanation
The Direct Extractor Connection (DXC) is a data acquisition method for SAP HANA that offers simple and low total cost of ownership (TCO) by leveraging existing delivered data models. It utilizes Data Source extractors that already exist in SAP Business Suite systems. DXC has been included in BW since Netweaver 7.0, allowing for seamless integration with the receiving system. Furthermore, delta processing works the same for DXC as it would if BW were the receiving system, ensuring efficient and accurate data updates.
50.
What applications in Solution Manager support Root Cause Analysis?
Correct Answer(s)
A. The RCA Workcenter
B. Database Analysis
C. Change Analysis
Explanation
The applications in Solution Manager that support Root Cause Analysis are the RCA Workcenter, Database Analysis, and Change Analysis. These applications provide tools and functionalities to analyze and identify the root cause of issues or problems within the system. The RCA Workcenter allows users to perform in-depth analysis and investigation of incidents, while Database Analysis helps in analyzing and optimizing database performance. Change Analysis helps in identifying the impact of changes on the system. These applications collectively aid in identifying and resolving the root cause of problems, leading to improved system performance and stability.