1.
You are trying to launch an EC2 instance; however, the instance seems to go into a terminated state immediately. Which of the following would probably not be a reason that this is happening?
Correct Answer
C. You need to create storage in EBS first.
Explanation
Amazon EC2 provides virtual computing environments, known as instances. After you launch an instance, AWS recommends that you check its status to confirm that it goes from the pending state to the running state, not the terminated state. The following are a few reasons why an Amazon EBS-backed instance might immediately terminate: You've reached your volume limit. The AMI is missing a required part. The snapshot is corrupt. Reference: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Using_InstanceStraightToTerminated.html
2.
You have set up an Auto Scaling group. The cool down period for the Auto Scaling group is 7 minutes. The first instance is launched after 3 minutes, while the second instance is launched after 4 minutes. How many minutes after the first instance is launched will Auto Scaling accept another scaling activity request?
Correct Answer
A. 11 minutes
Explanation
If an Auto Scaling group is launching more than one instance, the cool down period for each instance starts after that instance is launched. The group remains locked until the last instance that was launched has completed its cool down period. In this case the cool down period for the first instance starts after 3 minutes and finishes at the 10th minute (3+7 cool down), while for the second instance it starts at the 4th minute and finishes at the 11th minute (4+7 cool down). Thus, the Auto Scaling group will receive another request only after 11 minutes. Reference: http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/AS_Concepts.html
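To make the arithmetic concrete, here is a minimal Python sketch of the timeline described above; the launch times and cooldown value come straight from the question, and no AWS calls are involved.

```python
# Cooldown timeline from the question: 7-minute cooldown, instances launched
# at minute 3 and minute 4 after the scaling activity began.
cooldown_minutes = 7
launch_minutes = [3, 4]

# The group stays locked until the last-launched instance finishes its cooldown.
unlock_minute = max(t + cooldown_minutes for t in launch_minutes)
print(f"New scaling activity accepted at minute {unlock_minute}")  # -> 11
```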
3.
In Amazon EC2 Container Service components, what is the name of a logical grouping of container instances on which you can place tasks?
Correct Answer
A. A cluster
Explanation
Amazon ECS contains the following components: A Cluster is a logical grouping of container instances that you can place tasks on. A Container instance is an Amazon EC2 instance that is running the Amazon ECS agent and has been registered into a cluster. A Task definition is a description of an application that contains one or more container definitions. A Scheduler is the method used for placing tasks on container instances. A Service is an Amazon ECS service that allows you to run and maintain a specified number of instances of a task definition simultaneously. A Task is an instantiation of a task definition that is running on a container instance. A Container is a Linux container that was created as part of a task. Reference: http://docs.aws.amazon.com/AmazonECS/latest/developerguide/Welcome.html
4.
In the context of AWS support, why must an EC2 instance be unreachable for 20 minutes rather than allowing customers to open tickets immediately?
Correct Answer
A. Because most reachability issues are resolved by automated processes in less than 20 minutes
Explanation
An EC2 instance must be unreachable for 20 minutes before opening a ticket, because most reachability issues are resolved by automated processes in less than 20 minutes and will not require any action on the part of the customer. If the instance is still unreachable after this time frame has passed, then you should open a case with support. Reference: https://aws.amazon.com/premiumsupport/faqs/
5.
Can a user get a notification of each instance start / terminate configured with Auto Scaling?
Correct Answer
C. Yes, if configured with the Auto Scaling group
Explanation
The user can get notifications using SNS if he has configured the notifications while creating the Auto Scaling group. Reference: http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/GettingStartedTutorial.html
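As a rough illustration, the same notification setup can be applied to an existing group with boto3's put_notification_configuration call; the group name and topic ARN below are placeholders, not values from the question.

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Publish launch and terminate events for the group to an SNS topic
# (hypothetical group name and topic ARN).
autoscaling.put_notification_configuration(
    AutoScalingGroupName="my-asg",
    TopicARN="arn:aws:sns:us-east-1:123456789012:my-asg-events",
    NotificationTypes=[
        "autoscaling:EC2_INSTANCE_LAUNCH",
        "autoscaling:EC2_INSTANCE_TERMINATE",
    ],
)
```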
6.
Amazon EBS provides the ability to create backups of any Amazon EC2 volume into what is known as ______.
Correct Answer
A. Snapshots
Explanation
Amazon allows you to make backups of the data stored in your EBS volumes through snapshots that can later be used to create a new EBS volume. Reference: http://docs.amazonwebservices.com/AWSEC2/latest/UserGuide/Storage.html
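A minimal boto3 sketch of the snapshot workflow, assuming a placeholder volume ID and Availability Zone:

```python
import boto3

ec2 = boto3.client("ec2")

# Back up an existing EBS volume (placeholder volume ID) as a snapshot.
snapshot = ec2.create_snapshot(
    VolumeId="vol-0123456789abcdef0",
    Description="Nightly backup",
)

# Later, the snapshot can be used to create a new EBS volume.
ec2.create_volume(
    SnapshotId=snapshot["SnapshotId"],
    AvailabilityZone="us-east-1a",
)
```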
8.
To specify a resource in a policy statement, in Amazon EC2, can you use its Amazon Resource Name (ARN)?
Correct Answer
A. Yes, you can.
Explanation
Some Amazon EC2 API actions allow you to include specific resources in your policy that can be created or modified by the action. To specify a resource in the statement, you need to use its Amazon Resource Name (ARN). Reference: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-ug.pdf
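For illustration, a policy statement that scopes EC2 actions to a single instance ARN might look like the following sketch; the account ID, instance ID, and policy name are hypothetical.

```python
import json

import boto3

# Allow stop/start only on one specific instance, identified by its ARN.
policy_document = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["ec2:StartInstances", "ec2:StopInstances"],
        "Resource": "arn:aws:ec2:us-east-1:123456789012:instance/i-0123456789abcdef0",
    }],
}

iam = boto3.client("iam")
iam.create_policy(
    PolicyName="StartStopSingleInstance",
    PolicyDocument=json.dumps(policy_document),
)
```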
9.
After you recommend Amazon Redshift to a client as an alternative to paying for data warehouse solutions to analyze his data, your client asks you to explain why you are recommending Redshift. Which of the following would be a reasonable response to his request?
Correct Answer
D. All answers listed are a reasonable response to his question
Explanation
Amazon Redshift delivers fast query performance by using columnar storage technology to improve I/O efficiency and parallelizing queries across multiple nodes. Redshift uses standard PostgreSQL JDBC and ODBC drivers, allowing you to use a wide range of familiar SQL clients. Data load speed scales linearly with cluster size, with integrations to Amazon S3, Amazon DynamoDB, Amazon Elastic MapReduce, Amazon Kinesis or any SSH-enabled host. AWS recommends Amazon Redshift for customers who have a combination of needs, such as: high performance at scale as data and query complexity grows; a desire to prevent reporting and analytic processing from interfering with the performance of OLTP workloads; large volumes of structured data to persist and query using standard SQL and existing BI tools; and a desire to reduce the administrative burden of running one's own data warehouse and dealing with setup, durability, monitoring, scaling and patching. Reference: https://aws.amazon.com/running_databases/#redshift_anchor
10.
One of the criteria for a new deployment is that the customer wants to use AWS Storage Gateway. However you are not sure whether you should use gateway-cached volumes or gateway-stored volumes or even what the differences are. Which statement below best describes those differences?
Correct Answer
A. Gateway-cached lets you store your data in Amazon Simple Storage Service (Amazon S3) and retain a copy of frequently accessed data subsets locally. Gateway-stored enables you to configure your on-premises gateway to store all your data locally and then asynchronously back up point-in-time snapshots of this data to Amazon S3
Explanation
Volume gateways provide cloud-backed storage volumes that you can mount as Internet Small Computer System Interface (iSCSI) devices from your on-premises application servers. The gateway supports the following volume configurations: Gateway-cached volumes -- You store your data in Amazon Simple Storage Service (Amazon S3) and retain a copy of frequently accessed data subsets locally. Gateway-cached volumes offer a substantial cost savings on primary storage and minimize the need to scale your storage on-premises. You also retain low-latency access to your frequently accessed data. Gateway-stored volumes -- If you need low-latency access to your entire data set, you can configure your on-premises gateway to store all your data locally and then asynchronously back up point-in-time snapshots of this data to Amazon S3. This configuration provides durable and inexpensive off-site backups that you can recover to your local data center or Amazon EC2. For example, if you need replacement capacity for disaster recovery, you can recover the backups to Amazon EC2.
11.
A user is launching an EC2 instance in the US East region. Which of the below mentioned options is recommended by AWS with respect to the selection of the availability zone?
Correct Answer
C. Do not select the AZ; instead let AWS select the AZ
Explanation
When launching an instance with EC2, AWS recommends not to select the Availability Zone (AZ); AWS specifies that the default Availability Zone should be accepted. This enables AWS to select the best Availability Zone based on system health and available capacity. An Availability Zone should be specified only when the user launches additional instances and needs to place them in the same AZ as, or a different AZ from, the running instances. Reference: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html
12.
A user is storing a large number of objects on AWS S3. The user wants to implement the search functionality among the objects. How can the user achieve this?
Correct Answer
A. Use the indexing feature of S3.
Explanation
In Amazon Web Services, AWS S3 does not provide any query facility. To retrieve a specific object the user needs to know the exact bucket / object key. In this case it is recommended to have your own DB system which manages the S3 metadata and key mapping. Reference: http://media.amazonwebservices.com/AWS_Storage_Options.pdf
13.
After setting up a Virtual Private Cloud (VPC) network, a more experienced cloud engineer suggests that to achieve low network latency and high network throughput you should look into setting up a placement group. You know nothing about this, but begin to do some research about it and are especially curious about its limitations. Which of the below statements is wrong in describing the limitations of a placement group?
Correct Answer
B. A placement group can span multiple Availability Zones.
Explanation
A placement group is a logical grouping of instances within a single Availability Zone. Using placement groups enables applications to participate in a low-latency, 10 Gbps network. Placement groups are recommended for applications that benefit from low network latency, high network throughput, or both. To provide the lowest latency, and the highest packet-per-second network performance for your placement group, choose an instance type that supports enhanced networking. Placement groups have the following limitations: The name you specify for a placement group must be unique within your AWS account. A placement group can't span multiple Availability Zones. Although launching multiple instance types into a placement group is possible, this reduces the likelihood that the required capacity will be available for your launch to succeed. We recommend using the same instance type for all instances in a placement group. You can't merge placement groups. Instead, you must terminate the instances in one placement group, and then relaunch those instances into the other placement group. A placement group can span peered VPCs; however, you will not get full-bisection bandwidth between instances in peered VPCs. For more information about VPC peering connections, see VPC Peering in the Amazon VPC User Guide. You can't move an existing instance into a placement group. You can create an AMI from your existing instance, and then launch a new instance from the AMI into a placement group. Reference: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/placement-groups.html
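A short boto3 sketch of creating a cluster placement group and launching an enhanced-networking instance into it; the AMI ID and group name are placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

# Placement groups live within a single Availability Zone.
ec2.create_placement_group(GroupName="low-latency-pg", Strategy="cluster")

# Launch an instance type that supports enhanced networking into the group
# (placeholder AMI ID).
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="c5n.large",
    MinCount=1,
    MaxCount=1,
    Placement={"GroupName": "low-latency-pg"},
)
```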
14.
What is a placement group in Amazon EC2?
Correct Answer
A. It is a group of EC2 instances within a single Availability Zone
Explanation
A placement group is a logical grouping of instances within a single Availability Zone. Reference: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/placement-groups.html
15.
You are migrating an internal server in your DC to an EC2 instance with an EBS volume. Your server disk usage is around 500GB, so you just copied all your data to a 2TB disk to be used with AWS Import/Export. Where will the data be imported once it arrives at Amazon?
Correct Answer
B. To an S3 bucket with 2 objects of 1TB
Explanation
An import to Amazon EBS will have different results depending on whether the capacity of your storage device is less than or equal to 1 TB or greater than 1 TB. The maximum size of an Amazon EBS snapshot is 1 TB, so if the device image is larger than 1 TB, the image is chunked and stored on Amazon S3. The target location is determined based on the total capacity of the device, not the amount of data on the device. Reference: http://docs.aws.amazon.com/AWSImportExport/latest/DG/Concepts.html
16.
A client needs you to import some existing infrastructure from a dedicated hosting provider to AWS to try and save on the cost of running his current website. He also needs an automated process that manages backups, software patching, automatic failure detection, and recovery. You are aware that his existing set up currently uses an Oracle database. Which of the following AWS databases would be best for accomplishing this task?
Correct Answer
A. Amazon RDS
Explanation
Amazon RDS gives you access to the capabilities of a familiar MySQL, Oracle, SQL Server, or PostgreSQL database engine. This means that the code, applications, and tools you already use today with your existing databases can be used with Amazon RDS. Amazon RDS automatically patches the database software and backs up your database, storing the backups for a user-defined retention period and enabling point-in-time recovery. Reference: http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Welcome.html
17.
True or false: A VPC contains multiple subnets, where each subnet can span multiple Availability Zones.
Correct Answer
A. This is true only if requested during the set-up of VPC.
Explanation
A VPC can span several Availability Zones. In contrast, a subnet must reside within a single Availability Zone.
18.
An edge location refers to which Amazon Web Service?
Correct Answer
C. An edge location is the location of the data center used for Amazon CloudFront.
Explanation
Amazon CloudFront is a content distribution network. A content delivery network or content distribution network (CDN) is a large distributed system of servers deployed in multiple data centers across the world. The location of the data center used for CDN is called an edge location. Amazon CloudFront can cache static content at each edge location. This means that your popular static content (e.g., your site's logo, navigational images, cascading style sheets, JavaScript code, etc.) will be available at a nearby edge location for the browsers to download with low latency and improved performance for viewers. Caching popular static content with Amazon CloudFront also helps you offload requests for such files from your origin server -- CloudFront serves the cached copy when available and only makes a request to your origin server if the edge location receiving the browser's request does not have a copy of the file. Reference: http://aws.amazon.com/cloudfront/
19.
You are looking at ways to improve some existing infrastructure as it seems a lot of engineering resources are being taken up with basic management and monitoring tasks and the costs seem to be excessive. You are thinking of deploying Amazon ElastiCache to help. Which of the following statements is true in regards to ElastiCache?
Correct Answer
D. You can improve load and response times to user actions and queries and also reduce the cost associated with scaling web applications.
Explanation
Amazon ElastiCache is a web service that makes it easy to deploy and run Memcached or Redis protocol-compliant server nodes in the cloud. Amazon ElastiCache improves the performance of web applications by allowing you to retrieve information from a fast, managed, in-memory caching system, instead of relying entirely on slower disk-based databases. The service simplifies and offloads the management, monitoring and operation of in-memory cache environments, enabling your engineering resources to focus on developing applications.
Using Amazon ElastiCache, you can not only improve load and response times to user actions and queries, but also reduce the cost associated with scaling web applications. Reference: https://aws.amazon.com/elasticache/faqs/
20.
Do Amazon EBS volumes persist independently from the running life of an Amazon EC2 instance?
Correct Answer
D. Yes, they do
Explanation
An Amazon EBS volume behaves like a raw, unformatted, external block device that you can attach to a single instance. The volume persists independently from the running life of an Amazon EC2 instance. Reference: http://docs.amazonwebservices.com/AWSEC2/latest/UserGuide/Storage.html
21.
Your supervisor has asked you to build a simple file synchronization service for your department. He doesn't want to spend too much money and he wants to be notified of any changes to files by email. What do you think would be the best Amazon service to use for the email solution?
Correct Answer
A. Amazon SES
Explanation
File change notifications can be sent via email to users following the resource with Amazon Simple Email Service (Amazon SES), an easy-to-use, cost-effective email solution. Reference: http://media.amazonwebservices.com/architecturecenter/AWS_ac_ra_filesync_08.pdf
22.
Does DynamoDB support in-place atomic updates?
Correct Answer
A. Yes
Explanation
DynamoDB supports in-place atomic updates. Reference: http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/WorkingWithItems.html#WorkingWithItems.AtomicCounters
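A minimal boto3 example of an in-place atomic update (an atomic counter) against a hypothetical table and key:

```python
import boto3

table = boto3.resource("dynamodb").Table("PageCounters")  # hypothetical table

# ADD performs an atomic, in-place increment of the numeric attribute;
# using -1 instead of 1 would atomically decrement it (see question 26).
table.update_item(
    Key={"PageId": "home"},
    UpdateExpression="ADD ViewCount :inc",
    ExpressionAttributeValues={":inc": 1},
)
```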
23.
Your manager has just given you access to multiple VPN connections that someone else has recently set up between all your company's offices. She needs you to make sure that the communication between the VPNs is secure. Which of the following services would be best for providing a low-cost hub-and-spoke model for primary or backup connectivity between these remote offices?
Correct Answer
D. AWS VPN CloudHub
Explanation
If you have multiple VPN connections, you can provide secure communication between sites using the AWS VPN CloudHub. The VPN CloudHub operates on a simple hub-and-spoke model that you can use with or without a VPC. This design is suitable for customers with multiple branch offices and existing Internet connections who would like to implement a convenient, potentially low-cost hub-and-spoke model for primary or backup connectivity between these remote offices. Reference: http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPN_CloudHub.html
24.
Amazon EC2 provides a ______. It is an HTTP or HTTPS request that uses the HTTP verbs GET or POST.
Correct Answer
C. Query API
Explanation
Amazon EC2 provides a Query API. These requests are HTTP or HTTPS requests that use the HTTP verbs GET or POST and a Query parameter named Action. Reference: http://docs.aws.amazon.com/AWSEC2/latest/APIReference/making-api-requests.html
25.
In Amazon AWS, which of the following statements is true of key pairs?
Correct Answer
B. Key pairs are used only for Amazon EC2 and Amazon CloudFront.
Explanation
Key pairs consist of a public and private key, where you use the private key to create a digital signature, and then AWS uses the corresponding public key to validate the signature. Key pairs are used only for Amazon EC2 and Amazon CloudFront. Reference: http://docs.aws.amazon.com/general/latest/gr/aws-sec-cred-types.html
26.
Does Amazon DynamoDB support both increment and decrement atomic operations?
Correct Answer
C. Yes, both increment and decrement operations.
Explanation
Amazon DynamoDB supports increment and decrement atomic operations. Reference: http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/APISummary.html
27.
An organization has three separate AWS accounts, one each for development, testing, and production. The organization wants the testing team to have access to certain AWS resources in the production account. How can the organization achieve this?
Correct Answer
B. Create the IAM roles with cross account access.
Explanation
An organization has multiple AWS accounts to isolate a development environment from a testing or production environment. At times the users from one account need to access resources in the other account, such as promoting an update from the development environment to the production environment. In this case the IAM role with cross account access will provide a solution. Cross account access lets one account share access to their resources with users in the other AWS accounts. Reference: http://media.amazonwebservices.com/AWS_Security_Best_Practices.pdf
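As a sketch of how the testing team would use such a role, the snippet below assumes a hypothetical role named TestingTeamAccess already exists in the production account and trusts the testing account.

```python
import boto3

sts = boto3.client("sts")

# Assume the cross-account role defined in the production account.
creds = sts.assume_role(
    RoleArn="arn:aws:iam::111111111111:role/TestingTeamAccess",  # hypothetical
    RoleSessionName="testing-team-session",
)["Credentials"]

# Use the temporary credentials to access resources in the production account.
ec2 = boto3.client(
    "ec2",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
print(ec2.describe_instances())
```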
28.
You need to import several hundred megabytes of data from a local Oracle database to an Amazon RDS DB instance. What does AWS recommend you use to accomplish this?
Correct Answer
C. Oracle Data Pump
Explanation
How you import data into an Amazon RDS DB instance depends on the amount of data you have and the number and variety of database objects in your database. For example, you can use Oracle SQL Developer to import a simple, 20 MB database; you would want to use Oracle Data Pump to import complex databases or databases that are several hundred megabytes or several terabytes in size. Reference: http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Oracle.Procedural.Importing.html
30.
A user has created an EBS volume with 1000 IOPS. What is the average IOPS that the user will get for most of the year as per EC2 SLA if the instance is attached to the EBS optimized instance?
Correct Answer
D. 900
Explanation
As per the AWS SLA, if the instance is attached to an EBS-optimized instance, then the Provisioned IOPS volumes are designed to deliver within 10% of the provisioned IOPS performance 99.9% of the time in a given year. Thus, if the user has created a volume of 1000 IOPS, the user will get a minimum of 900 IOPS 99.9% of the time in a given year. Reference: http://aws.amazon.com/ec2/faqs/
31.
You need to migrate a large amount of data into the cloud that you have stored on a hard disk and you decide that the best way to accomplish this is with AWS Import/Export and you mail the hard disk to AWS. Which of the following statements is incorrect in regards to AWS Import/Export?
Correct Answer
C. It can export from Amazon Glacier.
Explanation
AWS Import/Export supports: import to Amazon S3, export from Amazon S3, import to Amazon EBS, and import to Amazon Glacier. AWS Import/Export does not currently support export from Amazon EBS or Amazon Glacier. Reference: https://docs.aws.amazon.com/AWSImportExport/latest/DG/whatisdisk.html
32.
You are in the process of creating a Route 53 DNS failover to direct traffic to two EC2 zones. Obviously, if one fails, you would like Route 53 to direct traffic to the other region. Each region has an ELB with some instances being distributed. What is the best way for you to configure the Route 53 health check?
Correct Answer
D. Route 53 natively supports ELB with an internal health check. Turn "Evaluate Target Health" on and "Associate with Health Check" off and R53 will use the ELB's internal health check.
Explanation
With DNS Failover, Amazon Route 53 can help detect an outage of your website and redirect your end users to alternate locations where your application is operating properly. When you enable this feature, Route 53 uses health checks, regularly making Internet requests to your application's endpoints from multiple locations around the world, to determine whether each endpoint of your application is up or down. To enable DNS Failover for an ELB endpoint, create an Alias record pointing to the ELB and set the "Evaluate Target Health" parameter to true. Route 53 creates and manages the health checks for your ELB automatically. You do not need to create your own Route 53 health check of the ELB. You also do not need to associate your resource record set for the ELB with your own health check, because Route 53 automatically associates it with the health checks that Route 53 manages on your behalf. The ELB health check will also inherit the health of your backend instances behind that ELB. Reference: http://aws.amazon.com/about-aws/whats-new/2013/05/30/amazon-route-53-adds-elb-integration-for-dns-failover/
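A hedged boto3 sketch of such a failover alias record; the hosted zone ID, ELB DNS name, ELB canonical hosted zone ID, and domain are all placeholders.

```python
import boto3

route53 = boto3.client("route53")

# UPSERT a primary failover alias record that points at the ELB and lets
# Route 53 evaluate the ELB's own health (all IDs/names are placeholders).
route53.change_resource_record_sets(
    HostedZoneId="Z1EXAMPLEZONE",
    ChangeBatch={"Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "www.example.com",
            "Type": "A",
            "SetIdentifier": "primary",
            "Failover": "PRIMARY",
            "AliasTarget": {
                "HostedZoneId": "Z2EXAMPLEELBZONE",  # the ELB's canonical hosted zone ID
                "DNSName": "my-elb-1234567890.us-east-1.elb.amazonaws.com",
                "EvaluateTargetHealth": True,
            },
        },
    }]},
)
```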
33.
A user wants to use an EBS-backed Amazon EC2 instance for a temporary job. Based on the input data, the job is most likely to finish within a week. Which of the following steps should be followed to terminate the instance automatically once the job is finished?
Correct Answer
C. Configure the CloudWatch alarm on the instance that should perform the termination action once the instance is idle
Explanation
Auto Scaling can start and stop the instance at a pre-defined time. Here, the total running time is unknown. Thus, the user has to use the CloudWatch alarm, which monitors the CPU utilization. The user can create an alarm that is triggered when the average CPU utilization percentage has been lower than 10 percent for 24 hours, signaling that it is idle and no longer in use. When the utilization is below the threshold limit, it will terminate the instance as a part of the instance action. Reference: http://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/UsingAlarmActions.html
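A minimal boto3 sketch of such an alarm, assuming a placeholder instance ID and the us-east-1 region: average CPU below 10% for 24 consecutive one-hour periods triggers the EC2 terminate action.

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

cloudwatch.put_metric_alarm(
    AlarmName="terminate-when-idle",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # placeholder
    Statistic="Average",
    Period=3600,               # one-hour periods...
    EvaluationPeriods=24,      # ...for 24 hours
    Threshold=10.0,
    ComparisonOperator="LessThanThreshold",
    AlarmActions=["arn:aws:automate:us-east-1:ec2:terminate"],  # EC2 terminate action
)
```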
34.
Which of the following is true of Amazon EC2 security groups?
Correct Answer
D. You can modify the rules for a security group at any time.
Explanation
A security group acts as a virtual firewall that controls the traffic for one or more instances. When you launch an instance, you associate one or more security groups with the instance. You add rules to each security group that allow traffic to or from its associated instances. You can modify the rules for a security group at any time; the new rules are automatically applied to all instances that are associated with the security group. Reference: http://docs.amazonwebservices.com/AWSEC2/latest/UserGuide/using-network-security.html
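For example, a rule can be added to a live security group at any time with a call like the following; the group ID is a placeholder.

```python
import boto3

ec2 = boto3.client("ec2")

# Open inbound HTTPS; the change applies immediately to every instance
# associated with the security group.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",  # placeholder
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": "0.0.0.0/0", "Description": "HTTPS from anywhere"}],
    }],
)
```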
35.
An Elastic IP address (EIP) is a static IP address designed for dynamic cloud computing. With an EIP, you can mask the failure of an instance or software by rapidly remapping the address to another instance in your account. Your EIP is associated with your AWS account, not a particular EC2 instance, and it remains associated with your account until you choose to explicitly release it. By default how many EIPs is each AWS account limited to on a per region basis?
Correct Answer
B. 5
Explanation
By default, all AWS accounts are limited to 5 Elastic IP addresses per region for each AWS account, because public (IPv4) Internet addresses are a scarce public resource. AWS strongly encourages you to use an EIP primarily for load balancing use cases, and use DNS hostnames for all other inter-node communication. If you feel your architecture warrants additional EIPs, you would need to complete the Amazon EC2 Elastic IP Address Request Form and give reasons as to your need for additional addresses. Reference: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/elastic-ip-addresses-eip.html#using-instance-addressing-limit
36.
In Amazon EC2, partial instance-hours are billed ______.
Correct Answer
D. as full hours
Explanation
In Amazon EC2, partial instance-hours are billed as full hours. This means that even if an instance is used for only a fraction of an hour, the billing will be rounded up to the nearest full hour. For example, if an instance is used for 10 minutes, it will be billed as if it was used for a full hour. This billing method ensures simplicity and consistency in pricing for EC2 instances.
37.
In EC2, what happens to the data in an instance store if an instance reboots (either intentionally or unintentionally)?
Correct Answer
B. Data persists in the instance store.
Explanation
The data in an instance store persists only during the lifetime of its associated instance. If an instance reboots (intentionally or unintentionally), data in the instance store persists. However, data on instance store volumes is lost under the following circumstances. Failure of an underlying drive Stopping an Amazon EBS-backed instance Terminating an instance Reference: http://docs.amazonwebservices.com/AWSEC2/latest/UserGuide/InstanceStorage.html
38.
You are setting up a VPC and you need to set up a public subnet within that VPC. Which of the following requirements must be met for this subnet to be considered a public subnet?
Correct Answer
B. Subnet's traffic is routed to an internet gateway.
Explanation
A virtual private cloud (VPC) is a virtual network dedicated to your AWS account. It is logically isolated from other virtual networks in the AWS cloud. You can launch your AWS resources, such as Amazon EC2 instances, into your VPC. You can configure your VPC: you can select its IP address range, create subnets, and configure route tables, network gateways, and security settings. A subnet is a range of IP addresses in your VPC. You can launch AWS resources into a subnet that you select. Use a public subnet for resources that must be connected to the internet, and a private subnet for resources that won't be connected to the Internet. If a subnet's traffic is routed to an internet gateway, the subnet is known as a public subnet. If a subnet doesn't have a route to the internet gateway, the subnet is known as a private subnet. If a subnet doesn't have a route to the internet gateway, but has its traffic routed to a virtual private gateway, the subnet is known as a VPN-only subnet. Reference: http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Subnets.html
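A boto3 sketch of turning a subnet into a public subnet by routing its default traffic to an internet gateway; the VPC and subnet IDs are placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

vpc_id = "vpc-0123456789abcdef0"        # placeholder
subnet_id = "subnet-0123456789abcdef0"  # placeholder

# Create and attach an internet gateway to the VPC.
igw_id = ec2.create_internet_gateway()["InternetGateway"]["InternetGatewayId"]
ec2.attach_internet_gateway(InternetGatewayId=igw_id, VpcId=vpc_id)

# Route the subnet's default traffic (0.0.0.0/0) through the internet gateway.
rtb_id = ec2.create_route_table(VpcId=vpc_id)["RouteTable"]["RouteTableId"]
ec2.create_route(RouteTableId=rtb_id, DestinationCidrBlock="0.0.0.0/0", GatewayId=igw_id)
ec2.associate_route_table(RouteTableId=rtb_id, SubnetId=subnet_id)
```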
39.
Can you specify the security group that you created for a VPC when you launch an instance in EC2-Classic?
Correct Answer
B. No
Explanation
You cannot specify the security group created for EC2-Classic when you launch a VPC instance.
40.
While using the EC2 GET requests as URLs, the ______ is the URL that serves as the entry point for the web service.
Correct Answer
B. Endpoint
Explanation
The endpoint is the URL that serves as the entry point for the web service. Reference: http://docs.amazonwebservices.com/AWSEC2/latest/UserGuide/using-query-api.html
41.
You have been asked to build a data warehouse using Amazon Redshift. You know a little about it, including that it is a SQL data warehouse solution and uses industry-standard ODBC and JDBC connections and PostgreSQL drivers. However, you are not sure what sort of storage it uses for database tables. What sort of storage does Amazon Redshift use for database tables?
Correct Answer
C. Columnar data storage
Explanation
Amazon Redshift uses columnar data storage for its database tables. Columnar storage is a method of organizing and storing data by column rather than by row. This allows for more efficient data compression and query performance, as only the columns relevant to a specific query need to be accessed and processed. This type of storage is well-suited for analytical workloads where large amounts of data need to be processed and aggregated quickly.
42.
You are checking the workload on some of your General Purpose (SSD) and Provisioned IOPS (SSD) volumes and it seems that the I/O latency is higher than you require. You should probably check the ______ to make sure that your application is not trying to drive more IOPS than you have provisioned.
Correct Answer
C. Average queue length
Explanation
The average queue length is a measure of how many I/O operations are waiting to be processed by the storage subsystem. If the average queue length is high, it indicates that there is a backlog of I/O operations and the storage subsystem is struggling to keep up with the workload. This can result in higher I/O latency, which is the time it takes for the I/O operation to complete. Therefore, checking the average queue length can help identify if the application is trying to drive more IOPS than provisioned, causing higher I/O latency.
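The average queue length is exposed as the VolumeQueueLength CloudWatch metric; a quick way to inspect it for a volume (placeholder ID) is sketched below.

```python
from datetime import datetime, timedelta

import boto3

cloudwatch = boto3.client("cloudwatch")

stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/EBS",
    MetricName="VolumeQueueLength",
    Dimensions=[{"Name": "VolumeId", "Value": "vol-0123456789abcdef0"}],  # placeholder
    StartTime=datetime.utcnow() - timedelta(hours=1),
    EndTime=datetime.utcnow(),
    Period=300,
    Statistics=["Average"],
)
for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], point["Average"])
```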
43.
Which of the below mentioned options is not available when an instance is launched by Auto Scaling with EC2 Classic?
Correct Answer
B. Elastic IP
Explanation
Auto Scaling supports both EC2-Classic and EC2-VPC. When an instance is launched as a part of EC2-Classic, it will have the public IP and DNS as well as the private IP and DNS. Reference: http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/GettingStartedTutorial.html
44.
You have been given a scope to deploy some AWS infrastructure for a large organisation. The requirements are that you will have a lot of EC2 instances but may need to add more when the average utilization of your Amazon EC2 fleet is high and conversely remove them when CPU utilization is low. Which AWS services would be best to use to accomplish this?
Correct Answer
B. Auto Scaling, Amazon CloudWatch and Elastic Load Balancing.
Explanation
Auto Scaling, Amazon CloudWatch, and Elastic Load Balancing are the best AWS services to use to accomplish the given requirements. Auto Scaling allows for automatically adding or removing EC2 instances based on the average utilization of the Amazon EC2 fleet. Amazon CloudWatch provides monitoring and metrics for EC2 instances, including CPU utilization. Elastic Load Balancing distributes incoming traffic across multiple EC2 instances, ensuring high availability and scalability. Together, these services provide the ability to dynamically scale the infrastructure based on CPU utilization, effectively managing resources and optimizing performance.
45.
You are building infrastructure for a data warehousing solution and an extra request has come through that there will be a lot of business reporting queries running all the time and you are not sure if your current DB instance will be able to handle it. What would be the best solution for this?
Correct Answer
B. Read Replicas
Explanation
Read Replicas would be the best solution for handling a lot of business reporting queries running all the time. Read Replicas are copies of the primary database instance that can handle read traffic, offloading the workload from the primary instance. This helps in scaling the read capacity and improving the performance of the system. By using Read Replicas, the current DB instance can handle the increased workload without being overwhelmed.
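A minimal boto3 sketch of adding a read replica for the reporting workload; the DB identifiers are placeholders.

```python
import boto3

rds = boto3.client("rds")

# Create a replica of the primary instance; reporting queries are then pointed
# at the replica's endpoint once it becomes available (placeholder identifiers).
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="reporting-replica-1",
    SourceDBInstanceIdentifier="production-db",
)

replica = rds.describe_db_instances(DBInstanceIdentifier="reporting-replica-1")
print(replica["DBInstances"][0]["DBInstanceStatus"])  # e.g. "creating"
```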
46.
In DynamoDB, could you use IAM to grant access to Amazon DynamoDB resources and API actions?
Correct Answer
D. Yes
Explanation
Amazon DynamoDB integrates with AWS Identity and Access Management (IAM). You can use IAM to grant access to Amazon DynamoDB resources and API actions.
47.
Much of your company's data does not need to be accessed often, and can take several hours for retrieval time, so it's stored on Amazon Glacier. However someone within your organization has expressed concerns that his data is more sensitive than the other data, and is wondering whether the high level of encryption that he knows is on S3 is also used on the much cheaper Glacier service. Which of the following statements would be most applicable in regards to this concern?
Correct Answer
C. Amazon Glacier automatically encrypts the data using AES-256, the same as Amazon S3
Explanation
Amazon Glacier automatically encrypts the data using AES-256, which is the same level of encryption used by Amazon S3. This ensures that the sensitive data stored on Glacier is protected at the same high level of security as data stored on S3.
48.
Your EBS volumes do not seem to be performing as expected and your team leader has requested you look into improving their performance. Which of the following is not a true statement relating to the performance of your EBS volumes?
Correct Answer
A. Frequent snapshots provide a higher level of data durability and they will not degrade the performance of your application while the snapshot is in progress.
49.
You've created your first load balancer and have registered your EC2 instances with the load balancer. Elastic Load Balancing routinely performs health checks on all the registered EC2 instances and automatically distributes all incoming requests sent to the DNS name of your load balancer across your registered, healthy EC2 instances. By default, the load balancer uses the ______ protocol for checking the health of your instances.
Correct Answer
B. HTTP
Explanation
In Elastic Load Balancing a health configuration uses information such as protocol, ping port, ping path (URL), response timeout period, and health check interval to determine the health state of the instances registered with the load balancer. Currently, HTTP on port 80 is the default health check. Reference: http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/TerminologyandKeyConcepts.html
50.
A major finance organisation has engaged your company to set up a large data mining application. Using AWS you decide the best service for this is Amazon Elastic MapReduce (EMR), which you know uses Hadoop. Which of the following statements best describes Hadoop?
Correct Answer
C. Hadoop is an open source Java software framework
Explanation
Amazon EMR uses Apache Hadoop as its distributed data processing engine. Hadoop is an open source, Java software framework that supports data-intensive distributed applications running on large clusters of commodity hardware. Hadoop implements a programming model known as MapReduce.