1.
Each AWS region is composed of two or more locations that offer organizations the ability to operate production systems that are more highly available, fault tolerant, and scalable than would be possible using a single data center. What are these locations called?
Correct Answer
A. Availability Zones
Explanation
Each AWS region is composed of multiple locations called Availability Zones. An Availability Zone is an isolated location within a region, made up of one or more data centers with independent power, cooling, and network connectivity. Availability Zones are connected to one another through low-latency links and provide fault tolerance and high availability for running production systems. By distributing resources across multiple Availability Zones, organizations can keep their systems operational even if one zone fails.
2.
What are some reasons to enable cross-region replication on an Amazon Simple Storage Service (Amazon S3) bucket? (Choose 2 answers)
Correct Answer(s)
B. You have a set of users or customers who can access the second bucket with lower latency
C. For compliance reasons, you need to store data in a location at least 300 miles away from the first region
Explanation
Cross-region replication automatically copies objects from a source bucket to a bucket in a different region. This lets a set of users or customers access the second bucket with lower latency, and it satisfies compliance requirements that data be stored at least 300 miles away from the first region. It is not primarily a protection against accidental deletion; versioning is the feature designed for that.
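As an illustration of how replication is turned on, here is a minimal boto3 sketch; the bucket names, destination bucket ARN, and IAM role ARN are placeholders, and versioning must already be enabled on both buckets for the call to succeed.

```python
import boto3

s3 = boto3.client("s3")

# Versioning is a prerequisite for cross-region replication on both the
# source and destination buckets.
s3.put_bucket_versioning(
    Bucket="source-bucket-us-east-1",                      # placeholder source bucket
    VersioningConfiguration={"Status": "Enabled"},
)

s3.put_bucket_replication(
    Bucket="source-bucket-us-east-1",
    ReplicationConfiguration={
        "Role": "arn:aws:iam::111122223333:role/s3-replication",   # placeholder role ARN
        "Rules": [{
            "ID": "replicate-everything",
            "Prefix": "",                                  # replicate every object
            "Status": "Enabled",
            "Destination": {"Bucket": "arn:aws:s3:::dest-bucket-us-west-2"},
        }],
    },
)
```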
3.
Which of the following can be accomplished through bootstrapping?
Correct Answer
D. All of the above.
Explanation
Bootstrapping refers to the process of automatically initiating or setting up a system or application. In this context, all of the given options can be accomplished through bootstrapping. By bootstrapping, one can install the most current security updates, ensuring that the system is protected against vulnerabilities. It also allows for the installation of the current version of the application, ensuring that the system has the latest features and bug fixes. Additionally, bootstrapping can be used to configure Operating System (OS) services, enabling the system to function optimally. Therefore, all of the above options can be achieved through bootstrapping.
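To make this concrete, the sketch below passes a small bootstrap script as EC2 user data with boto3; the AMI ID is a placeholder, and the packages are only an example of the three tasks mentioned above.

```python
import boto3

# Commands passed as user data run once at first boot, which is when
# bootstrapping happens.
user_data = """#!/bin/bash
yum update -y                # install the most current security updates
yum install -y httpd         # install the current version of the application stack
systemctl enable httpd       # configure an OS service to start automatically
systemctl start httpd
"""

ec2 = boto3.client("ec2")
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",    # placeholder Amazon Linux AMI ID
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    UserData=user_data,
)
```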
4.
Which of the following are required elements of an Auto Scaling group? (Choose 2 answers)
Correct Answer(s)
A. Minimum size
D. Launch configuration
Explanation
An Auto Scaling group requires a minimum size to define the minimum number of instances that should be running at all times. This ensures that there is always a certain level of capacity available. Additionally, a launch configuration is necessary to specify the template for launching instances in the group. It includes information such as the AMI, instance type, security groups, and other settings required for launching instances. Health checks and desired capacity are not required elements for an Auto Scaling group, although they can be useful for managing and monitoring the group.
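A minimal boto3 sketch of the two required pieces follows; the names, AMI ID, and security group are placeholders.

```python
import boto3

autoscaling = boto3.client("autoscaling")

# The launch configuration is the template used to launch new instances.
autoscaling.create_launch_configuration(
    LaunchConfigurationName="web-lc",              # placeholder name
    ImageId="ami-0123456789abcdef0",               # placeholder AMI ID
    InstanceType="t3.micro",
    SecurityGroups=["sg-0123456789abcdef0"],       # placeholder security group
)

# The group references that launch configuration and must declare its size bounds.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-asg",
    LaunchConfigurationName="web-lc",
    MinSize=2,
    MaxSize=6,
    AvailabilityZones=["us-east-1a", "us-east-1b"],
)
```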
5.
Which process in an Amazon Simple Workflow Service (Amazon SWF) workflow implements a task?
Correct Answer
B. Activity worker
Explanation
In an Amazon Simple Workflow Service (Amazon SWF) workflow, the process that implements a task is called the activity worker. The activity worker is responsible for executing the specific activities or tasks defined in the workflow. It receives the task from the workflow and performs the necessary actions or computations required for that task. The activity worker plays a crucial role in the overall execution of the workflow by executing the individual tasks and returning the results back to the workflow.
6.
Amazon CloudWatch supports which types of monitoring plans? (Choose 2 answers)
Correct Answer(s)
A. Basic monitoring, which is free
F. Detailed monitoring, which has an additional cost
Explanation
Amazon CloudWatch supports two types of monitoring plans: Basic monitoring, which is free, and Detailed monitoring, which has an additional cost.
7.
You are building a photo management application that maintains metadata on millions of images in an Amazon DynamoDB table. When a photo is retrieved, you want to display the metadata next to the image. Which Amazon DynamoDB operation will you use to retrieve the metadata attributes from the table?
Correct Answer
C. Query operation
Explanation
The Query operation in Amazon DynamoDB is used to retrieve items from a table based on the primary key or secondary index. In this scenario, since the photo metadata is stored in the DynamoDB table, the Query operation would be the appropriate choice to retrieve the metadata attributes efficiently. The Query operation allows for more precise retrieval of data by specifying the key conditions and can handle large amounts of data efficiently.
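As a sketch of what such a lookup might look like with boto3 (the table name, key attribute, and key value are assumptions for illustration):

```python
import boto3
from boto3.dynamodb.conditions import Key

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("PhotoMetadata")            # hypothetical table name

# Query reads only the items matching the key condition, so just the
# metadata for the requested photo is returned.
response = table.query(
    KeyConditionExpression=Key("PhotoId").eq("img-42")    # hypothetical key attribute/value
)
for item in response["Items"]:
    print(item)
```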
8.
What should you do in order to grant a different AWS account permission to your Amazon Simple Queue Service (Amazon SQS) queue?
Correct Answer
C. Create an Amazon SQS policy that grants the other account access.
Explanation
To grant a different AWS account permission to your Amazon SQS queue, you should create an Amazon SQS policy that grants the other account access. This can be done by specifying the AWS account ID of the other account in the policy and defining the necessary permissions for accessing the queue. By creating an SQS policy, you can securely grant access to the queue without sharing your AWS account credentials or creating a user in IAM for the other account. VPC peering is not required for this specific scenario.
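A minimal sketch of such a policy applied with boto3 is shown below; the queue URL, queue ARN, and account IDs are placeholders.

```python
import json
import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/111122223333/orders"   # placeholder queue URL

# Allow a different AWS account (444455556666) to send messages to the queue.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowCrossAccountSend",
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::444455556666:root"},
        "Action": "sqs:SendMessage",
        "Resource": "arn:aws:sqs:us-east-1:111122223333:orders",        # placeholder queue ARN
    }],
}

sqs.set_queue_attributes(QueueUrl=queue_url, Attributes={"Policy": json.dumps(policy)})
```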
9.
Which DNS record must all zones have by default?
Correct Answer
D. SOA
Explanation
All zones must have a Start of Authority (SOA) record by default. The SOA record is essential as it contains important information about the zone, such as the primary name server responsible for the zone, the email address of the responsible person, and various timing parameters for the zone. This record is crucial for the proper functioning of the DNS system and is therefore required in all zones.
10.
Which of the following cache engines are supported by Amazon ElastiCache? (Choose 2 answers)
Correct Answer(s)
B. Memcached
C. Redis
Explanation
Amazon ElastiCache supports two cache engines: Memcached and Redis. Memcached is a high-performance, distributed memory caching system that is commonly used to speed up dynamic database-driven websites. Redis is an in-memory data structure store that can be used as a cache, database, or message broker. Both Memcached and Redis are popular choices for caching in cloud environments, and Amazon ElastiCache provides support for both of these engines.
11.
Your security team is very concerned about the vulnerability of the IAM administrator user accounts (the accounts used to configure all IAM features and accounts). What steps can be taken to lock down these accounts? (Choose 3 answers)
Correct Answer(s)
A. Add multi-factor authentication (MFA) to the accounts
C. Implement a password policy on the AWS account
D. Apply a source IP address condition to the policy that only grants permissions when the user is on the corporate network
Explanation
To lock down the IAM administrator user accounts, three steps can be taken. Firstly, adding multi-factor authentication (MFA) adds an extra layer of security by requiring additional verification beyond just a password. Secondly, implementing a password policy on the AWS account ensures that strong passwords are used, reducing the risk of unauthorized access. Lastly, applying a source IP address condition to the policy that only grants permissions when the user is on the corporate network further restricts access to the accounts, making them more secure.
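Two of these steps can be scripted; the sketch below is illustrative only, with the policy name and corporate CIDR as assumptions (MFA devices are typically enabled per user in the console or with the IAM MFA APIs).

```python
import json
import boto3

iam = boto3.client("iam")

# Enforce a password policy on the AWS account.
iam.update_account_password_policy(
    MinimumPasswordLength=14,
    RequireSymbols=True,
    RequireNumbers=True,
    RequireUppercaseCharacters=True,
    RequireLowercaseCharacters=True,
)

# Policy that grants admin permissions only from the corporate network.
policy_document = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "iam:*",
        "Resource": "*",
        "Condition": {"IpAddress": {"aws:SourceIp": "203.0.113.0/24"}},   # placeholder corporate CIDR
    }],
}
iam.create_policy(
    PolicyName="AdminFromCorpNetworkOnly",         # hypothetical policy name
    PolicyDocument=json.dumps(policy_document),
)
```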
12.
Which AWS database service is best suited for traditional Online Transaction Processing (OLTP)?
Correct Answer
B. Amazon Relational Database Service (Amazon RDS)
Explanation
Amazon Relational Database Service (Amazon RDS) is the best suited AWS database service for traditional Online Transaction Processing (OLTP). OLTP involves a high volume of small, frequent transactions, and requires a database that can handle concurrent reads and writes efficiently. Amazon RDS provides managed relational databases, such as MySQL, PostgreSQL, Oracle, and SQL Server, which are designed to handle OLTP workloads effectively. It offers features like automated backups, automatic software patching, and scalability options, making it a reliable choice for traditional OLTP applications.
13.
Which of the following techniques can you use to help you meet Recovery Point Objective (RPO) and Recovery Time Objective (RTO) requirements? (Choose 3 answers)
Correct Answer(s)
A. DB snapshots
C. Read replica
D. Multi-AZ deployment
Explanation
DB snapshots, read replicas, and multi-AZ deployment are all techniques that can help meet Recovery Point Objective (RPO) and Recovery Time Objective (RTO) requirements.
DB snapshots allow you to create a point-in-time copy of your database, which can be used for data recovery in case of a failure.
Read replicas are copies of your database that can be used for read operations, providing high availability and reducing the load on your primary database. In case of a failure, read replicas can be promoted to become the primary database, minimizing downtime.
Multi-AZ deployment involves replicating your database to a standby instance in a different Availability Zone. In case of a failure, Amazon RDS automatically fails over to the standby instance, reducing downtime and meeting RTO requirements.
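The three techniques map directly to Amazon RDS API calls; the sketch below uses boto3 with a placeholder instance identifier.

```python
import boto3

rds = boto3.client("rds")
db_id = "orders-db"                                # placeholder DB instance identifier

# Point-in-time copy for recovery (supports the RPO).
rds.create_db_snapshot(DBSnapshotIdentifier="orders-db-snap-1",
                       DBInstanceIdentifier=db_id)

# Read replica that can be promoted if the primary fails (supports the RTO).
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="orders-db-replica",
    SourceDBInstanceIdentifier=db_id,
)

# Synchronous standby in another Availability Zone with automatic failover.
rds.modify_db_instance(DBInstanceIdentifier=db_id, MultiAZ=True, ApplyImmediately=True)
```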
14.
You are a solutions architect who is working for a mobile application company that wants to use Amazon Simple Workflow Service (Amazon SWF) for their new takeout ordering application. They will have multiple workflows that will need to interact. What should you advise them to do in structuring the design of their Amazon SWF environment?
Correct Answer
B. Use a single domain containing multiple workflows. In this manner, the workflows will be able to interact
Explanation
In order to structure the design of their Amazon SWF environment, it is advised to use a single domain containing multiple workflows. This allows the workflows to interact with each other. Using multiple domains, each containing a single workflow, would not allow for this interaction. Similarly, collapsing all activities to within a single workflow would limit the ability for workflows to interact. Using Amazon SQS and Amazon SNS would not be suitable as they do not provide the same level of workflow management and coordination as Amazon SWF.
15.
You host a web application across multiple AWS regions in the world, and you need to configure your DNS so that your end users will get the fastest network performance possible. Which routing policy should you apply?
Correct Answer
B. Latency-based routing
Explanation
Latency-based routing should be applied in this scenario because it allows you to route traffic to the region with the lowest latency or fastest network performance. This ensures that your end users will have the best possible experience when accessing your web application. By measuring the latency between the end user and each AWS region, the DNS can direct the traffic to the region with the lowest latency, reducing network delays and improving overall performance.
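A minimal boto3 sketch of two latency records for the same name follows; the hosted zone ID, domain name, and endpoint IP addresses are placeholders.

```python
import boto3

route53 = boto3.client("route53")

def latency_record(region, ip):
    # Latency-based records need a unique SetIdentifier plus the AWS region
    # of the endpoint; Route 53 answers with the lowest-latency record.
    return {
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "www.example.com",
            "Type": "A",
            "SetIdentifier": f"www-{region}",
            "Region": region,
            "TTL": 60,
            "ResourceRecords": [{"Value": ip}],
        },
    }

route53.change_resource_record_sets(
    HostedZoneId="Z0123456789EXAMPLE",             # placeholder hosted zone ID
    ChangeBatch={"Changes": [
        latency_record("us-east-1", "198.51.100.10"),
        latency_record("eu-west-1", "198.51.100.20"),
    ]},
)
```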
16.
Which of the following objects are good candidates to store in a cache? (Choose 3 answers)
Correct Answer(s)
A. Session state
B. Shopping cart
C. Product catalog
Explanation
Session state, shopping cart, and product catalog data are good candidates to store in a cache because they are read frequently and can tolerate being slightly stale for short periods. Caching these objects improves performance by reducing round trips to the original data store, lowers the load on the system, and improves scalability. A bank account balance, on the other hand, is a poor candidate because it must always be current and accurate.
17.
Amazon Glacier is designed for: (Choose 2 answers)
Correct Answer(s)
B. Infrequently accessed data
C. Data archives
Explanation
Amazon Glacier is designed for storing infrequently accessed data and data archives. It is not suitable for active database storage or frequently accessed data. Glacier is a low-cost storage service that is optimized for long-term retention of data that is rarely accessed but needs to be preserved for compliance or other purposes. It provides secure, durable, and scalable storage for archiving and backup purposes. Therefore, it is the ideal choice for storing infrequently accessed data and data archives. Cached session data and frequently accessed data would require a different storage solution that provides faster access times.
18.
Which of the following describes a physical location around the world where AWS clusters data centers?
Correct Answer
D. Region
Explanation
A region in AWS refers to a physical location around the world where AWS clusters data centers. It is a geographical area that consists of multiple availability zones, each containing one or more data centers. Regions are completely independent and isolated from each other, allowing users to deploy resources in different regions to achieve high availability and fault tolerance.
19.
Which of the following workloads are a good fit for running on Amazon Redshift? (Choose 2 answers)
Correct Answer(s)
B. Reporting database supporting back-office analytics
C. Data warehouse used to aggregate multiple disparate data sources
Explanation
Amazon Redshift is a cloud-based data warehousing solution optimized for online analytical processing (OLAP) workloads. It is designed to handle large volumes of data and perform complex queries quickly, so a reporting database supporting back-office analytics and a data warehouse used to aggregate multiple disparate data sources are both good fits. A transactional database supporting a busy e-commerce order-processing website, or a store for session state and user profile data for thousands of concurrent users, is not a good fit: those workloads need a transactional (OLTP) or low-latency key-value system with high concurrency rather than a columnar data warehouse.
20.
Which of the following is not a supported Amazon Simple Notification Service (Amazon SNS) protocol?
Correct Answer
D. Amazon DynamoDB
Explanation
Amazon DynamoDB is a managed NoSQL database service provided by Amazon Web Services (AWS) and is not a supported protocol for Amazon Simple Notification Service (Amazon SNS). Amazon SNS supports protocols like HTTPS, AWS Lambda, and Email-JSON for sending messages to various endpoints. However, DynamoDB is not a protocol but rather a database service, so it is not a valid option for this question.
21.
Which AWS service records Application Program Interface (API) calls made on your account and delivers log files to your Amazon Simple Storage Service (Amazon S3) bucket?
Correct Answer
A. AWS CloudTrail
Explanation
AWS CloudTrail is the correct answer because it is a service that records API calls made on your account. It captures detailed information about each API call, including the identity of the caller, the time of the call, the source IP address, the request parameters, and the response elements returned by the AWS service. CloudTrail delivers log files to your Amazon S3 bucket, allowing you to store, monitor, and analyze the data for various purposes such as security analysis, compliance auditing, and troubleshooting.
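A short boto3 sketch of turning a trail on follows; the trail and bucket names are placeholders, and the bucket must already have a policy that lets CloudTrail write to it.

```python
import boto3

cloudtrail = boto3.client("cloudtrail")

cloudtrail.create_trail(
    Name="account-api-trail",                      # hypothetical trail name
    S3BucketName="my-cloudtrail-logs",             # placeholder destination bucket
    IncludeGlobalServiceEvents=True,
)
cloudtrail.start_logging(Name="account-api-trail")
```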
22.
In what ways does Amazon Simple Storage Service (Amazon S3) object storage differ from block and file storage? (Choose 2 answers)
Correct Answer(s)
D. Objects contain both data and metadata.
E. Objects are stored in buckets.
Explanation
Amazon Simple Storage Service (Amazon S3) object storage differs from block and file storage in two ways. Firstly, objects in Amazon S3 contain both data and metadata, allowing for additional information to be stored alongside the actual data. This metadata can include details such as the object's creation date, author, or any other relevant information. Secondly, objects in Amazon S3 are stored in buckets. Buckets act as containers for objects and provide a way to organize and manage the stored data. This hierarchical structure allows for easy management and retrieval of objects within the storage system.
23.
Which of the following Amazon Virtual Private Cloud (Amazon VPC) elements acts as a stateless firewall?
Correct Answer
B. Network Access Control List (ACL)
Explanation
A Network Access Control List (ACL) in Amazon VPC acts as a stateless firewall. It is a set of rules that control inbound and outbound traffic at the subnet level. ACLs are associated with subnets and evaluate traffic based on rules defined for each subnet. They can allow or deny traffic based on protocols, ports, and IP addresses. Unlike security groups, which are stateful, ACLs do not keep track of the state of connections. Instead, they evaluate each packet individually. Therefore, ACLs are considered stateless firewalls in Amazon VPC.
24.
Which of the following will occur when an EC2 instance in a VPC (Virtual Private Cloud) with an associated Elastic IP is stopped and started? (Choose 2 answers)
Correct Answer(s)
B. All data on instance-store devices will be lost
E. The underlying host for the instance is changed
Explanation
When an EC2 instance in a VPC with an associated Elastic IP is stopped and started, all data on instance-store devices will be lost. This is because instance-store devices are temporary storage that is not persistent and will not retain data when the instance is stopped. Additionally, the underlying host for the instance is changed when it is stopped and started. This means that the instance may be moved to a different physical server, resulting in a change in the underlying host.
25.
The AWS control environment is in place for the secure delivery of AWS cloud service offerings. Which of the following does the collective control environment NOT explicitly include?
Correct Answer
B. Energy
Explanation
The collective control environment of AWS does not explicitly include energy. The control environment refers to the framework and processes in place to ensure the secure delivery of AWS cloud services and encompasses people, processes, and technology. Energy relates to the physical infrastructure and resources required to power AWS services, but it is not explicitly part of the control environment itself.
26.
When designing a loosely coupled system, which AWS services provide an intermediate durable storage layer between components? (Choose 2 answers)
Correct Answer(s)
B. Amazon Kinesis
E. Amazon Simple Queue Service (SQS)
Explanation
When designing a loosely coupled system, two AWS services that provide an intermediate durable storage layer between components are Amazon Kinesis and Amazon Simple Queue Service (SQS). Amazon Kinesis allows the streaming of large amounts of data from multiple sources and enables real-time analytics. SQS is a fully managed message queuing service that decouples and scales microservices, distributed systems, and serverless applications. Both services ensure durability and reliability in storing and processing data between components in a loosely coupled system.
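The decoupling pattern with SQS looks roughly like the boto3 sketch below; the queue name and message body are illustrative.

```python
import boto3

sqs = boto3.client("sqs")
queue_url = sqs.create_queue(QueueName="order-events")["QueueUrl"]   # hypothetical queue

# Producer component: hands the message to durable storage and moves on.
sqs.send_message(QueueUrl=queue_url, MessageBody='{"order_id": 42}')

# Consumer component: polls at its own pace and deletes after processing.
response = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1, WaitTimeSeconds=10)
for msg in response.get("Messages", []):
    handle = msg["ReceiptHandle"]
    # ... application-specific processing of msg["Body"] would happen here ...
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=handle)
```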
27.
Which of the following statements best describes an Availability Zone?
Correct Answer
B. Each Availability Zone consists of multiple discrete data centers with redundant power and networking/connectivity
Explanation
Each Availability Zone consists of multiple discrete data centers with redundant power and networking/connectivity. This means that each Availability Zone is made up of multiple physically separate and isolated data centers that are designed to operate independently. These data centers have redundant power sources and networking/connectivity to ensure high availability and fault tolerance.
28.
Which of the following AWS Cloud services are designed according to the Multi-AZ principle? (Choose 2 answers)
Correct Answer(s)
A. Amazon DynamoDB
E. Amazon Simple Storage Service (S3)
Explanation
Amazon DynamoDB and Amazon Simple Storage Service (S3) are designed according to the Multi-AZ principle. Multi-AZ refers to the practice of replicating data across multiple Availability Zones (AZs) to ensure high availability and fault tolerance. In the case of DynamoDB, it automatically replicates data across multiple AZs within a region, providing continuous availability and durability. Similarly, S3 also replicates data across multiple AZs, ensuring that data is highly available and durable. This design principle helps to minimize the impact of hardware failures, network issues, or other unforeseen events by maintaining redundant copies of data in different AZs.
29.
How many access keys may an AWS Identity and Access Management (IAM) user have active at one time?
Correct Answer
C. 2
Explanation
An AWS Identity and Access Management (IAM) user can have a maximum of two active access keys at one time. Access keys are used to authenticate API requests made to AWS services. Having multiple active access keys allows for seamless rotation and management of keys, ensuring continuous access to AWS resources while maintaining security. By limiting the number of active access keys to two, AWS promotes best practices for security and access management.
30.
To protect S3 data from both accidental deletion and accidental overwriting, you should:
Correct Answer
A. Enable S3 versioning on the bucket
Explanation
Enabling S3 versioning on the bucket allows for the preservation of previous versions of objects in the bucket. This means that even if a file is accidentally deleted or overwritten, the previous versions can still be accessed and restored. By enabling versioning, you can protect your S3 data from both accidental deletion and accidental overwriting, providing an added layer of data protection and ensuring data integrity.
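Turning versioning on is a single call; a sketch with a placeholder bucket name:

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_versioning(
    Bucket="my-photo-archive",                     # placeholder bucket name
    VersioningConfiguration={"Status": "Enabled"},
)

# Afterwards, overwrites create new versions and deletes add a delete marker;
# earlier versions remain retrievable by version ID.
versions = s3.list_object_versions(Bucket="my-photo-archive", Prefix="report.csv")
```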
31.
In the basic monitoring package for EC2, Amazon CloudWatch provides the following metrics:
Correct Answer
D. Hypervisor visible metrics such as CPU utilization
Explanation
In the basic monitoring package for EC2, Amazon CloudWatch provides only hypervisor-visible metrics, such as CPU utilization, disk I/O, and network I/O, reported at five-minute intervals. Metrics visible only from inside the guest, such as memory utilization, the number of failed web transaction requests, or database connection counts, are not collected by default and require custom metrics or an agent. The hypervisor-visible metrics can be used to set alarms and automate actions based on specific thresholds or conditions.
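For illustration, a basic-monitoring metric can be read back with boto3 as in the sketch below; the instance ID is a placeholder, and the 300-second period matches the five-minute granularity of basic monitoring.

```python
import boto3
from datetime import datetime, timedelta, timezone

cloudwatch = boto3.client("cloudwatch")
end = datetime.now(timezone.utc)

stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",                   # a hypervisor-visible metric
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],   # placeholder instance ID
    StartTime=end - timedelta(hours=1),
    EndTime=end,
    Period=300,                                    # basic monitoring: 5-minute datapoints
    Statistics=["Average"],
)
print(stats["Datapoints"])
```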
32.
Your web application front end consists of multiple EC2 instances behind an Elastic Load Balancer. You configured ELB to perform health checks on these EC2 instances. If an instance fails to pass health checks, which statement will be true?
Correct Answer
A. The ELB stops sending traffic to the instance that failed its health check.
Explanation
When an EC2 instance fails to pass the health checks configured on the Elastic Load Balancer (ELB), the ELB will stop sending traffic to that instance. This means that the instance will no longer receive any requests from the ELB until it passes the health checks again. The other options mentioned in the question are not true. The instance does not get terminated automatically, quarantined for root cause analysis, or replaced automatically by the ELB.
33.
What are some of the key characteristics of Amazon Simple Storage Service (Amazon S3)? (Choose 3 answers)
Correct Answer(s)
A. All objects have a URL.
B. Amazon S3 can store unlimited amounts of data.
D. Amazon S3 uses a REST (Representational State Transfer) Application Program Interface (API).
Explanation
Some of the key characteristics of Amazon Simple Storage Service (Amazon S3) are that all objects have a URL, meaning they can be accessed and shared easily. Amazon S3 can also store unlimited amounts of data, providing scalable storage solutions. Additionally, Amazon S3 uses a REST (Representational State Transfer) API, allowing developers to interact with the service programmatically.
34.
You are building the database tier for an enterprise application that gets occasional activity throughout the day. Which storage type should you select as your default option?
Correct Answer
B. General Purpose Solid State Drive (SSD)
Explanation
For an enterprise application that experiences only occasional activity throughout the day, General Purpose SSD is the sensible default: it balances price and performance, handles occasional bursts of activity, and offers better I/O performance and durability than magnetic storage. Provisioned IOPS (SSD) is better suited to workloads with consistently high, I/O-intensive demand, while magnetic and Storage Area Network (SAN)-attached storage do not offer the same balance of performance and cost for this use case.
35.
Which of the following are IAM security features? (Choose 2 answers)
Correct Answer(s)
A. Password policies
C. MFA
Explanation
IAM (Identity and Access Management) is a service provided by Amazon Web Services (AWS) that allows users to manage access to their AWS resources. Password policies and MFA (Multi-Factor Authentication) are both IAM security features. Password policies help enforce strong password requirements and enhance the security of user accounts. MFA adds an extra layer of security by requiring users to provide additional authentication factors, such as a temporary code generated by a mobile app, in addition to their password. Consolidated Billing and Amazon DynamoDB global secondary indexes are not IAM security features, but rather different services or features provided by AWS.
36.
Why is the launch configuration referenced by the Auto Scaling group instead of being part of the Auto Scaling group?
Correct Answer
D. All of the above
Explanation
The launch configuration is referenced by the Auto Scaling group instead of being part of it because it allows for flexibility in managing the instances. By separating the launch configuration, it becomes easier to change the instance type, AMI, or security groups associated with the instances without disrupting the Auto Scaling group. This flexibility is beneficial when rolling out patches or making changes to the instances without affecting the overall functioning of the Auto Scaling group. Therefore, all of the given options are correct explanations for why the launch configuration is referenced by the Auto Scaling group.
37.
Elastic Load Balancing health checks may be: (Choose 3 answers)
Correct Answer(s)
A. A ping
C. A connection attempt
D. A page request
Explanation
Elastic Load Balancing health checks can be performed using various methods. One option is to use a ping, which sends a network request to the target to check if it is responsive. Another option is to perform a connection attempt, which tries to establish a connection with the target. Additionally, health checks can be done by sending a page request to the target, which checks if the target is able to serve web pages properly. These three methods help ensure that the target instances are healthy and able to handle incoming traffic effectively.
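A sketch of configuring such a check on a Classic Load Balancer with boto3 follows; the load balancer name and page path are assumptions, and a Target of "TCP:80" would express a plain connection attempt instead of a page request.

```python
import boto3

elb = boto3.client("elb")                          # Classic Load Balancer API

elb.configure_health_check(
    LoadBalancerName="web-elb",                    # hypothetical load balancer name
    HealthCheck={
        "Target": "HTTP:80/health.html",           # a page request; "TCP:80" would be a connection attempt
        "Interval": 30,
        "Timeout": 5,
        "UnhealthyThreshold": 2,
        "HealthyThreshold": 5,
    },
)
```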
38.
What aspect of an Amazon VPC is stateful?
Correct Answer
B. Security groups
Explanation
Security groups in an Amazon VPC are stateful. If inbound traffic is allowed by a rule, the response traffic for that connection is automatically allowed back out, regardless of the outbound rules (and vice versa for outbound requests). This simplifies the management of network security because there is no need to create matching rules for return traffic.
39.
You are creating a High-Performance Computing (HPC) cluster and need very low latency and high bandwidth between instances. What combination of the following will allow this? (Choose 3 answers)
Correct Answer(s)
A. Use an instance type with 10 Gbps network performance.
B. Put the instances in a placement group.
D. Enable enhanced networking on the instances.
Explanation
To achieve very low latency and high bandwidth between instances in a High-Performance Computing (HPC) cluster, the following combinations would be effective:
1. Using an instance type with 10 Gbps network performance ensures high network throughput.
2. Placing the instances in a placement group allows for low-latency communication within the group.
3. Enabling enhanced networking on the instances optimizes network performance by offloading network processing tasks to dedicated network interfaces.
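A boto3 sketch of the placement-group part is shown below; the group name, AMI ID, and instance type are placeholders, and enhanced networking itself depends on using an AMI and instance type that support it.

```python
import boto3

ec2 = boto3.client("ec2")

# Cluster placement groups pack instances close together for low latency
# and high bisection bandwidth.
ec2.create_placement_group(GroupName="hpc-cluster", Strategy="cluster")   # hypothetical name

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",               # placeholder AMI with enhanced-networking drivers
    InstanceType="c5n.18xlarge",                   # placeholder high-network-performance type
    MinCount=2,
    MaxCount=2,
    Placement={"GroupName": "hpc-cluster"},
)
```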
40.
Which of the following are features of Amazon Elastic Block Store (Amazon EBS)? (Choose 2 answers)
Correct Answer(s)
A. Data stored on Amazon EBS is automatically replicated within an Availability Zone.
C. Amazon EBS volumes can be encrypted transparently to workloads on the attached instance.
Explanation
Data stored on Amazon EBS is automatically replicated within an Availability Zone, which ensures high durability and availability of data. Amazon EBS volumes can also be encrypted transparently to workloads on the attached instance, providing an additional layer of security for sensitive data.
41.
You have an application that will run on an Amazon Elastic Compute Cloud (Amazon EC2) instance. The application will make requests to Amazon Simple Storage Service (Amazon S3) and Amazon DynamoDB. Using best practices, what type of AWS Identity and Access Management (IAM) identity should you create for your application to access the identified services?
Correct Answer
A. IAM role
Explanation
The best type of AWS Identity and Access Management (IAM) identity to create for the application to access Amazon S3 and Amazon DynamoDB is an IAM role. IAM roles provide temporary credentials that can be assumed by trusted entities, such as EC2 instances, without the need for long-term access keys. This allows for secure and controlled access to the identified services without the need for managing individual user credentials.
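Best practice here can be sketched with boto3 as below; the role, profile, and policy details are placeholders, and permission policies for Amazon S3 and DynamoDB would still need to be attached to the role.

```python
import json
import boto3

iam = boto3.client("iam")

# Trust policy that lets EC2 assume the role.
trust = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "ec2.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}
iam.create_role(RoleName="photo-app-role",          # hypothetical role name
                AssumeRolePolicyDocument=json.dumps(trust))

# An instance profile wraps the role so it can be attached to an EC2 instance.
iam.create_instance_profile(InstanceProfileName="photo-app-profile")
iam.add_role_to_instance_profile(InstanceProfileName="photo-app-profile",
                                 RoleName="photo-app-role")

# On the instance itself, the SDK picks up the role's temporary credentials
# automatically; no access keys are embedded in the application code.
buckets = boto3.client("s3").list_buckets()
tables = boto3.client("dynamodb").list_tables()
```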
42.
Which of the following can be used to address an Amazon Elastic Compute Cloud (Amazon EC2) instance over the web? (Choose 2 answers)
Correct Answer(s)
B. Public DNS name
D. Elastic IP address
Explanation
The Public DNS name and Elastic IP address can be used to address an Amazon EC2 instance over the web. The Public DNS name is a globally unique identifier that can be used to access the instance from the internet. The Elastic IP address is a static, public IP address that can be associated with the instance, providing a fixed address that can be used to access the instance over the internet.
43.
Elastic Load Balancing allows you to distribute traffic across which of the following?
Correct Answer
B. Multiple Availability Zones within a region
Explanation
Elastic Load Balancing allows you to distribute traffic across multiple Availability Zones within a region. This means that the load balancer can evenly distribute incoming traffic to multiple instances across different Availability Zones, ensuring high availability and fault tolerance. By spreading the load across multiple zones, it helps to prevent any single point of failure and provides better performance and scalability for applications.
44.
What administrative tasks are handled by AWS for Amazon Relational Database Service (Amazon RDS) databases? (Choose 4 answers)
Correct Answer(s)
A. Apply Security patch
B. Regular backups of the database
C. Deploying virtual infrastructure
E. Patching the operating system and database software
Explanation
AWS handles the administrative tasks of applying security patches, performing regular backups of the database, deploying the virtual infrastructure, and patching the operating system and database software for Amazon RDS databases. These tasks are important for ensuring the security, reliability, and performance of the databases.
45.
Your order-processing application processes orders extracted from a queue with two Reserved Instances processing 10 orders/minute. If an order fails during processing, then it is returned to the queue without penalty. Due to a weekend sale, the queues have several hundred orders backed up. While the backup is not catastrophic, you would like to drain it so that customers get their confirmation emails faster. What is a cost-effective way to drain the queue for orders?
Correct Answer
B. Deploy additional Spot Instances to assist in processing the orders
Explanation
Deploying additional Spot Instances to assist in processing the orders would be a cost-effective way to drain the queue for orders. Spot Instances are spare computing capacity available at a lower price compared to On-Demand or Reserved Instances. By deploying additional Spot Instances, you can increase the processing power and speed up order processing without incurring high costs. This solution is suitable for handling the temporary increase in orders during the weekend sale without the need for long-term commitments or high expenses.
46.
Your company has 17TB of financial trading records that need to be stored for seven years by law. Experience has shown that any record more than a year old is unlikely to be accessed. Which of the following storage plans meets these needs in the most cost-efficient manner?
Correct Answer
B. Store the data on Amazon Simple Storage Service (Amazon S3) with lifecycle policies that change the storage class to Amazon Glacier after one year, and delete the object after seven years.
Explanation
Storing the data on Amazon Simple Storage Service (Amazon S3) with lifecycle policies that change the storage class to Amazon Glacier after one year, and delete the object after seven years, is the most cost-efficient storage plan. This plan takes into account the fact that records older than a year are unlikely to be accessed, allowing for a lower-cost storage option like Amazon Glacier. Additionally, deleting the object after seven years ensures compliance with the legal requirement of storing the financial trading records for that duration.
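A sketch of such a lifecycle rule applied with boto3 is below; the bucket name is a placeholder and seven years is approximated as 2,555 days.

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="trading-records-archive",              # placeholder bucket name
    LifecycleConfiguration={"Rules": [{
        "ID": "archive-then-expire",
        "Filter": {"Prefix": ""},                  # apply the rule to every object
        "Status": "Enabled",
        "Transitions": [{"Days": 365, "StorageClass": "GLACIER"}],
        "Expiration": {"Days": 2555},              # roughly seven years
    }]},
)
```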
47.
Which Amazon Elastic Compute Cloud (Amazon EC2) pricing model allows you to pay a set hourly price for compute, giving you full control over when the instance launches and terminates?
Correct Answer
C. On Demand instances
Explanation
On Demand instances in Amazon EC2 allow users to pay a set hourly price for compute. This pricing model provides full control over when the instance launches and terminates, making it suitable for applications with short-term, irregular workloads or unpredictable usage patterns. Users can launch instances as needed without any upfront commitment or long-term contract. This flexibility makes On Demand instances a convenient choice for users who require immediate access to compute resources without any long-term commitment or pre-planning.
48.
Your company experiences fluctuations in traffic patterns to their e-commerce website based on flash sales. What service can help your company dynamically match the required compute capacity to the spike in traffic during flash sales?
Correct Answer
A. Auto Scaling
Explanation
Auto Scaling is a service provided by Amazon Web Services (AWS) that allows companies to automatically adjust their compute capacity based on demand. In the case of flash sales, where there is a sudden spike in traffic to the e-commerce website, Auto Scaling can dynamically increase the compute capacity to handle the increased load. This helps ensure that the website remains responsive and can handle the high traffic without any performance degradation or downtime. Auto Scaling can also automatically decrease the compute capacity once the traffic subsides, helping to optimize costs by only using the required resources.
49.
Amazon Simple Storage Service (Amazon S3) is an eventually consistent storage system. For what kinds of operations is it possible to get stale data as a result of eventual consistency? (Choose 2 answers)
Correct Answer(s)
B. GET or LIST after a DELETE
C. GET after overwrite PUT (PUT to an existing key)
Explanation
GET or LIST after a DELETE: When a DELETE operation is performed on an object in Amazon S3, it may take some time for the deletion to be fully propagated across all the storage nodes. Therefore, if a GET or LIST operation is performed immediately after the delete, it is possible to get stale data.
GET after overwrite PUT (PUT to an existing key): If a PUT operation is performed to overwrite an existing object in Amazon S3, it may take some time for the new version of the object to be fully propagated. If a GET operation is performed immediately after the overwrite PUT, it is possible to get stale data from the previous version of the object.
50.
Your web application needs four instances to support steady traffic nearly all of the time. On the last day of each month, the traffic triples. What is a cost-effective way to handle this traffic pattern?
Correct Answer
C. Run four Reserved Instances constantly, then add eight On-Demand Instances on the last day of each month.
Explanation
Running four Reserved Instances constantly provides a cost-effective solution for steady traffic throughout the year. By reserving these instances, you can take advantage of lower hourly rates. Additionally, adding eight On-Demand Instances on the last day of each month allows you to accommodate the increased traffic without incurring the higher costs associated with running all 12 instances as Reserved Instances. This approach balances cost-effectiveness with the flexibility to handle the spikes in traffic on the last day of each month.