
Amazon Web Services SAA-C03 AWS Certified Solutions Architect - Associate (SAA-C03) Exam Practice Test

Total 758 questions

AWS Certified Solutions Architect - Associate (SAA-C03) Questions and Answers

Question 1

A company wants to send data from its on-premises systems to Amazon S3 buckets. The company created the S3 buckets in three different accounts. The company must send the data privately without traveling across the internet. The company has no existing dedicated connectivity to AWS.

Which combination of steps should a solutions architect take to meet these requirements? (Select TWO.)

Options:

A.

Establish a networking account in the AWS Cloud. Create a private VPC in the networking account. Set up an AWS Direct Connect connection with a private VIF between the on-premises environment and the private VPC.

B.

Establish a networking account in the AWS Cloud. Create a private VPC in the networking account. Set up an AWS Direct Connect connection with a public VIF between the on-premises environment and the private VPC.

C.

Create an Amazon S3 interface endpoint in the networking account.

D.

Create an Amazon S3 gateway endpoint in the networking account.

E.

Establish a networking account in the AWS Cloud. Create a private VPC in the networking account. Peer VPCs from the accounts that host the S3 buckets with the VPC in the network account.

Question 2

A company runs a latency-sensitive gaming service in the AWS Cloud. The gaming service runs on a fleet of Amazon EC2 instances behind an Application Load Balancer (ALB). An Amazon DynamoDB table stores the gaming data. All the infrastructure is in a single AWS Region. The main user base is in that same Region.

A solutions architect needs to update the architecture to support a global expansion of the gaming service. The gaming service must operate with the least possible latency.

Which solution will meet these requirements?

Options:

A.

Create an Amazon CloudFront distribution in front of the ALB.

B.

Deploy an Amazon API Gateway regional API endpoint. Integrate the API endpoint with the ALB.

C.

Create an accelerator in AWS Global Accelerator. Add a listener. Configure the endpoint to point to the ALB.

D.

Deploy the ALB and the fleet of EC2 instances to another Region. Use Amazon Route 53 with geolocation routing.

Question 3

A company stores sensitive financial reports in an Amazon S3 bucket. To comply with auditing requirements, the company must encrypt the data at rest. Users must not have the ability to change the encryption method or remove encryption when the users upload data. The company must be able to audit all encryption and storage actions. Which solution will meet these requirements and provide the MOST granular control?

Options:

A.

Enable default server-side encryption with Amazon S3 managed keys (SSE-S3) for the S3 bucket. Apply a bucket policy that denies any upload requests that do not include the x-amz-server-side-encryption header.

B.

Configure server-side encryption with AWS KMS (SSE-KMS) keys. Use an S3 bucket policy to reject any data that is not encrypted by the designated key.

C.

Use client-side encryption before uploading the reports. Store the encryption keys in AWS Secrets Manager.

D.

Enable default server-side encryption with Amazon S3 managed keys (SSE-S3). Use AWS Identity and Access Management (IAM) to prevent users from changing S3 bucket settings.
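The enforcement described in option B can be expressed as an S3 bucket policy that denies any upload that does not use SSE-KMS with the designated key. The sketch below is illustrative only: the bucket name and KMS key ARN are placeholders, and the condition keys shown are the standard S3 encryption condition keys.

```python
import json

# Hypothetical bucket name and KMS key ARN, for illustration only.
BUCKET = "financial-reports-bucket"
KMS_KEY_ARN = "arn:aws:kms:us-east-1:111122223333:key/example-key-id"

def sse_kms_enforcement_policy(bucket: str, key_arn: str) -> dict:
    """Deny s3:PutObject unless the request uses SSE-KMS with the designated key."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "DenyNonKmsUploads",
                "Effect": "Deny",
                "Principal": "*",
                "Action": "s3:PutObject",
                "Resource": f"arn:aws:s3:::{bucket}/*",
                "Condition": {
                    "StringNotEquals": {
                        "s3:x-amz-server-side-encryption": "aws:kms"
                    }
                },
            },
            {
                "Sid": "DenyWrongKey",
                "Effect": "Deny",
                "Principal": "*",
                "Action": "s3:PutObject",
                "Resource": f"arn:aws:s3:::{bucket}/*",
                "Condition": {
                    "StringNotEquals": {
                        "s3:x-amz-server-side-encryption-aws-kms-key-id": key_arn
                    }
                },
            },
        ],
    }

policy = sse_kms_enforcement_policy(BUCKET, KMS_KEY_ARN)
print(json.dumps(policy, indent=2))
```

Because the deny statements apply to every principal, users cannot weaken or remove the encryption requirement on upload, and the KMS key usage is auditable through AWS CloudTrail.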

Question 4

A company needs an automated solution to detect cryptocurrency mining activity on Amazon EC2 instances. The solution must automatically isolate any identified EC2 instances for forensic analysis.

Which solution will meet these requirements?

Options:

A.

Create an Amazon EventBridge rule that runs when Amazon GuardDuty detects cryptocurrency mining activity. Configure the rule to invoke an AWS Lambda function to isolate the identified EC2 instances.

B.

Create an AWS Security Hub custom action that runs when Amazon GuardDuty detects cryptocurrency mining activity. Configure the custom action to invoke an AWS Lambda function to isolate the identified EC2 instances.

C.

Create an Amazon Inspector rule that runs when Amazon GuardDuty detects cryptocurrency mining activity. Configure the rule to invoke an AWS Lambda function to isolate the identified EC2 instances.

D.

Create an AWS Config custom rule that runs when AWS Config detects cryptocurrency mining activity. Configure the rule to invoke an AWS Lambda function to isolate the identified EC2 instances.
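The GuardDuty-to-EventBridge wiring in option A hinges on an event pattern that matches cryptocurrency finding types. The sketch below shows such a pattern plus a tiny local matcher to demonstrate how prefix matching works; the matcher is a simplified stand-in for EventBridge's own matching engine, not a reimplementation of it.

```python
# EventBridge event pattern for GuardDuty cryptocurrency findings.
# The prefix match catches finding types such as
# "CryptoCurrency:EC2/BitcoinTool.B!DNS".
event_pattern = {
    "source": ["aws.guardduty"],
    "detail-type": ["GuardDuty Finding"],
    "detail": {
        "type": [{"prefix": "CryptoCurrency"}]
    },
}

def matches(pattern: dict, event: dict) -> bool:
    """Tiny matcher for this specific pattern shape (illustration only)."""
    if event.get("source") not in pattern["source"]:
        return False
    if event.get("detail-type") not in pattern["detail-type"]:
        return False
    finding_type = event.get("detail", {}).get("type", "")
    return any(finding_type.startswith(m["prefix"])
               for m in pattern["detail"]["type"])

sample = {
    "source": "aws.guardduty",
    "detail-type": "GuardDuty Finding",
    "detail": {"type": "CryptoCurrency:EC2/BitcoinTool.B!DNS"},
}
print(matches(event_pattern, sample))  # True
```

The rule's target would be the isolation Lambda function, which typically swaps the instance's security group for a quarantine group that allows no traffic.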

Question 5

A financial company is migrating banking applications to AWS accounts managed through AWS Organizations. The applications store sensitive customer data on Amazon EBS volumes, and the company takes regular snapshots for backups.

The company must implement controls across all accounts to prevent sharing EBS snapshots publicly, with the least operational overhead.

Which solution will meet these requirements?

Options:

A.

Enable AWS Config rules for each OU to monitor EBS snapshot permissions.

B.

Enable block public access for EBS snapshots at the organization level.

C.

Create an IAM policy in the root account that prevents users from modifying snapshot permissions.

D.

Use AWS CloudTrail to track snapshot permission changes.

Question 6

A company has deployed a multi-tier web application to support a website. The architecture includes an Application Load Balancer (ALB) in public subnets, two Amazon Elastic Container Service (Amazon ECS) tasks in the public subnets, and a PostgreSQL cluster that runs on Amazon EC2 instances in private subnets.

The EC2 instances that host the PostgreSQL database run shell scripts that need to access an external API to retrieve product information. A solutions architect must design a solution to allow the EC2 instances to securely communicate with the external API without increasing operational overhead.

Which solution will meet these requirements?

Options:

A.

Assign public IP addresses to the EC2 instances in the private subnets. Configure security groups to allow outbound internet access.

B.

Configure a NAT gateway in the public subnets. Update the route table for the private subnets to route traffic to the NAT gateway.

C.

Configure a VPC peering connection between the private subnets and a public subnet that has access to the external API.

D.

Deploy an interface VPC endpoint to securely connect to the external API.

Question 7

An insurance company runs an application on premises to process contracts. The application processes jobs that are composed of many tasks. The individual tasks run for up to 5 minutes. Some jobs can take up to 24 hours in total to finish. If a task fails, the task must be reprocessed.

The company wants to migrate the application to AWS. The company will use Amazon S3 as part of the solution. The company wants to configure jobs to start automatically when a contract is uploaded to an S3 bucket.

Which solution will meet these requirements?

Options:

A.

Use AWS Lambda functions to process individual tasks. Create a primary Lambda function to handle the overall job processing by calling individual Lambda functions in sequence. Configure the S3 bucket to send an event notification to invoke the primary Lambda function to begin processing.

B.

Use a state machine in AWS Step Functions to handle the overall contract processing job. Configure the S3 bucket to send an event notification to Amazon EventBridge. Create a rule in Amazon EventBridge to target the state machine.

C.

Use an AWS Batch job to handle the overall contract processing job. Configure the S3 bucket to send an event notification to initiate the Batch job.

D.

Use an S3 event notification to notify an Amazon Simple Queue Service (Amazon SQS) queue when a contract is uploaded. Configure an AWS Lambda function to read messages from the queue and to run the contract processing job.
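The Step Functions approach in option B maps naturally onto a state machine definition in Amazon States Language. The sketch below is a minimal illustration: the Lambda ARN is a placeholder, and the single task stands in for the job's many tasks. The Retry block handles the reprocess-on-failure requirement, and standard workflows can run far longer than the 24-hour job duration.

```python
import json

# Minimal Amazon States Language definition (illustrative; resource ARN
# is a placeholder). A failed task is retried with exponential backoff,
# satisfying the "reprocess failed tasks" requirement.
definition = {
    "Comment": "Contract processing job",
    "StartAt": "ProcessTask",
    "States": {
        "ProcessTask": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:111122223333:function:process-task",
            "TimeoutSeconds": 300,  # individual tasks run up to 5 minutes
            "Retry": [
                {
                    "ErrorEquals": ["States.ALL"],
                    "IntervalSeconds": 5,
                    "MaxAttempts": 3,
                    "BackoffRate": 2.0,
                }
            ],
            "Next": "Finalize",
        },
        "Finalize": {"Type": "Succeed"},
    },
}
print(json.dumps(definition, indent=2))
```

An EventBridge rule matching `Object Created` events from the S3 bucket would name this state machine as its target, starting an execution per uploaded contract.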

Question 8

A company has customers located across the world. The company wants to use automation to secure its systems and network infrastructure. The company's security team must be able to track and audit all incremental changes to the infrastructure.

Which solution will meet these requirements?

Options:

A.

Use AWS Organizations to set up the infrastructure. Use AWS Config to track changes.

B.

Use AWS CloudFormation to set up the infrastructure. Use AWS Config to track changes.

C.

Use AWS Organizations to set up the infrastructure. Use AWS Service Catalog to track changes.

D.

Use AWS CloudFormation to set up the infrastructure. Use AWS Service Catalog to track changes.

Question 9

How can trade data from DynamoDB be ingested into an S3 data lake for near real-time analysis?

Options:

A.

Use DynamoDB Streams to invoke a Lambda function that writes to S3.

B.

Use DynamoDB Streams to invoke a Lambda function that writes to Data Firehose, which writes to S3.

C.

Enable Kinesis Data Streams on DynamoDB. Configure it to invoke a Lambda function that writes to S3.

D.

Enable Kinesis Data Streams on DynamoDB. Use Data Firehose to write to S3.

Question 10

A company hosts an ecommerce application that stores all data in a single Amazon RDS for MySQL DB instance that is fully managed by AWS. The company needs to mitigate the risk of a single point of failure.

Which solution will meet these requirements with the LEAST implementation effort?

Options:

A.

Modify the RDS DB instance to use a Multi-AZ deployment. Apply the changes during the next maintenance window.

B.

Migrate the current database to a new Amazon DynamoDB Multi-AZ deployment. Use AWS Database Migration Service (AWS DMS) with a heterogeneous migration strategy to migrate the current RDS DB instance to DynamoDB tables.

C.

Create a new RDS DB instance in a Multi-AZ deployment. Manually restore the data from the existing RDS DB instance from the most recent snapshot.

D.

Configure the DB instance in an Amazon EC2 Auto Scaling group with a minimum group size of three. Use Amazon Route 53 simple routing to distribute requests to all DB instances.

Question 11

A company stores petabytes of historical medical information on premises. The company has a process to manage encryption of the data to comply with regulations. The company needs a cloud-based solution for data backup, recovery, and archiving. The company must retain control over the encryption key material. Which combination of solutions will meet these requirements? (Select TWO.)

Options:

A.

Create an AWS Key Management Service (AWS KMS) key without key material. Import the company's key material into the KMS key.

B.

Create an AWS Key Management Service (AWS KMS) encryption key that contains key material generated by AWS KMS.

C.

Store the data in Amazon S3 Standard-Infrequent Access (S3 Standard-IA) storage. Use S3 Bucket Keys with AWS Key Management Service (AWS KMS) keys.

D.

Store the data in an Amazon S3 Glacier storage class. Use server-side encryption with customer-provided keys (SSE-C).

E.

Store the data in AWS Snowball devices. Use server-side encryption with AWS KMS keys (SSE-KMS).

Question 12

A company is building a serverless application that processes large volumes of data from a mobile app. A Lambda function processes the data and stores it in DynamoDB. The company must ensure the application can recover from failures and continue processing without losing records.

Which solution will meet these requirements?

Options:

A.

Configure the Lambda function with a dead-letter queue (DLQ) using SQS. Retry failed records from the DLQ with exponential backoff.

B.

Configure the Lambda function to read records from Amazon Data Firehose. Replay Firehose records in case of failures.

C.

Use Amazon OpenSearch Service to store failed records. Configure Lambda to retry failed records from OpenSearch. Use EventBridge for orchestration.

D.

Use Amazon SNS to store failed records. Configure Lambda to retry records from SNS. Use API Gateway to orchestrate retries.
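Option A's retry-with-exponential-backoff can be sketched as a small delay calculator for re-driving records from the dead-letter queue. The base delay and cap below are assumed values, not prescribed by AWS; full jitter is used to spread retries out.

```python
import random

# Illustrative exponential backoff with full jitter for retrying records
# from a Lambda dead-letter queue. Base delay and cap are assumed values.
def backoff_delay(attempt, base=1.0, cap=60.0, rng=None):
    """Full-jitter backoff: random delay in [0, min(cap, base * 2**attempt)]."""
    rng = rng or random.Random()
    return rng.uniform(0, min(cap, base * (2 ** attempt)))

# Deterministic demonstration of the growing upper bound per attempt:
bounds = [min(60.0, 1.0 * 2 ** a) for a in range(8)]
print(bounds)  # [1.0, 2.0, 4.0, 8.0, 16.0, 32.0, 60.0, 60.0]
```

Capping the delay keeps worst-case retry latency bounded, while jitter prevents many failed records from retrying in lockstep and overwhelming DynamoDB again.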

Question 13

A company provides a trading platform to customers. The platform uses an Amazon API Gateway REST API, AWS Lambda functions, and an Amazon DynamoDB table. Each trade that the platform processes invokes a Lambda function that stores the trade data in Amazon DynamoDB. The company wants to ingest trade data into a data lake in Amazon S3 for near real-time analysis. Which solution will meet these requirements with the LEAST operational overhead?

Options:

A.

Use Amazon DynamoDB Streams to capture the trade data changes. Configure DynamoDB Streams to invoke a Lambda function that writes the data to Amazon S3.

B.

Use Amazon DynamoDB Streams to capture the trade data changes. Configure DynamoDB Streams to invoke a Lambda function that writes the data to Amazon Data Firehose. Write the data from Data Firehose to Amazon S3.

C.

Enable Amazon Kinesis Data Streams on the DynamoDB table to capture the trade data changes. Configure Kinesis Data Streams to invoke a Lambda function that writes the data to Amazon S3.

D.

Enable Amazon Kinesis Data Streams on the DynamoDB table to capture the trade data changes. Configure a data stream to be the input for Amazon Data Firehose. Write the data from Data Firehose to Amazon S3.
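The DynamoDB Streams-to-Firehose pipeline in option B can be sketched as a small Lambda handler. The delivery stream name is a placeholder, and the Firehose client is injected as a parameter so the sketch stays locally runnable; in a real deployment it would be a boto3 `firehose` client.

```python
import json

# Sketch of a Lambda handler that forwards new trade images from a
# DynamoDB Streams batch to a Firehose delivery stream, which buffers
# records into S3. Stream name is a placeholder.
def handler(event, firehose, stream_name="trade-data-stream"):
    records = []
    for rec in event.get("Records", []):
        if rec.get("eventName") in ("INSERT", "MODIFY"):
            image = rec["dynamodb"].get("NewImage", {})
            records.append({"Data": (json.dumps(image) + "\n").encode()})
    if records:
        firehose.put_record_batch(DeliveryStreamName=stream_name,
                                  Records=records)
    return len(records)

class FakeFirehose:
    """Stand-in for the Firehose client, for local demonstration only."""
    def __init__(self):
        self.batches = []
    def put_record_batch(self, DeliveryStreamName, Records):
        self.batches.append((DeliveryStreamName, Records))

event = {"Records": [{"eventName": "INSERT",
                      "dynamodb": {"NewImage": {"tradeId": {"S": "T-1"}}}}]}
fake = FakeFirehose()
sent = handler(event, fake)
print(sent)  # 1
```

Firehose handles the batching, buffering, and delivery to S3, which is what keeps the operational overhead lower than writing objects from Lambda directly.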

Question 14

A company is planning to run an AI/ML workload on AWS. The company needs to train a model on a dataset that is in Amazon S3 Standard. A model training application requires multiple compute nodes and single-digit millisecond access to the data.

Which solution will meet these requirements in the MOST cost-effective way?

Options:

A.

Move the data to S3 Intelligent-Tiering. Point the model training application to S3 Intelligent-Tiering as the data source.

B.

Add partitions to the S3 bucket by adding random prefixes. Reconfigure the model training application to point to the new prefixes as the data source.

C.

Move the data to S3 Express One Zone. Point the model training application to S3 Express One Zone as the data source.

D.

Move the data to a General Purpose SSD (gp3) Amazon Elastic Block Store (Amazon EBS) volume attached to an Amazon EC2 instance. Point the model training application to the gp3 volume as the data source.

Question 15

A company wants to provide users with access to AWS resources. The company has 1,500 users and manages their access to on-premises resources through Active Directory user groups on the corporate network. However, the company does not want users to have to maintain another identity to access the resources. A solutions architect must manage user access to the AWS resources while preserving access to the on-premises resources.

What should the solutions architect do to meet these requirements?

Options:

A.

Create an IAM user for each user in the company. Attach the appropriate policies to each user.

B.

Use Amazon Cognito with an Active Directory user pool. Create roles with the appropriate policies attached.

C.

Define cross-account roles with the appropriate policies attached. Map the roles to the Active Directory groups.

D.

Configure Security Assertion Markup Language (SAML) 2.0-based federation. Create roles with the appropriate policies attached. Map the roles to the Active Directory groups.

Question 16

A solutions architect is designing a multi-Region disaster recovery (DR) strategy for a company. The company runs an application on Amazon EC2 instances in Auto Scaling groups that are behind an Application Load Balancer (ALB). The company hosts the application in the company's primary and secondary AWS Regions.

The application must respond to DNS queries from the secondary Region if the primary Region fails. Only one Region must serve traffic at a time.

Which solution will meet these requirements?

Options:

A.

Create an outbound endpoint in Amazon Route 53 Resolver. Create forwarding rules that determine how queries will be forwarded to DNS resolvers on the network. Associate the rules with VPCs in each Region.

B.

Create primary and secondary DNS records in Amazon Route 53. Configure health checks and a failover routing policy.

C.

Create a traffic policy in Amazon Route 53. Use a geolocation routing policy and a value type of ELB Application Load Balancer.

D.

Create an Amazon Route 53 profile. Associate DNS resources to the profile. Associate the profile with VPCs in each Region.
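Option B's failover configuration boils down to two alias records that share a name but carry different failover roles. The sketch below shows the shape of the Route 53 change batch; the domain, health check ID, hosted zone IDs, and ALB DNS names are all placeholders.

```python
# Illustrative Route 53 change batch for active-passive failover.
# All identifiers below are placeholders.
change_batch = {
    "Changes": [
        {
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "app.example.com",
                "Type": "A",
                "SetIdentifier": "primary",
                "Failover": "PRIMARY",
                "HealthCheckId": "hc-primary-placeholder",
                "AliasTarget": {
                    "HostedZoneId": "ZEXAMPLEPRIMARY",
                    "DNSName": "primary-alb.example.elb.amazonaws.com",
                    "EvaluateTargetHealth": True,
                },
            },
        },
        {
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "app.example.com",
                "Type": "A",
                "SetIdentifier": "secondary",
                "Failover": "SECONDARY",
                "AliasTarget": {
                    "HostedZoneId": "ZEXAMPLESECONDARY",
                    "DNSName": "secondary-alb.example.elb.amazonaws.com",
                    "EvaluateTargetHealth": True,
                },
            },
        },
    ]
}
print(len(change_batch["Changes"]))  # 2
```

With failover routing, Route 53 answers with the primary record while its health check passes and with the secondary record otherwise, so only one Region serves traffic at a time.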

Question 17

A company has a serverless web application that is composed of AWS Lambda functions. The application experiences spikes in traffic that cause increased latency because of cold starts. The company wants to improve the application's ability to handle traffic spikes and to minimize latency. The solution must optimize costs during periods when traffic is low.

Which solution will meet these requirements?

Options:

A.

Configure provisioned concurrency for the Lambda functions. Use AWS Application Auto Scaling to adjust the provisioned concurrency.

B.

Launch Amazon EC2 instances in an Auto Scaling group. Add a scheduled scaling policy to launch additional EC2 instances during peak traffic periods.

C.

Configure provisioned concurrency for the Lambda functions. Set a fixed concurrency level to handle the maximum expected traffic.

D.

Create a recurring schedule in Amazon EventBridge Scheduler. Use the schedule to invoke the Lambda functions periodically to warm the functions.
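Option A pairs provisioned concurrency with Application Auto Scaling so capacity tracks demand. The sketch below shows the parameters as plain dicts so it stays self-contained; the function name, alias, capacity bounds, and the 0.7 utilization target are assumed example values.

```python
# Application Auto Scaling parameters for Lambda provisioned concurrency
# (illustrative values; function name and alias are placeholders).
scalable_target = {
    "ServiceNamespace": "lambda",
    "ResourceId": "function:order-api:live",
    "ScalableDimension": "lambda:function:ProvisionedConcurrency",
    "MinCapacity": 1,    # keeps cost low during quiet periods
    "MaxCapacity": 100,  # absorbs traffic spikes without cold starts
}

scaling_policy = {
    "PolicyType": "TargetTrackingScaling",
    "TargetTrackingScalingPolicyConfiguration": {
        "TargetValue": 0.7,  # scale when utilization exceeds 70%
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "LambdaProvisionedConcurrencyUtilization"
        },
    },
}
print(scalable_target["MinCapacity"], scaling_policy["PolicyType"])
```

Target tracking adjusts provisioned concurrency between the min and max bounds, which is what distinguishes this option from the fixed concurrency level in option C.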

Question 18

A company is building a new web application on AWS. The application needs to consume files from a legacy on-premises application that runs a batch process and outputs approximately 1 GB of data every night to an NFS file mount.

A solutions architect needs to design a storage solution that requires minimal changes to the legacy application and keeps costs low.

Which solution will meet these requirements MOST cost-effectively?

Options:

A.

Deploy an Outpost in AWS Outposts to the on-premises location where the legacy application is stored. Configure the legacy application and the web application to store and retrieve the files in Amazon S3 on the Outpost.

B.

Deploy an AWS Storage Gateway Volume Gateway on premises. Point the legacy application to the Volume Gateway. Configure the web application to use the Amazon S3 bucket that the Volume Gateway uses.

C.

Deploy an Amazon S3 interface endpoint on AWS. Reconfigure the legacy application to store the files directly on an Amazon S3 endpoint. Configure the web application to retrieve the files from Amazon S3.

D.

Deploy an Amazon S3 File Gateway on premises. Point the legacy application to the File Gateway. Configure the web application to retrieve the files from the S3 bucket that the File Gateway uses.

Question 19

A company runs an HPC workload that uses a 200-TB file system on premises. The company needs to migrate this data to Amazon FSx for Lustre. Internet capacity is 10 Mbps, and all data must be migrated within 30 days.

Which solution will meet this requirement?

Options:

A.

Use AWS DMS to transfer data into S3 and link FSx for Lustre to the bucket.

B.

Deploy AWS DataSync on premises and transfer directly into FSx for Lustre.

C.

Use AWS Storage Gateway Volume Gateway to move data into FSx for Lustre.

D.

Use an AWS Snowball Edge storage-optimized device to transfer data into S3 and link FSx for Lustre to the bucket.

Question 20

A company wants to restrict access to the content of its web application. The company needs to protect the content by using authorization techniques that are available on AWS. The company also wants to implement a serverless architecture for authorization and authentication that has low login latency.

The solution must integrate with the web application and serve web content globally. The application currently has a small user base, but the company expects the application's user base to increase.

Which solution will meet these requirements?

Options:

A.

Configure Amazon Cognito for authentication. Implement Lambda@Edge for authorization. Configure Amazon CloudFront to serve the web application globally.

B.

Configure AWS Directory Service for Microsoft Active Directory for authentication. Implement AWS Lambda for authorization. Use an Application Load Balancer to serve the web application globally.

C.

Configure Amazon Cognito for authentication. Implement AWS Lambda for authorization. Use Amazon S3 Transfer Acceleration to serve the web application globally.

D.

Configure AWS Directory Service for Microsoft Active Directory for authentication. Implement Lambda@Edge for authorization. Use AWS Elastic Beanstalk to serve the web application globally.

Question 21

A company wants to publish a private website for its on-premises employees. The website consists of several HTML pages and image files. The website must be available only through HTTPS and must be available only to on-premises employees. A solutions architect plans to store the website files in an Amazon S3 bucket.

Which solution will meet these requirements?

Options:

A.

Create an S3 bucket policy to deny access when the source IP address is not the public IP address of the on-premises environment. Set up an Amazon Route 53 alias record to point to the S3 bucket. Provide the alias record to the on-premises employees to grant the employees access to the website.

B.

Create an S3 access point to provide website access. Attach an access point policy to deny access when the source IP address is not the public IP address of the on-premises environment. Provide the S3 access point alias to the on-premises employees to grant the employees access to the website.

C.

Create an Amazon CloudFront distribution that includes an origin access control (OAC) that is configured for the S3 bucket. Use AWS Certificate Manager for SSL. Use AWS WAF with an IP set rule that allows access for the on-premises IP address. Set up an Amazon Route 53 alias record to point to the CloudFront distribution.

D.

Create an Amazon CloudFront distribution that includes an origin access control (OAC) that is configured for the S3 bucket. Create a CloudFront signed URL for the objects in the bucket. Set up an Amazon Route 53 alias record to point to the CloudFront distribution. Provide the signed URL to the on-premises employees to grant the employees access to the website.

Question 22

A company hosts an application on AWS. The application gives users the ability to upload photos and store the photos in an Amazon S3 bucket. The company wants to use Amazon CloudFront and a custom domain name to upload the photo files to the S3 bucket in the eu-west-1 Region.

Which solution will meet these requirements? (Select TWO.)

Options:

A.

Use AWS Certificate Manager (ACM) to create a public certificate in the us-east-1 Region. Use the certificate in CloudFront.

B.

Use AWS Certificate Manager (ACM) to create a public certificate in eu-west-1. Use the certificate in CloudFront.

C.

Configure Amazon S3 to allow uploads from CloudFront. Configure S3 Transfer Acceleration.

D.

Configure Amazon S3 to allow uploads from CloudFront origin access control (OAC).

E.

Configure Amazon S3 to allow uploads from CloudFront. Configure an Amazon S3 website endpoint.

Question 23

A finance company hosts a data lake in Amazon S3. The company receives financial data records over SFTP each night from several third parties. The company runs its own SFTP server on an Amazon EC2 instance in a public subnet of a VPC. After the files are uploaded, they are moved to the data lake by a cron job that runs on the same instance. The SFTP server is reachable at sftp.example.com through a DNS record in Amazon Route 53.

What should a solutions architect do to improve the reliability and scalability of the SFTP solution?

Options:

A.

Move the EC2 instance into an Auto Scaling group. Place the EC2 instance behind an Application Load Balancer (ALB). Update the DNS record sftp.example.com in Route 53 to point to the ALB.

B.

Migrate the SFTP server to AWS Transfer for SFTP. Update the DNS record sftp.example.com in Route 53 to point to the server endpoint hostname.

C.

Migrate the SFTP server to a file gateway in AWS Storage Gateway. Update the DNS record sftp.example.com in Route 53 to point to the file gateway endpoint.

D.

Place the EC2 instance behind a Network Load Balancer (NLB). Update the DNS record sftp.example.com in Route 53 to point to the NLB.

Question 24

A company needs a solution to integrate transaction data from several Amazon DynamoDB tables into an existing Amazon Redshift data warehouse. The solution must maintain the provisioned throughput of DynamoDB.

Which solution will meet these requirements with the LEAST operational overhead?

Options:

A.

Create an Amazon S3 bucket. Configure DynamoDB to export to the bucket on a regular schedule. Use an Amazon Redshift COPY command to read from the S3 bucket.

B.

Use an Amazon Redshift COPY command to read directly from each DynamoDB table.

C.

Create an Amazon S3 bucket. Configure an AWS Lambda function to read from the DynamoDB tables and write to the S3 bucket on a regular schedule. Use Amazon Redshift Spectrum to access the data in the S3 bucket.

D.

Use Amazon Athena Federated Query with a DynamoDB connector and an Amazon Redshift connector to read directly from the DynamoDB tables.

Question 25

A company runs its databases on Amazon RDS for PostgreSQL. The company wants a secure solution to manage the master user password by rotating the password every 30 days. Which solution will meet these requirements with the LEAST operational overhead?

Options:

A.

Use Amazon EventBridge to schedule a custom AWS Lambda function to rotate the password every 30 days.

B.

Use the modify-db-instance command in the AWS CLI to change the password.

C.

Integrate AWS Secrets Manager with Amazon RDS for PostgreSQL to automate password rotation.

D.

Integrate AWS Systems Manager Parameter Store with Amazon RDS for PostgreSQL to automate password rotation.
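Option C's managed rotation can be sketched as the parameters you would pass to Secrets Manager's rotation API. Shown as a plain dict so the example stays self-contained; the secret ARN and rotation Lambda ARN are placeholders.

```python
# Illustrative Secrets Manager rotation configuration for the RDS master
# user secret. ARNs are placeholders; the 30-day rule satisfies the
# rotation requirement with no custom scheduling code.
rotate_secret_params = {
    "SecretId": "arn:aws:secretsmanager:us-east-1:111122223333:secret:rds-master",
    "RotationLambdaARN": "arn:aws:lambda:us-east-1:111122223333:function:rds-postgres-rotation",
    "RotationRules": {"AutomaticallyAfterDays": 30},
}
print(rotate_secret_params["RotationRules"]["AutomaticallyAfterDays"])  # 30
```

Secrets Manager invokes the rotation function on schedule and updates both the secret and the database password, which is why this option has less operational overhead than a hand-rolled EventBridge and Lambda solution.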

Question 26

A company has an organization in AWS Organizations. The company runs Amazon EC2 instances across four AWS accounts in the root organizational unit (OU). There are three nonproduction accounts and one production account. The company wants to prohibit users from launching EC2 instances of a certain size in the nonproduction accounts. The company has created a service control policy (SCP) to deny access to launch instances that use the prohibited types.

Which solutions to deploy the SCP will meet these requirements? (Select TWO.)

Options:

A.

Attach the SCP to the root OU for the organization.

B.

Attach the SCP to the three nonproduction Organizations member accounts.

C.

Attach the SCP to the Organizations management account.

D.

Create an OU for the production account. Attach the SCP to the OU. Move the production member account into the new OU.

E.

Create an OU for the required accounts. Attach the SCP to the OU. Move the nonproduction member accounts into the new OU.
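The SCP described in question 26 can be sketched as a deny on `ec2:RunInstances` conditioned on instance type. The instance types below are assumed examples; the question does not name the prohibited sizes.

```python
import json

# Illustrative SCP denying launches of prohibited instance types.
# The types listed are assumed examples only.
scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyProhibitedInstanceTypes",
            "Effect": "Deny",
            "Action": "ec2:RunInstances",
            "Resource": "arn:aws:ec2:*:*:instance/*",
            "Condition": {
                "StringEquals": {
                    "ec2:InstanceType": ["p4d.24xlarge", "x2iedn.32xlarge"]
                }
            },
        }
    ],
}
print(json.dumps(scp))
```

Attaching this SCP to an OU that contains only the nonproduction accounts (option E) scopes the restriction correctly; attaching it to the root OU would also deny the production account.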

Question 27

A media publishing company is building an application that allows users to print custom books. The frontend runs in a Docker container. Incoming order volume is highly variable and can exceed the throughput of the physical printing machines. Order-processing payloads can be up to 4 MB.

The company needs a scalable solution for handling incoming orders.

Which solution will meet this requirement?

Options:

A.

Use Amazon SQS to queue incoming orders. Use Lambda@Edge to process orders. Deploy the frontend on Amazon EKS.

B.

Use Amazon SNS to queue incoming orders. Use a Lambda function to process orders. Deploy the frontend on AWS Fargate.

C.

Use Amazon SQS to queue incoming orders. Use a Lambda function to process orders. Deploy the frontend on Amazon ECS with the Fargate launch type.

D.

Use Amazon SNS to queue incoming orders. Use Lambda@Edge to process orders. Deploy the frontend on Amazon EC2 instances.

Question 28

A company is building a new application that uses multiple serverless architecture components. The application architecture includes an Amazon API Gateway REST API and AWS Lambda functions to manage incoming requests.

The company needs a service to send messages that the REST API receives to multiple target Lambda functions for processing. The service must filter messages so each target Lambda function receives only the messages the function needs.

Which solution will meet these requirements with the LEAST operational overhead?

Options:

A.

Send the requests from the REST API to an Amazon Simple Notification Service (Amazon SNS) topic. Subscribe multiple Amazon Simple Queue Service (Amazon SQS) queues to the SNS topic. Configure the target Lambda functions to poll the SQS queues.

B.

Send the requests from the REST API to a set of Amazon EC2 instances that are configured to process messages. Configure the instances to filter messages and to invoke the target Lambda functions.

C.

Send the requests from the REST API to Amazon Managed Streaming for Apache Kafka (Amazon MSK). Configure Amazon MSK to publish the messages to the target Lambda functions.

D.

Send the requests from the REST API to multiple Amazon Simple Queue Service (Amazon SQS) queues. Configure the target Lambda functions to poll the SQS queues.
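The SNS fanout in option A relies on per-subscription filter policies so each SQS queue receives only its messages. The sketch below shows example filter policies plus a tiny local router that mimics the matching behavior; queue names and attribute values are assumed examples.

```python
# Example SNS subscription filter policies keyed by queue (assumed names
# and attribute values), plus a simplified local matcher for illustration.
filter_policies = {
    "orders-queue":   {"eventType": ["order.created", "order.updated"]},
    "payments-queue": {"eventType": ["payment.captured"]},
}

def routed_queues(message_attrs, policies):
    """Return the queues whose filter policy matches the message attributes."""
    matched = []
    for queue, policy in policies.items():
        if all(message_attrs.get(attr) in allowed
               for attr, allowed in policy.items()):
            matched.append(queue)
    return matched

print(routed_queues({"eventType": "order.created"}, filter_policies))
# ['orders-queue']
```

Because SNS evaluates the filter policies itself, no routing code has to be written or operated, which is what makes this the lowest-overhead option.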

Question 29

A company has an Amazon S3 data lake that is governed by AWS Lake Formation. The company wants to create a visualization in Amazon QuickSight by joining the data in the data lake with operational data that is stored in an Amazon Aurora MySQL database. The company wants to enforce column-level authorization so that the company's marketing team can access only a subset of columns in the database.

Which solution will meet these requirements with the LEAST operational overhead?

Options:

A.

Use Amazon EMR to ingest the data directly from the database to the QuickSight SPICE engine. Include only the required columns.

B.

Use AWS Glue Studio to ingest the data from the database to the S3 data lake. Attach an IAM policy to the QuickSight users to enforce column-level access control. Use Amazon S3 as the data source in QuickSight.

C.

Use AWS Glue Elastic Views to create a materialized view for the database in Amazon S3. Create an S3 bucket policy to enforce column-level access control for the QuickSight users. Use Amazon S3 as the data source in QuickSight.

D.

Use a Lake Formation blueprint to ingest the data from the database to the S3 data lake. Use Lake Formation to enforce column-level access control for the QuickSight users. Use Amazon Athena as the data source in QuickSight.

Question 30

A startup company is hosting a website for its customers on an Amazon EC2 instance. The website consists of a stateless Python application and a MySQL database. The website serves only a small amount of traffic. The company is concerned about the reliability of the instance and needs to migrate to a highly available architecture. The company cannot modify the application code.

Which combination of actions should a solutions architect take to achieve high availability for the website? (Select TWO.)

Options:

A.

Provision an internet gateway in each Availability Zone in use.

B.

Migrate the database to an Amazon RDS for MySQL Multi-AZ DB instance.

C.

Migrate the database to Amazon DynamoDB, and enable DynamoDB auto scaling.

D.

Use AWS DataSync to synchronize the database data across multiple EC2 instances.

E.

Create an Application Load Balancer to distribute traffic to an Auto Scaling group of EC2 instances that are distributed across two Availability Zones.

Question 31

A company is developing a platform to process large volumes of data for complex analytics and machine learning (ML) tasks. The platform must handle compute-intensive workloads. The workloads currently require 20 to 30 minutes for each data processing step.

The company wants a solution to accelerate data processing.

Which solution will meet these requirements with the LEAST operational overhead?

Options:

A.

Deploy three Amazon EC2 instances. Distribute the EC2 instances across three Availability Zones. Use traditional batch processing techniques for data processing.

B.

Create an Amazon EMR cluster. Use managed scaling. Install Apache Spark to assist with data processing.

C.

Create an AWS Lambda function for each data processing step. Deploy an Amazon Simple Queue Service (Amazon SQS) queue to relay data between Lambda functions.

D.

Create a series of AWS Lambda functions to process the data. Use AWS Step Functions to orchestrate the Lambda functions into data processing steps.

Question 32

A company runs multiple web applications on Amazon EC2 instances behind a single Application Load Balancer (ALB). The application experiences unpredictable traffic spikes throughout each day. The traffic spikes cause high latency. The unpredictable spikes last less than 3 hours. The company needs a solution to resolve the latency issue caused by traffic spikes.

Which solution will meet these requirements?

Options:

A.

Use EC2 instances in an Auto Scaling group. Configure the ALB and Auto Scaling group to use a target tracking scaling policy.

B.

Use EC2 Reserved Instances in an Auto Scaling group. Configure the Auto Scaling group to use a scheduled scaling policy based on peak traffic hours.

C.

Use EC2 Spot Instances in an Auto Scaling group. Configure the Auto Scaling group to use a scheduled scaling policy based on peak traffic hours.

D.

Use EC2 Reserved Instances in an Auto Scaling group. Replace the ALB with a Network Load Balancer (NLB).

Question 33

A company is building a serverless application to process orders from an ecommerce site. The application needs to handle bursts of traffic during peak usage hours and to maintain high availability. The orders must be processed asynchronously in the order the application receives them.

Which solution will meet these requirements?

Options:

A.

Use an Amazon Simple Notification Service (Amazon SNS) topic to receive orders. Use an AWS Lambda function to process the orders.

B.

Use an Amazon Simple Queue Service (Amazon SQS) FIFO queue to receive orders. Use an AWS Lambda function to process the orders.

C.

Use an Amazon Simple Queue Service (Amazon SQS) standard queue to receive orders. Use AWS Batch jobs to process the orders.

D.

Use an Amazon Simple Notification Service (Amazon SNS) topic to receive orders. Use AWS Batch jobs to process the orders.
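For reference, ordered asynchronous processing with an SQS FIFO queue hinges on the `MessageGroupId`. Below is a minimal sketch of the `SendMessage` parameters, built with Python's standard library only; the queue URL and order fields are illustrative, not from the question.

```python
import json

def build_fifo_send_params(queue_url, order, group_id):
    """Build SendMessage parameters for an SQS FIFO queue.

    Messages that share a MessageGroupId are delivered in the order
    they were sent; the deduplication ID suppresses duplicate retries
    of the same order within the 5-minute deduplication window.
    """
    return {
        "QueueUrl": queue_url,
        "MessageBody": json.dumps(order),
        "MessageGroupId": group_id,                         # preserves per-group ordering
        "MessageDeduplicationId": str(order["order_id"]),   # exactly-once intake
    }

params = build_fifo_send_params(
    "https://sqs.us-east-1.amazonaws.com/123456789012/orders.fifo",  # hypothetical
    {"order_id": 1001, "sku": "ABC"},
    "checkout",
)
```

A Lambda event source mapping on the FIFO queue would then invoke the processing function one message group at a time, preserving order.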

Question 34

A company uses Amazon Elastic Container Service (Amazon ECS) to run workloads that belong to service teams. Each service team uses an owner tag to specify the ECS containers that the team owns. The company wants to generate an AWS Cost Explorer report that shows how much each service team spends on ECS containers on a monthly basis.

Which combination of steps will meet these requirements in the MOST operationally efficient way? (Select TWO.)

Options:

A.

Create a custom report in Cost Explorer. Apply a filter for Amazon ECS.

B.

Create a custom report in Cost Explorer. Apply a filter for the owner resource tag.

C.

Set up AWS Compute Optimizer. Review the rightsizing recommendations.

D.

Activate the owner tag as a cost allocation tag. Group the Cost Explorer report by linked account.

E.

Activate the owner tag as a cost allocation tag. Group the Cost Explorer report by the owner cost allocation tag.
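For reference, once the owner tag is activated as a cost allocation tag, a Cost Explorer `GetCostAndUsage` request can group monthly ECS spend by that tag. A minimal sketch of the request body, using only Python's standard library (dates and tag key are illustrative):

```python
def build_ce_request(start, end, tag_key):
    """Build a Cost Explorer GetCostAndUsage request that filters to
    Amazon ECS and groups monthly cost by a cost allocation tag."""
    return {
        "TimePeriod": {"Start": start, "End": end},
        "Granularity": "MONTHLY",
        "Metrics": ["UnblendedCost"],
        "Filter": {
            "Dimensions": {
                "Key": "SERVICE",
                "Values": ["Amazon Elastic Container Service"],
            }
        },
        "GroupBy": [{"Type": "TAG", "Key": tag_key}],  # one group per team
    }

req = build_ce_request("2024-01-01", "2024-02-01", "owner")
```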

Question 35

A company runs multiple applications on Amazon EC2 instances in a VPC.

Application A runs in a private subnet that has a custom route table and network ACL.

Application B runs in a second private subnet in the same VPC.

The company needs to prevent Application A from sending traffic to Application B.

Which solution will meet this requirement?

Options:

A.

Add a deny outbound rule to a security group associated with Application B. Configure the rule to prevent Application B from sending traffic to Application A.

B.

Add a deny outbound rule to a security group associated with Application A. Configure the rule to prevent Application A from sending traffic to Application B.

C.

Add a deny outbound rule to the custom network ACL for the Application B subnet. Configure the rule to prevent Application B from sending traffic to the IP addresses associated with Application A.

D.

Add a deny outbound rule to the custom network ACL for the Application A subnet. Configure the rule to prevent Application A from sending traffic to the IP addresses associated with Application B.
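For reference, a subnet-level deny rule of the kind option D describes maps to EC2's `CreateNetworkAclEntry` parameters. A minimal sketch with illustrative values (the rule number and destination CIDR are assumptions):

```python
def build_nacl_deny_egress(rule_number, dest_cidr):
    """Parameters for CreateNetworkAclEntry: an outbound (egress) rule
    that denies all traffic from this subnet to the destination CIDR.
    Network ACL rules are evaluated in ascending rule-number order, so
    the deny must carry a lower number than any broader allow rule."""
    return {
        "RuleNumber": rule_number,
        "Protocol": "-1",        # all protocols
        "RuleAction": "deny",
        "Egress": True,          # applies to traffic leaving the subnet
        "CidrBlock": dest_cidr,  # Application B's subnet CIDR, illustrative
    }

entry = build_nacl_deny_egress(90, "10.0.2.0/24")
```

Note that security groups (options A and B) cannot express deny rules at all, which is why the network ACL is the mechanism here.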

Question 36

An application uses an Amazon SQS queue and two AWS Lambda functions. One of the Lambda functions pushes messages to the queue, and the other function polls the queue and receives queued messages.

A solutions architect needs to ensure that only the two Lambda functions can write to or read from the queue.

Which solution will meet these requirements?

Options:

A.

Attach an IAM policy to the SQS queue that grants the Lambda function principals read and write access. Attach an IAM policy to the execution role of each Lambda function that denies all access to the SQS queue except for the principal of each function.

B.

Attach a resource-based policy to the SQS queue to deny read and write access to the queue for any entity except the principal of each Lambda function. Attach an IAM policy to the execution role of each Lambda function that allows read and write access to the queue.

C.

Attach a resource-based policy to the SQS queue that grants the Lambda function principals read and write access to the queue. Attach an IAM policy to the execution role of each Lambda function that allows read and write access to the queue.

D.

Attach a resource-based policy to the SQS queue to deny all access to the queue. Attach an IAM policy to the execution role of each Lambda function that grants read and write access to the queue.
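For reference, a resource-based queue policy of the shape option C describes could look like the following sketch, built with Python's standard library. The queue and role ARNs are illustrative placeholders.

```python
import json

def build_queue_policy(queue_arn, producer_role_arn, consumer_role_arn):
    """A resource-based SQS queue policy granting one Lambda execution
    role send access and the other receive/delete access. Combined with
    matching allow statements on each function's execution role, no
    other principal is granted access to the queue."""
    return json.dumps({
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Principal": {"AWS": producer_role_arn},
                "Action": "sqs:SendMessage",
                "Resource": queue_arn,
            },
            {
                "Effect": "Allow",
                "Principal": {"AWS": consumer_role_arn},
                "Action": ["sqs:ReceiveMessage", "sqs:DeleteMessage",
                           "sqs:GetQueueAttributes"],
                "Resource": queue_arn,
            },
        ],
    })

policy = build_queue_policy(
    "arn:aws:sqs:us-east-1:123456789012:orders",      # hypothetical
    "arn:aws:iam::123456789012:role/producer-fn",     # hypothetical
    "arn:aws:iam::123456789012:role/consumer-fn",     # hypothetical
)
```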

Question 37

A company creates a VPC that has one public subnet and one private subnet. The company attaches an internet gateway to the VPC. An Application Load Balancer (ALB) in the public subnet communicates with Amazon EC2 instances in the private subnet.

The EC2 instances in the private subnet must be able to download operating system and application updates from the internet. The instances must not be accessible from the internet.

Which combination of steps will meet these requirements? (Select THREE.)

Options:

A.

Associate an Elastic IP address with the NAT gateway.

B.

Add a route of 0.0.0.0/0 to the private subnet route table. Set the NAT gateway as a target.

C.

Deploy a NAT gateway in the public subnet.

D.

Deploy a NAT gateway in the private subnet.

E.

Add a route of 0.0.0.0/0 to the public subnet route table. Set the NAT gateway as a target.

F.

Associate an Elastic IP address with the internet gateway.
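For reference, the routing step in this pattern maps to EC2's `CreateRoute` call: the private subnet's route table sends all non-VPC traffic to a NAT gateway that sits in the public subnet and holds an Elastic IP. A minimal sketch with illustrative resource IDs:

```python
def build_private_default_route(route_table_id, nat_gateway_id):
    """Parameters for CreateRoute: forward all internet-bound
    (0.0.0.0/0) traffic from the private subnet through a NAT gateway.
    Instances stay unreachable from the internet because no inbound
    path exists; the NAT gateway only allows return traffic."""
    return {
        "RouteTableId": route_table_id,
        "DestinationCidrBlock": "0.0.0.0/0",
        "NatGatewayId": nat_gateway_id,
    }

route = build_private_default_route("rtb-0abc1234", "nat-0def5678")  # hypothetical IDs
```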

Question 38

A company runs an ecommerce website on AWS. The website architecture uses a single Amazon EC2 instance to run a custom application that handles the website's functions. The website functions include product catalog management and customer checkout.

The company's website traffic and transaction volume are increasing rapidly. The company wants to re-architect the application from its current monolithic architecture to a loosely coupled architecture to enable independent scaling.

Which solution will meet these requirements?

Options:

A.

Configure an Auto Scaling group that includes multiple EC2 instances that each run a copy of the application. Use an Application Load Balancer (ALB) to distribute traffic across the EC2 instances.

B.

Refactor the application into microservices that run on Amazon ECS containers. Deploy each service to its own container. Use an Application Load Balancer (ALB) to distribute traffic.

C.

Refactor the web application and split the logic into frontend and backend tiers. Run the frontend tier on the existing EC2 instance. Add a second EC2 instance to run the backend tier.

D.

Migrate the entire application to a Kubernetes cluster that has a single container by using Amazon EKS. Implement Amazon Route 53 to geographically distribute traffic.

Question 39

A company wants a flexible compute solution that includes Amazon EC2 instances and AWS Fargate. The company does not want to commit to multi-year contracts.

Which purchasing option will meet these requirements MOST cost-effectively?

Options:

A.

Purchase a 1-year EC2 Instance Savings Plan with the All Upfront option.

B.

Purchase a 1-year Compute Savings Plan with the No Upfront option.

C.

Purchase a 1-year Compute Savings Plan with the Partial Upfront option.

D.

Purchase a 1-year Compute Savings Plan with the All Upfront option.

Question 40

A company hosts its multi-tier, public web application in the AWS Cloud. The web application runs on Amazon EC2 instances, and its database runs on Amazon RDS. The company is anticipating a large increase in sales during an upcoming holiday weekend. A solutions architect needs to build a solution to analyze the performance of the web application with a granularity of no more than 2 minutes.

What should the solutions architect do to meet this requirement?

Options:

A.

Send Amazon CloudWatch logs to Amazon Redshift. Use Amazon QuickSight to perform further analysis.

B.

Enable detailed monitoring on all EC2 instances. Use Amazon CloudWatch metrics to perform further analysis.

C.

Create an AWS Lambda function to fetch EC2 logs from Amazon CloudWatch Logs. Use Amazon CloudWatch metrics to perform further analysis.

D.

Send EC2 logs to Amazon S3. Use Amazon Redshift to fetch logs from the S3 bucket to process raw data for further analysis with Amazon QuickSight.

Question 41

A company is building a critical data processing application that will run on Amazon EC2 instances. The company must not run any two nodes on the same underlying hardware. The company requires at least 99.99% availability for the application.

Which solution will meet these requirements?

Options:

A.

Deploy the application to one Availability Zone by using a cluster placement group strategy.

B.

Deploy the application to three Availability Zones by using a spread placement group strategy.

C.

Deploy the application to three Availability Zones by using a cluster placement group strategy.

D.

Deploy the application to one Availability Zone by using a partition placement group strategy.

Question 42

A solutions architect is configuring a VPC that has public subnets and private subnets. The VPC and subnets use IPv4 CIDR blocks. There is one public subnet and one private subnet in each of three Availability Zones (AZs). An internet gateway is attached to the VPC.

The private subnets require access to the internet to allow Amazon EC2 instances to download software updates.

Which solution will meet this requirement?

Options:

A.

Create a NAT gateway in one of the public subnets. Update the route tables that are attached to the private subnets to forward non-VPC traffic to the NAT gateway.

B.

Create three NAT instances in each private subnet. Create a private route table for each Availability Zone that forwards non-VPC traffic to the NAT instances.

C.

Attach an egress-only internet gateway in the VPC. Update the route tables of the private subnets to forward non-VPC traffic to the egress-only internet gateway.

D.

Create a NAT gateway in one of the private subnets. Update the route tables that are attached to the private subnets to forward non-VPC traffic to the NAT gateway.

Question 43

An ecommerce company wants to collect user clickstream data from the company's website for real-time analysis. The website experiences fluctuating traffic patterns throughout the day. The company needs a scalable solution that can adapt to varying levels of traffic.

Which solution will meet these requirements?

Options:

A.

Use a data stream in Amazon Kinesis Data Streams in on-demand mode to capture the clickstream data. Use AWS Lambda to process the data in real time.

B.

Use Amazon Data Firehose to capture the clickstream data. Use AWS Glue to process the data in real time.

C.

Use Amazon Kinesis Video Streams to capture the clickstream data. Use AWS Glue to process the data in real time.

D.

Use Amazon Managed Service for Apache Flink (previously known as Amazon Kinesis Data Analytics) to capture the clickstream data. Use AWS Lambda to process the data in real time.
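For reference, a Kinesis data stream that adapts to fluctuating traffic is created in on-demand capacity mode, where no shard count is provisioned. A minimal sketch of the `CreateStream` parameters (the stream name is illustrative):

```python
def build_stream_request(name):
    """Parameters for Kinesis CreateStream in on-demand capacity mode:
    no ShardCount is specified, and the stream scales its throughput
    automatically with incoming traffic."""
    return {
        "StreamName": name,
        "StreamModeDetails": {"StreamMode": "ON_DEMAND"},
    }

req = build_stream_request("clickstream")  # hypothetical stream name
```

A Lambda function can then be attached as a consumer via an event source mapping to process records in near-real time.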

Question 44

A company has a large fleet of vehicles that are equipped with internet connectivity to send telemetry to the company. The company receives over 1 million data points every 5 minutes from the vehicles. The company uses the data in machine learning (ML) applications to predict vehicle maintenance needs and to preorder parts. The company produces visual reports based on the captured data. The company wants to migrate the telemetry ingestion, processing, and visualization workloads to AWS.

Which solution will meet these requirements?

Options:

A.

Use Amazon Timestream for LiveAnalytics to store the data points. Grant Amazon SageMaker permission to access the data for processing. Use Amazon QuickSight to visualize the data.

B.

Use Amazon DynamoDB to store the data points. Use DynamoDB Connector to ingest data from DynamoDB into Amazon EMR for processing. Use Amazon QuickSight to visualize the data.

C.

Use Amazon Neptune to store the data points. Use Amazon Kinesis Data Streams to ingest data from Neptune into an AWS Lambda function for processing. Use Amazon QuickSight to visualize the data.

D.

Use Amazon Timestream for LiveAnalytics to store the data points. Grant Amazon SageMaker permission to access the data for processing. Use Amazon Athena to visualize the data.

Question 45

A company needs to run a critical data processing workload that uses a Python script every night. The workload takes 1 hour to finish.

Which solution will meet these requirements MOST cost-effectively?

Options:

A.

Deploy an Amazon Elastic Container Service (Amazon ECS) cluster with the AWS Fargate launch type. Use the Fargate Spot capacity provider. Schedule the job to run once every night.

B.

Deploy an Amazon Elastic Container Service (Amazon ECS) cluster with the Amazon EC2 launch type. Schedule the job to run once every night.

C.

Create an AWS Lambda function that uses the existing Python code. Configure Amazon EventBridge to invoke the function once every night.

D.

Create an Amazon EC2 On-Demand Instance that runs Amazon Linux. Migrate the Python script to the instance. Use a cron job to schedule the script. Create an AWS Lambda function to start and stop the instance once every night.

Question 46

A company is building a serverless application to process ecommerce orders. The application must handle bursts of traffic and process orders asynchronously in the order received.

Which solution will meet these requirements?

Options:

A.

Use Amazon SNS with AWS Lambda.

B.

Use Amazon SQS FIFO with AWS Lambda.

C.

Use Amazon SQS standard with AWS Batch.

D.

Use Amazon SNS with AWS Batch.

Question 47

A company is deploying an application in three AWS Regions using an Application Load Balancer. Amazon Route 53 will be used to distribute traffic between these Regions.

Which Route 53 configuration should a solutions architect use to provide the MOST high-performing experience?

Options:

A.

Create an A record with a latency policy.

B.

Create an A record with a geolocation policy.

C.

Create a CNAME record with a failover policy.

D.

Create a CNAME record with a geoproximity policy.

Question 48

A company uses Amazon API Gateway to manage its REST APIs that third-party service providers access. The company must protect the REST APIs from SQL injection and cross-site scripting attacks.

What is the MOST operationally efficient solution that meets these requirements?

Options:

A.

Configure AWS Shield.

B.

Configure AWS WAF.

C.

Set up API Gateway with an Amazon CloudFront distribution Configure AWS Shield in CloudFront.

D.

Set up API Gateway with an Amazon CloudFront distribution. Configure AWS WAF in CloudFront.

Question 49

A company is migrating a Linux-based web server group to AWS. The web servers must access shared files by using the NFS protocol. The company must not make any changes to the web server application.

Which solution will meet these requirements?

Options:

A.

Create an Amazon S3 bucket to store the shared files in S3 Standard. Grant the S3 bucket access to the web servers.

B.

Configure an Amazon CloudFront distribution. Set an Amazon S3 bucket as the origin. Store the shared files in the S3 bucket.

C.

Create an Amazon Elastic File System (Amazon EFS) file system. Mount the EFS file system on the web servers.

D.

Create an Amazon FSx for Windows File Server file system. Configure SMB protocol access for the web servers.

Question 50

A company deploys a stateful application on Amazon EC2 On-Demand Instances in multiple Availability Zones behind an Application Load Balancer (ALB). The application workload is predictable, and the company has not received any CPU usage alerts. The company expects to run the application for at least 1 year.

The company expects CPU usage to increase by 50% during an upcoming 2-week holiday period. The company wants to optimize costs for the application for both the holiday period and normal operations.

Which solution will meet these requirements in the MOST cost-effective way?

Options:

A.

Continue to use On-Demand Instances to handle the existing workload. Purchase additional On-Demand Instances to handle the capacity requirement for the upcoming holiday period.

B.

Purchase a 12-month EC2 Instance Savings Plan to handle the existing workload. Use On-Demand Instances to handle the additional capacity requirement for the upcoming holiday period.

C.

Purchase a 12-month Compute Savings Plan to handle the existing workload. Use Spot Instances to handle the additional capacity requirement for the upcoming holiday period.

D.

Purchase a 12-month Compute Savings Plan to handle both the existing workload and the additional capacity requirement for the upcoming holiday period.

Question 51

A company has Amazon EC2 instances in multiple AWS Regions. The instances all store and retrieve confidential data from the same Amazon S3 bucket. The company wants to improve the security of its current architecture.

The company wants to ensure that only the Amazon EC2 instances within its VPC can access the S3 bucket. The company must block all other access to the bucket.

Which solution will meet this requirement?

Options:

A.

Use IAM policies to restrict access to the S3 bucket.

B.

Use server-side encryption (SSE) to encrypt data in the S3 bucket at rest. Store the encryption key on the EC2 instances.

C.

Create a VPC endpoint for Amazon S3. Configure an S3 bucket policy to allow connections only from the endpoint.

D.

Use AWS Key Management Service (AWS KMS) with customer-managed keys to encrypt the data before sending the data to the S3 bucket.
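For reference, the bucket-policy half of the approach option C describes is typically written as a blanket deny with a `aws:SourceVpce` condition, so that only requests arriving through the named VPC endpoint succeed. A minimal sketch using Python's standard library (bucket name and endpoint ID are illustrative):

```python
import json

def build_vpce_only_policy(bucket, vpce_id):
    """S3 bucket policy sketch: deny every S3 action on the bucket and
    its objects unless the request comes through the named VPC
    endpoint. The explicit Deny overrides any other Allow."""
    return json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "DenyOutsideVpcEndpoint",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                f"arn:aws:s3:::{bucket}",
                f"arn:aws:s3:::{bucket}/*",
            ],
            "Condition": {"StringNotEquals": {"aws:SourceVpce": vpce_id}},
        }],
    })

policy = build_vpce_only_policy("confidential-data-bucket", "vpce-0abc1234")  # hypothetical
```

Note that because the company's instances span multiple Regions, a gateway endpoint per VPC (and its endpoint ID in the condition) would be needed; locking a policy like this down too aggressively can also block administrative access, so it should be tested carefully.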

Question 52

A retail company is building an order fulfillment system using a microservices architecture on AWS. The system must store incoming orders durably until processing completes successfully. Multiple teams’ services process orders according to a defined workflow. Services must be scalable, loosely coupled, and able to handle sudden surges in order volume. The processing steps of each order must be centrally tracked.

Which solution will meet these requirements?

Options:

A.

Send incoming orders to an Amazon Simple Notification Service (Amazon SNS) topic. Start an AWS Step Functions workflow for each order that orchestrates the microservices. Use AWS Lambda functions for each microservice.

B.

Send incoming orders to an Amazon Simple Queue Service (Amazon SQS) queue. Start an AWS Step Functions workflow for each order that orchestrates the microservices. Use AWS Lambda functions for each microservice.

C.

Send incoming orders to an Amazon Simple Queue Service (Amazon SQS) queue. Use Amazon EventBridge to distribute events among the microservices. Use AWS Lambda functions for each microservice.

D.

Send incoming orders to an Amazon Simple Notification Service (Amazon SNS) topic. Subscribe Amazon EventBridge to the topic to distribute events among the microservices. Use AWS Lambda functions for each microservice.

Question 53

A company runs a workload in an AWS Region. Users connect to the workload by using an Amazon API Gateway REST API.

The company uses Amazon Route 53 as its DNS provider and has created a Route 53 Hosted Zone. The company wants to provide unique and secure URLs for all workload users.

Which combination of steps will meet these requirements with the MOST operational efficiency? (Select THREE.)

Options:

A.

Create a wildcard custom domain name in the Route 53 hosted zone as an alias for the API Gateway endpoint.

B.

Use AWS Certificate Manager (ACM) to request a wildcard certificate that matches the custom domain in a second Region.

C.

Create a hosted zone for each user in Route 53. Create zone records that point to the API Gateway endpoint.

D.

Use AWS Certificate Manager (ACM) to request a wildcard certificate that matches the custom domain name in the same Region.

E.

Use API Gateway to create multiple API endpoints for each user.

F.

Create a custom domain name in API Gateway for the REST API. Import the certificate from AWS Certificate Manager (ACM).

Question 54

A company is designing a web application on AWS. The application uses a VPN connection between the company's on-premises data center and the company's VPC.

The company uses Amazon Route 53 as its DNS service. The application must use private DNS records to communicate with the on-premises services from a VPC.

Which solution will meet these requirements MOST securely?

Options:

A.

Create a Route 53 Resolver outbound endpoint. Create a Resolver rule. Associate the Resolver rule with the VPC.

B.

Create a Route 53 Resolver inbound endpoint. Create a Resolver rule. Associate the Resolver rule with the VPC.

C.

Create a Route 53 private hosted zone. Associate the private hosted zone with the on-premises network.

D.

Create a Route 53 public hosted zone. Create a record for each on-premises service.

Question 55

A mining company is using Amazon S3 as its data lake. The company wants to analyze the data collected by the sensors in its mines. A data pipeline is being built to capture data from the sensors, ingest the data into an S3 bucket, and convert the data to Apache Parquet format. The data pipeline must be processed in near-real time. The data will be used for on-demand queries with Amazon Athena.

Which solution will meet these requirements?

Options:

A.

Use Amazon Data Firehose to invoke an AWS Lambda function that converts the data to Parquet format and stores the data in Amazon S3.

B.

Use Amazon Kinesis Data Streams to invoke an AWS Lambda function that converts the data to Parquet format and stores the data in Amazon S3.

C.

Use AWS DataSync to invoke an AWS Lambda function that converts the data to Parquet format and stores the data in Amazon S3.

D.

Use Amazon Simple Queue Service (Amazon SQS) to stream data directly to an AWS Glue job that converts the data to Parquet format and stores the data in Amazon S3.

Question 56

A company wants to share data that is collected from self-driving cars with the automobile community. The data will be made available from within an Amazon S3 bucket. The company wants to minimize its cost of making this data available to other AWS accounts.

What should a solutions architect do to accomplish this goal?

Options:

A.

Create an S3 VPC endpoint for the bucket.

B.

Configure the S3 bucket to be a Requester Pays bucket.

C.

Create an Amazon CloudFront distribution in front of the S3 bucket.

D.

Require that the files be accessible only with the use of the BitTorrent protocol.

Question 57

A company currently stores 5 TB of data in on-premises block storage systems. The company's current storage solution provides limited space for additional data. The company runs applications on premises that must be able to retrieve frequently accessed data with low latency. The company requires a cloud-based storage solution.

Which solution will meet these requirements with the MOST operational efficiency?

Options:

A.

Use Amazon S3 File Gateway. Integrate S3 File Gateway with the on-premises applications to store and directly retrieve files by using the SMB file system.

B.

Use an AWS Storage Gateway Volume Gateway with cached volumes as iSCSI targets.

C.

Use an AWS Storage Gateway Volume Gateway with stored volumes as iSCSI targets.

D.

Use an AWS Storage Gateway Tape Gateway. Integrate Tape Gateway with the on-premises applications to store virtual tapes in Amazon S3.

Question 58

A company uses AWS Organizations to manage multiple AWS accounts. The company needs a secure, event-driven architecture in which specific Amazon SNS topics in Account A can publish messages to specific Amazon SQS queues in Account B.

Which solution meets these requirements while maintaining least privilege?

Options:

A.

Create a new IAM role in Account A that can publish to any SQS queue. Share the role ARN with Account B.

B.

Add SNS topic ARNs to SQS queue policies in Account B. Configure SNS topics to publish to any queue. Encrypt the queue with an AWS KMS key.

C.

Modify the SQS queue policies in Account B to allow only specific SNS topic ARNs from Account A to publish messages. Ensure the SNS topics have publish permissions for the specific queue ARN.

D.

Create a shared IAM role across both accounts with permission to publish to all SQS queues. Enable cross-account access.
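For reference, the least-privilege pattern option C describes is an SQS queue policy in Account B that allows `sqs:SendMessage` from the SNS service only when the request originates from one specific topic ARN in Account A. A minimal sketch (all ARNs are illustrative):

```python
import json

def build_cross_account_queue_policy(queue_arn, topic_arn):
    """Queue policy sketch: only the named SNS topic in Account A may
    deliver messages to this queue in Account B. The aws:SourceArn
    condition pins the grant to that one topic."""
    return json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"Service": "sns.amazonaws.com"},
            "Action": "sqs:SendMessage",
            "Resource": queue_arn,
            "Condition": {"ArnEquals": {"aws:SourceArn": topic_arn}},
        }],
    })

policy = build_cross_account_queue_policy(
    "arn:aws:sqs:us-east-1:222222222222:orders-queue",  # Account B, hypothetical
    "arn:aws:sns:us-east-1:111111111111:orders-topic",  # Account A, hypothetical
)
```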

Question 59

A company is storing data in Amazon S3 buckets. The company needs to retain any objects that contain personally identifiable information (PII) that might need to be reviewed.

A solutions architect must develop an automated solution to identify objects that contain PII and apply the necessary controls to prevent deletion before review.

Which combination of steps should the solutions architect take to meet these requirements? (Select THREE.)

Options:

A.

Create a job in Amazon Macie to scan the S3 buckets for the relevant sensitive data identifiers.

B.

Move the identified objects to the S3 Glacier Deep Archive storage class.

C.

Create an AWS Lambda function that performs an S3 Object Lock legal hold operation on the identified objects.

D.

Create an AWS Lambda function that applies an S3 Object Lock retention period to the identified objects in governance mode.

E.

Create an Amazon EventBridge rule that invokes the AWS Lambda function when Amazon Macie detects sensitive data.

F.

Configure multi-factor authentication (MFA) delete on the S3 buckets.
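For reference, the Lambda step that places a legal hold on an object Macie has flagged maps to S3's `PutObjectLegalHold` parameters. A minimal sketch (bucket and key are illustrative):

```python
def build_legal_hold_params(bucket, key):
    """Parameters for S3 PutObjectLegalHold: a legal hold blocks
    deletion of the object version until the hold is explicitly
    removed after review, independent of any retention period.
    The bucket must have S3 Object Lock enabled."""
    return {
        "Bucket": bucket,
        "Key": key,
        "LegalHold": {"Status": "ON"},
    }

hold = build_legal_hold_params("pii-review-bucket", "exports/users.csv")  # hypothetical
```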

Question 60

A company runs an application on Amazon EC2 instances. The application needs to access an Amazon RDS database. The company wants to grant the EC2 instances access permissions to the RDS database while following the principle of least privilege.

Which solution will meet these requirements?

Options:

A.

Create an IAM user that has a policy that grants administrative permissions. Use the IAM user's access keys on the EC2 instances to access the RDS database.

B.

Create an IAM user that has a policy that grants the minimum required permissions to access the RDS database. Embed the IAM user's access keys on the EC2 instances to access the RDS database.

C.

Create an IAM role that has a policy that grants the minimum required permissions to access the RDS database. Attach the IAM role access key and the IAM role secret key to the EC2 instance profile.

D.

Create an IAM role that has a policy that grants the minimum required permissions to access the RDS database. Attach the IAM role to an EC2 instance profile. Associate the instance profile with the instances.
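For reference, a role usable through an instance profile needs a trust policy naming the EC2 service as the principal; the instance then receives short-lived credentials automatically instead of embedded access keys. A minimal sketch using Python's standard library:

```python
import json

def build_ec2_trust_policy():
    """Trust policy sketch for an EC2 instance role: it lets the EC2
    service assume the role, so instances obtain temporary credentials
    through the attached instance profile."""
    return json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"Service": "ec2.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }],
    })

trust = build_ec2_trust_policy()
```

A separate permissions policy attached to the role would then grant only the minimum RDS access the application needs.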

Question 61

A global ecommerce company runs its critical workloads on AWS. The workloads use an Amazon RDS for PostgreSQL DB instance that is configured for a Multi-AZ deployment.

Customers have reported application timeouts when the company undergoes database failovers. The company needs a resilient solution to reduce failover time.

Which solution will meet these requirements?

Options:

A.

Create an Amazon RDS Proxy. Assign the proxy to the DB instance.

B.

Create a read replica for the DB instance. Move the read traffic to the read replica.

C.

Enable Performance Insights. Monitor the CPU load to identify the timeouts.

D.

Take regular automatic snapshots. Copy the automatic snapshots to multiple AWS Regions.

Question 62

A company has stored millions of objects across multiple prefixes in an Amazon S3 bucket by using the Amazon S3 Glacier Deep Archive storage class. The company needs to delete all data older than 3 years except for a subset of data that must be retained. The company has identified the data that must be retained and wants to implement a serverless solution.

Which solution will meet these requirements?

Options:

A.

Use S3 Inventory to list all objects. Use the AWS CLI to create a script that runs on an Amazon EC2 instance that deletes objects from the inventory list.

B.

Use AWS Batch to delete objects older than 3 years except for the data that must be retained.

C.

Provision an AWS Glue crawler to query objects older than 3 years. Save the manifest file of old objects. Create a script to delete objects in the manifest.

D.

Enable S3 Inventory. Create an AWS Lambda function to filter and delete objects. Invoke the Lambda function with S3 Batch Operations to delete objects by using the inventory reports.

Question 63

A company wants to implement a data lake in the AWS Cloud. The company must ensure that only specific teams have access to sensitive data in the data lake. The company must have row-level access control for the data lake.

Which solution will meet these requirements?

Options:

A.

Use Amazon RDS to store the data. Use IAM roles and permissions for data governance and access control.

B.

Use Amazon Redshift to store the data. Use IAM roles and permissions for data governance and access control.

C.

Use Amazon S3 to store the data. Use AWS Lake Formation for data governance and access control.

D.

Use AWS Glue Catalog to store the data. Use AWS Glue DataBrew for data governance and access control.

Question 64

A company recently migrated a monolithic application to an Amazon EC2 instance and Amazon RDS. The application has tightly coupled modules. The existing design of the application gives the application the ability to run on only a single EC2 instance.

The company has noticed high CPU utilization on the EC2 instance during peak usage times. The high CPU utilization corresponds to degraded performance on Amazon RDS for read requests. The company wants to reduce the high CPU utilization and improve read request performance.

Which solution will meet these requirements?

Options:

A.

Resize the EC2 instance to an EC2 instance type that has more CPU capacity. Configure an Auto Scaling group with a minimum and maximum size of 1. Configure an RDS read replica for read requests.

B.

Resize the EC2 instance to an EC2 instance type that has more CPU capacity. Configure an Auto Scaling group with a minimum and maximum size of 1. Add an RDS read replica and redirect all read/write traffic to the replica.

C.

Configure an Auto Scaling group with a minimum size of 1 and maximum size of 2. Resize the RDS DB instance to an instance type that has more CPU capacity.

D.

Resize the EC2 instance to an EC2 instance type that has more CPU capacity. Configure an Auto Scaling group with a minimum and maximum size of 1. Resize the RDS DB instance to an instance type that has more CPU capacity.

Question 65

A company runs a mobile game app on AWS. The app stores data for every user session. The data updates frequently during a gaming session. The app stores up to 256 KB for each session. Sessions can last up to 48 hours.

The company wants to automate the deletion of expired session data. The company must be able to restore all session data automatically if necessary.

Which solution will meet these requirements?

Options:

A.

Use an Amazon DynamoDB table to store the session data. Enable point-in-time recovery (PITR) and TTL for the table. Select the corresponding attribute for TTL in the session data.

B.

Use an Amazon MemoryDB table to store the session data. Enable point-in-time recovery (PITR) and TTL for the table. Select the corresponding attribute for TTL in the session data.

C.

Store session data in an Amazon S3 bucket. Use the S3 Standard storage class. Enable S3 Versioning for the bucket. Create an S3 Lifecycle configuration to expire objects after 48 hours.

D.

Store session data in an Amazon S3 bucket. Use the S3 Intelligent-Tiering storage class. Enable S3 Versioning for the bucket. Create an S3 Lifecycle configuration to expire objects after 48 hours.
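
For context on the TTL mechanism that the DynamoDB options reference: DynamoDB deletes an item shortly after the epoch timestamp stored in the table's designated TTL attribute passes. A minimal stdlib-only sketch of computing that attribute for a 48-hour session (the attribute name `expires_at` is an assumption for illustration, not part of the question):

```python
import time

SESSION_TTL_SECONDS = 48 * 60 * 60  # sessions last up to 48 hours

def session_item(user_id: str, created_at: float) -> dict:
    """Build a session item whose TTL attribute holds an epoch expiry time.

    DynamoDB TTL expects a Number attribute containing seconds since the
    epoch; items are deleted after that time passes. The attribute name
    'expires_at' is illustrative only.
    """
    created = int(created_at)
    return {
        "user_id": user_id,
        "created_at": created,
        "expires_at": created + SESSION_TTL_SECONDS,
    }

item = session_item("player-42", time.time())
```

When TTL is enabled on the table, this `expires_at` attribute is what would be selected as the TTL attribute; point-in-time recovery can restore items, including TTL-deleted ones, to any second in the recovery window.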

Question 66

A company manages millions of documents in hundreds of Amazon S3 buckets in multiple AWS Regions. The company must determine whether any of the S3 buckets contain personally identifiable information (PII).

Which solution will meet this requirement with the LEAST operational overhead?

Options:

A.

Use Amazon Detective to detect PII in the S3 buckets.

B.

Use AWS Trusted Advisor to generate PII notifications.

C.

Use Amazon Macie to detect PII in the S3 buckets.

D.

Use AWS Lambda functions to review each file in the S3 buckets to identify PII.

Question 67

A company uses AWS Organizations to manage multiple AWS accounts. Each department in the company has its own AWS account. A security team needs to implement centralized governance and control to enforce security best practices across all accounts. The team wants to have control over which AWS services each account can use. The team needs to restrict access to sensitive resources based on IP addresses or geographic regions. The root user must be protected with multi-factor authentication (MFA) across all accounts.

Which solution will meet these requirements?
Options:

A.

Use AWS Identity and Access Management (IAM) to manage IAM users and IAM roles in each account. Implement MFA for the root user in each account. Enforce service restrictions by using AWS managed prefix lists.

B.

Use AWS Control Tower to establish a multi-account environment. Use service control policies (SCPs) to enforce service restrictions in AWS Organizations. Configure MFA for the root user across all accounts.

C.

Use AWS Systems Manager to enforce service restrictions across multiple accounts. Use IAM policies to enforce MFA for the root user across all accounts.

D.

Use AWS IAM Identity Center to manage user access and to enforce service restrictions by using permissions boundaries in each account.

Question 68

A solutions architect is designing an application that helps users fill out and submit registration forms. The solutions architect plans to use a two-tier architecture that includes a web application server tier and a worker tier.

The application needs to process submitted forms quickly. The application needs to process each form exactly once. The solution must ensure that no data is lost.

Which solution will meet these requirements?

Options:

A.

Use an Amazon Simple Queue Service (Amazon SQS) FIFO queue between the web application server tier and the worker tier to store and forward form data.

B.

Use an Amazon API Gateway HTTP API between the web application server tier and the worker tier to store and forward form data.

C.

Use an Amazon Simple Queue Service (Amazon SQS) standard queue between the web application server tier and the worker tier to store and forward form data.

D.

Use an AWS Step Functions workflow. Create a synchronous workflow between the web application server tier and the worker tier that stores and forwards form data.
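
The "exactly once" wording in this question hinges on SQS FIFO deduplication: within the 5-minute deduplication window, a message sent with a previously seen MessageDeduplicationId is accepted but not enqueued again. A stdlib-only simulation of that behavior (the class and IDs are illustrative; this is not the SQS API):

```python
from collections import deque

class FifoQueueSim:
    """Toy model of SQS FIFO deduplication within one dedup window."""

    def __init__(self):
        self._messages = deque()
        self._seen_dedup_ids = set()

    def send(self, body: str, dedup_id: str) -> bool:
        # A duplicate dedup_id is acknowledged but not enqueued again.
        if dedup_id in self._seen_dedup_ids:
            return False
        self._seen_dedup_ids.add(dedup_id)
        self._messages.append(body)
        return True

    def receive(self):
        return self._messages.popleft() if self._messages else None

q = FifoQueueSim()
q.send("form-123", dedup_id="form-123")
q.send("form-123", dedup_id="form-123")  # client retry of the same submission
delivered = []
while (msg := q.receive()) is not None:
    delivered.append(msg)
```

A standard queue, by contrast, offers at-least-once delivery, so the same form could be processed more than once without application-level deduplication.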

Question 69

A global media streaming company is migrating its user authentication and content delivery services to AWS. The company wants to use Amazon API Gateway for user authentication and authorization. The company needs a solution that restricts API access to AWS Regions in the United States and ensures minimal latency.

Which solution will meet these requirements?

Options:

A.

Create an API Gateway REST API. Configure an AWS WAF firewall in the same Region. Implement AWS WAF rules to deny requests that originate from Regions outside the United States. Associate the AWS WAF firewall with the API Gateway REST API.

B.

Create an API Gateway HTTP API. Configure an AWS WAF firewall in a different Region. Implement AWS WAF rules to deny requests that originate from Regions outside the United States. Associate the AWS WAF firewall with the API Gateway HTTP API.

C.

Create an API Gateway REST API. Configure an AWS WAF firewall in a different Region. Implement AWS WAF rules to deny requests that originate from Regions outside the United States. Associate the AWS WAF firewall with the API Gateway REST API.

D.

Create an API Gateway HTTP API. Configure an AWS WAF firewall in the same Region. Implement AWS WAF rules to deny requests that originate from Regions outside the United States. Associate the AWS WAF firewall with the API Gateway HTTP API.

Question 70

A company runs a critical public application on Amazon Elastic Kubernetes Service (Amazon EKS) clusters. The application has a microservices architecture. The company needs to implement a solution that collects, aggregates, and summarizes metrics and logs from the application in a centralized location.

Which solution will meet these requirements in the MOST operationally efficient way?

Options:

A.

Run the Amazon CloudWatch agent in the existing EKS cluster. Use a CloudWatch dashboard to view the metrics and logs.

B.

Configure a data stream in Amazon Kinesis Data Streams. Use Amazon Kinesis Data Firehose to read events and to deliver the events to an Amazon S3 bucket. Use Amazon Athena to view the events.

C.

Configure AWS CloudTrail to capture data events. Use Amazon OpenSearch Service to query CloudTrail.

D.

Configure Amazon CloudWatch Container Insights in the existing EKS cluster. Use a CloudWatch dashboard to view the metrics and logs.

Question 71

A company is developing software that uses a PostgreSQL database schema. The company needs to configure development environments and test environments for its developers.

Each developer at the company uses their own development environment, which includes a PostgreSQL database. On average, each development environment is used for an 8-hour workday. The test environments will be used for load testing that can take up to 2 hours each day.

Which solution will meet these requirements MOST cost-effectively?

Options:

A.

Configure development environments and test environments with their own Amazon Aurora Serverless v2 PostgreSQL database.

B.

For each development environment, configure an Amazon RDS for PostgreSQL Single-AZ DB instance. For the test environment, configure a single Amazon RDS for PostgreSQL Multi-AZ DB instance.

C.

Configure development environments and test environments with their own Amazon Aurora PostgreSQL DB cluster.

D.

Configure an Amazon Aurora global database. Allow developers to connect to the database with their own credentials.

Question 72

A solutions architect needs to optimize a large data analytics job that runs on an Amazon EMR cluster. The job takes 13 hours to finish. The cluster has multiple core nodes and worker nodes deployed on large, compute-optimized instances.

After reviewing EMR logs, the solutions architect discovers that several nodes are idle for more than 5 hours while the job is running. The solutions architect needs to optimize cluster performance.

Which solution will meet this requirement MOST cost-effectively?

Options:

A.

Increase the number of core nodes to ensure there is enough processing power to handle the analytics job without any idle time.

B.

Use the EMR managed scaling feature to automatically resize the cluster based on workload.

C.

Migrate the analytics job to a set of AWS Lambda functions. Configure reserved concurrency for the functions.

D.

Migrate the analytics job core nodes to a memory-optimized instance type to reduce the total job runtime.

Question 73

A company is migrating its databases to Amazon RDS for PostgreSQL. The company is migrating its applications to Amazon EC2 instances. The company wants to optimize costs for long-running workloads.

Which solution will meet this requirement MOST cost-effectively?

Options:

A.

Use On-Demand Instances for the Amazon RDS for PostgreSQL workloads. Purchase a 1 year Compute Savings Plan with the No Upfront option for the EC2 instances.

B.

Purchase Reserved Instances for a 1 year term with the No Upfront option for the Amazon RDS for PostgreSQL workloads. Purchase a 1 year EC2 Instance Savings Plan with the No Upfront option for the EC2 instances.

C.

Purchase Reserved Instances for a 1 year term with the Partial Upfront option for the Amazon RDS for PostgreSQL workloads. Purchase a 1 year EC2 Instance Savings Plan with the Partial Upfront option for the EC2 instances.

D.

Purchase Reserved Instances for a 3 year term with the All Upfront option for the Amazon RDS for PostgreSQL workloads. Purchase a 3 year EC2 Instance Savings Plan with the All Upfront option for the EC2 instances.

Question 74

The customers of a finance company request appointments with financial advisors by sending text messages. A web application that runs on Amazon EC2 instances accepts the appointment requests. The text messages are published to an Amazon Simple Queue Service (Amazon SQS) queue through the web application. Another application that runs on EC2 instances then sends meeting invitations and meeting confirmation email messages to the customers. After successful scheduling, this application stores the meeting information in an Amazon DynamoDB database.

As the company expands, customers report that their meeting invitations are taking longer to arrive.

What should a solutions architect recommend to resolve this issue?

Options:

A.

Add a DynamoDB Accelerator (DAX) cluster in front of the DynamoDB database.

B.

Add an Amazon API Gateway API in front of the web application that accepts the appointment requests.

C.

Add an Amazon CloudFront distribution. Set the origin as the web application that accepts the appointment requests.

D.

Add an Auto Scaling group for the application that sends meeting invitations. Configure the Auto Scaling group to scale based on the depth of the SQS queue.
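
Scaling on queue depth, as option D describes, usually means deriving a desired capacity from the backlog. A stdlib sketch of that arithmetic (the per-instance throughput figure and the group bounds are assumed parameters, not from the question):

```python
import math

def desired_capacity(queue_depth: int, msgs_per_instance: int,
                     min_size: int = 1, max_size: int = 10) -> int:
    """Instances needed to drain the backlog, clamped to the group's bounds.

    msgs_per_instance is how many messages one worker can process per
    scaling interval; the value used here is an illustrative assumption.
    """
    if queue_depth <= 0:
        return min_size
    needed = math.ceil(queue_depth / msgs_per_instance)
    return max(min_size, min(max_size, needed))
```

In practice this is the "backlog per instance" pattern: publish the computed metric to CloudWatch and let a target tracking scaling policy keep it near a target value.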

Question 75

A company receives data transfers from a small number of external clients that use SFTP software on an Amazon EC2 instance. The clients use an SFTP client to upload data. The clients use SSH keys for authentication. Every hour, an automated script transfers new uploads to an Amazon S3 bucket for processing.

The company wants to move the transfer process to an AWS managed service and to reduce the time required to start data processing. The company wants to retain the existing user management and SSH key generation process. The solution must not require clients to make significant changes to their existing processes.

Which solution will meet these requirements?

Options:

A.

Reconfigure the script that runs on the EC2 instance to run every 15 minutes. Create an S3 Event Notifications rule for all new object creation events. Set an Amazon Simple Notification Service (Amazon SNS) topic as the destination.

B.

Create an AWS Transfer Family SFTP server that uses the existing S3 bucket as a target. Use service-managed users to enable authentication.

C.

Require clients to add the AWS DataSync agent into their local environments. Create an IAM user for each client that has permission to upload data to the target S3 bucket.

D.

Create an AWS Transfer Family SFTP connector that has permission to access the target S3 bucket for each client. Store credentials in AWS Systems Manager. Create an IAM role to allow the SFTP connector to securely use the credentials.

Question 76

A company hosts an application that allows authorized users to upload and download documents. The application uses Amazon EC2 instances and an Amazon Elastic File System (Amazon EFS) file system.

The company plans to deploy the application into a second AWS Region. The company will launch a new EFS file system and a new set of EC2 instances in the second Region. A solutions architect must develop a highly available and fault-tolerant solution to establish two-way synchronization across the Regions.

Which solution will meet these requirements?

Options:

A.

Create an Amazon EFS VPC endpoint for the original EFS file system in the second Region. Mount both the original and the new EFS file system to the new set of EC2 instances in the second Region. Configure an rsync cron job to run every 5 minutes.

B.

Set up EFS replication between the two EFS file systems. Set the new file system as the source. Set the original file system in the first Region as the destination. Turn off overwrite protection for the destination file system.

C.

Set up one AWS DataSync agent in each Region. Configure Amazon EFS VPC endpoints, EFS transfer locations, and EFS transfer tasks with opposite directions on the two DataSync agents.

D.

Mount the EFS file system in the second Region to the new set of EC2 instances in the second Region. Use AWS Transfer Family to establish SFTP access to the EFS file system in the original Region. Configure an rsync cron job to run every 5 minutes.

Question 77

A media company needs to migrate its Windows-based video editing environment to AWS. The company's current environment processes 4K video files that require sustained throughput of 2 GB per second across multiple concurrent users.

The company's storage needs increase by 1 TB each week. The company needs a shared file system that supports SMB protocol and can scale automatically based on storage demands.

Which solution will meet these requirements?

Options:

A.

Deploy an Amazon FSx for Windows File Server Multi-AZ file system with SSD storage.

B.

Deploy an Amazon Elastic File System (Amazon EFS) file system in Max I/O mode. Provision mount targets in multiple Availability Zones.

C.

Deploy an Amazon FSx for Lustre file system with a Persistent 2 deployment type. Provision the file system with 2 TB of storage.

D.

Deploy Amazon S3 File Gateway by using multiple cached gateway instances. Configure S3 Transfer Acceleration.

Question 78

A company runs a three-tier web application in a VPC on AWS. The company deployed an Application Load Balancer (ALB) in a public subnet. The web tier and application tier Amazon EC2 instances are deployed in a private subnet. The company uses a self-managed MySQL database that runs on EC2 instances in an isolated private subnet for the database tier.

The company wants a mechanism that will give a DevOps team the ability to use SSH to access all the servers. The company also wants to have a centrally managed log of all connections made to the servers.

Which combination of solutions will meet these requirements with the MOST operational efficiency? (Select TWO.)

Options:

A.

Create a bastion host in the public subnet. Configure security groups in the public, private, and isolated subnets to allow SSH access.

B.

Create an interface VPC endpoint for AWS Systems Manager Session Manager. Attach the endpoint to the VPC.

C.

Create an IAM policy that grants access to AWS Systems Manager Session Manager. Attach the IAM policy to the EC2 instances.

D.

Create a gateway VPC endpoint for AWS Systems Manager Session Manager. Attach the endpoint to the VPC.

E.

Attach an AmazonSSMManagedInstanceCore AWS managed IAM policy to all the EC2 instance roles.

Question 79

A social media company allows users to upload images to its website. The website runs on Amazon EC2 instances. During upload requests, the website resizes the images to a standard size and stores the resized images in Amazon S3. Users are experiencing slow upload requests to the website.

The company needs to reduce coupling within the application and improve website performance. A solutions architect must design the most operationally efficient process for image uploads.

Which combination of actions should the solutions architect take to meet these requirements? (Select TWO.)

Options:

A.

Configure the application to upload images to S3 Glacier Flexible Retrieval.

B.

Configure the web server to upload the original images to Amazon S3.

C.

Configure the application to upload images directly from each user's browser to Amazon S3 by using a presigned URL.

D.

Configure S3 Event Notifications to invoke an AWS Lambda function when an image is uploaded. Use the function to resize the image.

E.

Create an Amazon EventBridge rule that invokes an AWS Lambda function on a schedule to resize uploaded images.

Question 80

A company serves its website by using an Auto Scaling group of Amazon EC2 instances in a single AWS Region. The website does not require a database.

The company is expanding, and the company's engineering team deploys the website to a second Region. The company wants to distribute traffic across both Regions to accommodate growth and for disaster recovery purposes. The solution should not serve traffic from a Region in which the website is unhealthy.

Which policy or resource should the company use to meet these requirements?

Options:

A.

An Amazon Route 53 simple routing policy

B.

An Amazon Route 53 multivalue answer routing policy

C.

An Application Load Balancer in one Region with a target group that specifies the EC2 instance IDs from both Regions

D.

An Application Load Balancer in one Region with a target group that specifies the IP addresses of the EC2 instances from both Regions

Question 81

A company hosts dozens of multi-tier applications on AWS. The presentation layer and logic layer are Amazon EC2 Linux instances that use Amazon EBS volumes.

The company needs a solution to ensure that operating system vulnerabilities are not introduced to the EC2 instances when the company deploys new features. The company uses custom AMIs to deploy EC2 instances in an Auto Scaling group. The solution must scale to handle all applications that the company hosts.

Which solution will meet these requirements?

Options:

A.

Use Amazon Inspector to patch operating system vulnerabilities. Invoke Amazon Inspector when a new AMI is deployed.

B.

Use AWS Backup to back up the EBS volume of each updated instance. Use the EBS backup volumes to create new AMIs. Use the existing Auto Scaling group to deploy the new AMIs.

C.

Use AWS Systems Manager Patch Manager to patch operating system vulnerabilities in the custom AMIs.

D.

Use EC2 Image Builder to create new AMIs when the company deploys new features. Include the update-linux component in the build components of the new AMIs. Use the existing Auto Scaling group to deploy the new AMIs.

Question 82

A company has separate AWS accounts for its finance, data analytics, and development departments. Because of costs and security concerns, the company wants to control which services each AWS account can use.

Which solution will meet these requirements with the LEAST operational overhead?

Options:

A.

Use AWS Systems Manager templates to control which AWS services each department can use.

B.

Create organization units (OUs) for each department in AWS Organizations. Attach service control policies (SCPs) to the OUs.

C.

Use AWS CloudFormation to automatically provision only the AWS services that each department can use.

D.

Set up a list of products in AWS Service Catalog in the AWS accounts to manage and control the usage of specific AWS services.
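
Service control policies act as guardrails: an action is usable in a member account only if some SCP statement allows it and no statement denies it. A stdlib-only simulation of that allow-list evaluation (the policy shape is deliberately simplified from real SCP JSON and is illustrative only):

```python
from fnmatch import fnmatch

def scp_allows(action: str, scp_statements: list) -> bool:
    """Return True if the action survives the SCP guardrail.

    Simplified model: an explicit Deny wins; otherwise the action must
    match at least one Allow statement. Real SCPs are full IAM-style
    JSON policy documents; this structure is a sketch.
    """
    allowed = False
    for stmt in scp_statements:
        for pattern in stmt["Action"]:
            if fnmatch(action, pattern):
                if stmt["Effect"] == "Deny":
                    return False
                allowed = True
    return allowed

# Hypothetical guardrail for a data analytics OU.
analytics_scp = [
    {"Effect": "Allow", "Action": ["s3:*", "athena:*", "glue:*"]},
    {"Effect": "Deny", "Action": ["s3:DeleteBucket"]},
]
```

Attaching such a policy to an organizational unit applies the guardrail to every account in that OU without per-account configuration, which is what keeps the operational overhead low.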

Question 83

An internal product team is deploying a new application to a private VPC in a company's AWS account. The application runs on Amazon EC2 instances that are in a security group named App1. The EC2 instances store application data in an Amazon S3 bucket and use AWS Secrets Manager to store application service credentials. The company's security policy prohibits applications in a private VPC from using public IP addresses to communicate.

Which combination of solutions will meet these requirements? (Select TWO.)

Options:

A.

Configure gateway endpoints for Amazon S3 and AWS Secrets Manager.

B.

Configure interface VPC endpoints for Amazon S3 and AWS Secrets Manager.

C.

Add routes to the endpoints in the VPC route table.

D.

Associate the App1 security group with the interface VPC endpoints. Configure a self-referencing security group rule to allow inbound traffic.

E.

Associate the App1 security group with the gateway endpoints. Configure a self-referencing security group rule to allow inbound traffic.

Question 84

A company runs an order management application on AWS. The application allows customers to place orders and pay with a credit card. The company uses an Amazon CloudFront distribution to deliver the application. A security team has set up logging for all incoming requests. The security team needs a solution to generate an alert if any user modifies the logging configuration.

Which combination of solutions will meet these requirements? (Select TWO.)

Options:

A.

Configure an Amazon EventBridge rule that is invoked when a user creates or modifies a CloudFront distribution. Add the AWS Lambda function as a target of the EventBridge rule.

B.

Create an Application Load Balancer (ALB). Enable AWS WAF rules for the ALB. Configure an AWS Config rule to detect security violations.

C.

Create an AWS Lambda function to detect changes in CloudFront distribution logging. Configure the Lambda function to use Amazon Simple Notification Service (Amazon SNS) to send notifications to the security team.

D.

Set up Amazon GuardDuty. Configure GuardDuty to monitor findings from the CloudFront distribution. Create an AWS Lambda function to address the findings.

E.

Create a private API in Amazon API Gateway. Use AWS WAF rules to protect the private API from common security problems.

Question 85

An ecommerce company hosts an application on AWS across multiple Availability Zones. The application experiences uniform load throughout most days.

The company hosts some components of the application in private subnets. The components need to access the internet to install and update patches.

A solutions architect needs to design a cost-effective solution that provides secure outbound internet connectivity for private subnets across multiple Availability Zones. The solution must maintain high availability.

Which solution will meet these requirements?

Options:

A.

Deploy one NAT gateway in each Availability Zone. Configure the route table for each private subnet within an Availability Zone to route outbound traffic through the NAT gateway in the same Availability Zone.

B.

Place one NAT gateway in a designated Availability Zone within the VPC. Configure the route tables of the private subnets in each Availability Zone to direct outbound traffic specifically through the NAT gateway for internet access.

C.

Deploy an Amazon EC2 instance in a public subnet. Configure the EC2 instance as a NAT instance. Set up the instance with security groups that allow inbound traffic from private subnets and outbound internet access. Configure route tables to direct traffic from the private subnets through the NAT instance.

D.

Use one NAT gateway in a Network Load Balancer (NLB) target group. Configure private subnets in each Availability Zone to route traffic to the NLB for outbound internet access.

Question 86

A company runs an on-premises application on a Kubernetes cluster. The company recently added millions of new customers. The company's existing on-premises infrastructure is unable to handle the large number of new customers. The company needs to migrate the on-premises application to the AWS Cloud.

The company will migrate to an Amazon Elastic Kubernetes Service (Amazon EKS) cluster. The company does not want to manage the underlying compute infrastructure for the new architecture on AWS.

Which solution will meet these requirements with the LEAST operational overhead?

Options:

A.

Use a self-managed node to supply compute capacity. Deploy the application to the new EKS cluster.

B.

Use managed node groups to supply compute capacity. Deploy the application to the new EKS cluster.

C.

Use AWS Fargate to supply compute capacity. Create a Fargate profile. Use the Fargate profile to deploy the application.

D.

Use managed node groups with Karpenter to supply compute capacity. Deploy the application to the new EKS cluster.

Question 87

A company wants to deploy an AWS Lambda function that will read and write objects to an Amazon S3 bucket. The Lambda function must be connected to the company's VPC. The company must deploy the Lambda function only to private subnets in the VPC. The Lambda function must not be allowed to access the internet.

Which solutions will meet these requirements? (Select TWO.)

Options:

A.

Create a private NAT gateway to access the S3 bucket.

B.

Attach an Elastic IP address to the NAT gateway.

C.

Create a gateway VPC endpoint for the S3 bucket.

D.

Create an interface VPC endpoint for the S3 bucket.

E.

Create a public NAT gateway to access the S3 bucket.

Question 88

A solutions architect is designing the architecture for a company website that is composed of static content. The company's target customers are located in the United States and Europe.

Which architecture should the solutions architect recommend to MINIMIZE cost?

Options:

A.

Store the website files on Amazon S3 in the us-east-2 Region. Use an Amazon CloudFront distribution with the price class configured to limit the edge locations in use.

B.

Store the website files on Amazon S3 in the us-east-2 Region. Use an Amazon CloudFront distribution with the price class configured to maximize the use of edge locations.

C.

Store the website files on Amazon S3 in the us-east-2 Region and the eu-west-1 Region. Use an Amazon CloudFront geolocation routing policy to route requests to the closest Region to the user.

D.

Store the website files on Amazon S3 in the us-east-2 Region and the eu-west-1 Region. Use an Amazon CloudFront distribution with an Amazon Route 53 latency routing policy to route requests to the closest Region to the user.

Question 89

A company runs an enterprise resource planning (ERP) system on Amazon EC2 instances in a single AWS Region. Users connect to the ERP system by using a public API that is hosted on the EC2 instances. International users report slow API response times from their data centers.

A solutions architect needs to improve API response times for the international users.

Which solution will meet these requirements MOST cost-effectively?

Options:

A.

Set up an AWS Direct Connect connection that has a public virtual interface (VIF) to connect each user's data center to the EC2 instances. Create a Direct Connect gateway for the ERP system API to route user API requests.

B.

Deploy Amazon API Gateway endpoints in multiple Regions. Use Amazon Route 53 latency-based routing to route requests to the nearest endpoint. Configure a VPC peering connection between the Regions to connect to the ERP system.

C.

Set up AWS Global Accelerator. Configure listeners for the necessary ports. Configure endpoint groups for the appropriate Regions to distribute traffic. Create an endpoint in each group for the API.

D.

Use AWS Site-to-Site VPN to establish dedicated VPN tunnels between multiple Regions and user networks. Route traffic to the API through the VPN connections.

Question 90

A company is creating an application. The company stores data from tests of the application in multiple on-premises locations.

The company needs to connect the on-premises locations to VPCs in an AWS Region in the AWS Cloud. The number of accounts and VPCs will increase during the next year. The network architecture must simplify the administration of new connections and must provide the ability to scale.

Which solution will meet these requirements with the LEAST administrative overhead?

Options:

A.

Create a peering connection between the VPCs. Create a VPN connection between the VPCs and the on-premises locations.

B.

Launch an Amazon EC2 instance. On the instance, include VPN software that uses a VPN connection to connect all VPCs and on-premises locations.

C.

Create a transit gateway. Create VPC attachments for the VPC connections. Create VPN attachments for the on-premises connections.

D.

Create an AWS Direct Connect connection between the on-premises locations and a central VPC. Connect the central VPC to other VPCs by using peering connections.

Question 91

A solutions architect is creating a data processing job that runs once daily and can take up to 2 hours to complete. If the job is interrupted, it has to restart from the beginning.

How should the solutions architect address this issue in the MOST cost-effective manner?

Options:

A.

Create a script that runs locally on an Amazon EC2 Reserved Instance that is triggered by a cron job.

B.

Create an AWS Lambda function triggered by an Amazon EventBridge scheduled event.

C.

Use an Amazon Elastic Container Service (Amazon ECS) Fargate task triggered by an Amazon EventBridge scheduled event.

D.

Use an Amazon Elastic Container Service (Amazon ECS) task running on Amazon EC2 triggered by an Amazon EventBridge scheduled event.
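
One fact this question turns on: a single AWS Lambda invocation times out after at most 15 minutes (900 seconds), so a job that can run for 2 hours cannot complete in one invocation. A tiny sketch of that constraint check (the helper name is illustrative):

```python
LAMBDA_MAX_SECONDS = 15 * 60  # Lambda's hard per-invocation timeout limit

def fits_in_lambda(job_runtime_seconds: int) -> bool:
    """True if a single Lambda invocation could cover the job's runtime."""
    return job_runtime_seconds <= LAMBDA_MAX_SECONDS

two_hour_job = 2 * 60 * 60  # the question's worst-case runtime
```

Fargate tasks have no such per-run ceiling and bill only for the task's duration, which is why the question contrasts it with Lambda and always-on EC2 capacity.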

Question 92

A telemarketing company is designing its customer call center functionality on AWS. The company needs a solution that provides multiple speaker recognition and generates transcript files. The company wants to query the transcript files to analyze the business patterns.

Which solution will meet these requirements?

Options:

A.

Use Amazon Rekognition for multiple speaker recognition. Store the transcript files in Amazon S3. Use machine learning (ML) models to analyze the transcript files.

B.

Use Amazon Transcribe for multiple speaker recognition. Use Amazon Athena to analyze the transcript files.

C.

Use Amazon Translate for multiple speaker recognition. Store the transcript files in Amazon Redshift. Use SQL queries to analyze the transcript files.

D.

Use Amazon Rekognition for multiple speaker recognition. Store the transcript files in Amazon S3. Use Amazon Textract to analyze the transcript files.

Question 93

A company needs a solution to enforce data encryption at rest on Amazon EC2 instances. The solution must automatically identify noncompliant resources and enforce compliance policies on findings.

Which solution will meet these requirements with the LEAST administrative overhead?

Options:

A.

Use an IAM policy that allows users to create only encrypted Amazon Elastic Block Store (Amazon EBS) volumes. Use AWS Config and AWS Systems Manager to automate the detection and remediation of unencrypted EBS volumes.

B.

Use AWS Key Management Service (AWS KMS) to manage access to encrypted Amazon Elastic Block Store (Amazon EBS) volumes. Use AWS Lambda and Amazon EventBridge to automate the detection and remediation of unencrypted EBS volumes.

C.

Use Amazon Macie to detect unencrypted Amazon Elastic Block Store (Amazon EBS) volumes. Use AWS Systems Manager Automation rules to automatically encrypt existing and new EBS volumes.

D.

Use Amazon Inspector to detect unencrypted Amazon Elastic Block Store (Amazon EBS) volumes. Use AWS Systems Manager Automation rules to automatically encrypt existing and new EBS volumes.
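
Whichever detection service is chosen, the remediation step reduces to identifying volumes whose Encrypted flag is false. A stdlib sketch over the minimal shape of the records that `ec2:DescribeVolumes` returns (the sample IDs and values are invented for illustration):

```python
def unencrypted_volume_ids(volumes: list) -> list:
    """Return IDs of EBS volumes that are not encrypted at rest.

    Each record mirrors the minimal shape of a DescribeVolumes entry;
    a missing flag is treated as noncompliant. Sample data below is
    illustrative only.
    """
    return [v["VolumeId"] for v in volumes if not v.get("Encrypted", False)]

volumes = [
    {"VolumeId": "vol-0aaa", "Encrypted": True},
    {"VolumeId": "vol-0bbb", "Encrypted": False},
    {"VolumeId": "vol-0ccc"},  # flag missing: treat as noncompliant
]
```

In a managed setup, an AWS Config rule performs this detection continuously and can hand the noncompliant resource IDs to a Systems Manager Automation runbook for remediation.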

Question 94

A company has primary and secondary data centers that are 500 miles (804.7 km) apart and interconnected with high-speed fiber-optic cable. The company needs a highly available and secure network connection between its data centers and a VPC on AWS for a mission-critical workload.

A solutions architect must choose a connection solution that provides maximum resiliency.

Which solution meets these requirements?

Options:

A.

Two AWS Direct Connect connections from the primary data center terminating at two Direct Connect locations on two separate devices

B.

A single AWS Direct Connect connection from each of the primary and secondary data centers terminating at one Direct Connect location on the same device

C.

Two AWS Direct Connect connections from each of the primary and secondary data centers terminating at two Direct Connect locations on two separate devices

D.

A single AWS Direct Connect connection from each of the primary and secondary data centers terminating at one Direct Connect location on two separate devices

Question 95

A global company runs its workloads on AWS. The company's application uses Amazon S3 buckets across AWS Regions for sensitive data storage and analysis. The company stores millions of objects in multiple S3 buckets daily. The company wants to identify all S3 buckets that are not versioning-enabled.

Which solution will meet these requirements?

Options:

A.

Set up an AWS CloudTrail event that has a rule to identify all S3 buckets that are not versioning-enabled across Regions.

B.

Use Amazon S3 Storage Lens to identify all S3 buckets that are not versioning-enabled across Regions.

C.

Enable IAM Access Analyzer for S3 to identify all S3 buckets that are not versioning-enabled across Regions.

D.

Create an S3 Multi-Region Access Point to identify all S3 buckets that are not versioning-enabled across Regions.
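
Whichever service surfaces the finding, the underlying check is whether each bucket's versioning status is `Enabled`. A stdlib sketch with hypothetical `get_bucket_versioning`-style responses (the API returns an empty configuration, or `Suspended`, for buckets that are not versioning-enabled):

```python
# Identify buckets that are not versioning-enabled. The responses below are
# hypothetical; a real inventory would span all Regions in the account.
bucket_versioning = {
    "logs-us-east-1": {"Status": "Enabled"},
    "reports-eu-west-1": {},          # versioning never enabled
    "archive-ap-south-1": {"Status": "Suspended"},
}

def not_versioning_enabled(bucket_versioning):
    return sorted(
        name for name, config in bucket_versioning.items()
        if config.get("Status") != "Enabled"
    )

print(not_versioning_enabled(bucket_versioning))
# → ['archive-ap-south-1', 'reports-eu-west-1']
```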

Question 96

A company runs an application on Amazon EC2 instances that have instance store volumes attached. The application uses Amazon Elastic File System (Amazon EFS) to store files that are shared across a cluster of Linux servers. The shared files are at least 1 GB in size.

The company accesses the files often for the first 7 days after creation. The files must remain readily available after the first 7 days.

The company wants to optimize costs for the application.

Which solution will meet these requirements?

Options:

A.

Configure an AWS Storage Gateway Amazon S3 File Gateway to cache frequently accessed files locally. Store older files in Amazon S3.

B.

Move the files from Amazon EFS, and store the files locally on each EC2 instance.

C.

Configure a lifecycle policy to move the files to the EFS Infrequent Access (IA) storage class after 7 days.

D.

Deploy AWS DataSync to automatically move files older than 7 days to Amazon S3 Glacier Deep Archive.
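
Option C's lifecycle policy reduces to an age-based transition rule: files untouched for the configured period move to the Infrequent Access class while remaining available at the same mount path. A minimal sketch using the scenario's 7-day threshold (dates are hypothetical):

```python
from datetime import date

# Age-based transition rule behind EFS lifecycle management. Files move to
# IA after the configured number of days without access but stay readable
# through the same mount path, so availability is unchanged.
LIFECYCLE_DAYS = 7

def storage_class(last_access: date, today: date) -> str:
    days_idle = (today - last_access).days
    return "IA" if days_idle >= LIFECYCLE_DAYS else "Standard"

today = date(2024, 6, 15)
print(storage_class(date(2024, 6, 10), today))  # → Standard (5 days idle)
print(storage_class(date(2024, 6, 1), today))   # → IA (14 days idle)
```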

Question 97

A company hosts an application on AWS. The application has generated approximately 2.5 TB of data over the previous 12 years. The company currently stores the data on Amazon EBS volumes.

The company wants a cost-effective backup solution for long-term storage. The company must be able to retrieve the data within minutes when required for audits.

Which solution will meet these requirements?

Options:

A.

Create EBS snapshots to back up the data.

B.

Create an Amazon S3 bucket. Use the S3 Glacier Deep Archive storage class to back up the data.

C.

Create an Amazon S3 bucket. Use the S3 Glacier Flexible Retrieval storage class to back up the data.

D.

Create an Amazon Elastic File System (Amazon EFS) file system to back up the data.

Question 98

A company runs an ecommerce application on premises on Microsoft SQL Server. The company is planning to migrate the application to the AWS Cloud. The application code contains complex T-SQL queries and stored procedures. The company wants to minimize database server maintenance and operating costs after the migration is completed. The company also wants to minimize the need to rewrite code as part of the migration effort.

Which solution will meet these requirements?

Options:

A.

Migrate the database to Amazon Aurora PostgreSQL. Turn on Babelfish.

B.

Migrate the database to Amazon S3. Use Amazon Redshift Spectrum for query processing.

C.

Migrate the database to Amazon RDS for SQL Server. Turn on Kerberos authentication.

D.

Migrate the database to an Amazon EMR cluster that includes multiple primary nodes.

Question 99

A company's ecommerce website has unpredictable traffic and uses AWS Lambda functions to directly access a private Amazon RDS for PostgreSQL DB instance. The company wants to maintain predictable database performance and ensure that the Lambda invocations do not overload the database with too many connections.

What should a solutions architect do to meet these requirements?

Options:

A.

Point the client driver at an RDS custom endpoint. Deploy the Lambda functions inside a VPC.

B.

Point the client driver at an RDS Proxy endpoint. Deploy the Lambda functions inside a VPC.

C.

Point the client driver at an RDS custom endpoint. Deploy the Lambda functions outside a VPC.

D.

Point the client driver at an RDS Proxy endpoint. Deploy the Lambda functions outside a VPC.
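
The value of pointing the client driver at a proxy endpoint (options B and D) is connection pooling: bursts of invocations share a small set of database connections instead of each opening its own. The class below is a toy simulation of that behavior, not the proxy itself.

```python
# Toy simulation of connection pooling, the behavior RDS Proxy provides in
# front of the database: 100 concurrent "invocations" are served by at most
# 5 database connections.
class ConnectionPool:
    def __init__(self, max_connections: int):
        self.max_connections = max_connections
        self.open_connections = 0

    def acquire(self) -> bool:
        """Open a new connection if under the cap; otherwise the caller queues."""
        if self.open_connections < self.max_connections:
            self.open_connections += 1
            return True
        return False  # request waits for a pooled connection to free up

pool = ConnectionPool(max_connections=5)
new_connections = sum(pool.acquire() for _ in range(100))
print(new_connections)  # → 5
```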

Question 100

A company needs a solution to back up and protect critical AWS resources. The company needs to regularly take backups of several Amazon EC2 instances and Amazon RDS for PostgreSQL databases. To ensure high resiliency, the company must have the ability to validate and restore backups.

Which solution meets these requirements with the LEAST operational overhead?

Options:

A.

Use AWS Backup to create a backup schedule for the resources. Use AWS Backup to create a restoration testing plan for the required resources.

B.

Take snapshots of the EC2 instances and RDS DB instances. Create AWS Batch jobs to validate and restore the snapshots.

C.

Create a custom AWS Lambda function to take snapshots of the EC2 instances and RDS DB instances. Create a second Lambda function to restore the snapshots periodically to validate the backups.

D.

Take snapshots of the EC2 instances and RDS DB instances. Create an AWS Lambda function to restore the snapshots periodically to validate the backups.
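
Option A pairs two AWS Backup features: a scheduled backup plan and a restore testing plan that periodically restores recovery points to validate them. The dictionaries below illustrate the shape of that configuration; they are simplified stand-ins, not the exact boto3 request schemas.

```python
import json

# Simplified illustration (not the exact AWS Backup API schema) of the two
# pieces: a daily backup rule and a recurring restore test that validates
# recovery points by actually restoring them.
backup_plan = {
    "BackupPlanName": "critical-resources",
    "Rules": [{
        "RuleName": "daily",
        "TargetBackupVaultName": "Default",
        "ScheduleExpression": "cron(0 5 * * ? *)",  # daily at 05:00 UTC
    }],
}

restore_testing_plan = {
    "RestoreTestingPlanName": "weekly-validation",
    "ScheduleExpression": "cron(0 6 ? * SUN *)",  # restore test every Sunday
}

print(json.dumps({"plan": backup_plan["BackupPlanName"],
                  "test": restore_testing_plan["RestoreTestingPlanName"]}))
```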

Question 101

A healthcare provider is planning to store patient data on AWS as PDF files. To comply with regulations, the company must encrypt the data and store the files in multiple locations. The data must be available for immediate access from any environment.

Options:

A.

Store the files in an Amazon S3 bucket. Use the Standard storage class. Enable server-side encryption with Amazon S3 managed keys (SSE-S3) on the bucket. Configure cross-Region replication on the bucket.

B.

Store the files in an Amazon Elastic File System (Amazon EFS) volume. Use an AWS KMS managed key to encrypt the EFS volume. Use AWS DataSync to replicate the EFS volume to a second AWS Region.

C.

Store the files in an Amazon Elastic Block Store (Amazon EBS) volume. Configure AWS Backup to back up the volume on a regular schedule. Use an AWS KMS key to encrypt the backups.

D.

Store the files in an Amazon S3 bucket. Use the S3 Glacier Flexible Retrieval storage class. Ensure that all PDF files are encrypted by using client-side encryption before the files are uploaded. Configure cross-Region replication on the bucket.

Question 102

A financial service company has a two-tier consumer banking application. The frontend serves static web content. The backend consists of APIs. The company needs to migrate the frontend component to AWS. The backend of the application will remain on premises. The company must protect the application from common web vulnerabilities and attacks.

Which solution will meet these requirements with the LEAST operational overhead?

Options:

A.

Migrate the frontend to Amazon EC2 instances. Deploy an Application Load Balancer (ALB) in front of the instances. Use the instances to invoke the on-premises APIs. Associate AWS WAF rules with the instances.

B.

Deploy the frontend as an Amazon CloudFront distribution that has multiple origins. Configure one origin to be an Amazon S3 bucket that serves the static web content. Configure a second origin to route traffic to the on-premises APIs based on the URL pattern. Associate AWS WAF rules with the distribution.

C.

Migrate the frontend to Amazon EC2 instances. Deploy a Network Load Balancer (NLB) in front of the instances. Use the instances to invoke the on-premises APIs. Create an AWS Network Firewall instance. Route all traffic through the Network Firewall instance.

D.

Deploy the frontend as a static website based on an Amazon S3 bucket. Use an Amazon API Gateway REST API and a set of Amazon EC2 instances to invoke the on-premises APIs. Associate AWS WAF rules with the REST API and the S3 bucket.

Question 103

A company hosts an application on AWS that gives users the ability to download photos. The company stores all photos in an Amazon S3 bucket that is located in the us-east-1 Region. The company wants to provide the photo download application to global customers with low latency.

Which solution will meet these requirements?

Options:

A.

Find the public IP addresses that Amazon S3 uses in us-east-1. Configure an Amazon Route 53 latency-based routing policy that routes to all the public IP addresses.

B.

Configure an Amazon CloudFront distribution in front of the S3 bucket. Use the distribution endpoint to access the photos that are in the S3 bucket.

C.

Configure an Amazon Route 53 geoproximity routing policy to route the traffic to the S3 bucket that is closest to each customer's location.

D.

Create a new S3 bucket in the us-west-1 Region. Configure an S3 Cross-Region Replication rule to copy the photos to the new S3 bucket.

Question 104

A company uses Amazon EC2 instances and stores data on Amazon Elastic Block Store (Amazon EBS) volumes. The company must ensure that all data is encrypted at rest by using AWS Key Management Service (AWS KMS). The company must be able to control rotation of the encryption keys.

Which solution will meet these requirements with the LEAST operational overhead?

Options:

A.

Create a customer managed key. Use the key to encrypt the EBS volumes.

B.

Use an AWS managed key to encrypt the EBS volumes. Use the key to configure automatic key rotation.

C.

Create an external KMS key with imported key material. Use the key to encrypt the EBS volumes.

D.

Use an AWS owned key to encrypt the EBS volumes.

Question 105

A company has a website that handles dynamic traffic loads. The website architecture is based on Amazon EC2 instances in an Auto Scaling group that is configured to use scheduled scaling. Each EC2 instance runs code from an Amazon Elastic File System (Amazon EFS) volume and stores shared data back to the same volume.

The company wants to optimize costs for the website.

Which solution will meet this requirement?

Options:

A.

Reconfigure the Auto Scaling group to set a desired number of instances. Turn off scheduled scaling.

B.

Create a new launch template version for the Auto Scaling group that uses larger EC2 instances.

C.

Reconfigure the Auto Scaling group to use a target tracking scaling policy.

D.

Replace the EFS volume with instance store volumes.

Question 106

A company uses Amazon FSx for NetApp ONTAP in its primary AWS Region for CIFS and NFS file shares. Applications that run on Amazon EC2 instances access the file shares. The company needs a storage disaster recovery (DR) solution in a secondary Region. The data that is replicated in the secondary Region needs to be accessed by using the same protocols as the primary Region.

Which solution will meet these requirements with the LEAST operational overhead?

Options:

A.

Create an AWS Lambda function to copy the data to an Amazon S3 bucket. Replicate the S3 bucket to the secondary Region.

B.

Create a backup of the FSx for ONTAP volumes by using AWS Backup. Copy the volumes to the secondary Region. Create a new FSx for ONTAP instance from the backup.

C.

Create an FSx for ONTAP instance in the secondary Region. Use NetApp SnapMirror to replicate data from the primary Region to the secondary Region.

D.

Create an Amazon EFS volume. Migrate the current data to the volume. Replicate the volume to the secondary Region.

Question 107

A company is developing a new application that uses Amazon EC2, Amazon S3, and AWS Lambda resources. The company wants to allow employees to access the AWS Management Console by using existing credentials that the company stores and manages in an on-premises Microsoft Active Directory. Each employee must have a specific level of access to the AWS resources that is based on the employee's role.

Which solution will meet these requirements with the LEAST operational overhead?

Options:

A.

Configure AWS Directory Service to create an Active Directory in AWS Managed Microsoft AD. Establish a trust relationship with the on-premises Active Directory. Configure IAM roles and trust policies to give the employees access to the AWS resources.

B.

Use LDAP to directly integrate the on-premises Active Directory with IAM. Map Active Directory groups to IAM roles to control access to AWS resources.

C.

Implement a custom identity broker to authenticate users into the on-premises Active Directory. Configure the identity broker to use AWS STS to grant authorized users IAM role-based access to the AWS resources.

D.

Configure Amazon Cognito to federate users into the on-premises Active Directory. Use Cognito user pools to manage user identities and to manage user access to the AWS resources.

Question 108

A company is developing a new application that will run on Amazon EC2 instances. The application needs to access multiple AWS services.

The company needs to ensure that the application will not use long-term access keys to access AWS services.

Options:

A.

Create an IAM user. Assign the IAM user to the application. Create programmatic access keys for the IAM user. Embed the access keys in the application code.

B.

Create an IAM user that has programmatic access keys. Store the access keys in AWS Secrets Manager. Configure the application to retrieve the keys from Secrets Manager when the application runs.

C.

Create an IAM role that can access AWS Systems Manager Parameter Store. Associate the role with each EC2 instance profile. Create IAM access keys for the AWS services, and store the keys in Parameter Store. Configure the application to retrieve the keys from Parameter Store when the application runs.

D.

Create an IAM role that has permissions to access the required AWS services. Associate the IAM role with each EC2 instance profile.
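
Option D works because the role's trust policy lets the EC2 service assume it, and the instance profile then delivers short-lived credentials to the application automatically, so no long-term keys exist to leak. The trust policy is standard:

```python
import json

# Standard trust policy that allows EC2 instances to assume the role. With
# the role attached via an instance profile, SDKs pick up temporary
# credentials from the instance metadata service automatically.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "ec2.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

print(json.dumps(trust_policy, indent=2))
```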

Question 109

A company recently migrated a large amount of research data to an Amazon S3 bucket. The company needs an automated solution to identify sensitive data in the bucket. A security team also needs to monitor access patterns for the data 24 hours a day, 7 days a week to identify suspicious activities or evidence of tampering with security controls.

Options:

A.

Set up AWS CloudTrail reporting, and grant the security team read-only access to the CloudTrail reports. Set up an Amazon S3 Inventory report to identify sensitive data. Review the findings with the security team.

B.

Enable Amazon Macie and Amazon GuardDuty on the account. Grant the security team access to Macie and GuardDuty. Review the findings with the security team.

C.

Set up an Amazon S3 Inventory report. Use Amazon Athena and Amazon QuickSight to identify sensitive data. Create a dashboard for the security team to review findings.

D.

Use AWS Identity and Access Management (IAM) Access Advisor to monitor for suspicious activity and tampering. Create a dashboard for the security team. Set up an Amazon S3 Inventory report to identify sensitive data. Review the findings with the security team.

Question 110

A company uses Amazon EC2 instances to host a website. The website uses an Amazon S3 bucket to store media files. The company wants to automate infrastructure creation across multiple Regions and securely grant EC2 access to S3 using IAM.

Which solution will meet these requirements MOST securely?

Options:

A.

Store IAM access keys in UserData.

B.

Store access keys in S3 and reference them in CloudFormation.

C.

Use an IAM role and instance profile in CloudFormation.

D.

Retrieve access keys dynamically and store them on EC2.
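
Option C's CloudFormation resources can be sketched as a template fragment, built here as a Python dictionary; the logical names and the managed policy ARN are illustrative.

```python
import json

# Illustrative CloudFormation fragment for option C: an IAM role trusted by
# EC2, S3 read access via a managed policy, and an instance profile that
# attaches the role to instances. No access keys appear anywhere.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "WebRole": {
            "Type": "AWS::IAM::Role",
            "Properties": {
                "AssumeRolePolicyDocument": {
                    "Version": "2012-10-17",
                    "Statement": [{
                        "Effect": "Allow",
                        "Principal": {"Service": "ec2.amazonaws.com"},
                        "Action": "sts:AssumeRole",
                    }],
                },
                "ManagedPolicyArns": ["arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess"],
            },
        },
        "WebInstanceProfile": {
            "Type": "AWS::IAM::InstanceProfile",
            "Properties": {"Roles": [{"Ref": "WebRole"}]},
        },
    },
}

print(json.dumps(sorted(template["Resources"])))  # → ["WebInstanceProfile", "WebRole"]
```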

Question 111

A company is hosting multiple websites for several lines of business under its registered parent domain. Users accessing these websites will be routed to appropriate backend Amazon EC2 instances based on the subdomain. The websites host static webpages, images, and server-side scripts like PHP and JavaScript.

Some of the websites experience peak access during the first two hours of business with constant usage throughout the rest of the day. A solutions architect needs to design a solution that will automatically adjust capacity to these traffic patterns while keeping costs low.

Which combination of AWS services or features will meet these requirements? (Select TWO.)

Options:

A.

AWS Batch

B.

Network Load Balancer

C.

Application Load Balancer

D.

Amazon EC2 Auto Scaling

E.

Amazon S3 website hosting

Question 112

A company recently migrated its application to AWS. The application runs on Amazon EC2 Linux instances in an Auto Scaling group across multiple Availability Zones. The application stores data in an Amazon Elastic File System (Amazon EFS) file system that uses EFS Standard-Infrequent Access storage. The application indexes the company's files, and the index is stored in an Amazon RDS database.

The company needs to optimize storage costs with some application and services changes.

Which solution will meet these requirements MOST cost-effectively?

Options:

A.

Create an Amazon S3 bucket that uses an Intelligent-Tiering lifecycle policy. Copy all files to the S3 bucket. Update the application to use Amazon S3 API to store and retrieve files.

B.

Deploy Amazon FSx for Windows File Server file shares. Update the application to use CIFS protocol to store and retrieve files.

C.

Deploy Amazon FSx for OpenZFS file system shares. Update the application to use the new mount point to store and retrieve files.

D.

Create an Amazon S3 bucket that uses S3 Glacier Flexible Retrieval. Copy all files to the S3 bucket. Update the application to use Amazon S3 API to store and retrieve files as standard retrievals.

Question 113

A company plans to deploy an application that uses an Amazon CloudFront distribution. The company will set an Application Load Balancer (ALB) as the origin for the distribution. The company wants to ensure that users access the ALB only through the CloudFront distribution. The company plans to deploy the solution in a new VPC.

Which solution will meet these requirements?

Options:

A.

Configure the network ACLs in the subnet where the ALB is deployed to allow inbound traffic only from the public IP addresses of the CloudFront edge locations.

B.

Create a VPC origin for the CloudFront distribution. Set the VPC origin Amazon Resource Name (ARN) to the ARN of the ALB.

C.

Create a security group that allows only inbound traffic from the public IP addresses of the CloudFront edge locations. Associate the security group with the ALB.

D.

Create a VPC origin for the CloudFront distribution. Configure an ALB rule. Set the source IP condition to allow traffic only from the public IP addresses of the CloudFront edge locations.

Question 114

A company runs a container application on a Kubernetes cluster in the company's data center. The application uses Advanced Message Queuing Protocol (AMQP) to communicate with a message queue. The data center cannot scale fast enough to meet the company's expanding business needs. The company wants to migrate the workloads to AWS.

Which solution will meet these requirements with the LEAST overhead?

Options:

A.

Migrate the container application to Amazon ECS. Use Amazon SQS to retrieve the messages.

B.

Migrate the container application to Amazon EKS. Use Amazon MQ to retrieve the messages.

C.

Use highly available Amazon EC2 instances to run the application. Use Amazon MQ to retrieve the messages.

D.

Use AWS Lambda functions to run the application. Use Amazon SQS to retrieve the messages.

Question 115

A company has a legacy mainframe system that can retrieve data only from systems that provide synchronous RESTful APIs. A developer at the company creates a new web service to calculate stock prices. The new web service takes 3 minutes on average to process each request. The developer must integrate the new web service with the legacy mainframe system.

Which solution will meet these requirements?

Options:

A.

Deploy an Amazon API Gateway REST API. Integrate the REST API with an AWS Lambda function. Configure the legacy mainframe to use the REST API endpoint.

B.

Deploy an Amazon API Gateway HTTP API. Integrate the HTTP API with an AWS Lambda function. Configure the legacy mainframe to use the HTTP API endpoint.

C.

Deploy an Amazon API Gateway WebSocket API. Integrate the WebSocket API with an AWS Lambda function. Configure the legacy mainframe to use the WebSocket API endpoint.

D.

Configure a URL for an AWS Lambda function. Configure the legacy mainframe to use the Lambda function URL endpoint.

Question 116

An ecommerce company is redesigning a web application to run on the AWS Cloud. The application needs to store static website content and must use a Microsoft SQL Server database to store customer data. The company needs to deploy the application in a resilient way across multiple Availability Zones.

Which solution will meet these requirements?

Options:

A.

Use an Amazon S3 bucket to store static content. Deploy an Amazon RDS Custom for SQL Server DB instance for the database.

B.

Use an Amazon S3 bucket to store static content. Create an Amazon RDS for SQL Server Multi-AZ deployment for the database.

C.

Create an Amazon Elastic Block Store (Amazon EBS) Multi-Attach volume to store static content. Deploy an Amazon RDS for SQL Server DB instance for the database.

D.

Create an Amazon Elastic Block Store (Amazon EBS) Multi-Attach volume to store static content. Deploy SQL Server on two Amazon EC2 instances in separate Availability Zones.

Question 117

A medical company wants to perform transformations on a large amount of clinical trial data that comes from several customers. The company must extract the data from a relational database that contains the customer data. Then the company will transform the data by using a series of complex rules. The company will load the data to Amazon S3 when the transformations are complete.

All data must be encrypted where it is processed before the company stores the data in Amazon S3. All data must be encrypted by using customer-specific keys.

Which solution will meet these requirements with the LEAST amount of operational effort?

Options:

A.

Create one AWS Glue job for each customer. Attach a security configuration to each job that uses server-side encryption with Amazon S3 managed keys (SSE-S3) to encrypt the data.

B.

Create one Amazon EMR cluster for each customer. Attach a security configuration to each cluster that uses client-side encryption with a custom client-side root key (CSE-Custom) to encrypt the data.

C.

Create one AWS Glue job for each customer. Attach a security configuration to each job that uses client-side encryption with AWS KMS managed keys (CSE-KMS) to encrypt the data.

D.

Create one Amazon EMR cluster for each customer. Attach a security configuration to each cluster that uses server-side encryption with AWS KMS keys (SSE-KMS) to encrypt the data.

Question 118

A company hosts a two-tier website that runs on Amazon EC2 instances. The website has a database that runs on Amazon RDS for MySQL. All users are required to log in to the website to see their own customized pages.

The website typically experiences low traffic. Occasionally, the website experiences sudden increases in traffic and becomes unresponsive. During these increases in traffic, the database experiences a heavy write load. A solutions architect must improve the website's availability without changing the application code.

What should the solutions architect do to meet these requirements?

Options:

A.

Create an Amazon ElastiCache (Redis OSS) cluster. Configure the application to cache common database queries in the ElastiCache cluster.

B.

Create an Auto Scaling group. Configure Amazon CloudWatch alarms to scale the number of EC2 instances based on the percentage of CPU in use during the traffic increases.

C.

Create an Amazon CloudFront distribution that points to the EC2 instances as the origin. Enable caching of dynamic content, and configure a write throttle from the EC2 instances to the RDS database.

D.

Migrate the database to an Amazon Aurora Serverless cluster. Set the maximum Aurora capacity units (ACUs) to a value high enough to respond to the traffic increases. Configure the EC2 instances to connect to the Aurora database.

Question 119

A finance company is migrating its trading platform to AWS. The trading platform processes a high volume of market data and processes stock trades. The company needs to establish a consistent, low-latency network connection from its on-premises data center to AWS.

The company will host resources in a VPC. The solution must not use the public internet.

Which solution will meet these requirements?

Options:

A.

Use AWS Client VPN to connect the on-premises data center to AWS.

B.

Use AWS Direct Connect to set up a connection from the on-premises data center to AWS.

C.

Use AWS PrivateLink to set up a connection from the on-premises data center to AWS.

D.

Use AWS Site-to-Site VPN to connect the on-premises data center to AWS.

Question 120

A company has an application that runs only on Amazon EC2 Spot Instances. The instances run in an Amazon EC2 Auto Scaling group with scheduled scaling actions. However, the capacity does not always increase at the scheduled times, and instances terminate many times a day. A solutions architect must ensure that the instances launch on time and have fewer interruptions.

Which action will meet these requirements?

Options:

A.

Specify the capacity-optimized allocation strategy for Spot Instances. Add more instance types to the Auto Scaling group.

B.

Specify the capacity-optimized allocation strategy for Spot Instances. Increase the size of the instances in the Auto Scaling group.

C.

Specify the lowest-price allocation strategy for Spot Instances. Add more instance types to the Auto Scaling group.

D.

Specify the lowest-price allocation strategy for Spot Instances. Increase the size of the instances in the Auto Scaling group.

Question 121

A company has deployed resources in the us-east-1 Region. The company also uses thousands of AWS Outposts servers deployed at remote locations around the world. These Outposts servers regularly download new software versions from us-east-1 that consist of hundreds of files. The company wants to improve the latency of the software download process.

Which solution will meet these requirements?

Options:

A.

Create an Amazon S3 bucket in us-east-1. Configure the bucket for static website hosting. Use bucket policies and ACLs to provide read access to the Outposts servers.

B.

Create an Amazon S3 bucket in us-east-1 and a second bucket in us-west-2. Configure replication. Set up a CloudFront distribution with origin failover between the buckets. Download by using signed URLs.

C.

Create an Amazon S3 bucket in us-east-1. Configure S3 Transfer Acceleration. Configure the Outposts servers to download by using the acceleration endpoint.

D.

Create an Amazon S3 bucket in us-east-1. Set up a CloudFront distribution using all edge locations with caching enabled. Configure the bucket as the origin. Download the software by using signed URLs.

Question 122

A company runs its application on Oracle Database Enterprise Edition. The company needs to migrate the application and the database to AWS. The company can use the Bring Your Own License (BYOL) model while migrating to AWS. The application uses third-party database features that require privileged access.

A solutions architect must design a solution for the database migration.

Which solution will meet these requirements MOST cost-effectively?

Options:

A.

Migrate the database to Amazon RDS for Oracle by using native tools. Replace the third-party features with AWS Lambda.

B.

Migrate the database to Amazon RDS Custom for Oracle by using native tools. Customize the new database settings to support the third-party features.

C.

Migrate the database to Amazon DynamoDB by using AWS Database Migration Service (AWS DMS). Customize the new database settings to support the third-party features.

D.

Migrate the database to Amazon RDS for PostgreSQL by using AWS Database Migration Service (AWS DMS). Rewrite the application code to remove the dependency on third-party features.

Question 123

A company decides to use AWS Key Management Service (AWS KMS) for data encryption operations. The company must create a KMS key and automate the rotation of the key. The company also needs the ability to deactivate the key and schedule the key for deletion.

Which solution will meet these requirements?

Options:

A.

Create an asymmetric customer managed KMS key. Enable automatic key rotation.

B.

Create a symmetric customer managed KMS key. Disable the envelope encryption option.

C.

Create a symmetric customer managed KMS key. Enable automatic key rotation.

D.

Create an asymmetric customer managed KMS key. Disable the envelope encryption option.
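
The lifecycle described in the question maps to a symmetric customer managed key (option C): automatic rotation can be enabled, the key can be disabled, and deletion must be scheduled with KMS's enforced waiting period of 7 to 30 days. The class below simulates that lifecycle; it is not the boto3 API.

```python
# Simulation of the customer managed key lifecycle: enable rotation,
# disable the key, then schedule deletion with the 7-30 day waiting
# period that KMS enforces.
class CustomerManagedKey:
    def __init__(self):
        self.enabled = True
        self.rotation_enabled = False
        self.pending_deletion_days = None

    def enable_rotation(self):
        self.rotation_enabled = True

    def disable(self):
        self.enabled = False

    def schedule_deletion(self, waiting_days: int):
        if not 7 <= waiting_days <= 30:  # KMS-enforced waiting period
            raise ValueError("waiting period must be 7-30 days")
        self.pending_deletion_days = waiting_days

key = CustomerManagedKey()
key.enable_rotation()
key.disable()
key.schedule_deletion(14)
print(key.rotation_enabled, key.enabled, key.pending_deletion_days)  # → True False 14
```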

Question 124

A company's application uses Network Load Balancers, Auto Scaling groups, Amazon EC2 instances, and databases that are deployed in an Amazon VPC. The company wants to capture information about traffic to and from the network interfaces in near real time in its Amazon VPC. The company wants to send the information to Amazon OpenSearch Service for analysis.

Which solution will meet these requirements?

Options:

A.

Create a log group in Amazon CloudWatch Logs. Configure VPC Flow Logs to send the log data to the log group. Use Amazon Kinesis Data Streams to stream the logs from the log group to OpenSearch Service.

B.

Create a log group in Amazon CloudWatch Logs. Configure VPC Flow Logs to send the log data to the log group. Use Amazon Data Firehose to stream the logs from the log group to OpenSearch Service.

C.

Create a trail in AWS CloudTrail. Configure VPC Flow Logs to send the log data to the trail. Use Amazon Kinesis Data Streams to stream the logs from the trail to OpenSearch Service.

D.

Create a trail in AWS CloudTrail. Configure VPC Flow Logs to send the log data to the trail. Use Amazon Data Firehose to stream the logs from the trail to OpenSearch Service.

Question 125

A company has an application that runs on a single Amazon EC2 instance. The application uses a MySQL database that runs on the same EC2 instance. The company needs a highly available and automatically scalable solution to handle increased traffic.

Which solution will meet these requirements?

Options:

A.

Deploy the application to EC2 instances that run in an Auto Scaling group behind an Application Load Balancer. Create an Amazon Redshift cluster that has multiple MySQL-compatible nodes.

B.

Deploy the application to EC2 instances that are configured as a target group behind an Application Load Balancer. Create an Amazon RDS for MySQL cluster that has multiple instances.

C.

Deploy the application to EC2 instances that run in an Auto Scaling group behind an Application Load Balancer. Create an Amazon Aurora Serverless MySQL cluster for the database layer.

D.

Deploy the application to EC2 instances that are configured as a target group behind an Application Load Balancer. Create an Amazon ElastiCache (Redis OSS) cluster that uses the MySQL connector.

Question 126

A company uses an Amazon CloudFront distribution to serve thousands of media files to users. The CloudFront distribution uses a private Amazon S3 bucket as an origin.

A solutions architect must prevent users in specific countries from accessing the company's files.

Which solution will meet these requirements in the MOST operationally efficient way?

Options:

A.

Require users to access the files by using CloudFront signed URLs.

B.

Configure geographic restrictions in CloudFront.

C.

Require users to access the files by using CloudFront signed cookies.

D.

Configure an origin access control (OAC) between CloudFront and the S3 bucket.
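For illustration, the geographic restriction in option B is a single property of the CloudFront distribution configuration. A minimal sketch of that element, using hypothetical country codes:

```python
def geo_restriction(block_countries):
    """GeoRestriction element of a CloudFront DistributionConfig.

    A "blacklist" restriction denies viewers in the listed ISO 3166
    country codes; CloudFront returns HTTP 403 to them at the edge,
    before the request ever reaches the S3 origin.
    """
    return {
        "GeoRestriction": {
            "RestrictionType": "blacklist",
            "Quantity": len(block_countries),
            "Items": list(block_countries),
        }
    }

# Hypothetical deny list; this dict would populate the Restrictions
# field in an update_distribution call on a boto3 cloudfront client.
restrictions = geo_restriction(["KP", "SY"])
```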

Question 127

A company recently migrated a data warehouse to AWS. The company has an AWS Direct Connect connection to AWS. Company users query the data warehouse by using a visualization tool. The average size of the queries that the data warehouse returns is 50 MB. The average visualization that the visualization tool produces is 500 KB in size. The result sets that the data warehouse returns are not cached.

The company wants to optimize costs for data transfers between the data warehouse and the company.

Which solution will meet this requirement?

Options:

A.

Host the visualization tool on premises. Connect to the data warehouse directly through the internet.

B.

Host the visualization tool in the same AWS Region as the data warehouse. Access the visualization tool through the internet.

C.

Host the visualization tool on premises. Connect to the data warehouse through the Direct Connect connection.

D.

Host the visualization tool in the same AWS Region as the data warehouse. Access the visualization tool through the Direct Connect connection.

Question 128

A company wants to use automatic machine learning (ML) to create and visualize forecasts of complex scenarios and trends.

Which solution will meet these requirements with the LEAST management overhead?

Options:

A.

Use an AWS Glue ML job to transform the data and create forecasts. Use Amazon QuickSight to visualize the data.

B.

Use Amazon QuickSight to visualize the data. Use ML-powered forecasting in QuickSight to create forecasts.

C.

Use a prebuilt ML AMI from the AWS Marketplace to create forecasts. Use Amazon QuickSight to visualize the data.

D.

Use Amazon SageMaker AI inference pipelines to create and update forecasts. Use Amazon QuickSight to visualize the combined data.

Question 129

A company runs a multi-tier application on premises by using virtual machines (VMs). The application tiers communicate asynchronously through third-party middleware that guarantees exactly-once delivery. The company is planning to migrate the application to AWS and needs to replace the middleware solution. The solution must provide exactly-once delivery for messages from the application.

Which combination of actions will meet these requirements with the LEAST infrastructure management? (Select TWO.)

Options:

A.

Use AWS Lambda functions to provide compute layers in the architecture.

B.

Use Amazon EC2 instances to provide compute layers in the architecture.

C.

Use Amazon SNS as a messaging component between the compute layers.

D.

Use Amazon SQS FIFO queues as a messaging component between the compute layers.

E.

Run containers on Amazon EKS to provide compute layers in the architecture.
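The exactly-once behavior in option D comes from FIFO queue deduplication: sends that repeat the same deduplication ID within the 5-minute window are dropped. A hedged sketch of building such a send request (queue URL and field names are hypothetical):

```python
import hashlib
import json

def build_fifo_message(queue_url: str, order: dict) -> dict:
    """Build send_message arguments for an SQS FIFO queue.

    The content-derived MessageDeduplicationId makes a retried send of
    the same order a no-op, and the MessageGroupId preserves per-group
    ordering across the compute layers.
    """
    body = json.dumps(order, sort_keys=True)  # deterministic serialization
    return {
        "QueueUrl": queue_url,
        "MessageBody": body,
        "MessageGroupId": order["customer_id"],
        "MessageDeduplicationId": hashlib.sha256(body.encode()).hexdigest(),
    }

# A boto3 client would send it like:
#   sqs = boto3.client("sqs")
#   sqs.send_message(**build_fifo_message(queue_url, order))
msg = build_fifo_message(
    "https://sqs.us-east-1.amazonaws.com/123456789012/orders.fifo",
    {"customer_id": "c-42", "order_id": "o-1001"},
)
```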

Question 130

A company is planning to migrate a legacy application to AWS. The application currently uses NFS to communicate with an on-premises storage solution to store application data. The application cannot be modified to use any communication protocol other than NFS for this purpose.

Which storage solution should a solutions architect recommend for use after the migration?

Options:

A.

AWS DataSync

B.

Amazon Elastic Block Store (Amazon EBS)

C.

Amazon Elastic File System (Amazon EFS)

D.

Amazon EMR File System (Amazon EMRFS)

Question 131

How can a law firm make files publicly readable while preventing modifications or deletions until a specific future date?

Options:

A.

Upload files to an Amazon S3 bucket configured for static website hosting. Grant read-only IAM permissions to any AWS principals.

B.

Create an S3 bucket. Enable S3 Versioning. Use S3 Object Lock with a retention period. Create a CloudFront distribution. Use a bucket policy to restrict access.

C.

Create an S3 bucket. Enable S3 Versioning. Configure an event trigger with AWS Lambda to restore modified objects from a private S3 bucket.

D.

Upload files to an S3 bucket for static website hosting. Use S3 Object Lock with a retention period. Grant read-only IAM permissions.

Question 132

A company is building an application composed of multiple microservices that communicate over HTTP. The company must deploy the application across multiple AWS Regions to meet disaster recovery requirements. The application must maintain high availability and automatic fault recovery.

Which solution will meet these requirements?

Options:

A.

Deploy all microservices on a single large EC2 instance in one Region to simplify communication.

B.

Use AWS Fargate to run each microservice in separate containers. Deploy across multiple Availability Zones in one Region behind an Application Load Balancer.

C.

Use Amazon Route 53 with latency-based routing. Deploy microservices on Amazon EC2 instances in multiple Regions behind Application Load Balancers.

D.

Implement each microservice using AWS Lambda. Expose the microservices using an Amazon API Gateway REST API.

Question 133

A company uses AWS WAF to protect its web applications. A solutions architect configures a web ACL that uses several rules, including a rule that inspects the HTTP request body for malicious content.

The solutions architect notices that the web ACL is not inspecting large HTTP POST requests properly. As a result, suspicious activities are not being detected. Some large HTTP POST requests are more than 8 MB in size.

The solutions architect must ensure that the web ACL inspects the large HTTP POST requests properly.

Which solution will meet this requirement?

Options:

A.

Create two custom AWS WAF rules. Configure one rule to block all oversized requests. Configure the second rule with a higher priority to allow large requests from legitimate hosts.

B.

Enable AWS Shield Advanced. Reconfigure the web ACL to block oversized requests by using Shield Advanced.

C.

Verify that the Content-Type header is correctly set in the HTTP requests that AWS WAF rules inspect.

D.

Create an AWS Lambda function to preprocess the large requests before AWS WAF rules inspect the requests.

Question 134

A company hosts an application in an Amazon EC2 Auto Scaling group. The company has observed that during periods of high demand, new instances take too long to join the Auto Scaling group and serve the increased demand. The company determines that the root cause of the issue is the long boot time of the instances in the Auto Scaling group. The company needs to reduce the time required to launch new instances to respond to demand.

Which solution will meet this requirement?

Options:

A.

Increase the maximum capacity of the Auto Scaling group by 50%.

B.

Create a warm pool for the Auto Scaling group. Use the default specification for the warm pool size.

C.

Increase the health check grace period for the Auto Scaling group by 50%.

D.

Create a scheduled scaling action. Set the desired capacity equal to the maximum capacity of the Auto Scaling group.
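The warm pool in option B maps to a single API call. A hedged sketch of the request an Auto Scaling client would send (the group name is hypothetical; omitting the size fields keeps the default warm pool sizing the option describes):

```python
# Arguments for the Auto Scaling put_warm_pool API. Warm-pool instances
# are launched and fully initialized ahead of demand, then kept stopped,
# so a scale-out only has to start them instead of booting them from
# scratch. Leaving out MinSize/MaxGroupPreparedCapacity uses the
# default warm pool sizing.
warm_pool_request = {
    "AutoScalingGroupName": "web-asg",  # hypothetical group name
    "PoolState": "Stopped",             # Stopped | Running | Hibernated
}

# autoscaling = boto3.client("autoscaling")
# autoscaling.put_warm_pool(**warm_pool_request)
```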

Question 135

A solutions architect is designing a three-tier web application. The architecture consists of an internet-facing Application Load Balancer (ALB) and a web tier that is hosted on Amazon EC2 instances in private subnets. The application tier with the business logic runs on EC2 instances in private subnets. The database tier consists of Microsoft SQL Server that runs on EC2 instances in private subnets. Security is a high priority for the company.

Which combination of security group configurations should the solutions architect use? (Select THREE.)

Options:

A.

Configure the security group for the web tier to allow inbound HTTPS traffic from the security group for the ALB.

B.

Configure the security group for the web tier to allow outbound HTTPS traffic to 0.0.0.0/0.

C.

Configure the security group for the database tier to allow inbound Microsoft SQL Server traffic from the security group for the application tier.

D.

Configure the security group for the database tier to allow outbound HTTPS traffic and Microsoft SQL Server traffic to the security group for the web tier.

E.

Configure the security group for the application tier to allow inbound HTTPS traffic from the security group for the web tier.

F.

Configure the security group for the application tier to allow outbound HTTPS traffic and Microsoft SQL Server traffic to the security group for the web tier.
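The tier-to-tier pattern the correct options rely on is security group chaining: an inbound rule that names the upstream tier's security group as the source instead of a CIDR range. A sketch of those rule objects (the `sg-*` IDs are hypothetical; 1433 is the standard Microsoft SQL Server port):

```python
def sg_chain_rule(port: int, source_sg: str) -> dict:
    """Ingress rule that admits TCP traffic only from another security
    group, as passed in IpPermissions to authorize_security_group_ingress."""
    return {
        "IpProtocol": "tcp",
        "FromPort": port,
        "ToPort": port,
        "UserIdGroupPairs": [{"GroupId": source_sg}],
    }

web_ingress = sg_chain_rule(443, "sg-alb")   # ALB -> web tier (HTTPS)
app_ingress = sg_chain_rule(443, "sg-web")   # web -> application tier (HTTPS)
db_ingress = sg_chain_rule(1433, "sg-app")   # application -> SQL Server
```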

Question 136

A company wants to create a long-term storage solution that will allow users to upload terabytes of images and videos. The company will use the images and videos to train machine learning (ML) models. The storage solution must be scalable and cost-optimized.

Which solution will meet these requirements?

Options:

A.

Provision an Amazon S3 bucket for users to upload images and videos. Copy the data from the S3 bucket to an Amazon FSx for Lustre file system to make the data available for ML model training.

B.

Provision an Amazon S3 bucket for users to upload images and videos. Configure the S3 bucket to make the data available to Amazon SageMaker AI training. Store the data in the S3 Intelligent-Tiering storage class.

C.

Configure an Amazon SageMaker AI notebook instance with 16 GB of storage. Create a custom application to allow users to upload images and videos directly to the notebook instance.

D.

Provision an Amazon S3 bucket for users to upload images and videos. Copy the data from the S3 bucket to an Amazon Elastic File System (Amazon EFS) file system to make the data available for ML model training.

Question 137

A company is designing an application to maintain a record of customer orders. The application will generate events. The company wants to use an Amazon EventBridge event bus to send the application's events to an Amazon DynamoDB table.

Which solution will meet these requirements?

Options:

A.

Use the EventBridge default event bus. Configure DynamoDB Streams for the DynamoDB table that hosts the customer order data.

B.

Create an EventBridge custom event bus. Create an AWS Lambda function as a target. Configure the Lambda function to forward the customer order data to the DynamoDB table.

C.

Create an EventBridge partner event bus. Create an Amazon Simple Notification Service (Amazon SNS) topic. Subscribe an AWS Lambda function to the SNS topic. Configure the Lambda function to read the customer order data and to forward the data to the DynamoDB table.

D.

Create an EventBridge partner event bus. Create an AWS Lambda function as a target. Configure the Lambda function to forward the customer order data to the DynamoDB table.

Question 138

A company runs an internet-facing web application on AWS and uses Amazon Route 53 with a public hosted zone.

The company wants to log DNS response codes to support future root cause analysis.

Which solution will meet these requirements?

Options:

A.

Use Route 53 to configure query logging.

B.

Use AWS CloudTrail to record all Route 53 queries.

C.

Use Amazon CloudWatch metrics for Route 53.

D.

Use AWS Trusted Advisor for root cause analysis.
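The query logging in option A is one configuration object. Route 53 is a global service, so the destination log group must live in us-east-1; the resulting logs include the DNS response code (NOERROR, NXDOMAIN, SERVFAIL, and so on) for each query. A sketch with hypothetical identifiers:

```python
# Arguments for the Route 53 create_query_logging_config API.
query_logging_request = {
    "HostedZoneId": "Z0EXAMPLE12345",  # hypothetical public hosted zone
    "CloudWatchLogsLogGroupArn": (
        # Query logs can only be delivered to CloudWatch Logs in us-east-1.
        "arn:aws:logs:us-east-1:123456789012:log-group:/aws/route53/example.com"
    ),
}

# route53 = boto3.client("route53")
# route53.create_query_logging_config(**query_logging_request)
```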

Question 139

A company runs a custom application on Amazon EC2 On-Demand Instances. The application has frontend nodes that must run 24/7. The backend nodes only need to run for short periods depending on the workload.

Frontend nodes accept jobs and place them in queues. Backend nodes asynchronously process jobs from the queues, and jobs can be restarted. The company wants to scale infrastructure based on workload, using the most cost-effective option.

Which solution meets these requirements MOST cost-effectively?

Options:

A.

Use Reserved Instances for the frontend nodes. Use AWS Fargate for the backend nodes.

B.

Use Reserved Instances for the frontend nodes. Use Spot Instances for the backend nodes.

C.

Use Spot Instances for the frontend nodes. Use Reserved Instances for the backend nodes.

D.

Use Spot Instances for the frontend nodes. Use AWS Fargate for the backend nodes.

Question 140

A company has hired an external vendor to work in the company's AWS account. The vendor uses an automated tool that the vendor hosts in its own AWS account. The vendor does not have IAM access to the company's AWS account. A solutions architect needs to grant access to the vendor.

Which solution will meet these requirements MOST securely?

Options:

A.

Create an IAM role in the company's account to delegate access to the vendor's IAM role. Attach the appropriate IAM policies to the new IAM role to grant the permissions that the vendor requires.

B.

Create an IAM user in the company's account with a password. Attach the appropriate IAM policies to the IAM user.

C.

Create an IAM group in the company's account. Add the IAM user for the vendor's automated tool from the vendor account to the IAM group. Attach policies to the group.

D.

Create a new identity provider (IdP) of provider type AWS account. Supply the vendor's AWS account ID and username. Attach policies to the IdP.

Question 141

An adventure company has launched a new feature on its mobile app. Users can use the feature to upload their hiking and rafting photos and videos anytime. The photos and videos are stored in Amazon S3 Standard storage in an S3 bucket and are served through Amazon CloudFront.

The company needs to optimize the cost of the storage. A solutions architect discovers that most of the uploaded photos and videos are accessed infrequently after 30 days. However, some of the uploaded photos and videos are accessed frequently after 30 days. The solutions architect needs to implement a solution that maintains millisecond retrieval availability of the photos and videos at the lowest possible cost.

Which solution will meet these requirements?

Options:

A.

Configure S3 Intelligent-Tiering on the S3 bucket.

B.

Configure an S3 Lifecycle policy to transition image objects and video objects from S3 Standard to S3 Glacier Deep Archive after 30 days.

C.

Replace Amazon S3 with an Amazon Elastic File System (Amazon EFS) file system that is mounted on Amazon EC2 instances.

D.

Add a Cache-Control: max-age header to the S3 image objects and S3 video objects. Set the header to 30 days.

Question 142

A company has a business system that generates hundreds of reports each day. The business system saves the reports to a network share in CSV format. The company needs to store this data in the AWS Cloud in near-real time for analysis.

Which solution will meet these requirements?

Options:

A.

Use AWS DataSync to transfer the files to Amazon S3. Create a scheduled task that runs at the end of each day.

B.

Create an Amazon S3 File Gateway. Update the business system to use a new network share from the S3 File Gateway.

C.

Use AWS DataSync to transfer the files to Amazon S3. Create an application that uses the DataSync API in the automation workflow.

D.

Deploy an AWS Transfer for SFTP endpoint. Create a script that checks for new files on the network share and uploads the new files by using SFTP.

Question 143

A company needs to grant a team of developers access to the company's AWS resources. The company must maintain a high level of security for the resources.

The company requires an access control solution that will prevent unauthorized access to the sensitive data.

Which solution will meet these requirements?

Options:

A.

Share the IAM user credentials for each development team member with the rest of the team to simplify access management and to streamline development workflows.

B.

Define IAM roles that have fine-grained permissions based on the principle of least privilege. Assign an IAM role to each developer.

C.

Create IAM access keys to grant programmatic access to AWS resources. Allow only developers to interact with AWS resources through API calls by using the access keys.

D.

Create an Amazon Cognito user pool. Grant developers access to AWS resources by using the user pool.

Question 144

A company is developing a rating system for its ecommerce web application. The company needs a solution to save ratings that users submit in an Amazon DynamoDB table.

The company wants to ensure that developers do not need to interact directly with the DynamoDB table. The solution must be scalable and reusable.

Which solution will meet these requirements with the LEAST operational overhead?

Options:

A.

Create an Application Load Balancer (ALB). Create an AWS Lambda function, and set the function as a target group in the ALB. Invoke the Lambda function by using the put_item method through the ALB.

B.

Create an AWS Lambda function. Configure the Lambda function to interact with the DynamoDB table by using the put-item method from Boto3. Invoke the Lambda function from the web application.

C.

Create an Amazon Simple Queue Service (Amazon SQS) queue and an AWS Lambda function that has an SQS trigger type. Instruct the developers to add customer ratings to the SQS queue as JSON messages. Configure the Lambda function to fetch the ratings from the queue and store the ratings in DynamoDB.

D.

Create an Amazon API Gateway REST API. Define a resource, and create a new POST method. Choose AWS as the integration type, and select DynamoDB as the service. Set the action to PutItem.
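Option D wires API Gateway directly to DynamoDB, so no application code runs between them; the integration's mapping template only has to marshal the request body into the low-level PutItem shape. A sketch of that marshaling in plain Python (table and attribute names are hypothetical):

```python
def to_put_item_request(table: str, rating: dict) -> dict:
    """Marshal a rating into the low-level DynamoDB PutItem payload.

    This is the shape an API Gateway AWS-type integration produces with
    a mapping template: strings are tagged "S" and numbers are passed
    as strings tagged "N" in DynamoDB's wire format.
    """
    return {
        "TableName": table,
        "Item": {
            "ratingId": {"S": rating["ratingId"]},
            "productId": {"S": rating["productId"]},
            "stars": {"N": str(rating["stars"])},
        },
    }

request = to_put_item_request(
    "Ratings", {"ratingId": "r-1", "productId": "p-9", "stars": 5}
)
```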

Question 145

A company is building a data analysis platform on AWS by using AWS Lake Formation. The platform will ingest data from different sources such as Amazon S3 and Amazon RDS. The company needs a secure solution to prevent access to portions of the data that contain sensitive information.

Which solution will meet these requirements with the LEAST operational overhead?

Options:

A.

Create an IAM role that includes permissions to access Lake Formation tables.

B.

Create data filters to implement row-level security and cell-level security.

C.

Create an AWS Lambda function that removes sensitive information before Lake Formation ingests the data.

D.

Create an AWS Lambda function that periodically queries and removes sensitive information from Lake Formation tables.

Question 146

A company is building an Amazon Elastic Kubernetes Service (Amazon EKS) cluster for its workloads. All secrets that are stored in Amazon EKS must be encrypted in the Kubernetes etcd key-value store.

Which solution will meet these requirements?

Options:

A.

Create a new AWS Key Management Service (AWS KMS) key. Use AWS Secrets Manager to manage, rotate, and store all secrets in Amazon EKS.

B.

Create a new AWS Key Management Service (AWS KMS) key. Enable Amazon EKS KMS secrets encryption on the Amazon EKS cluster.

C.

Create the Amazon EKS cluster with default options. Use the Amazon Elastic Block Store (Amazon EBS) Container Storage Interface (CSI) driver as an add-on.

D.

Create a new AWS Key Management Service (AWS KMS) key with the alias/aws/ebs alias. Enable default Amazon Elastic Block Store (Amazon EBS) volume encryption for the account.
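The envelope encryption in option B is expressed as a cluster encryption config. A sketch of the structure an EKS client would accept (the KMS key ARN and cluster name are hypothetical):

```python
# Encryption configuration for an EKS cluster. Scoping "resources" to
# "secrets" makes EKS envelope-encrypt Kubernetes Secret objects with
# the named KMS key before they are persisted to the etcd key-value
# store.
encryption_config = [
    {
        "resources": ["secrets"],
        "provider": {
            # Hypothetical customer managed KMS key
            "keyArn": "arn:aws:kms:us-west-2:123456789012:key/1234abcd-ex"
        },
    }
]

# eks = boto3.client("eks")
# eks.associate_encryption_config(
#     clusterName="my-cluster", encryptionConfig=encryption_config
# )
```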

Question 147

A company requires centralized auditing for all AWS accounts and compliance monitoring against AWS Foundational Security Best Practices (FSBP) with minimal operational overhead.

Which solution will meet these requirements?

Options:

A.

Deploy AWS Control Tower in the management account. Enable AWS Security Hub and Account Factory.

B.

Deploy AWS Control Tower in a member account.

C.

Use AWS Managed Services (AMS) with GuardDuty.

D.

Use AWS Managed Services (AMS) with Security Hub.

Question 148

A company uses Amazon EC2 instances and Amazon Elastic Block Store (Amazon EBS) volumes to run an application. The company creates one snapshot of each EBS volume every day.

The company needs to prevent users from accidentally deleting the EBS volume snapshots. The solution must not change the administrative rights of a storage administrator user.

Which solution will meet these requirements with the LEAST administrative effort?

Options:

A.

Create an IAM role that has permission to delete snapshots. Attach the role to a new EC2 instance. Use the AWS CLI from the new EC2 instance to delete snapshots.

B.

Create an IAM policy that denies snapshot deletion. Attach the policy to the storage administrator user.

C.

Add tags to the snapshots. Create tag-level retention rules in the Recycle Bin for EBS snapshots. Configure rule lock settings for the retention rules.

D.

Take EBS snapshots by using the EBS direct APIs. Copy the snapshots to an Amazon S3 bucket. Configure S3 Versioning and Object Lock on the bucket.
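The tag-level retention rule in option C corresponds to one Recycle Bin API call. A hedged sketch of the request (the tag key/value are hypothetical):

```python
# Arguments for the Recycle Bin (rbin) create_rule API. Snapshots that
# carry the matching tag are moved to the Recycle Bin on deletion and
# retained there for the retention period instead of being destroyed.
# The rule lock prevents the rule itself from being deleted or weakened
# without waiting out the unlock delay, all without changing any
# administrator's IAM rights.
rbin_rule = {
    "ResourceType": "EBS_SNAPSHOT",
    "RetentionPeriod": {
        "RetentionPeriodValue": 7,
        "RetentionPeriodUnit": "DAYS",
    },
    "ResourceTags": [
        {"ResourceTagKey": "backup", "ResourceTagValue": "daily"}  # hypothetical tag
    ],
    "LockConfiguration": {
        "UnlockDelay": {"UnlockDelayValue": 7, "UnlockDelayUnit": "DAYS"}
    },
}

# rbin = boto3.client("rbin")
# rbin.create_rule(Description="retain daily EBS snapshots", **rbin_rule)
```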

Question 149

A company discovers that an Amazon DynamoDB Accelerator (DAX) cluster for the company's web application workload is not encrypting data at rest. The company needs to resolve the security issue.

Which solution will meet this requirement?

Options:

A.

Stop the existing DAX cluster. Enable encryption at rest for the existing DAX cluster, and start the cluster again.

B.

Delete the existing DAX cluster. Recreate the DAX cluster, and configure the new cluster to encrypt the data at rest.

C.

Update the configuration of the existing DAX cluster to encrypt the data at rest.

D.

Integrate the existing DAX cluster with AWS Security Hub to automatically enable encryption at rest.

Question 150

A company has AWS Lambda functions that use environment variables. The company does not want its developers to see environment variables in plaintext.

Which solution will meet these requirements?

Options:

A.

Deploy code to Amazon EC2 instances instead of using Lambda functions.

B.

Configure SSL encryption on the Lambda functions to use AWS CloudHSM to store and encrypt the environment variables.

C.

Create a certificate in AWS Certificate Manager (ACM). Configure the Lambda functions to use the certificate to encrypt the environment variables.

D.

Create an AWS Key Management Service (AWS KMS) key. Enable encryption helpers on the Lambda functions to use the KMS key to store and encrypt the environment variables.
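Option D amounts to attaching a customer managed KMS key to the function. A sketch of the configuration update (function name and key ARN are hypothetical):

```python
# Configuration update that attaches a customer managed KMS key to a
# Lambda function. Lambda then uses this key to encrypt the function's
# environment variables at rest, and the console's encryption helpers
# can additionally encrypt individual values client-side so developers
# who can read the function configuration see only ciphertext.
config_update = {
    "FunctionName": "orders-fn",  # hypothetical function
    "KMSKeyArn": "arn:aws:kms:us-east-1:123456789012:key/1234abcd-ex",
}

# lambda_client = boto3.client("lambda")
# lambda_client.update_function_configuration(**config_update)
```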

Question 151

A company has multiple AWS accounts with applications deployed in the us-west-2 Region. Application logs are stored within Amazon S3 buckets in each account. The company wants to build a centralized log analysis solution that uses a single S3 bucket. Logs must not leave us-west-2, and the company wants to incur minimal operational overhead.

Which solution will meet these requirements?

Options:

A.

Create an S3 Lifecycle policy that copies the objects from one of the application S3 buckets to the centralized S3 bucket.

B.

Use S3 Same-Region Replication to replicate logs from the S3 buckets to another S3 bucket in us-west-2. Use this S3 bucket for log analysis.

C.

Write a script that uses the PutObject API operation every day to copy the entire contents of the buckets to another S3 bucket in us-west-2. Use this S3 bucket for log analysis.

D.

Write AWS Lambda functions in these accounts that are triggered every time logs are delivered to the S3 buckets (the s3:ObjectCreated:* event). Copy the logs to another S3 bucket in us-west-2. Use this S3 bucket for log analysis.

Question 152

An insurance company is creating an application to record personal user data. The data includes users’ names, ages, and health data. The company wants to run the application in a private subnet on AWS.

Because of data security requirements, the company must have access to the operating system of the compute resources that run the application tier. The company must use a low-latency NoSQL database to store the data.

Which solution will meet these requirements?

Options:

A.

Use Amazon EC2 instances for the application tier. Use an Amazon DynamoDB table for the database tier. Create a VPC endpoint for DynamoDB. Assign the instances an instance profile that has permission to access DynamoDB.

B.

Use AWS Lambda functions for the application tier. Use an Amazon DynamoDB table for the database tier. Assign a Lambda function an appropriate IAM role to access the table.

C.

Use AWS Fargate for the application tier. Create an Amazon Aurora PostgreSQL instance inside a private subnet for the database tier.

D.

Use Amazon EC2 instances for the application tier. Use an Amazon S3 bucket to store the data in JSON format. Configure the application to use Amazon Athena to read and write the data to and from the S3 bucket.

Question 153

An online gaming company is transitioning user data storage to Amazon DynamoDB to support the company's growing user base. The current architecture includes DynamoDB tables that contain user profiles, achievements, and in-game transactions.

The company needs to design a robust, continuously available, and resilient DynamoDB architecture to maintain a seamless gaming experience for users.

Which solution will meet these requirements MOST cost-effectively?

Options:

A.

Create DynamoDB tables in a single AWS Region. Use on-demand capacity mode. Use global tables to replicate data across multiple Regions.

B.

Use DynamoDB Accelerator (DAX) to cache frequently accessed data. Deploy tables in a single AWS Region and enable auto scaling. Configure Cross-Region Replication manually to additional Regions.

C.

Create DynamoDB tables in multiple AWS Regions. Use on-demand capacity mode. Use DynamoDB Streams for Cross-Region Replication between Regions.

D.

Use DynamoDB global tables for automatic multi-Region replication. Deploy tables in multiple AWS Regions. Use provisioned capacity mode. Enable auto scaling.

Question 154

A company runs container applications by using Amazon Elastic Kubernetes Service (Amazon EKS) and the Kubernetes Horizontal Pod Autoscaler. The workload is not consistent throughout the day. A solutions architect notices that the number of nodes does not automatically scale out when the existing nodes have reached maximum capacity in the cluster, which causes performance issues.

Which solution will resolve this issue with the LEAST administrative overhead?

Options:

A.

Scale out the nodes by tracking the memory usage.

B.

Use the Kubernetes Cluster Autoscaler to manage the number of nodes in the cluster.

C.

Use an AWS Lambda function to resize the EKS cluster automatically.

D.

Use an Amazon EC2 Auto Scaling group to distribute the workload.

Question 155

An online food delivery company wants to optimize its storage costs. The company has been collecting operational data for the last 10 years in a data lake that was built on Amazon S3 by using a Standard storage class. The company does not keep data that is older than 7 years. A solutions architect frequently uses data from the past 6 months for reporting and runs queries on data from the last 2 years about once a month. Data that is more than 2 years old is rarely accessed and is only used for audit purposes.

Which combination of solutions will optimize the company's storage costs? (Select TWO.)

Options:

A.

Create an S3 Lifecycle configuration rule to transition data that is older than 6 months to the S3 Standard-Infrequent Access (S3 Standard-IA) storage class. Create another S3 Lifecycle configuration rule to transition data that is older than 2 years to the S3 Glacier Deep Archive storage class.

B.

Create an S3 Lifecycle configuration rule to transition data that is older than 6 months to the S3 One Zone-Infrequent Access (S3 One Zone-IA) storage class. Create another S3 Lifecycle configuration rule to transition data that is older than 2 years to the S3 Glacier Flexible Retrieval storage class.

C.

Use the S3 Intelligent-Tiering storage class to store data instead of the S3 Standard storage class.

D.

Create an S3 Lifecycle expiration rule to delete data that is older than 7 years.

E.

Create an S3 Lifecycle configuration rule to transition data that is older than 7 years to the S3 Glacier Deep Archive storage class.
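The lifecycle rules described in options A and D can be sketched as one configuration object. The day thresholds below approximate the stated ages (6 months as 180 days, 2 years as 730, 7 years as 2555) and are assumptions, not values from the question:

```python
# S3 Lifecycle configuration combining the two transition rules from
# option A with the 7-year expiration from option D.
lifecycle_config = {
    "Rules": [
        {
            "ID": "tier-down-aging-data",
            "Status": "Enabled",
            "Filter": {},  # applies to the whole bucket
            "Transitions": [
                {"Days": 180, "StorageClass": "STANDARD_IA"},
                {"Days": 730, "StorageClass": "DEEP_ARCHIVE"},
            ],
        },
        {
            "ID": "expire-after-7-years",
            "Status": "Enabled",
            "Filter": {},
            "Expiration": {"Days": 2555},
        },
    ]
}

# s3 = boto3.client("s3")
# s3.put_bucket_lifecycle_configuration(
#     Bucket="data-lake-bucket",  # hypothetical bucket name
#     LifecycleConfiguration=lifecycle_config,
# )
```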

Question 156

An application is experiencing performance issues based on increased demand. This increased demand is on read-only historical records that are pulled from an Amazon RDS-hosted database with custom views and queries. A solutions architect must improve performance without changing the database structure.

Which approach will improve performance and MINIMIZE management overhead?

Options:

A.

Deploy Amazon DynamoDB, move all the data, and point to DynamoDB.

B.

Deploy Amazon ElastiCache (Redis OSS) and cache the data for the application.

C.

Deploy Memcached on Amazon EC2 and cache the data for the application.

D.

Deploy Amazon DynamoDB Accelerator (DAX) on Amazon RDS to improve cache performance.

Question 157

A gaming company is building an application with Voice over IP capabilities. The application will serve traffic to users across the world. The application needs to be highly available with automated failover across AWS Regions. The company wants to minimize the latency of users without relying on IP address caching on user devices.

What should a solutions architect do to meet these requirements?

Options:

A.

Use AWS Global Accelerator with health checks.

B.

Use Amazon Route 53 with a geolocation routing policy.

C.

Create an Amazon CloudFront distribution that includes multiple origins.

D.

Create an Application Load Balancer that uses path-based routing.

Question 158

A company is building a serverless application to process large video files that users upload. The application performs multiple tasks to process each video file. Processing can take up to 30 minutes for the largest files.

The company needs a scalable architecture to support the processing application.

Which solution will meet these requirements?

Options:

A.

Store the uploaded video files in Amazon Elastic File System (Amazon EFS). Configure a schedule in Amazon EventBridge Scheduler to invoke an AWS Lambda function periodically to check for new files. Configure the Lambda function to perform all the processing tasks.

B.

Store the uploaded video files in Amazon Elastic File System (Amazon EFS). Configure an Amazon EFS event notification to start an AWS Step Functions workflow that uses AWS Fargate tasks to perform the processing tasks.

C.

Store the uploaded video files in Amazon S3. Configure an Amazon S3 event notification to send an event to Amazon EventBridge when a user uploads a new video file. Configure an AWS Step Functions workflow as a target for an EventBridge rule. Use the workflow to manage AWS Fargate tasks to perform the processing tasks.

D.

Store the uploaded video files in Amazon S3. Configure an Amazon S3 event notification to invoke an AWS Lambda function when a user uploads a new video file. Configure the Lambda function to perform all the processing tasks.

Question 159

A company hosts its application on several Amazon EC2 instances inside a VPC. The company creates a dedicated Amazon S3 bucket for each customer to store their relevant information in Amazon S3.

The company wants to ensure that the application running on EC2 instances can securely access only the S3 buckets that belong to the company's AWS account.

Which solution will meet these requirements with the LEAST operational overhead?

Options:

A.

Create a gateway endpoint for Amazon S3 that is attached to the VPC. Update the IAM instance profile policy to provide access to only the specific buckets that the application needs.

B.

Create a NAT gateway in a public subnet with a security group that allows access to only Amazon S3. Update the route tables to use the NAT gateway.

C.

Create a gateway endpoint for Amazon S3 that is attached to the VPC. Update the IAM instance profile policy with a Deny action and the following condition key:

D.

Create a NAT gateway in a public subnet. Update the route tables to use the NAT gateway. Assign bucket policies for all buckets with a Deny action and the following condition key:
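The gateway endpoint in option A is created with a single EC2 API call. A sketch of the request (VPC, Region, and route table IDs are hypothetical):

```python
# Arguments for the EC2 create_vpc_endpoint API. A gateway endpoint
# adds an S3 prefix-list route to the given route tables, so instances
# in the VPC reach S3 over the AWS network without a NAT gateway, and
# gateway endpoints carry no hourly or per-GB charge.
endpoint_request = {
    "VpcId": "vpc-0abc1234",                    # hypothetical VPC
    "ServiceName": "com.amazonaws.us-west-2.s3",
    "VpcEndpointType": "Gateway",
    "RouteTableIds": ["rtb-0abc1234"],          # hypothetical route table
}

# ec2 = boto3.client("ec2")
# ec2.create_vpc_endpoint(**endpoint_request)
```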

Question 160

A company is developing a microservices-based application to manage the company's delivery operations. The application consists of microservices that process orders, manage a fleet of delivery vehicles, and optimize delivery routes.

The microservices must be able to scale independently and must be able to handle bursts of traffic without any data loss.

Which solution will meet these requirements with the LEAST operational overhead?

Options:

A.

Use Amazon API Gateway REST APIs to establish communication between microservices. Deploy the application on Amazon EC2 instances in Auto Scaling groups.

B.

Use Amazon Simple Queue Service (Amazon SQS) to establish communication between microservices. Deploy the application on Amazon Elastic Container Service (Amazon ECS) containers on AWS Fargate.

C.

Use WebSocket-based communication between microservices. Deploy the application on Amazon EC2 instances in Auto Scaling groups.

D.

Use Amazon Simple Notification Service (Amazon SNS) to establish communication between microservices. Deploy the application on Amazon Elastic Container Service (Amazon ECS) containers on Amazon EC2 instances.

Question 161

A company wants to re-architect a large-scale web application to a serverless microservices architecture. The application uses Amazon EC2 instances and is written in Python.

The company selected one component of the web application to test as a microservice. The component supports hundreds of requests per second. The company wants to create and test the microservice on an AWS solution that supports Python. The solution must also scale automatically and require minimal infrastructure and minimal operational support.

Which solution will meet these requirements?

Options:

A.

Use a Spot Fleet with Auto Scaling of EC2 instances that run the most recent Amazon Linux operating system.

B.

Use an AWS Elastic Beanstalk web server environment that has high availability configured.

C.

Use Amazon Elastic Kubernetes Service (Amazon EKS). Launch Auto Scaling groups of self-managed EC2 instances.

D.

Use an AWS Lambda function that runs custom-developed code.

Question 162

A company runs an application as a task in an Amazon Elastic Container Service (Amazon ECS) cluster. The application must have read and write access to a specific group of Amazon S3 buckets. The S3 buckets are in the same AWS Region and AWS account as the ECS cluster. The company needs to grant the application access to the S3 buckets according to the principle of least privilege.

Which combination of solutions will meet these requirements? (Select TWO.)

Options:

A.

Add a tag to each bucket. Create an IAM policy that includes a StringEquals condition that matches the tags and values of the buckets.

B.

Create an IAM policy that lists the full Amazon Resource Name (ARN) for each S3 bucket.

C.

Attach the IAM policy to the instance role of the ECS task.

D.

Create an IAM policy that includes a wildcard Amazon Resource Name (ARN) that matches all combinations of the S3 bucket names.

E.

Attach the IAM policy to the task role of the ECS task.
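The least-privilege combination in this question pairs an explicit bucket-ARN policy with the ECS *task* role (not the instance role, which would grant access to every task on the host). A sketch of such a policy as a Python dict, with placeholder bucket names:

```python
import json

# Placeholder bucket names (assumptions) standing in for the specific
# group of buckets the application needs.
BUCKETS = ["example-bucket-1", "example-bucket-2"]

resources = []
for name in BUCKETS:
    resources.append(f"arn:aws:s3:::{name}")        # bucket-level actions
    resources.append(f"arn:aws:s3:::{name}/*")      # object-level actions

# Least-privilege read/write policy listing each bucket ARN explicitly,
# intended for attachment to the ECS task role.
task_role_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:PutObject", "s3:ListBucket"],
        "Resource": resources,
    }],
}

print(json.dumps(task_role_policy, indent=2))
```

Listing full ARNs keeps the grant auditable; a wildcard ARN (option D) would silently match buckets the application was never meant to touch.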

Question 163

A company is planning to deploy its application on an Amazon Aurora PostgreSQL Serverless v2 cluster. The application will receive large amounts of traffic. The company wants to optimize the storage performance of the cluster as the load on the application increases

Which solution will meet these requirements MOST cost-effectively?

Options:

A.

Configure the cluster to use the Aurora Standard storage configuration.

B.

Configure the cluster storage type as Provisioned IOPS.

C.

Configure the cluster storage type as General Purpose.

D.

Configure the cluster to use the Aurora I/O-Optimized storage configuration.

Question 164

A company has an application that runs on Amazon EC2 instances and uses an Amazon Aurora database. The EC2 instances connect to the Aurora database by using user names and passwords that the company stores locally in a file.

The company changes the user names and passwords every month. The company wants to minimize the operational overhead of credential management.

Which solution will meet these requirements?

Options:

A.

Store the credentials as a secret within AWS Secrets Manager. Assign IAM permissions to the secret. Reconfigure the application to call the secret. Enable rotation on the secret and configure rotation to occur on a monthly schedule.

B.

Use AWS Systems Manager Parameter Store to create a new parameter for the credentials. Use IAM policies to restrict access to the parameter. Reconfigure the application to access the parameter.

C.

Create an Amazon S3 bucket to store objects. Use an AWS Key Management Service (AWS KMS) key to encrypt the objects. Migrate the credentials file to the S3 bucket. Update the application to retrieve the credentials file from the S3 bucket.

D.

Create an encrypted Amazon Elastic Block Store (Amazon EBS) volume for each EC2 instance. Attach the encrypted EBS volumes to the EC2 instances. Migrate the credentials file to the new EBS volumes.

Question 165

A developer creates a web application that runs on Amazon EC2 instances behind an Application Load Balancer (ALB). The instances are in an Auto Scaling group. The developer reviews the deployment and notices some suspicious traffic to the application. The traffic is malicious and is coming from a single public IP address. A solutions architect must block the public IP address.

Which solution will meet this requirement?

Options:

A.

Create a security group rule to deny all inbound traffic from the suspicious IP address. Associate the security group with the ALB.

B.

Implement Amazon Detective to monitor traffic and to block malicious activity from the internet. Configure Detective to integrate with the ALB.

C.

Implement AWS Resource Access Manager (AWS RAM) to manage traffic rules and to block malicious activity from the internet. Associate AWS RAM with the ALB.

D.

Add the malicious IP address to an IP set in AWS WAF. Create a web ACL. Include an IP set rule with the action set to BLOCK. Associate the web ACL with the ALB.
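The AWS WAF pieces that option D describes are an IP set holding the malicious address and a web ACL rule that references it with a BLOCK action. A sketch of the two structures in boto3-style parameter shape; the names, priority, and ARN are placeholders:

```python
malicious_ip = "203.0.113.10/32"  # example address in CIDR form

# IP set parameters (names are assumptions). Scope is REGIONAL because
# the web ACL will be associated with an ALB rather than CloudFront.
ip_set = {
    "Name": "blocked-ips",
    "Scope": "REGIONAL",
    "IPAddressVersion": "IPV4",
    "Addresses": [malicious_ip],
}

# Web ACL rule referencing the IP set and blocking matching requests.
# The ARN below is a placeholder for the IP set's real ARN.
web_acl_rule = {
    "Name": "block-malicious-ip",
    "Priority": 0,
    "Statement": {
        "IPSetReferenceStatement": {"ARN": "arn:aws:wafv2:region:account:regional/ipset/blocked-ips/id"}
    },
    "Action": {"Block": {}},
    "VisibilityConfig": {
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "BlockMaliciousIP",
    },
}
```

Security groups cannot express deny rules, which is why option A fails and a WAF web ACL on the ALB is the fit here.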

Question 166

A company wants to migrate an on-premises video processing application to AWS. Processing times range from 5 to 30 minutes. The application must run multiple jobs in parallel. The application processes videos that users upload to an Amazon S3 bucket.

Which solution will meet these requirements with the LEAST operational overhead?

Options:

A.

Configure the S3 bucket to send S3 event notifications to an Amazon SQS standard queue. Deploy the application on an Amazon ECS cluster. Configure automatic scaling for AWS Fargate tasks based on the SQS queue size.

B.

Configure the S3 bucket to send S3 event notifications to an Amazon SQS FIFO queue. Deploy the application on Amazon EC2 instances. Create an Auto Scaling group to scale based on the SQS queue size.

C.

Configure the S3 bucket to send S3 event notifications to an Amazon SQS standard queue. Deploy the application as an AWS Lambda function. Configure the Lambda function to poll the SQS queue.

D.

Configure the S3 bucket to send S3 event notifications to an Amazon SNS topic. Deploy the application as an AWS Lambda function. Configure the SNS topic to invoke the Lambda function.
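The 5-to-30-minute jobs rule out Lambda's 15-minute limit, so option A scales Fargate tasks on queue depth instead. The scaling math behind a backlog-per-task target can be sketched as follows; the target backlog and task bounds are illustrative assumptions:

```python
ACCEPTABLE_BACKLOG_PER_TASK = 10  # assumed target: ~10 queued videos per Fargate task

def desired_task_count(queue_depth: int, minimum: int = 1, maximum: int = 50) -> int:
    """Scale the ECS service so each task owns roughly the target backlog.

    Mirrors the idea behind target tracking on a backlog-per-task metric:
    divide queue depth by the per-task target (rounded up) and clamp to
    the service's min/max task counts.
    """
    wanted = -(-queue_depth // ACCEPTABLE_BACKLOG_PER_TASK)  # ceiling division
    return max(minimum, min(maximum, wanted))

assert desired_task_count(0) == 1        # never scale to zero here (assumed floor)
assert desired_task_count(95) == 10      # 95 messages / 10 per task, rounded up
assert desired_task_count(10_000) == 50  # clamped at the assumed maximum
```

In practice this ratio is published as a custom CloudWatch metric (queue depth divided by running task count) and used as the target-tracking metric for ECS service auto scaling.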

Question 167

A company is building a serverless application to process clickstream data from its website. The clickstream data is sent to an Amazon Kinesis Data Streams data stream from the application web servers.

The company wants to enrich the clickstream data by joining the clickstream data with customer profile data from an Amazon Aurora Multi-AZ database. The company wants to use Amazon Redshift to analyze the enriched data. The solution must be highly available.

Which solution will meet these requirements?

Options:

A.

Use an AWS Lambda function to process and enrich the clickstream data. Use the same Lambda function to write the clickstream data to Amazon S3. Use Amazon Redshift Spectrum to query the enriched data in Amazon S3.

B.

Use an Amazon EC2 Spot Instance to poll the data stream and enrich the clickstream data. Configure the EC2 instance to use the COPY command to send the enriched results to Amazon Redshift.

C.

Use an Amazon Elastic Container Service (Amazon ECS) task with AWS Fargate Spot capacity to poll the data stream and enrich the clickstream data. Configure an Amazon EC2 instance to use the COPY command to send the enriched results to Amazon Redshift.

D.

Use Amazon Kinesis Data Firehose to load the clickstream data from Kinesis Data Streams to Amazon S3. Use AWS Glue crawlers to infer the schema and populate the AWS Glue Data Catalog. Use Amazon Athena to query the raw data in Amazon S3.

Question 168

A company has an application that runs on Amazon EC2 instances in an Auto Scaling group. The application uses hardcoded credentials to access an Amazon RDS database.

To comply with new regulations, the company needs to automatically rotate the database password for the application service account every 90 days.

Which solution will meet these requirements?

Options:

A.

Create an AWS Lambda function to generate new passwords and upload them to EC2 instances by using SSH.

B.

Create a secret for the database credentials in AWS Secrets Manager. Enable rotation every 90 days. Modify the application to retrieve credentials from Secrets Manager.

C.

Create an Amazon ECS task to rotate passwords and upload them to EC2 instances.

D.

Create a new EC2 instance that runs a cron job to rotate passwords.
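With option B, rotation happens inside Secrets Manager and the application simply fetches the current secret at connect time. A sketch of the consuming side; the boto3 call is shown as a comment, and the secret's JSON keys follow the shape Secrets Manager typically stores for RDS credentials (the values are placeholders):

```python
import json

# In the real application (option B), the payload would come from:
#   secret_string = boto3.client("secretsmanager").get_secret_value(
#       SecretId="prod/app/db")["SecretString"]
# Here we use a hardcoded placeholder payload of the same shape.
secret_string = '{"username": "app_user", "password": "s3cr3t", "host": "db.example.internal", "port": 3306}'

creds = json.loads(secret_string)

# Build a connection string from the freshly fetched credentials, so a
# 90-day rotation never requires an application change.
dsn = f"mysql://{creds['username']}@{creds['host']}:{creds['port']}"
assert dsn == "mysql://app_user@db.example.internal:3306"
```

Because the application re-reads the secret rather than caching a hardcoded password, Secrets Manager's scheduled rotation satisfies the 90-day requirement with no redeployments.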

Question 169

A company processes streaming data by using Amazon Kinesis Data Streams and an AWS Lambda function. The streaming data comes from devices that are connected to the internet. The company is experiencing scaling problems and needs to implement shard-level control and custom checkpointing.

Which solution will meet these requirements with the LEAST latency?

Options:

A.

Connect Kinesis Data Streams to Amazon Data Firehose to ingest incoming data to an Amazon S3 bucket. Configure S3 Event Notifications to invoke the Lambda function.

B.

Increase the provisioned concurrency settings for the Lambda function. Stream the data from Kinesis Data Streams to an Amazon Simple Queue Service (Amazon SQS) standard queue. Invoke the Lambda function to process the messages.

C.

Run the Lambda function code in an Amazon Elastic Container Service (Amazon ECS) container that runs on AWS Fargate. Change the code to use the Kinesis Client Library (KCL).

D.

Increase the memory and provisioned concurrency settings for the Lambda function. Stream the data from Kinesis Data Streams to an Amazon Simple Queue Service (Amazon SQS) FIFO queue. Configure the Lambda function to be invoked by the SQS queue.

Question 170

A company needs a solution to process customer orders from a global ecommerce platform. The solution must automatically start processing new orders immediately and must maintain a history of all order processing attempts.

Which solution will meet these requirements in the MOST cost-effective way?

Options:

A.

Create an Amazon EventBridge rule that invokes an AWS Lambda function once every minute to check for new orders. Configure the Lambda function to process orders and store results in Amazon Aurora.

B.

Create an Amazon EventBridge event pattern that monitors the ecommerce platform's order events. Configure an EventBridge rule to invoke an AWS Lambda function when the platform receives a new order. Configure the function to store the results in Amazon DynamoDB.

C.

Use an Amazon EC2 instance to poll the ecommerce platform for new orders. Configure the instance to invoke an AWS Lambda function to process new orders. Configure the function to log results to Amazon CloudWatch.

D.

Use an Amazon SQS queue to invoke an AWS Lambda function when the platform receives a new order. Configure the function to process batches of orders and to store results in an Amazon EFS file system.
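Option B is event-driven rather than polling, which is what makes it cost-effective. The selection logic of an EventBridge event pattern can be illustrated with a toy matcher (simplified to exact-value matching; the source and detail-type names are assumptions):

```python
# Hypothetical event pattern for option B: match only new-order events
# from the ecommerce platform. Field names/values are assumptions.
event_pattern = {
    "source": ["ecommerce.platform"],
    "detail-type": ["OrderPlaced"],
}

def matches(pattern: dict, event: dict) -> bool:
    """Toy version of EventBridge matching: every pattern field must list
    the event's value for that field. Real patterns support more operators."""
    return all(event.get(key) in allowed for key, allowed in pattern.items())

new_order = {"source": "ecommerce.platform", "detail-type": "OrderPlaced",
             "detail": {"orderId": "123"}}
heartbeat = {"source": "ecommerce.platform", "detail-type": "Heartbeat"}

assert matches(event_pattern, new_order)
assert not matches(event_pattern, heartbeat)
```

Only matching events invoke the Lambda function, so there is no per-minute polling cost and order history lands in DynamoDB as each event is processed.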

Question 171

A company is building a new web-based customer relationship management application. The application will use several Amazon EC2 instances that are backed by Amazon EBS volumes behind an Application Load Balancer (ALB). The application will also use an Amazon Aurora database. All data for the application must be encrypted at rest and in transit.

Which solution will meet these requirements?

Options:

A.

Use AWS KMS certificates on the ALB to encrypt data in transit. Use AWS Certificate Manager (ACM) to encrypt the EBS volumes and Aurora database storage at rest.

B.

Use the AWS root account to log in to the AWS Management Console. Upload the company's encryption certificates. While in the root account, select the option to turn on encryption for all data at rest and in transit for the account.

C.

Use AWS KMS to encrypt the EBS volumes and Aurora database storage at rest. Attach an AWS Certificate Manager (ACM) certificate to the ALB to encrypt data in transit.

D.

Use BitLocker to encrypt all data at rest. Import the company's TLS certificate keys to AWS KMS. Attach the KMS keys to the ALB to encrypt data in transit.

Question 172

A company wants to migrate a visual search application from an on-premises environment to AWS. The application uses NFS storage to cache images. The image cache is currently a few terabytes in size. The company needs to migrate to a cost-effective cloud alternative.

Which solution will meet these requirements in the MOST cost-effective way?

Options:

A.

Use an Amazon ElastiCache (Memcached) cluster as the image cache. Set the cache TTL according to the required image lifetime in the cache.

B.

Use compute-optimized Amazon EC2 instances with instance store volumes as the image cache. Recycle EC2 instances for cache invalidation.

C.

Use an Amazon EFS One Zone file system as the image cache. Configure the application to use the EFS mount target.

D.

Use Amazon S3 Express One Zone to store the images. Store the S3 object URLs in an Amazon DynamoDB table. Use DynamoDB TTL to invalidate image cache entries.
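Option D pairs low-latency S3 Express One Zone storage with DynamoDB TTL for cache invalidation. The TTL attribute is just an epoch-seconds timestamp on the item; a sketch of building such an item (the attribute names, bucket, and 7-day lifetime are assumptions):

```python
import time

CACHE_LIFETIME_SECONDS = 7 * 24 * 3600  # assumed 7-day image lifetime

def cache_item(image_key: str, s3_url: str) -> dict:
    """Build a DynamoDB item holding the S3 object URL plus a 'ttl'
    attribute (epoch seconds). DynamoDB TTL deletes the item once the
    current time passes that value, invalidating the cache entry."""
    return {
        "image_key": image_key,
        "s3_url": s3_url,
        "ttl": int(time.time()) + CACHE_LIFETIME_SECONDS,
    }

item = cache_item("cat.jpg", "s3://example-cache/cat.jpg")
assert item["ttl"] > time.time()
```

The TTL attribute name must match the one enabled on the table; expiry is handled by DynamoDB itself, with no sweeper process to operate.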

Question 173

A company is developing a social media application that must scale to meet demand spikes and handle ordered processes.

Which AWS services meet these requirements?

Options:

A.

ECS with Fargate, RDS, and SQS for decoupling.

B.

ECS with Fargate, RDS, and SNS for decoupling.

C.

DynamoDB, Lambda, DynamoDB Streams, and Step Functions.

D.

Elastic Beanstalk, RDS, and SNS for decoupling.

Question 174

A company has a web application with sporadic usage patterns. Usage is heavy at the beginning of each month, moderate weekly, and unpredictable during the week. The application uses a MySQL database and must move to AWS without database modifications.

Which solution will meet these requirements?

Options:

A.

Amazon DynamoDB

B.

Amazon RDS for MySQL

C.

MySQL-compatible Amazon Aurora Serverless

D.

MySQL on Amazon EC2 in an Auto Scaling group

Question 175

A company is migrating mobile banking applications to run on Amazon EC2 instances in a VPC. Backend service applications run in an on-premises data center. The data center has an AWS Direct Connect connection into AWS. The applications that run in the VPC need to resolve DNS requests to an on-premises Active Directory domain that runs in the data center.

Which solution will meet these requirements with the LEAST administrative overhead?

Options:

A.

Provision a set of EC2 instances across two Availability Zones in the VPC as caching DNS servers to resolve DNS queries from the application servers within the VPC.

B.

Provision an Amazon Route 53 private hosted zone. Configure NS records that point to on-premises DNS servers.

C.

Create DNS endpoints by using Amazon Route 53 Resolver. Add conditional forwarding rules to resolve DNS namespaces between the on-premises data center and the VPC.

D.

Provision a new Active Directory domain controller in the VPC with a bidirectional trust between this new domain and the on-premises Active Directory domain.

Question 176

A company runs a Microsoft Windows SMB file share on-premises to support an application. The company wants to migrate the application to AWS. The company wants to share storage across multiple Amazon EC2 instances.

Which solutions will meet these requirements with the LEAST operational overhead? (Select TWO.)

Options:

A.

Create an Amazon Elastic File System (Amazon EFS) file system with elastic throughput.

B.

Create an Amazon FSx for NetApp ONTAP file system.

C.

Use Amazon Elastic Block Store (Amazon EBS) to create a self-managed Windows file share on the instances.

D.

Create an Amazon FSx for Windows File Server file system.

E.

Create an Amazon FSx for OpenZFS file system.

Question 177

A company is running a web-based game in two Availability Zones in the us-west-2 Region. The web servers use an Application Load Balancer (ALB) in public subnets. The ALB has an SSL certificate from AWS Certificate Manager (ACM) with a custom domain name. The game is written in JavaScript and runs entirely in a user's web browser.

The game is increasing in popularity in many countries around the world. The company wants to update the application architecture and optimize costs without compromising performance.

What should a solutions architect do to meet these requirements?

Options:

A.

Use Amazon CloudFront and create a global distribution that points to the ALB. Reuse the existing certificate from ACM for the CloudFront distribution. Use Amazon Route 53 to update the application alias to point to the distribution.

B.

Use AWS CloudFormation to deploy the application stack to AWS Regions near countries where the game is popular. Use ACM to create a new certificate for each application instance. Use Amazon Route 53 with a geolocation routing policy to direct traffic to the local application instance.

C.

Use Amazon S3 and create an S3 bucket in AWS Regions near countries where the game is popular. Deploy the HTML and JavaScript files to each S3 bucket. Use ACM to create a new certificate for each S3 bucket. Use Amazon Route 53 with a geolocation routing policy to direct traffic to the local S3 bucket.

D.

Use Amazon S3 and create an S3 bucket in us-west-2. Deploy the HTML and JavaScript files to the S3 bucket. Use Amazon CloudFront and create a global distribution with the S3 bucket as the origin. Use ACM to create a new certificate for the distribution. Use Amazon Route 53 to update the application alias to point to the distribution.

Question 178

A company is migrating its workloads to AWS. The company has sensitive and critical data in on-premises relational databases that run on SQL Server instances. The company wants to use the AWS Cloud to increase security and reduce operational overhead for the databases.

Which solution will meet these requirements?

Options:

A.

Migrate the databases to Amazon EC2 instances. Use an AWS Key Management Service (AWS KMS) AWS managed key for encryption.

B.

Migrate the databases to a Multi-AZ Amazon RDS for SQL Server DB instance. Use an AWS Key Management Service (AWS KMS) AWS managed key for encryption.

C.

Migrate the data to an Amazon S3 bucket. Use Amazon Macie to ensure data security.

D.

Migrate the databases to an Amazon DynamoDB table. Use Amazon CloudWatch Logs to ensure data security.

Question 179

A company has an employee web portal. Employees log in to the portal to view payroll details. The company is developing a new system to give employees the ability to upload scanned documents for reimbursement. The company runs a program to extract text-based data from the documents and attach the extracted information to each employee's reimbursement IDs for processing.

The employee web portal requires 100% uptime. The document extract program runs infrequently throughout the day on an on-demand basis. The company wants to build a scalable and cost-effective new system that will require minimal changes to the existing web portal. The company does not want to make any code changes.

Which solution will meet these requirements with the LEAST implementation effort?

Options:

A.

Run Amazon EC2 On-Demand Instances in an Auto Scaling group for the web portal. Use an AWS Lambda function to run the document extract program. Invoke the Lambda function when an employee uploads a new reimbursement document.

B.

Run Amazon EC2 Spot Instances in an Auto Scaling group for the web portal. Run the document extract program on EC2 Spot Instances. Start document extract program instances when an employee uploads a new reimbursement document.

C.

Purchase a Savings Plan to run the web portal and the document extract program. Run the web portal and the document extract program in an Auto Scaling group.

D.

Create an Amazon S3 bucket to host the web portal. Use Amazon API Gateway and an AWS Lambda function for the existing functionalities. Use the Lambda function to run the document extract program. Invoke the Lambda function when the API that is associated with a new document upload is called.

Question 180

A company runs an online order management system on AWS. The company stores order and inventory data for the previous 5 years in an Amazon Aurora MySQL database. The company deletes inventory data after 5 years.

The company wants to optimize costs to archive data. Which solution will meet these requirements?

Options:

A.

Create an AWS Glue crawler to export data to Amazon S3. Create an AWS Lambda function to compress the data.

B.

Use the SELECT INTO OUTFILE S3 query on the Aurora database to export the data to Amazon S3. Configure S3 Lifecycle rules on the S3 bucket.

C.

Create an AWS Glue DataBrew Job to migrate data from Aurora to Amazon S3. Configure S3 Lifecycle rules on the S3 bucket.

D.

Use the AWS Schema Conversion Tool (AWS SCT) to replicate data from Aurora to Amazon S3. Use the S3 Standard-Infrequent Access (S3 Standard-IA) storage class.
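Option B uses Aurora MySQL's native export statement plus an S3 Lifecycle rule. A sketch of both pieces; the SQL is illustrative (exact syntax per the Aurora MySQL documentation), and the bucket, prefix, transition day counts, and table name are placeholders:

```python
# Illustrative Aurora MySQL export of 5-year-old inventory rows to S3.
# Table/bucket names are assumptions; consult the Aurora docs for the
# exact SELECT INTO OUTFILE S3 syntax and required IAM role.
export_sql = """
SELECT * FROM inventory
WHERE updated_at < NOW() - INTERVAL 5 YEAR
INTO OUTFILE S3 's3://example-archive/inventory/'
FORMAT CSV;
"""

# S3 Lifecycle rule (boto3-style shape) that tiers the exported files to
# Glacier and eventually expires them, matching the 5-year deletion policy.
lifecycle_rule = {
    "ID": "archive-inventory",
    "Filter": {"Prefix": "inventory/"},
    "Status": "Enabled",
    "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
    "Expiration": {"Days": 1825},  # delete after roughly 5 years
}
```

The export needs no extra ETL service, and lifecycle tiering keeps the archived data on the cheapest suitable storage class.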

Question 181

A company plans to deploy containerized microservices in the AWS Cloud. The containers must mount a persistent file store that the company can manage by using OS-level permissions. The company requires fully managed services to host the containers and file store.

Options:

A.

Use AWS Lambda functions and an Amazon API Gateway REST API to handle the microservices. Use Amazon S3 buckets for storage.

B.

Use Amazon EC2 instances to host the microservices. Use Amazon Elastic Block Store (Amazon EBS) volumes for storage.

C.

Use Amazon Elastic Container Service (Amazon ECS) containers on AWS Fargate to handle the microservices. Use an Amazon Elastic File System (Amazon EFS) file system for storage.

D.

Use Amazon Elastic Container Service (Amazon ECS) containers on AWS Fargate to handle the microservices. Use an Amazon EC2 instance that runs a dedicated file store for storage.

Question 182

A company is migrating its on-premises Oracle database to an Amazon RDS for Oracle database. The company needs to retain data for 90 days to meet regulatory requirements. The company must also be able to restore the database to a specific point in time for up to 14 days.

Which solution will meet these requirements with the LEAST operational overhead?

Options:

A.

Create Amazon RDS automated backups. Set the retention period to 90 days.

B.

Create an Amazon RDS manual snapshot every day. Delete manual snapshots that are older than 90 days.

C.

Use the Amazon Aurora Clone feature for Oracle to create a point-in-time restore. Delete clones that are older than 90 days.

D.

Create a backup plan that has a retention period of 90 days by using AWS Backup for Amazon RDS.

Question 183

A company runs an application on premises. The application stores files that the application servers process in a shared storage system. The company uses Linux file system permissions to control access to the files.

The company plans to migrate the application servers to Amazon EC2 instances across multiple Availability Zones. The company does not want to change the application code.

Which solution will meet these requirements?

Options:

A.

Migrate the files to an Amazon S3 bucket. Use the S3 Intelligent-Tiering storage class. Mount the S3 bucket to the EC2 instances.

B.

Migrate the files to a set of Amazon EC2 instance store volumes. Mount the instance store volumes to the EC2 instances.

C.

Migrate the files to a set of Amazon EBS volumes. Mount the EBS volumes to the EC2 instances.

D.

Migrate the files to an Amazon EFS file system. Mount the EFS file system to the EC2 instances.

Question 184

An advertising company stores terabytes of data in an Amazon S3 data lake. The company wants to build its own foundation model (FM) and has deployed a training cluster on AWS. The company loads file-based data from Amazon S3 to the training cluster to train the FM. The company wants to reduce data loading time to optimize the overall deployment cycle.

The company needs a storage solution that is natively integrated with Amazon S3. The solution must be scalable and provide high throughput.

Which storage solution will meet these requirements?

Options:

A.

Mount an Amazon Elastic File System (Amazon EFS) file system to the training cluster. Use AWS DataSync to migrate data from Amazon S3 to the EFS file system to train the FM.

B.

Use an Amazon FSx for Lustre file system and Amazon S3 with Data Repository Association (DRA). Preload the data from Amazon S3 to the Lustre file system to train the FM.

C.

Attach Amazon Elastic Block Store (Amazon EBS) volumes to the training cluster. Load the data from Amazon S3 to the EBS volumes to train the FM.

D.

Use AWS DataSync to migrate the data from Amazon S3 to the training cluster as files. Train the FM on the local file-based data.

Question 185

A company runs an application that stores and shares photos. Users upload the photos to an Amazon S3 bucket. Every day, users upload approximately 150 photos. The company wants to design a solution that creates a thumbnail of each new photo and stores the thumbnail in a second S3 bucket.

Which solution will meet these requirements MOST cost-effectively?

Options:

A.

Configure an Amazon EventBridge scheduled rule to invoke a script every minute on a long-running Amazon EMR cluster. Configure the script to generate thumbnails for the photos that do not have thumbnails. Configure the script to upload the thumbnails to the second S3 bucket.

B.

Configure an Amazon EventBridge scheduled rule to invoke a script every minute on a memory-optimized Amazon EC2 instance that is always on. Configure the script to generate thumbnails for the photos that do not have thumbnails. Configure the script to upload the thumbnails to the second S3 bucket.

C.

Configure an S3 event notification to invoke an AWS Lambda function each time a user uploads a new photo to the application. Configure the Lambda function to generate a thumbnail and to upload the thumbnail to the second S3 bucket.

D.

Configure S3 Storage Lens to invoke an AWS Lambda function each time a user uploads a new photo to the application. Configure the Lambda function to generate a thumbnail and to upload the thumbnail to a second S3 bucket.
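Option C's handler is invoked per upload with an S3 event notification payload. A minimal sketch of such a Lambda function; the destination bucket name is a placeholder and the resize step is stubbed out (a real function would use an image library such as Pillow and upload via boto3):

```python
THUMBNAIL_BUCKET = "example-thumbnails"  # placeholder destination bucket

def make_thumbnail(image_bytes: bytes) -> bytes:
    """Stub standing in for a real resize routine (e.g. Pillow)."""
    return image_bytes[:1024]

def handler(event, context=None):
    """Handle an S3 event notification: one record per uploaded photo.

    For each record, a real handler would download the object from the
    source bucket, resize it, and upload the result to THUMBNAIL_BUCKET.
    Here we only compute the (bucket, key) targets to keep the sketch
    self-contained.
    """
    outputs = []
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        outputs.append((THUMBNAIL_BUCKET, f"thumb/{key}"))
    return outputs

fake_event = {"Records": [{"s3": {"bucket": {"name": "photos"},
                                  "object": {"key": "cat.jpg"}}}]}
assert handler(fake_event) == [("example-thumbnails", "thumb/cat.jpg")]
```

At roughly 150 photos per day, per-invocation Lambda pricing is far cheaper than any always-on EMR cluster or EC2 instance polling for work.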

Question 186

A company must protect sensitive documents in Amazon S3 from deletion or modification for a fixed retention period to meet regulatory requirements.

Which solution will meet these requirements?

Options:

A.

Enable S3 Object Lock in governance mode.

B.

Enable S3 Object Lock in compliance mode.

C.

Enable S3 versioning with lifecycle deletion rules.

D.

Transition objects to S3 Glacier Flexible Retrieval.
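Compliance mode (option B) differs from governance mode in that no identity, including the root user, can shorten the retention or delete the object until the retain-until date passes. A sketch of the boto3-style `put_object` parameters that apply it; the bucket, key, and 365-day period are assumptions:

```python
from datetime import datetime, timedelta, timezone

RETENTION_DAYS = 365  # assumed regulatory retention period

# Compliance-mode Object Lock with an explicit retain-until timestamp.
# Parameter names follow boto3's put_object; bucket/key are placeholders.
retain_until = datetime.now(timezone.utc) + timedelta(days=RETENTION_DAYS)
put_params = {
    "Bucket": "example-compliance-bucket",
    "Key": "document.pdf",
    "ObjectLockMode": "COMPLIANCE",
    "ObjectLockRetainUntilDate": retain_until,
}
```

Object Lock also requires versioning to be enabled on the bucket at creation time, which is part of why option C's versioning-plus-deletion-rules approach cannot meet a hard regulatory hold on its own.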

Question 187

A company wants to run big data workloads on Amazon EMR. The workloads need to process terabytes of data in memory.

A solutions architect needs to identify the appropriate EMR cluster instance configuration for the workloads.

Which solution will meet these requirements?

Options:

A.

Use a storage optimized instance for the primary node. Use compute optimized instances for core nodes and task nodes.

B.

Use a memory optimized instance for the primary node. Use storage optimized instances for core nodes and task nodes.

C.

Use a general purpose instance for the primary node. Use memory optimized instances for core nodes and task nodes.

D.

Use general purpose instances for the primary, core, and task nodes.

Question 188

A company is designing a microservice-based architecture for a new application on AWS. Each microservice will run on its own set of Amazon EC2 instances. Each microservice will need to interact with multiple AWS services such as Amazon S3 and Amazon Simple Queue Service (Amazon SQS).

The company wants to manage permissions for each EC2 instance based on the principle of least privilege.

Which solution will meet this requirement?

Options:

A.

Assign an IAM user to each microservice. Use access keys stored within the application code to authenticate AWS service requests.

B.

Create a single IAM role that has permission to access all AWS services. Associate the IAM role with all EC2 instances that run the microservices.

C.

Use AWS Organizations to create a separate account for each microservice. Manage permissions at the account level.

D.

Create individual IAM roles based on the specific needs of each microservice. Associate the IAM roles with the appropriate EC2 instances.

Question 189

A company uses a general-purpose instance class Amazon RDS for MySQL DB instance. The company has configured the DB instance in a Multi-AZ configuration across two Availability Zones as part of the company's production application.

The company's finance team needs to run SQL queries against the DB instance to generate reports. Customers have reported significant performance issues with the application during report generation.

A solutions architect needs to minimize the effect of the reporting job on the DB instance.

Which solution will meet these requirements?

Options:

A.

Create a proxy in Amazon RDS Proxy. Update the reporting job to query the proxy endpoint.

B.

Update the RDS DB instance configuration to use three Availability Zones.

C.

Add an RDS read replica. Update the reporting job to query the replica endpoint.

D.

Change the RDS configuration from a general-purpose instance class to a memory-optimized instance class.

Question 190

A company is building an application in the AWS Cloud. The application is hosted on Amazon EC2 instances behind an Application Load Balancer (ALB). The company uses Amazon Route 53 for the DNS.

The company needs a managed solution with proactive engagement to detect against DDoS attacks.

Which solution will meet these requirements?

Options:

A.

Enable AWS Config. Configure an AWS Config managed rule that detects DDoS attacks.

B.

Enable AWS WAF on the ALB. Create an AWS WAF web ACL with rules to detect and prevent DDoS attacks. Associate the web ACL with the ALB.

C.

Store the ALB access logs in an Amazon S3 bucket. Configure Amazon GuardDuty to detect and take automated preventative actions for DDoS attacks.

D.

Subscribe to AWS Shield Advanced. Configure hosted zones in Route 53. Add ALB resources as protected resources.

Question 191

A solutions architect needs to save a particular automated database snapshot from an Amazon RDS for Microsoft SQL Server DB instance for longer than the maximum number of days.

Which solution will meet these requirements in the MOST operationally efficient way?

Options:

A.

Create a manual copy of the snapshot.

B.

Export the contents of the snapshot to an Amazon S3 bucket.

C.

Change the retention period of the snapshot to 45 days.

D.

Create a native SQL Server backup. Save the backup to an Amazon S3 bucket.

Question 192

A solutions architect is designing the architecture for a two-tier web application. The web application consists of an internet-facing Application Load Balancer (ALB) that forwards traffic to an Auto Scaling group of Amazon EC2 instances.

The EC2 instances must be able to access an Amazon RDS database. The company does not want to rely solely on security groups or network ACLs. Only the minimum resources that are necessary should be routable from the internet.

Which network design meets these requirements?

Options:

A.

Place the ALB, EC2 instances, and RDS database in private subnets.

B.

Place the ALB in public subnets. Place the EC2 instances and RDS database in private subnets.

C.

Place the ALB and EC2 instances in public subnets. Place the RDS database in private subnets.

D.

Place the ALB outside the VPC. Place the EC2 instances and RDS database in private subnets.

Question 193

A company is migrating a production environment application to the AWS Cloud. The company uses Amazon RDS for Oracle for the database layer. The company needs to configure the database to meet the needs of highly I/O-intensive workloads that require low latency and consistent throughput. The database workloads are read intensive and write intensive.

Which solution will meet these requirements?

Options:

A.

Use a Multi-AZ DB instance deployment for the RDS for Oracle database.

B.

Configure the RDS for Oracle database to use the Provisioned IOPS SSD storage type.

C.

Configure the RDS for Oracle database to use the General Purpose SSD storage type.

D.

Enable RDS read replicas for RDS for Oracle.

Question 194

An analytics application runs on multiple Amazon EC2 Linux instances that use Amazon Elastic File System (Amazon EFS) Standard storage. The files vary in size and access frequency. The company accesses the files infrequently after 30 days. However, users sometimes request older files to generate reports.

The company wants to reduce storage costs for files that are accessed infrequently. The company also wants throughput to adjust based on the size of the file system. The company wants to use the TransitionToIA Amazon EFS lifecycle policy to transition files to Infrequent Access (IA) storage after 30 days.

Which solution will meet these requirements?

Options:

A.

Configure files to transition back to Standard storage when a user accesses the files again. Specify the provisioned throughput mode.

B.

Specify the provisioned throughput mode only.

C.

Configure files to transition back to Standard storage when a user accesses the files again. Specify the bursting throughput mode.

D.

Specify the bursting throughput mode only.

Question 195

An e-commerce company has an application that uses Amazon DynamoDB tables configured with provisioned capacity. Order data is stored in a table named Orders. The Orders table has a primary key of order-ID and a sort key of product-ID. The company configured an AWS Lambda function to receive DynamoDB streams from the Orders table and update a table named Inventory. The company has noticed that during peak sales periods, updates to the Inventory table take longer than the company can tolerate. Which solutions will resolve the slow table updates? (Select TWO.)

Options:

A.

Add a global secondary index to the Orders table. Include the product-ID attribute.

B.

Set the batch size attribute of the DynamoDB streams to be based on the size of items in the Orders table.

C.

Increase the DynamoDB table provisioned capacity by 1,000 write capacity units (WCUs).

D.

Increase the DynamoDB table provisioned capacity by 1,000 read capacity units (RCUs).

E.

Increase the timeout of the Lambda function to 15 minutes.

Question 196

A company hosts a web application on Amazon EC2 instances that are part of an Auto Scaling group behind an Application Load Balancer (ALB). The application experiences spikes in requests that come through the ALB throughout each day. The traffic spikes last between 15 and 20 minutes.

The company needs a solution that uses a standard or custom metric to scale the EC2 instances based on the number of requests that come from the ALB.

Which solution will meet these requirements MOST cost-effectively?

Options:

A.

Configure an Amazon CloudWatch alarm to monitor the ALB RequestCount metric. Configure a simple scaling policy to scale the EC2 instances in response to the metric.

B.

Configure a predictive scaling policy based on the ALB RequestCount metric to scale the EC2 instances.

C.

Configure an Amazon CloudWatch alarm to monitor the ALB UnhealthyHostCount metric. Configure a target tracking policy to scale the EC2 instances in response to the metric.

D.

Create an Amazon CloudWatch alarm to monitor a user-defined metric for GET requests. Configure a target tracking policy threshold to scale the EC2 instances.

Question 197

A company is migrating a data processing application to AWS. The application processes several short-lived batch jobs that cannot be disrupted. The process generates data after each batch job finishes running. The company accesses the data for 30 days following data generation. After 30 days, the company stores the data for 2 years.

The company wants to optimize costs for the application and data storage. Which solution will meet these requirements?

Options:

A.

Use Amazon EC2 Spot Instances to run the application. Store the data in Amazon S3 Standard. Move the data to S3 Glacier Instant Retrieval after 30 days. Configure a bucket policy to delete the data after 2 years.

B.

Use Amazon EC2 On-Demand Instances to run the application. Store the data in Amazon S3 Glacier Instant Retrieval. Move the data to S3 Glacier Deep Archive after 30 days. Configure an S3 Lifecycle configuration to delete the data after 2 years.

C.

Use Amazon EC2 Spot Instances to run the application. Store the data in Amazon S3 Standard. Move the data to S3 Glacier Flexible Retrieval after 30 days. Configure a bucket policy to delete the data after 2 years.

D.

Use Amazon EC2 On-Demand Instances to run the application. Store the data in Amazon S3 Standard. Move the data to S3 Glacier Deep Archive after 30 days. Configure an S3 Lifecycle configuration to delete the data after 2 years.

Question 198

A media company has an ecommerce website to sell music. Each music file is stored as an MP3 file. Premium users of the website purchase music files and download the files. The company wants to store music files on AWS. The company wants to provide access only to the premium users. The company wants to use the same URL for all premium users.

Which solution will meet these requirements?

Options:

A.

Store the MP3 files on a set of Amazon EC2 instances that have Amazon Elastic Block Store (Amazon EBS) volumes attached. Manage access to the files by creating an IAM user and an IAM policy for each premium user.

B.

Store all the MP3 files in an Amazon S3 bucket. Create a presigned URL for each MP3 file. Share the presigned URLs with the premium users.

C.

Store all the MP3 files in an Amazon S3 bucket. Create an Amazon CloudFront distribution that uses the S3 bucket as the origin. Generate CloudFront signed cookies for the music files. Share the signed cookies with the premium users.

D.

Store all the MP3 files in an Amazon S3 bucket. Create an Amazon CloudFront distribution that uses the S3 bucket as the origin. Use a CloudFront signed URL for each music file. Share the signed URLs with the premium users.

Question 199

A company plans to use an Amazon S3 bucket to archive backup data. Regulations require the company to retain the backup data for 7 years.

During the retention period, the company must prevent users, including administrators, from deleting the data. The company can delete the data after 7 years.

Which solution will meet these requirements?

Options:

A.

Create an S3 bucket policy that denies delete operations for 7 years. Create an S3 Lifecycle policy to delete the data after 7 years.

B.

Create an S3 Object Lock default retention policy that retains data for 7 years in governance mode. Create an S3 Lifecycle policy to delete the data after 7 years.

C.

Create an S3 Object Lock default retention policy that retains data for 7 years in compliance mode. Create an S3 Lifecycle policy to delete the data after 7 years.

D.

Create an S3 Batch Operations job to set a legal hold on each object for 7 years. Create an S3 Lifecycle policy to delete the data after 7 years.
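The key distinction in this question is Object Lock mode: governance mode can be bypassed by privileged users, while compliance mode cannot be shortened or removed by anyone, including administrators. A sketch of the default retention configuration option C describes, in the JSON shape accepted by S3's PutObjectLockConfiguration API:

```python
import json

# Default Object Lock retention in COMPLIANCE mode: no user, including the
# account root user, can delete locked objects or shorten the retention
# period until the 7 years elapse.
object_lock_config = {
    "ObjectLockEnabled": "Enabled",
    "Rule": {
        "DefaultRetention": {
            "Mode": "COMPLIANCE",
            "Years": 7,
        }
    },
}
print(json.dumps(object_lock_config, indent=2))
```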

Question 200

A company is building a new web application that serves static and dynamic content from an API. Users will access the application from around the world. The company wants to minimize latency in the most cost-effective way.

Which solution will meet these requirements MOST cost-effectively?

Options:

A.

Deploy the static content to an Amazon S3 bucket. Use an Amazon API Gateway HTTP API to serve the dynamic content. Create an Amazon CloudFront distribution that uses the S3 bucket and the HTTP API as origins. Enable caching for static content.

B.

Deploy the static content to an Amazon S3 bucket. Provide the bucket website endpoint to users. Use an Amazon API Gateway HTTP API with caching enabled to serve the dynamic content.

C.

Deploy the static content to an Amazon S3 bucket. Use two Amazon EC2 instances as web servers. Deploy an Application Load Balancer to distribute traffic. Create an Amazon CloudFront distribution in front of the S3 bucket to cache static content.

D.

Deploy the static content to an Amazon S3 bucket. Provide the bucket website endpoint to users. Create an Amazon CloudFront distribution in front of the S3 bucket to cache static content.

Question 201

A media streaming company is redesigning its infrastructure to accommodate increasing demand for video content that users consume daily. The company needs to process terabyte-sized videos to block some content in the videos. Video processing can take up to 20 minutes.

The company needs a solution that is cost-effective, highly available, and scalable.

Which solution will meet these requirements?

Options:

A.

Use AWS Lambda functions to process the videos. Store video metadata in Amazon DynamoDB. Store video content in Amazon S3 Intelligent-Tiering.

B.

Use Amazon Elastic Container Service (Amazon ECS) with the AWS Fargate launch type to implement microservices to process videos. Store video metadata in Amazon Aurora. Store video content in Amazon S3 Intelligent-Tiering.

C.

Use Amazon EMR to process the videos with Apache Spark. Store video content in Amazon FSx for Lustre. Use Amazon Kinesis Data Streams to ingest videos in real time.

D.

Deploy a containerized video processing application on Amazon Elastic Kubernetes Service (Amazon EKS) with the Amazon EC2 launch type. Store video metadata in Amazon RDS in a single Availability Zone. Store video content in Amazon S3 Glacier Deep Archive.

Question 202

A company manages multiple AWS accounts in an organization in AWS Organizations. The company's applications run on Amazon EC2 instances in multiple AWS Regions. The company needs a solution to simplify the management of security rules across the accounts in its organization. The solution must apply shared security group rules, audit security groups, and detect unused and redundant rules in VPC security groups across all AWS environments.

Which solution will meet these requirements with the MOST operational efficiency?

Options:

A.

Use AWS Firewall Manager to create a set of rules based on the security requirements. Replicate the rules to all the AWS accounts and Regions.

B.

Use AWS CloudFormation StackSets to provision VPC security groups based on the specifications across multiple accounts and Regions. Deploy AWS Network Firewall to define the firewall rules to control network traffic across multiple accounts and Regions.

C.

Use AWS CloudFormation StackSets to provision VPC security groups based on the specifications across multiple accounts and Regions. Configure AWS Config and AWS Lambda to evaluate compliance information and to automate enforcement across all accounts and Regions.

D.

Use AWS Network Firewall to build policies based on the security requirements. Centrally apply the new policies to all the VPCs and accounts.

Question 203

An online gaming company hosts its platform on Amazon EC2 instances behind Network Load Balancers (NLBs) across multiple AWS Regions. The NLBs can route requests to targets over the internet. The company wants to improve the customer playing experience by reducing end-to-end load time for its global customer base.

Which solution will meet these requirements?

Options:

A.

Create Application Load Balancers (ALBs) in each Region to replace the existing NLBs. Register the existing EC2 instances as targets for the ALBs in each Region.

B.

Configure Amazon Route 53 to route equally weighted traffic to the NLBs in each Region.

C.

Create additional NLBs and EC2 instances in other Regions where the company has large customer bases.

D.

Create a standard accelerator in AWS Global Accelerator. Configure the existing NLBs as target endpoints.

Question 204

A company collects 10 GB of telemetry data every day from multiple devices. The company stores the data in an Amazon S3 bucket that is in a source data account.

The company has hired several consulting agencies to analyze the company's data. Each agency has a unique AWS account. Each agency requires read access to the company's data.

The company needs a secure solution to share the data from the source data account to the consulting agencies.

Which solution will meet these requirements with the LEAST operational effort?

Options:

A.

Set up an Amazon CloudFront distribution. Use the S3 bucket as the origin.

B.

Make the S3 bucket public for a limited time. Inform only the agencies that the bucket is publicly accessible.

C.

Configure cross-account access for the S3 bucket to the accounts that the agencies own.

D.

Set up an IAM user for each agency in the source data account. Grant each agency IAM user access to the company's S3 bucket.

Question 205

A company manages an application that stores data on an Amazon RDS for PostgreSQL Multi-AZ DB instance. High traffic on the application is causing increased latency for many read queries.

A solutions architect must improve the performance of the application.

Which solution will meet this requirement?

Options:

A.

Enable Amazon RDS Performance Insights. Configure storage capacity to scale automatically.

B.

Configure the DB instance to use DynamoDB Accelerator (DAX).

C.

Create a read replica of the DB instance. Serve read traffic from the read replica.

D.

Use Amazon Data Firehose between the application and Amazon RDS to increase the concurrency of database requests.

Question 206

A developer used the AWS SDK to create an application that aggregates and produces log records for 10 services. The application delivers data to an Amazon Kinesis Data Streams stream.

Each record contains a log message with a service name, creation timestamp, and other log information. The stream has 15 shards in provisioned capacity mode. The stream uses service name as the partition key.

The developer notices that when all the services are producing logs, ProvisionedThroughputExceededException errors occur during PutRecord requests. The stream metrics show that the write capacity the applications use is below the provisioned capacity.

How should the developer resolve this issue?

Options:

A.

Change the capacity mode from provisioned to on-demand.

B.

Double the number of shards until the throttling errors stop occurring.

C.

Change the partition key from service name to creation timestamp.

D.

Use a separate Kinesis stream for each service to generate the logs.
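The throttling here is a hot-shard problem, not a capacity problem: Kinesis hashes the partition key to choose a shard, so 10 distinct service names can only ever reach at most 10 of the 15 shards. A small illustrative simulation (the modulo mapping is a simplified stand-in for Kinesis's MD5 hash-range assignment):

```python
import hashlib

# 10 services writing to a 15-shard stream with service name as the
# partition key: at most 10 shards can receive records, so at least 5
# shards sit idle while the hot shards throttle, even though aggregate
# usage is below provisioned capacity.
services = [f"service-{i}" for i in range(10)]
shards = 15

def shard_for(key: str) -> int:
    # Simplified stand-in for Kinesis's MD5-based hash-range mapping.
    return int(hashlib.md5(key.encode()).hexdigest(), 16) % shards

used = {shard_for(s) for s in services}
assert len(used) <= len(services)  # never more shards in use than distinct keys
```

This is why changing the partition key to a high-cardinality value such as the creation timestamp (option C) spreads writes across all shards.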

Question 207

A company needs to collect streaming data from several sources and store the data in the AWS Cloud. The dataset is heavily structured, but analysts need to perform several complex SQL queries and need consistent performance. Some of the data is queried more frequently than the rest. The company wants a solution that meets its performance requirements in a cost-effective manner.

Which solution meets these requirements?

Options:

A.

Use Amazon Managed Streaming for Apache Kafka (Amazon MSK) to ingest the data to save it to Amazon S3. Use Amazon Athena to perform SQL queries over the ingested data.

B.

Use Amazon Managed Streaming for Apache Kafka (Amazon MSK) to ingest the data to save it to Amazon Redshift. Enable Amazon Redshift workload management (WLM) to prioritize workloads.

C.

Use Amazon Data Firehose to ingest the data to save it to Amazon Redshift. Enable Amazon Redshift workload management (WLM) to prioritize workloads.

D.

Use Amazon Data Firehose to ingest the data to save it to Amazon S3. Load frequently queried data to Amazon Redshift using the COPY command. Use Amazon Redshift Spectrum for less frequently queried data.

Question 208

A company runs a mobile game app that stores session data (up to 256 KB) for up to 48 hours. The data updates frequently and must be deleted automatically after expiration. Restorability is also required.

Options:

A.

Use an Amazon DynamoDB table to store the session data. Enable point-in-time recovery (PITR) and TTL.

B.

Use Amazon MemoryDB and enable PITR and TTL.

C.

Store session data in S3 Standard. Enable Versioning and a Lifecycle rule to expire objects after 48 hours.

D.

Store data in S3 Intelligent-Tiering with Versioning and a Lifecycle rule to expire after 48 hours.
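Option A hinges on how DynamoDB TTL works: the table needs a Number attribute holding a Unix epoch timestamp (in seconds), after which DynamoDB deletes the item automatically, while point-in-time recovery covers the restorability requirement. A sketch of building such a session item; the attribute names are illustrative:

```python
import time

TTL_SECONDS = 48 * 60 * 60  # the 48-hour session lifetime

def session_item(session_id: str, payload: bytes) -> dict:
    # Session payloads stay within the 256 KB size stated in the question
    # (well under DynamoDB's 400 KB item limit).
    assert len(payload) <= 256 * 1024
    return {
        "session_id": session_id,
        "data": payload,
        # TTL attribute: epoch seconds after which DynamoDB expires the item.
        "expires_at": int(time.time()) + TTL_SECONDS,
    }

item = session_item("abc123", b"{}")
assert item["expires_at"] > time.time()
```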

Question 209

A gaming company hosts a browser-based application on AWS. The users of the application consume a large number of videos and images that are stored in Amazon S3. This content is the same for all users.

The application has increased in popularity, and millions of users worldwide are accessing these media files. The company wants to provide the files to the users while reducing the load on the origin.

Which solution meets these requirements MOST cost-effectively?

Options:

A.

Deploy an AWS Global Accelerator accelerator in front of the web servers.

B.

Deploy an Amazon CloudFront web distribution in front of the S3 bucket.

C.

Deploy an Amazon ElastiCache (Redis OSS) instance in front of the web servers.

D.

Deploy an Amazon ElastiCache (Memcached) instance in front of the web servers.

Question 210

A company has an application that runs on Amazon EC2 instances within a private subnet in a VPC. The instances access data in an Amazon S3 bucket in the same AWS Region. The VPC contains a NAT gateway in a public subnet to access the S3 bucket. The company wants to reduce costs by replacing the NAT gateway without compromising security or redundancy.

Which solution meets these requirements?

Options:

A.

Replace the NAT gateway with a NAT instance.

B.

Replace the NAT gateway with an internet gateway.

C.

Replace the NAT gateway with a gateway VPC endpoint.

D.

Replace the NAT gateway with an AWS Direct Connect connection.

Question 211

A company has an e-commerce site. The site is designed as a distributed web application hosted in multiple AWS accounts under one AWS Organizations organization. The web application is comprised of multiple microservices. All microservices expose their AWS services either through Amazon CloudFront distributions or public Application Load Balancers (ALBs). The company wants to protect public endpoints from malicious attacks and monitor security configurations. Which solution will meet these requirements with the LEAST operational overhead?

Options:

A.

Use AWS WAF to protect the public endpoints. Use AWS Firewall Manager from a dedicated security account to manage rules in AWS WAF. Use AWS Config rules to monitor the Regional and global WAF configurations.

B.

Use AWS WAF to protect the public endpoints. Apply AWS WAF rules in each account. Use AWS Config rules and AWS Security Hub to monitor the WAF configurations of the ALBs and the CloudFront distributions.

C.

Use AWS WAF to protect the public endpoints. Use AWS Firewall Manager from a dedicated security account to manage the rules in AWS WAF. Use Amazon Inspector and AWS Security Hub to monitor the WAF configurations of the ALBs and the CloudFront distributions.

D.

Use AWS Shield Advanced to protect the public endpoints. Use AWS Config rules to monitor the Shield Advanced configuration for each account.

Question 212

A company uses Amazon RDS for PostgreSQL to run its applications in the us-east-1 Region. The company also uses machine learning (ML) models to forecast annual revenue based on near real-time reports. The reports are generated by using the same RDS for PostgreSQL database. The database performance slows during business hours. The company needs to improve database performance.

Which solution will meet these requirements MOST cost-effectively?

Options:

A.

Create a cross-Region read replica. Configure the reports to be generated from the read replica.

B.

Activate Multi-AZ DB instance deployment for RDS for PostgreSQL. Configure the reports to be generated from the standby database.

C.

Use AWS Database Migration Service (AWS DMS) to logically replicate data to a new database. Configure the reports to be generated from the new database.

D.

Create a read replica in us-east-1. Configure the reports to be generated from the read replica.

Question 213

A company is developing a SaaS solution for customers. The solution runs on Amazon EC2 instances that have Amazon Elastic Block Store (Amazon EBS) volumes attached.

Within the SaaS application, customers can request how much storage they need. The application needs to allocate the amount of block storage each customer requests.

A solutions architect must design an operationally efficient solution that meets the storage scaling requirement.

Which solution will meet these requirements MOST cost-effectively?

Options:

A.

Migrate the data from the EBS volumes to an Amazon S3 bucket. Use the Amazon S3 Standard storage class.

B.

Migrate the data from the EBS volumes to an Amazon Elastic File System (Amazon EFS) file system. Use the EFS Standard storage class. Invoke an AWS Lambda function to increase the EFS volume capacity based on user input.

C.

Migrate the data from the EBS volumes to an Amazon FSx for Windows File Server file system. Invoke an AWS Lambda function to increase the capacity of the file system based on user input.

D.

Invoke an AWS Lambda function to increase the size of EBS volumes based on user input by using EBS Elastic Volumes.
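Option D depends on EBS Elastic Volumes, which let the EC2 ModifyVolume API grow a volume in place with no detach or downtime; note that EBS volumes can only be increased, never shrunk. A hedged sketch of the Lambda handler such a design implies; the event shape is illustrative, and the actual AWS call is left as a comment:

```python
# Illustrative Lambda handler for option D: validate the customer's request,
# then (in a real deployment) call the EC2 ModifyVolume API via boto3, e.g.
# ec2.modify_volume(VolumeId=..., Size=requested_gib).
def handler(event: dict, context=None) -> dict:
    requested_gib = int(event["requested_gib"])
    current_gib = int(event["current_gib"])
    if requested_gib <= current_gib:
        # Elastic Volumes can only grow a volume; shrinking requires a new
        # volume and a data copy.
        return {"modified": False, "reason": "EBS volumes can only grow"}
    return {"modified": True, "new_size_gib": requested_gib}

assert handler({"requested_gib": 200, "current_gib": 100})["modified"] is True
assert handler({"requested_gib": 50, "current_gib": 100})["modified"] is False
```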

Question 214

A company runs multiple applications on Amazon EC2 instances in a VPC. Application A runs in a private subnet that has a custom route table and network ACL. Application B runs in a second private subnet in the same VPC.

The company needs to prevent Application A from sending traffic to Application B.

Which solution will meet this requirement?

Options:

A.

Add a deny outbound rule to a security group that is associated with Application B. Configure the rule to prevent Application B from sending traffic to Application A.

B.

Add a deny outbound rule to a security group that is associated with Application A. Configure the rule to prevent Application A from sending traffic to Application B.

C.

Add a deny outbound rule to the custom network ACL for the Application B subnet. Configure the rule to prevent Application B from sending traffic to IP addresses that are associated with the Application A subnet.

D.

Add a deny outbound rule to the custom network ACL for the Application A subnet. Configure the rule to prevent Application A from sending traffic to IP addresses that are associated with the Application B subnet.

Question 215

A company uses a set of Amazon EC2 instances to host a website. The website uses an Amazon S3 bucket to store images and media files.

The company wants to automate website infrastructure creation to deploy the website to multiple AWS Regions. The company also wants to provide the EC2 instances access to the S3 bucket so the instances can store and access data by using AWS Identity and Access Management (IAM).

Which solution will meet these requirements MOST securely?

Options:

A.

Create an AWS CloudFormation template for the web server EC2 instances. Save an IAM access key in the UserData section of the AWS::EC2::Instance entity in the CloudFormation template.

B.

Create a file that contains an IAM secret access key and access key ID. Store the file in a new S3 bucket. Create an AWS CloudFormation template. In the template, create a parameter to specify the location of the S3 object that contains the access key and access key ID.

C.

Create an IAM role and an IAM access policy that allows the web server EC2 instances to access the S3 bucket. Create an AWS CloudFormation template for the web server EC2 instances that contains an IAM instance profile entity that references the IAM role and the IAM access policy.

D.

Create a script that retrieves an IAM secret access key and access key ID from IAM and stores them on the web server EC2 instances. Include the script in the UserData section of the AWS::EC2::Instance entity in an AWS CloudFormation template.
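Option C is the most secure because the instance receives short-lived credentials from its instance profile instead of any stored access key. A sketch of the resources it describes, written as a CloudFormation template body in JSON form; the role, policy, bucket, and AMI names are all illustrative:

```python
import json

# Role + instance profile + instance: the EC2 instance gets temporary
# credentials for the role at runtime, so no access key ever appears in
# the template or on the instance.
template = {
    "Resources": {
        "WebServerRole": {
            "Type": "AWS::IAM::Role",
            "Properties": {
                "AssumeRolePolicyDocument": {
                    "Version": "2012-10-17",
                    "Statement": [{
                        "Effect": "Allow",
                        "Principal": {"Service": "ec2.amazonaws.com"},
                        "Action": "sts:AssumeRole",
                    }],
                },
                "Policies": [{
                    "PolicyName": "media-bucket-access",
                    "PolicyDocument": {
                        "Version": "2012-10-17",
                        "Statement": [{
                            "Effect": "Allow",
                            "Action": ["s3:GetObject", "s3:PutObject"],
                            "Resource": "arn:aws:s3:::example-media-bucket/*",
                        }],
                    },
                }],
            },
        },
        "WebServerInstanceProfile": {
            "Type": "AWS::IAM::InstanceProfile",
            "Properties": {"Roles": [{"Ref": "WebServerRole"}]},
        },
        "WebServer": {
            "Type": "AWS::EC2::Instance",
            "Properties": {
                "ImageId": "ami-0123456789abcdef0",
                "IamInstanceProfile": {"Ref": "WebServerInstanceProfile"},
            },
        },
    }
}
assert "AccessKey" not in json.dumps(template)  # no long-lived credentials anywhere
```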

Question 216

A company stores a large dataset for an online advertising business in an Amazon RDS for MySQL DB instance. The company wants to run business reporting queries on the data without affecting write operations to the DB instance.

Which solution will meet these requirements?

Options:

A.

Deploy RDS read replicas to process the business reporting queries.

B.

Scale out the DB instance horizontally by placing the instance behind an Elastic Load Balancing (ELB) load balancer.

C.

Scale up the DB instance to a larger instance type to handle write operations and reporting queries.

D.

Configure Amazon CloudWatch to monitor the DB instance. Deploy standby DB instances when a latency metric threshold is exceeded.

Question 217

A company has a relational database workload that runs on Amazon Aurora MySQL. According to new compliance standards, the company must rotate all database credentials every 30 days. The company needs a solution that maximizes security and minimizes development effort.

Which solution will meet these requirements?

Options:

A.

Store the database credentials in AWS Secrets Manager. Configure automatic credential rotation for every 30 days.

B.

Store the database credentials in AWS Systems Manager Parameter Store. Create an AWS Lambda function to rotate the credentials every 30 days.

C.

Store the database credentials in an environment file or in a configuration file. Modify the credentials every 30 days.

D.

Store the database credentials in an environment file or in a configuration file. Create an AWS Lambda function to rotate the credentials every 30 days.
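Option A minimizes development effort because Secrets Manager handles the rotation workflow itself. A sketch of the rotation configuration in the shape of a RotateSecret API request; the secret name and Lambda ARN are illustrative (for Aurora MySQL, Secrets Manager can provision the rotation function from a managed template):

```python
# Turning on automatic 30-day rotation for the Aurora MySQL credential
# secret; after this, Secrets Manager invokes the rotation Lambda function
# on schedule with no application code changes.
rotate_request = {
    "SecretId": "prod/aurora-mysql/app-user",
    "RotationLambdaARN": "arn:aws:lambda:us-east-1:123456789012:function:SecretsManagerRotation",
    "RotationRules": {"AutomaticallyAfterDays": 30},  # the 30-day compliance window
}
assert rotate_request["RotationRules"]["AutomaticallyAfterDays"] == 30
```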

Question 218

A company is deploying a new gaming application on Amazon EC2 instances. The gaming application needs to have access to shared storage.

The company requires a high-performance solution to give the application the ability to use an existing custom protocol to access shared storage. The solution must ensure low latency and must be operationally efficient.

Which solution will meet these requirements?

Options:

A.

Create an Amazon FSx File Gateway. Create a file share that uses the existing custom protocol. Connect the EC2 instances that host the application to the file share.

B.

Create an Amazon EC2 Windows instance. Install and configure a Windows file share role on the instance. Connect the EC2 instances that host the application to the file share.

C.

Create an Amazon Elastic File System (Amazon EFS) file system. Configure the file system to support Lustre. Connect the EC2 instances that host the application to the file system.

D.

Create an Amazon FSx for Lustre file system. Connect the EC2 instances that host the application to the file system.

Question 219

A company wants to run a hybrid workload for data processing. The data needs to be accessed by on-premises applications for local data processing using an NFS protocol, and must also be accessible from the AWS Cloud for further analytics and batch processing.

Which solution will meet these requirements?

Options:

A.

Use an AWS Storage Gateway file gateway to provide file storage to AWS, then perform analytics on this data in the AWS Cloud.

B.

Use an AWS Storage Gateway tape gateway to copy the backup of the local data to AWS, then perform analytics on this data in the AWS Cloud.

C.

Use an AWS Storage Gateway volume gateway in a stored volume configuration to regularly take snapshots of the local data, then copy the data to AWS.

D.

Use an AWS Storage Gateway volume gateway in a cached volume configuration to back up all the local storage in the AWS Cloud, then perform analytics on this data in the cloud.

Question 220

A company is designing a new application that uploads files to an Amazon S3 bucket. The uploaded files are processed to extract metadata.

Processing must take less than 5 seconds. The volume and frequency of the uploads vary from a few files each hour to hundreds of concurrent uploads.

Which solution will meet these requirements MOST cost-effectively?

Options:

A.

Configure AWS CloudTrail trails to log Amazon S3 API calls. Use AWS AppSync to process the files.

B.

Configure a new object created S3 event notification within the bucket to invoke an AWS Lambda function to process the files.

C.

Configure Amazon Kinesis Data Streams to deliver the files to the S3 bucket. Invoke an AWS Lambda function to process the files.

D.

Deploy an Amazon EC2 instance. Create a script that lists all files in the S3 bucket and processes new files. Use a cron job that runs every minute to run the script.
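Option B is the cost-effective fit because S3 event notifications invoke Lambda once per new object, so there is no idle cost between uploads and concurrency scales with upload volume. A sketch of the wiring it describes, in the JSON shape accepted by S3's PutBucketNotificationConfiguration API; the function name and ARN are illustrative:

```python
import json

# One notification rule: every ObjectCreated event in the bucket invokes
# the metadata-extraction Lambda function directly.
notification_config = {
    "LambdaFunctionConfigurations": [{
        "Id": "extract-metadata-on-upload",
        "LambdaFunctionArn": "arn:aws:lambda:us-east-1:123456789012:function:ExtractMetadata",
        "Events": ["s3:ObjectCreated:*"],
    }]
}
print(json.dumps(notification_config, indent=2))
```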

Question 221

A company uses a general-purpose instance class Amazon RDS for MySQL DB instance in a Multi-AZ configuration. The finance team runs SQL queries to generate reports. Customers experience performance issues during report generation.

A solutions architect needs to minimize the effect of the reporting job on the DB instance.

Which solution will meet these requirements?

Options:

A.

Create a proxy in Amazon RDS Proxy. Update the reporting job to query the proxy endpoint.

B.

Update the RDS DB instance configuration to use three Availability Zones.

C.

Add an RDS read replica. Update the reporting job to query the replica endpoint.

D.

Change the RDS configuration to a memory-optimized instance class.

Question 222

A company runs HPC workloads requiring high IOPS.

Which combination of steps will meet these requirements? (Select TWO)

Options:

A.

Use Amazon EFS as a high-performance file system.

B.

Use Amazon FSx for Lustre as a high-performance file system.

C.

Create an Auto Scaling group of EC2 instances. Use Reserved Instances. Configure a spread placement group. Use AWS Batch for analytics.

D.

Use Mountpoint for Amazon S3 as a high-performance file system.

E.

Create an Auto Scaling group of EC2 instances. Use mixed instance types and a cluster placement group. Use Amazon EMR for analytics.

Question 223

A company runs multiple applications in multiple AWS accounts within the same organization in AWS Organizations. A content management system (CMS) runs on Amazon EC2 instances in a VPC. The CMS needs to access shared files from an Amazon Elastic File System (Amazon EFS) file system that is deployed in a separate AWS account. The EFS account is in a separate VPC.

Which solution will meet this requirement?

Options:

A.

Mount the EFS file system on the EC2 instances by using the EFS Elastic IP address.

B.

Enable VPC sharing between the two accounts. Use the EFS mount helper to mount the file system on the EC2 instances. Redeploy the EFS file system in a shared subnet.

C.

Configure AWS Systems Manager Run Command to mount the EFS file system on the EC2 instances.

D.

Install the amazon-efs-utils package on the EC2 instances. Add the mount target in the efs-config file. Mount the EFS file system by using the EFS access point.

Question 224

A company stores user data in AWS. The data is used continuously with peak usage during business hours. Access patterns vary, with some data not being used for months at a time. A solutions architect must choose a cost-effective solution that maintains the highest level of durability while maintaining high availability.

Which storage solution meets these requirements?

Options:

A.

Amazon S3 Standard

B.

Amazon S3 Intelligent-Tiering

C.

Amazon S3 Glacier Deep Archive

D.

Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA)
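If option B is the intended answer, objects can be written directly into the Intelligent-Tiering storage class so S3 moves them between access tiers automatically as usage changes. The bucket and key below are hypothetical, and the upload call is left commented out.

```python
# Sketch (hypothetical bucket/key): uploading user data straight into
# S3 Intelligent-Tiering via the StorageClass request parameter.
put_request = {
    "Bucket": "example-user-data",
    "Key": "users/1234/profile.json",
    "Body": b"{}",
    "StorageClass": "INTELLIGENT_TIERING",
}

# With a boto3 S3 client this would be:
# s3.put_object(**put_request)
```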

Question 225

A company is deploying a new application to a VPC on existing Amazon EC2 instances. The application has a presentation tier that uses an Auto Scaling group of EC2 instances. The application also has a database tier that uses an Amazon RDS Multi-AZ database.

The VPC has two public subnets that are split between two Availability Zones. A solutions architect adds one private subnet to each Availability Zone for the RDS database. The solutions architect wants to restrict network access to the RDS database so that EC2 instances that do not host the new application cannot reach it.

Which solution will meet this requirement?

Options:

A.

Modify the RDS database security group to allow traffic from a CIDR range that includes IP addresses of the EC2 instances that host the new application.

B.

Associate a new ACL with the private subnets. Deny all incoming traffic from IP addresses that belong to any EC2 instance that does not host the new application.

C.

Modify the RDS database security group to allow traffic from the security group that is associated with the EC2 instances that host the new application.

D.

Associate a new ACL with the private subnets. Deny all incoming traffic except for traffic from a CIDR range that includes IP addresses of the EC2 instances that host the new application.
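If option C is the intended answer, the database security group references the application tier's security group instead of IP addresses, so membership in the group, not a CIDR range, grants access. The sketch below builds the ingress request (security group IDs are hypothetical, and the port assumes a MySQL-compatible engine); the API call is left commented out.

```python
# Sketch (hypothetical IDs): allow database traffic to the RDS security
# group only from instances in the application's security group.
ingress_request = {
    "GroupId": "sg-0db111111111111aa",  # RDS database security group
    "IpPermissions": [
        {
            "IpProtocol": "tcp",
            "FromPort": 3306,  # assumes a MySQL-compatible port
            "ToPort": 3306,
            # Reference the app tier's security group rather than a CIDR
            # range, so only instances in that group can reach the database.
            "UserIdGroupPairs": [{"GroupId": "sg-0app22222222222bb"}],
        }
    ],
}

# With a boto3 EC2 client this would be:
# ec2.authorize_security_group_ingress(**ingress_request)
```

Security-group references also keep working when instances scale in and out, which CIDR-based rules and network ACLs do not.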

Question 226

A company wants to design a microservices architecture for an application. Each microservice must perform operations that can be completed within 30 seconds.

The microservices need to expose RESTful APIs and must automatically scale in response to varying loads. The APIs must also provide client access control and rate limiting to maintain equitable usage and service availability.

Which solution will meet these requirements with the LEAST operational overhead?

Options:

A.

Use Amazon Elastic Container Service (Amazon ECS) on Amazon EC2 to host each microservice. Use Amazon API Gateway to manage the RESTful API requests.

B.

Deploy each microservice as a set of AWS Lambda functions. Use Amazon API Gateway to manage the RESTful API requests.

C.

Host each microservice on Amazon EC2 instances in Auto Scaling groups behind an Elastic Load Balancing (ELB) load balancer. Use the ELB to manage the RESTful API requests.

D.

Deploy each microservice on Amazon Elastic Beanstalk. Use Amazon CloudFront to manage the RESTful API requests.
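If option B is the intended answer, each microservice is a Lambda function behind API Gateway, which supplies the scaling, client access control (API keys and usage plans), and rate limiting. A minimal proxy-integration handler might look like the following; the service name and payload are hypothetical.

```python
import json

# Minimal Lambda proxy-integration handler for one microservice endpoint.
# API Gateway enforces auth and throttling in front of it; the function
# only has to return a RESTful response within the 30-second window.
def handler(event, context):
    body = {"service": "orders", "status": "ok"}  # hypothetical payload
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(body),
    }

print(handler({}, None)["statusCode"])
```

The 30-second operation limit in the question fits this pattern: API Gateway's integration timeout is just under 30 seconds, so work that completes within it needs no extra orchestration.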

Question 227

An ecommerce company wants a disaster recovery solution for its Amazon RDS DB instances that run Microsoft SQL Server Enterprise Edition. The company's current recovery point objective (RPO) and recovery time objective (RTO) are 24 hours.

Which solution will meet these requirements MOST cost-effectively?

Options:

A.

Create a cross-Region read replica and promote the read replica to the primary instance.

B.

Use AWS Database Migration Service (AWS DMS) to create RDS cross-Region replication.

C.

Use cross-Region replication every 24 hours to copy native backups to an Amazon S3 bucket.

D.

Copy automatic snapshots to another Region every 24 hours.
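If option D is the intended answer, a daily cross-Region snapshot copy satisfies a 24-hour RPO at snapshot-storage cost only. The sketch below builds the copy request (the account ID, snapshot name, and Regions are hypothetical); the call is left commented out, issued from a client in the destination Region.

```python
# Sketch (hypothetical identifiers): copy an automated RDS snapshot to a
# second Region once every 24 hours for disaster recovery.
copy_request = {
    "SourceDBSnapshotIdentifier": (
        "arn:aws:rds:us-east-1:111122223333:snapshot:rds:mydb-2024-01-01-00-00"
    ),
    "TargetDBSnapshotIdentifier": "mydb-dr-copy",
    "SourceRegion": "us-east-1",  # boto3 uses this to presign the copy request
}

# With a boto3 RDS client in the destination Region this would be:
# rds_west = boto3.client("rds", region_name="us-west-2")
# rds_west.copy_db_snapshot(**copy_request)
```

Restoring from the copied snapshot fits the 24-hour RTO, and unlike a cross-Region read replica there is no second running DB instance to pay for.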
