Curious about Actual Amazon Professional (SAP-C02) Exam Questions?

Here are sample Amazon AWS Certified Solutions Architect - Professional (SAP-C02) exam questions from the real exam. You can get more Amazon Professional (SAP-C02) premium practice questions at TestInsights.

Question 1

A company uses AWS Organizations to manage its development environment. Each development team at the company has its own AWS account. Each account has a single VPC, and the CIDR blocks do not overlap.

The company has an Amazon Aurora DB cluster in a shared services account. All the development teams need to work with live data from the DB cluster.

Which solution will provide the required connectivity to the DB cluster with the LEAST operational overhead?


Correct : B

Create a Transit Gateway:

In the shared services account, create a new AWS Transit Gateway. This serves as a central hub to connect multiple VPCs, simplifying the network topology and management.
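
As a rough illustration only, the transit gateway could be created with the AWS SDK for Python (boto3) from the shared services account; the profile name, description, and option values below are placeholders, not part of the original question.

import boto3

# Credentials for the shared services account (hypothetical profile name).
ec2 = boto3.Session(profile_name="shared-services", region_name="us-east-1").client("ec2")

tgw = ec2.create_transit_gateway(
    Description="Hub for development VPCs",           # placeholder description
    Options={
        "DefaultRouteTableAssociation": "enable",     # new attachments join the default route table
        "DefaultRouteTablePropagation": "enable",     # attachment routes propagate automatically
        "DnsSupport": "enable",
    },
)["TransitGateway"]
print(tgw["TransitGatewayId"], tgw["TransitGatewayArn"])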

Configure Transit Gateway Attachments:

Attach the VPC containing the Aurora DB cluster to the transit gateway. This allows the shared services VPC to communicate through the transit gateway.

Create Resource Share with AWS RAM:

Use AWS Resource Access Manager (AWS RAM) to create a resource share for the transit gateway. Share this resource with all development accounts. AWS RAM allows you to securely share your AWS resources across AWS accounts without needing to duplicate them.
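
A minimal boto3 sketch of that share, assuming the transit gateway ARN from the previous step and placeholder development account IDs:

import boto3

ram = boto3.client("ram", region_name="us-east-1")

share = ram.create_resource_share(
    name="dev-transit-gateway-share",   # hypothetical share name
    resourceArns=["arn:aws:ec2:us-east-1:111122223333:transit-gateway/tgw-0abc1234567890def"],
    principals=["222233334444", "333344445555"],   # placeholder development account IDs
    allowExternalPrincipals=False,                 # accounts belong to the same organization
)
print(share["resourceShare"]["resourceShareArn"])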

Accept Resource Shares in Development Accounts:

Instruct each development team to log into their respective AWS accounts and accept the transit gateway resource share. This step is crucial for enabling cross-account access to the shared transit gateway.

Configure VPC Attachments in Development Accounts:

Each development account needs to attach their VPC to the shared transit gateway. This allows their VPCs to route traffic through the transit gateway to the Aurora DB cluster in the shared services account.

Update Route Tables:

Update the route tables in each VPC to direct traffic intended for the Aurora DB cluster through the transit gateway. This ensures that network traffic is properly routed between the development VPCs and the shared services VPC.
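
In each development account, the VPC attachment and the route could look roughly like the sketch below; the VPC, subnet, route table, and CIDR values are placeholders.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")   # credentials for a development account

# Attach the development VPC to the shared transit gateway.
ec2.create_transit_gateway_vpc_attachment(
    TransitGatewayId="tgw-0abc1234567890def",        # shared via AWS RAM
    VpcId="vpc-0dev1234567890abc",
    SubnetIds=["subnet-0aaa1111bbb22222c"],          # one subnet per Availability Zone in use
)

# Route traffic destined for the shared services VPC (Aurora) through the transit gateway.
ec2.create_route(
    RouteTableId="rtb-0dev9876543210fed",
    DestinationCidrBlock="10.100.0.0/16",            # placeholder shared services VPC CIDR
    TransitGatewayId="tgw-0abc1234567890def",
)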

Using a transit gateway simplifies the network management and reduces operational overhead by providing a scalable and efficient way to interconnect multiple VPCs across different AWS accounts.

Reference

AWS Database Blog on RDS Proxy for Cross-Account Access

AWS Architecture Blog on Cross-Account and Cross-Region Aurora Setup

DEV Community on Managing Multiple AWS Accounts with Organizations


Question 2

A company has an application that analyzes and stores image data on premises. The application receives millions of new image files every day. Files are an average of 1 MB in size. The files are analyzed in batches of 1 GB. When the application analyzes a batch, the application zips the images together. The application then archives the images as a single file in an on-premises NFS server for long-term storage.

The company has a Microsoft Hyper-V environment on premises and has compute capacity available. The company does not have storage capacity and wants to archive the images on AWS. The company needs the ability to retrieve archived data within 1 week of a request.

The company has a 10 Gbps AWS Direct Connect connection between its on-premises data center and AWS. The company needs to set bandwidth limits and schedule archived images to be copied to AWS during non-business hours.

Which solution will meet these requirements MOST cost-effectively?


Correct : B

Deploy DataSync Agent:

Install the AWS DataSync agent as a VM in your Hyper-V environment. This agent facilitates the data transfer between your on-premises storage and AWS.
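
Once the agent VM is running and has produced an activation key, registering it with the service could look like this boto3 sketch; the activation key and agent name are placeholders.

import boto3

datasync = boto3.client("datasync", region_name="us-east-1")

agent = datasync.create_agent(
    ActivationKey="ABCDE-12345-FGHIJ-67890-KLMNO",   # placeholder key obtained from the agent VM
    AgentName="hyperv-onprem-agent",                 # hypothetical agent name
)
print(agent["AgentArn"])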

Configure Source and Destination:

Set up the source location to point to your on-premises NFS server where the image batches are stored.

Configure the destination location to be an Amazon S3 bucket with the Glacier Deep Archive storage class. This storage class is cost-effective for long-term storage with retrieval times of up to 12 hours.
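
A sketch of both locations with boto3, assuming a hypothetical NFS export path, bucket name, IAM role, and agent ARN:

import boto3

datasync = boto3.client("datasync", region_name="us-east-1")

agent_arn = "arn:aws:datasync:us-east-1:111122223333:agent/agent-0abc1234567890def"  # from the agent step

source = datasync.create_location_nfs(
    ServerHostname="nfs.onprem.example.com",           # placeholder NFS server
    Subdirectory="/exports/image-archives",            # placeholder export holding the zipped batches
    OnPremConfig={"AgentArns": [agent_arn]},
)

destination = datasync.create_location_s3(
    S3BucketArn="arn:aws:s3:::example-image-archive",  # placeholder bucket
    Subdirectory="/archives",
    S3StorageClass="DEEP_ARCHIVE",                     # objects land directly in Glacier Deep Archive
    S3Config={"BucketAccessRoleArn": "arn:aws:iam::111122223333:role/DataSyncS3Role"},
)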

Create DataSync Tasks:

Create and configure DataSync tasks to manage the data transfer. Schedule these tasks to run during non-business hours to minimize bandwidth usage during peak times. The tasks will handle the copying of data batches from the NFS server to the S3 bucket.

Set Bandwidth Limits:

In the DataSync configuration, set bandwidth limits to control the amount of data being transferred at any given time. This ensures that your network's performance is not adversely affected during business hours.
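
The scheduling and bandwidth requirements map to the task's Schedule expression and BytesPerSecond option; a rough boto3 sketch with placeholder location ARNs and an arbitrary nightly window:

import boto3

datasync = boto3.client("datasync", region_name="us-east-1")

task = datasync.create_task(
    SourceLocationArn="arn:aws:datasync:us-east-1:111122223333:location/loc-0src1234567890abc",
    DestinationLocationArn="arn:aws:datasync:us-east-1:111122223333:location/loc-0dst1234567890abc",
    Name="nightly-image-archive",
    Schedule={"ScheduleExpression": "cron(0 1 * * ? *)"},   # 01:00 UTC daily, i.e. non-business hours
    Options={"BytesPerSecond": 500 * 1024 * 1024},          # cap well below the 10 Gbps Direct Connect link
)
print(task["TaskArn"])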

Delete On-Premises Data:

After successfully copying the data to S3 Glacier Deep Archive, configure the DataSync task to delete the data from your on-premises NFS server. This helps manage storage capacity on-premises and ensures data is securely archived on AWS.

This approach leverages AWS DataSync for efficient, secure, and automated data transfer, and S3 Glacier Deep Archive for cost-effective long-term storage.

Reference

AWS DataSync Overview

AWS Storage Blog on DataSync Migration

Amazon S3 Transfer Acceleration Documentation


Question 3

A company has a web application that uses Amazon API Gateway, AWS Lambda, and Amazon DynamoDB. A recent marketing campaign has increased demand. Monitoring software reports that many requests have significantly longer response times than before the marketing campaign.

A solutions architect enabled Amazon CloudWatch Logs for API Gateway and noticed that errors are occurring on 20% of the requests. In CloudWatch, the Lambda function's Throttles metric represents 1% of the requests and the Errors metric represents 10% of the requests. Application logs indicate that, when errors occur, there is a call to DynamoDB.

What change should the solutions architect make to improve the current response times as the web application becomes more popular?


Correct : B

Enable DynamoDB Auto Scaling:

Navigate to the DynamoDB console and select the table experiencing high demand.

Go to the 'Capacity' tab and enable auto scaling for both read and write capacity units. Auto scaling adjusts the provisioned throughput capacity automatically in response to actual traffic patterns, ensuring the table can handle the increased load.
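
Outside the console, the same setup is normally expressed through Application Auto Scaling; the sketch below uses boto3 with a placeholder table name and capacity range.

import boto3

autoscaling = boto3.client("application-autoscaling", region_name="us-east-1")

for dimension in ("dynamodb:table:ReadCapacityUnits", "dynamodb:table:WriteCapacityUnits"):
    autoscaling.register_scalable_target(
        ServiceNamespace="dynamodb",
        ResourceId="table/WebAppTable",   # placeholder table name
        ScalableDimension=dimension,
        MinCapacity=5,                    # placeholder lower bound
        MaxCapacity=4000,                 # placeholder upper bound for campaign traffic
    )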

Configure Auto Scaling Policies:

Set the minimum and maximum capacity units to define the range within which auto scaling can adjust the provisioned throughput.

Specify target utilization percentages for read and write operations, typically around 70%, to maintain a balance between performance and cost.
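
The target utilization translates into target tracking policies; a sketch assuming the scalable targets registered above:

import boto3

autoscaling = boto3.client("application-autoscaling", region_name="us-east-1")

policies = {
    "dynamodb:table:ReadCapacityUnits": "DynamoDBReadCapacityUtilization",
    "dynamodb:table:WriteCapacityUnits": "DynamoDBWriteCapacityUtilization",
}
for dimension, metric in policies.items():
    autoscaling.put_scaling_policy(
        PolicyName=f"{metric}-target-70",
        ServiceNamespace="dynamodb",
        ResourceId="table/WebAppTable",   # same placeholder table as above
        ScalableDimension=dimension,
        PolicyType="TargetTrackingScaling",
        TargetTrackingScalingPolicyConfiguration={
            "TargetValue": 70.0,          # keep consumed capacity near 70% of provisioned
            "PredefinedMetricSpecification": {"PredefinedMetricType": metric},
        },
    )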

Monitor and Adjust:

Use Amazon CloudWatch to monitor the auto scaling activity and ensure it is effectively handling the increased demand.

Adjust the auto scaling settings if necessary to better match the traffic patterns and application requirements.

By enabling DynamoDB auto scaling, you ensure that the database can handle the fluctuating traffic volumes without manual intervention, improving response times and reducing errors.

Reference

AWS Compute Blog on Using API Gateway as a Proxy for DynamoDB

AWS Database Blog on DynamoDB Accelerator (DAX)


Question 4

A solutions architect has deployed a web application that serves users across two AWS Regions under a custom domain. The application uses Amazon Route 53 latency-based routing. The solutions architect has associated weighted record sets with a pair of web servers in separate Availability Zones for each Region.

The solutions architect runs a disaster recovery scenario. When all the web servers in one Region are stopped, Route 53 does not automatically redirect users to the other Region.

Which of the following are possible root causes of this issue? (Select TWO.)


Correct : D, E

Evaluate Target Health Setting:

Ensure that the 'Evaluate Target Health' setting is enabled for the latency alias resource record sets in Route 53. This setting helps Route 53 determine the health of the resources associated with the alias record and redirect traffic appropriately.
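
For an alias latency record, that flag lives on the AliasTarget; a boto3 sketch with placeholder hosted zone, domain, and load balancer values:

import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z0123456789EXAMPLE",                 # placeholder public hosted zone
    ChangeBatch={"Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "app.example.com",
            "Type": "A",
            "SetIdentifier": "us-east-1-latency",      # one latency record per Region
            "Region": "us-east-1",
            "AliasTarget": {
                "HostedZoneId": "Z35SXDOTRQ7X7K",      # placeholder alias target hosted zone ID
                "DNSName": "region1-lb.example.com",   # placeholder Regional endpoint
                "EvaluateTargetHealth": True,          # needed so traffic can fail over to the other Region
            },
        },
    }]},
)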

HTTP Health Checks:

Configure HTTP health checks for all weighted resource record sets. Health checks monitor the availability and performance of the web servers, allowing Route 53 to reroute traffic to healthy servers in case of a failure.

Verify that the health checks are correctly set up and associated with the resource record sets. This ensures that Route 53 can detect server failures and redirect traffic to the servers in the other Region.
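
A sketch of one health check and its association with a weighted record, using boto3 and placeholder server values:

import boto3
import uuid

route53 = boto3.client("route53")

# HTTP health check against one web server (placeholder IP and path).
check = route53.create_health_check(
    CallerReference=str(uuid.uuid4()),
    HealthCheckConfig={
        "Type": "HTTP",
        "IPAddress": "203.0.113.10",
        "Port": 80,
        "ResourcePath": "/health",
        "RequestInterval": 30,
        "FailureThreshold": 3,
    },
)

# Associate the health check with the corresponding weighted record.
route53.change_resource_record_sets(
    HostedZoneId="Z0123456789EXAMPLE",
    ChangeBatch={"Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "us-east-1.app.example.com",   # placeholder Regional record name
            "Type": "A",
            "SetIdentifier": "web-server-az1",
            "Weight": 50,
            "TTL": 60,
            "ResourceRecords": [{"Value": "203.0.113.10"}],
            "HealthCheckId": check["HealthCheck"]["Id"],
        },
    }]},
)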

By enabling the 'Evaluate Target Health' setting and configuring HTTP health checks, Route 53 can effectively manage traffic during failover scenarios, ensuring high availability and reliability.

Reference

AWS Route 53 Documentation on Latency-Based Routing

AWS Architecture Blog on Cross-Account and Cross-Region Setup


Question 5

A company has multiple lines of business (LOBs) that roll up to the parent company. The company has asked its solutions architect to develop a solution with the following requirements:

* Produce a single AWS invoice for all of the AWS accounts used by its LOBs.

* The costs for each LOB account should be broken out on the invoice.

* Provide the ability to restrict services and features in the LOB accounts, as defined by the company's governance policy.

* Each LOB account should be delegated full administrator permissions, regardless of the governance policy.

Which combination of steps should the solutions architect take to meet these requirements? (Select TWO.)


Correct : B, E

Create AWS Organization:

In the AWS Management Console, navigate to AWS Organizations and create a new organization in the parent account.

Invite LOB Accounts:

Invite each Line of Business (LOB) account to join the organization. This allows centralized management and governance of all accounts.
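
A minimal boto3 sketch run from the parent (management) account, with placeholder LOB account IDs:

import boto3

orgs = boto3.client("organizations")

# "ALL" enables consolidated billing plus advanced features such as SCPs.
orgs.create_organization(FeatureSet="ALL")

for account_id in ("222233334444", "333344445555"):   # placeholder LOB account IDs
    orgs.invite_account_to_organization(
        Target={"Id": account_id, "Type": "ACCOUNT"},
        Notes="Join the parent organization for consolidated billing",
    )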

Enable Consolidated Billing:

Enable consolidated billing in the billing console of the parent account. Link all LOB accounts to ensure a single consolidated invoice that breaks down costs per account.

Apply Service Control Policies (SCPs):

Implement Service Control Policies (SCPs) to define the services and features permitted for each LOB account as per the governance policy, while still delegating full administrative permissions to the LOB accounts.
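
A sketch of one such SCP that denies a service not allowed by the governance policy and attaches it to an OU containing the LOB accounts; the denied service and the OU ID are placeholders chosen for illustration.

import boto3
import json

orgs = boto3.client("organizations")

# SCPs must first be enabled on the organization root (this call errors if they already are).
root_id = orgs.list_roots()["Roots"][0]["Id"]
orgs.enable_policy_type(RootId=root_id, PolicyType="SERVICE_CONTROL_POLICY")

scp = orgs.create_policy(
    Name="lob-governance-guardrails",
    Description="Deny services not permitted by the governance policy",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{"Effect": "Deny", "Action": "redshift:*", "Resource": "*"}],  # example denied service
    }),
)
orgs.attach_policy(
    PolicyId=scp["Policy"]["PolicySummary"]["Id"],
    TargetId="ou-abcd-11111111",   # placeholder OU holding the LOB accounts
)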

By consolidating billing and using AWS Organizations, the company can achieve centralized billing and governance while maintaining independent administrative control for each LOB account.

