AWS-SOLUTION-ARCHITECT-ASSOCIATE
Exam: Amazon AWS-SOLUTION-ARCHITECT-ASSOCIATE
Title: AWS Certified Solutions Architect – Associate
Product Type: 1062 Q&A
QUESTION: 1
A 3-tier e-commerce web application is currently deployed on-premises and will be migrated to AWS
for greater scalability and elasticity. The web server tier currently shares read-only data using a
network distributed file system. The app server tier uses a clustering mechanism for discovery and
shared session state that depends on IP multicast. The database tier uses shared-storage clustering
to provide database failover capability, and uses several read slaves for scaling. Data on all servers
and the distributed file system directory is backed up weekly to off-site tapes.
Which AWS storage and database architecture meets the requirements of the application?
A. Web servers: store read-only data in S3, and copy from S3 to root volume at boot time. App
servers: share state using a combination of DynamoDB and IP unicast. Database: use RDS with multi-
AZ deployment and one or more read replicas. Backup: web servers, app servers, and database
backed up weekly to Glacier using snapshots.
B. Web servers: store read-only data in an EC2 NFS server, mount to each web server at boot time.
App servers: share state using a combination of DynamoDB and IP multicast. Database: use RDS with
multi-AZ deployment and one or more Read Replicas. Backup: web and app servers backed up
weekly via AMIs, database backed up via DB snapshots.
C. Web servers: store read-only data in S3, and copy from S3 to root volume at boot time. App
servers: share state using a combination of DynamoDB and IP unicast. Database: use RDS with multi-
AZ deployment and one or more Read Replicas. Backup: web and app servers backed up weekly via
AMIs, database backed up via DB snapshots.
D. Web servers: store read-only data in S3, and copy from S3 to root volume at boot time. App
servers: share state using a combination of DynamoDB and IP unicast. Database: use RDS with multi-
AZ deployment. Backup: web and app servers backed up weekly via AMIs, database backed up via DB
snapshots.
Answer: A
Explanation:
https://d0.awsstatic.com/whitepapers/Storage/AWS%20Storage%20Services%20Whitepaper-v9.pdf
Amazon Glacier doesn’t suit all storage situations. Listed following are a few storage needs for which
you should consider other AWS storage options instead of Amazon Glacier.
Data that must be updated very frequently might be better served by a storage solution with lower
read/write latencies, such as Amazon EBS, Amazon RDS, Amazon DynamoDB, or relational databases
running on EC2.
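As a concrete illustration of the web-tier pattern in the answer (read-only data in S3, copied to the
instance at boot), the following is a minimal boto3 sketch; the bucket name, prefix, and destination
path are hypothetical placeholders, not values from the question:

import os
import boto3

# Hypothetical bucket/prefix holding the web tier's read-only content.
BUCKET = "example-webtier-static"
PREFIX = "readonly/"
DEST = "/var/www/static"

s3 = boto3.client("s3")
paginator = s3.get_paginator("list_objects_v2")

# Run from instance user data at boot: pull every object under the prefix to local disk.
for page in paginator.paginate(Bucket=BUCKET, Prefix=PREFIX):
    for obj in page.get("Contents", []):
        rel = obj["Key"][len(PREFIX):]
        if not rel:
            continue  # skip the prefix marker itself
        local_path = os.path.join(DEST, rel)
        os.makedirs(os.path.dirname(local_path), exist_ok=True)
        s3.download_file(BUCKET, obj["Key"], local_path)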
QUESTION: 2
Your customer wishes to deploy an enterprise application to AWS that will consist of several web
servers, several application servers, and a small (50GB) Oracle database. Information is stored both
in the database and in the file systems of the various servers. The backup system must support
database recovery, whole server and whole disk restores, and individual file restores with a recovery
time of no more than two hours. They have chosen to use RDS Oracle as the database.
Which backup architecture will meet these requirements?
A. Backup RDS using automated daily DB backups. Backup the EC2 instances using AMIs, and
supplement with file-level backup to S3 using traditional enterprise backup software to provide
file-level restore.
B. Backup RDS using a Multi-AZ deployment. Backup the EC2 instances using AMIs, and supplement
by copying file system data to S3 to provide file-level restore.
C. Backup RDS using automated daily DB backups. Backup the EC2 instances using EBS snapshots,
and supplement with file-level backups to Amazon Glacier using traditional enterprise backup
software to provide file-level restore.
D. Backup RDS database to S3 using Oracle RMAN. Backup the EC2 instances using AMIs, and
supplement with EBS snapshots for individual volume restore.
Answer: A
Explanation:
You need to use enterprise backup software to provide file-level restore. See
https://d0.awsstatic.com/whitepapers/Backup_and_Recovery_Approaches_Using_AWS.pdf
Page 18:
If your existing backup software does not natively support the AWS cloud, you can use AWS Storage
Gateway products. AWS Storage Gateway is a virtual appliance that provides seamless and secure
integration between your data center and the AWS storage infrastructure.
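For reference, the "automated daily DB backups" in answer A are enabled on RDS by setting a
non-zero backup retention period. A minimal boto3 sketch, assuming a hypothetical instance
identifier:

import boto3

rds = boto3.client("rds")

# Enable automated daily backups by setting a non-zero retention period
# (a retention of 0 disables them). The instance identifier is hypothetical.
rds.modify_db_instance(
    DBInstanceIdentifier="enterprise-oracle-db",
    BackupRetentionPeriod=7,   # keep 7 days of automated backups
    ApplyImmediately=True,
)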
QUESTION: 3
Your company has HQ in Tokyo and branch offices all over the world, and is using logistics software
with a multi-regional deployment on AWS in Japan, Europe, and the US.
The logistics software has a 3-tier architecture and currently uses MySQL 5.6 for data persistence.
Each region has deployed its own database.
In the HQ region you run an hourly batch process reading data from every region to compute
cross-regional reports that are sent by email to all offices. This batch process must be completed as
fast as possible to quickly optimize logistics. How do you build the database architecture in order to
meet the requirements?
A. For each regional deployment, use RDS MySQL with a master in the region and a read replica in the HQ region
B. For each regional deployment, use MySQL on EC2 with a master in the region and send hourly EBS snapshots to the
HQ region
C. For each regional deployment, use RDS MySQL with a master in the region and send hourly RDS snapshots to the
HQ region
D. For each regional deployment, use MySQL on EC2 with a master in the region and use S3 to copy data files hourly to
the HQ region
E. Use Direct Connect to connect all regional MySQL deployments to the HQ region and reduce network latency for the
batch process
Answer: A
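A cross-region read replica (answer A) is created by calling the RDS API in the destination (HQ)
region and pointing at the source database's ARN. A hedged boto3 sketch; the identifiers and
account number below are hypothetical:

import boto3

# Create the replica in the HQ region (Tokyo), sourcing from the EU master.
rds = boto3.client("rds", region_name="ap-northeast-1")

rds.create_db_instance_read_replica(
    DBInstanceIdentifier="logistics-eu-replica-hq",
    SourceDBInstanceIdentifier=(
        "arn:aws:rds:eu-west-1:123456789012:db:logistics-eu-master"
    ),
)

The hourly batch process then reads from the local replicas in the HQ region instead of querying
each remote region directly.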
QUESTION: 4
A customer has a 10 Gbps AWS Direct Connect connection to an AWS region where they have a web
application hosted on Amazon Elastic Compute Cloud (EC2). The application has dependencies on
an on-premises mainframe database that uses a BASE (Basically Available, Soft state, Eventual
consistency) rather than an ACID (Atomicity, Consistency, Isolation, Durability) consistency model.
The application is exhibiting undesirable behavior because the database is not able to handle the
volume of writes. How can you reduce the load on your on-premises database resources in the most
cost-effective way?
A. Use an Amazon Elastic Map Reduce (EMR) S3DistCp as a synchronization mechanism between the
on-premises database and a Hadoop cluster on AWS.
B. Modify the application to write to an Amazon SQS queue and develop a worker process to flush
the queue to the on-premises database.
C. Modify the application to use DynamoDB to feed an EMR cluster which uses a map function to
write to the on-premises database.
D. Provision an RDS read-replica database on AWS to handle the writes and synchronize the two
databases using Data Pipeline.
Answer: B
Explanation:
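Answer B decouples the write path from the mainframe: the application enqueues writes to SQS,
and a worker drains the queue at a rate the database can absorb. A minimal boto3 sketch; the queue
name and the apply_write callable are hypothetical placeholders:

import json
import boto3

sqs = boto3.client("sqs")
queue_url = sqs.get_queue_url(QueueName="mainframe-writes")["QueueUrl"]

def enqueue_write(record):
    """Application side: buffer the write in SQS instead of hitting the DB."""
    sqs.send_message(QueueUrl=queue_url, MessageBody=json.dumps(record))

def drain_queue(apply_write):
    """Worker side: flush queued writes to the on-premises database.
    apply_write is a hypothetical callable that performs the actual DB write."""
    while True:
        resp = sqs.receive_message(
            QueueUrl=queue_url, MaxNumberOfMessages=10, WaitTimeSeconds=20
        )
        for msg in resp.get("Messages", []):
            apply_write(json.loads(msg["Body"]))
            # Delete only after a successful write so no message is lost.
            sqs.delete_message(
                QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"]
            )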
QUESTION: 5
Company B is launching a new game app for mobile devices. Users will log into the game using their
existing social media account to streamline data capture. Company B would like to directly save
player data and scoring information from the mobile app to a DynamoDB table named Score Data.
When a user saves their game, the progress data will be stored to the Game State S3 bucket. What is
the best approach for storing data to DynamoDB and S3?
A. Use an EC2 instance that is launched with an EC2 role providing access to the Score Data
DynamoDB table and the Game State S3 bucket, and that communicates with the mobile app via
web services.
B. Use temporary security credentials that assume a role providing access to the Score Data
DynamoDB table and the Game State S3 bucket using web identity federation.
C. Use Login with Amazon allowing users to sign in with an Amazon account providing the mobile
app with access to the Score Data DynamoDB table and the Game State S3 bucket.
D. Use an IAM user with access credentials assigned a role providing access to the Score Data
DynamoDB table and the Game State S3 bucket for distribution with the mobile app.
Answer: B
Explanation:
The requirements state “Users will log into the game using their existing social media account to
streamline data capture.” This is what Cognito is used for, ie Web Identity Federation. Amazon also
recommend to “build your app so that it requests temporary AWS security credentials dynamically
when needed using web identity federation.”
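As a sketch of the mechanics, the temporary-credentials flow can be exercised with STS directly
(Amazon Cognito wraps this same exchange). The role ARN, account number, token, and item
attributes below are hypothetical, and the question's "Score Data" table is written as ScoreData
since DynamoDB table names cannot contain spaces:

import boto3

# Token obtained from the social identity provider's SDK on the device;
# the value here is only a placeholder.
social_identity_token = "<token-from-identity-provider>"

sts = boto3.client("sts")

# AssumeRoleWithWebIdentity needs no AWS credentials to call: the provider
# token itself is the proof of identity. Role ARN is hypothetical.
creds = sts.assume_role_with_web_identity(
    RoleArn="arn:aws:iam::123456789012:role/GameClientRole",
    RoleSessionName="player-session",
    WebIdentityToken=social_identity_token,
)["Credentials"]

# Use the temporary credentials; no long-term keys ship with the app.
dynamodb = boto3.client(
    "dynamodb",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
dynamodb.put_item(
    TableName="ScoreData",
    Item={"PlayerId": {"S": "player-42"}, "Score": {"N": "1200"}},
)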
QUESTION: 6
Your company plans to host a large donation website on Amazon Web Services (AWS). You anticipate
a large and undetermined amount of traffic that will create many database writes. To be certain that
you do not drop any writes to a database hosted on AWS, which service should you use?
A. Amazon RDS with provisioned IOPS up to the anticipated peak write throughput.
B. Amazon Simple Queue Service (SQS) for capturing the writes and draining the queue to write to
the database.
C. Amazon ElastiCache to store the writes until the writes are committed to the database.
D. Amazon DynamoDB with provisioned write throughput up to the anticipated peak write
throughput.
Answer: B
Explanation:
https://aws.amazon.com/sqs/faqs/
There is no limit on the number of messages that can be pushed onto SQS. The retention period of
SQS messages is 4 days by default and can be increased to 14 days. This ensures that no writes are
missed.
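The retention change mentioned above is a single queue attribute, expressed in seconds. A minimal
boto3 sketch, with a hypothetical queue name:

import boto3

sqs = boto3.client("sqs")
queue_url = sqs.create_queue(QueueName="donation-writes")["QueueUrl"]

# Raise message retention from the 4-day default to the 14-day maximum.
sqs.set_queue_attributes(
    QueueUrl=queue_url,
    Attributes={"MessageRetentionPeriod": str(14 * 24 * 60 * 60)},
)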
QUESTION: 7
You have launched an EC2 instance with four (4) 500 GB EBS Provisioned IOPS volumes attached. The
EC2 instance is EBS-Optimized and supports 500 Mbps throughput between EC2 and EBS. The four
EBS volumes are configured as a single RAID 0 device, and each Provisioned IOPS volume is
provisioned with 4,000 IOPS (4,000 16KB reads or writes) for a total of 16,000 random IOPS on the
instance. The EC2 instance initially delivers the expected 16,000 IOPS random read and write
performance. Sometime later, in order to increase the total random I/O performance of the instance,
you add an additional two 500 GB EBS Provisioned IOPS volumes to the RAID. Each volume is
provisioned to 4,000 IOPS like the original four, for a total of 24,000 IOPS on the EC2 instance.
Monitoring shows that the EC2 instance CPU utilization increased from 50% to 70%, but the total
random IOPS measured at the instance level does not increase at all.
What is the problem and a valid solution?
A. Larger storage volumes support higher Provisioned IOPS rates: increase the provisioned volume
storage of each of the 6 EBS volumes to 1TB.
B. The EBS-Optimized throughput limits the total IOPS that can be utilized; use an EBS-Optimized
instance that provides larger throughput.
C. Small block sizes cause performance degradation, limiting the I/O throughput; configure the
instance device driver and file system to use 64KB blocks to increase throughput.
D. RAID 0 only scales linearly to about 4 devices; use RAID 0 with 4 EBS Provisioned IOPS volumes
but increase each Provisioned IOPS EBS volume to 6,000 IOPS.
E. The standard EBS instance root volume limits the total IOPS rate; change the instance root volume
to also be a 500GB 4,000 Provisioned IOPS volume.
Answer: E
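For context, the IOPS and throughput figures quoted in the question can be reproduced with simple
arithmetic. The sketch below only restates the question's numbers and does not argue for any
particular option:

# Arithmetic behind the figures quoted in the question.
iops_per_volume = 4_000
io_size_kb = 16

for volumes in (4, 6):
    total_iops = volumes * iops_per_volume
    required_mb_per_s = total_iops * io_size_kb / 1024
    print(f"{volumes} volumes: {total_iops:,} IOPS "
          f"= {required_mb_per_s:.0f} MB/s of 16KB I/O")

# The instance's dedicated EC2-to-EBS link is quoted as 500 Mbps (megabits):
print(f"EBS-Optimized link: {500 / 8:.1f} MB/s")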
QUESTION: 8
You have recently joined a startup company building sensors to measure street noise and air quality
in urban areas. The company has been running a pilot deployment of around 100 sensors for 3
months. Each sensor uploads 1KB of sensor data every minute to a backend hosted on AWS.
During the pilot, you measured a peak of 10 IOPS on the database, and you stored an average of 3GB
of sensor data per month in the database.
The current deployment consists of a load-balanced, auto-scaled ingestion layer using EC2 instances
and a PostgreSQL RDS database with 500GB standard storage.
The pilot is considered a success and your CEO has managed to get the attention of some potential
investors. The business plan requires a deployment of at least 100K sensors, which needs to be