
[Feb-2017 Dumps] Reliable PassLeader AWS Certified DevOps Engineer – Professional Braindump with VCE and PDF Dumps Free Download

New Updated AWS Certified DevOps Engineer – Professional Exam Questions from PassLeader AWS Certified DevOps Engineer – Professional PDF dumps! Welcome to download the newest PassLeader AWS Certified DevOps Engineer – Professional VCE dumps: http://www.passleader.com/aws-devops-engineer-professional.html (190 Q&As)

p.s. Free AWS Certified DevOps Engineer – Professional dumps download from Google Drive: https://drive.google.com/open?id=0B-ob6L_QjGLpblF1NzNWWjFiRGc

NEW QUESTION 1
Due to compliance regulations, management has asked you to provide a system that allows for cost-effective long-term storage of your application logs and provides a way for support staff to view the logs more quickly. Currently your log system archives logs automatically to Amazon S3 every hour, and support staff must wait for these logs to appear in Amazon S3, because they do not currently have access to the systems to view live logs. What method should you use to become compliant while also providing a faster way for support staff to have access to logs?

A.    Update Amazon S3 lifecycle policies to archive old logs to Amazon Glacier, and add a new policy to push all log entries to Amazon SQS for ingestion by the support team.
B.    Update Amazon S3 lifecycle policies to archive old logs to Amazon Glacier, and use or write a service to also stream your application logs to CloudWatch Logs.
C.    Update Amazon Glacier lifecycle policies to pull new logs from Amazon S3, and in the Amazon EC2 console, enable the CloudWatch Logs Agent on all of your application servers.
D.    Update Amazon S3 lifecycle policies to archive old logs to Amazon Glacier. Enable Amazon S3 partial uploads on your Amazon S3 bucket, and trigger an Amazon SNS notification when a partial upload occurs.
E.    Use or write a service to stream your application logs to CloudWatch Logs. Use an Amazon Elastic MapReduce cluster to live stream your logs from CloudWatch Logs for ingestion by the support team, and create a Hadoop job to push the logs to S3 in five-minute chunks.

Answer: B
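
For reference, the "stream your application logs to CloudWatch Logs" half of the correct option can be done with the CloudWatch Logs Agent or with a few SDK calls. A minimal boto3 (Python) sketch, where the group, stream, and message values are hypothetical:

```python
import time
import boto3
from botocore.exceptions import ClientError

logs = boto3.client("logs")
GROUP, STREAM = "app/production", "web-01"  # hypothetical names

# Create the group and stream once; ignore "already exists" errors on reruns.
for call, kwargs in [
    (logs.create_log_group, {"logGroupName": GROUP}),
    (logs.create_log_stream, {"logGroupName": GROUP, "logStreamName": STREAM}),
]:
    try:
        call(**kwargs)
    except ClientError as e:
        if e.response["Error"]["Code"] != "ResourceAlreadyExistsException":
            raise

# Ship a log line; timestamps are milliseconds since the epoch.
logs.put_log_events(
    logGroupName=GROUP,
    logStreamName=STREAM,
    logEvents=[{"timestamp": int(time.time() * 1000), "message": "GET /quote 200"}],
)
```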

NEW QUESTION 2
You want to securely distribute credentials for your Amazon RDS instance to your fleet of web server instances. The credentials are stored in a file that is controlled by a configuration management system. How do you securely deploy the credentials in an automated manner across the fleet of web server instances, which can number in the hundreds, while retaining the ability to roll back if needed?

A.    Store your credential files in an Amazon S3 bucket.
Use Amazon S3 server-side encryption on the credential files.
Have a scheduled job that pulls down the credential files into the instances every 10 minutes.
B.    Store the credential files in your version-controlled repository with the rest of your code.
Have a post-commit action in version control that kicks off a job in your continuous integration system which securely copies the new credential files to all web server instances.
C.    Insert credential files into user data and use an instance lifecycle policy to periodically refresh the file from the user data.
D.    Keep credential files as a binary blob in an Amazon RDS MySQL DB instance, and have a script on each Amazon EC2 instance that pulls the files down from the RDS instance.
E.    Store the credential files in your version-controlled repository with the rest of your code.
Use a parallel file copy program to send the credential files from your local machine to the Amazon EC2 instances.

Answer: B

NEW QUESTION 3
You are using a configuration management system to manage your Amazon EC2 instances. On your Amazon EC2 instances, you want to store credentials for connecting to an Amazon RDS DB instance. How should you securely store these credentials?

A.    Give the Amazon EC2 instances an IAM role that allows read access to a private Amazon S3 bucket.
Store a file with database credentials in the Amazon S3 bucket.
Have your configuration management system pull the file from the bucket when it is needed.
B.    Launch an Amazon EC2 instance and use the configuration management system to bootstrap the instance with the Amazon RDS DB credentials.
Create an AMI from this instance.
C.    Store the Amazon RDS DB credentials in Amazon EC2 user data.
Import the credentials into the instance on boot.
D.    Assign an IAM role to your Amazon RDS instance, and use this IAM role to access the Amazon RDS DB from your Amazon EC2 instances.
E.    Store your credentials in your version control system, in plaintext.
Check out a copy of your credentials from the version control system on boot.
Use Amazon EBS encryption on the volume storing the Amazon RDS DB credentials.

Answer: A

NEW QUESTION 4
Your company has developed a web application and is hosting it in an Amazon S3 bucket configured for static website hosting. The application is using the AWS SDK for JavaScript in the browser to access data stored in an Amazon DynamoDB table. How can you ensure that API keys for access to your data in DynamoDB are kept secure?

A.    Create an Amazon S3 role in IAM with access to the specific DynamoDB tables, and assign it to the bucket hosting your website.
B.    Configure S3 bucket tags with your AWS access keys for your bucket hosting your website so that the application can query them for access.
C.    Configure a web identity federation role within IAM to enable access to the correct DynamoDB resources and retrieve temporary credentials.
D.    Store AWS keys in global variables within your application and configure the application to use these credentials when making requests.

Answer: C
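
In the browser this flow uses the AWS SDK for JavaScript, but the underlying STS call is the same in any SDK. A boto3 (Python) sketch of web identity federation, where the role ARN, table, and identity token are hypothetical placeholders:

```python
import boto3

oidc_token = "<token from the web identity provider>"  # hypothetical placeholder

# assume_role_with_web_identity is an unsigned call: no AWS keys are needed,
# only the provider's token and the role to assume.
sts = boto3.client("sts")
creds = sts.assume_role_with_web_identity(
    RoleArn="arn:aws:iam::123456789012:role/WebAppDynamoRole",
    RoleSessionName="browser-user-42",
    WebIdentityToken=oidc_token,
)["Credentials"]

# The temporary credentials can reach only the DynamoDB resources the role allows.
dynamodb = boto3.client(
    "dynamodb",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
print(dynamodb.get_item(TableName="AppData", Key={"pk": {"S": "user-42"}}))
```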

NEW QUESTION 5
You need to implement A/B deployments for several multi-tier web applications. Each of them has its individual infrastructure:
Amazon Elastic Compute Cloud (EC2) front-end servers, Amazon ElastiCache clusters, Amazon Simple Queue Service (SQS) queues, and Amazon Relational Database Service (RDS) instances.
Which combination of services would give you the ability to control traffic between different deployed versions of your application? (Choose one.)

A.    Create one AWS Elastic Beanstalk application and all AWS resources (using configuration files inside the application source bundle) for each web application.
New versions would be deployed by creating new Elastic Beanstalk environments and using the Swap URLs feature.
B.    Using AWS CloudFormation templates, create one Elastic Beanstalk application and all AWS resources (in the same template) for each web application.
New versions would be deployed using AWS CloudFormation templates to create new Elastic Beanstalk environments, and traffic would be balanced between them using Weighted Round Robin (WRR) records in Amazon Route 53.
C.    Using AWS CloudFormation templates, create one Elastic Beanstalk application and all AWS resources (in the same template) for each web application.
New versions would be deployed by updating a parameter on the CloudFormation template and passing it to the cfn-hup helper daemon, and traffic would be balanced between them using Weighted Round Robin (WRR) records in Amazon Route 53.
D.    Create one Elastic Beanstalk application and all AWS resources (using configuration files inside the application source bundle) for each web application.
New versions would be deployed by updating the Elastic Beanstalk application version for the current Elastic Beanstalk environment.

Answer: B

NEW QUESTION 6
You work for an insurance company and are responsible for the day-to-day operations of your company’s online quote system used to provide insurance quotes to members of the public. Your company wants to use the application logs generated by the system to better understand customer behavior. Industry regulations also require that you retain all application logs for the system indefinitely in order to investigate fraudulent claims in the future. You have been tasked with designing a log management system with the following requirements:
– All log entries must be retained by the system, even during unplanned instance failure.
– The customer insight team requires immediate access to the logs from the past seven days.
– The fraud investigation team requires access to all historic logs, but will wait up to 24 hours before these logs are available.
How would you meet these requirements in a cost-effective manner? (Choose three.)

A.    Configure your application to write logs to the instance’s ephemeral disk, because this storage is free and has good write performance.
Create a script that moves the logs from the instance to Amazon S3 once an hour.
B.    Write a script that is configured to be executed when the instance is stopped or terminated and that will upload any remaining logs on the instance to Amazon S3.
C.    Create an Amazon S3 lifecycle configuration to move log files from Amazon S3 to Amazon Glacier after seven days.
D.    Configure your application to write logs to the instance’s default Amazon EBS boot volume, because this storage already exists.
Create a script that moves the logs from the instance to Amazon S3 once an hour.
E.    Configure your application to write logs to a separate Amazon EBS volume with the “delete on termination” field set to false.
Create a script that moves the logs from the instance to Amazon S3 once an hour.
F.    Create a housekeeping script that runs on a T2 micro instance managed by an Auto Scaling group for high availability.
The script uses the AWS API to identify any unattached Amazon EBS volumes containing log files.
Your housekeeping script will mount the Amazon EBS volume, upload all logs to Amazon S3, and then delete the volume.

Answer: CEF

NEW QUESTION 7
You have an application running on Amazon EC2 in an Auto Scaling group. Instances are being bootstrapped dynamically, and the bootstrapping takes over 15 minutes to complete. You find that instances are reported by Auto Scaling as being In Service before bootstrapping has completed. You are receiving application alarms related to new instances before they have completed bootstrapping, which is causing confusion. You find the cause: your application monitoring tool is polling the Auto Scaling Service API for instances that are In Service, and creating alarms for new previously unknown instances. Which of the following will ensure that new instances are not added to your application monitoring tool before bootstrapping is completed?

A.    Create an Auto Scaling group lifecycle hook to hold the instance in a Pending:Wait state until your bootstrapping is complete.
Once bootstrapping is complete, notify Auto Scaling to complete the lifecycle hook and move the instance into a Pending:Proceed state.
B.    Use the default Amazon CloudWatch application metrics to monitor your application’s health.
Configure an Amazon SNS topic to send these CloudWatch alarms to the correct recipients.
C.    Tag all instances on launch to identify that they are in a pending state.
Change your application monitoring tool to look for this tag before adding new instances, and then use the Amazon API to set the instance state to ‘pending’ until bootstrapping is complete.
D.    Increase the desired number of instances in your Auto Scaling group configuration to reduce the time it takes to bootstrap future instances.

Answer: A
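
A minimal boto3 (Python) sketch of the lifecycle-hook pattern from the correct option: one-time hook setup, then the call the bootstrap script makes when it finishes. The group and hook names are hypothetical.

```python
import boto3
import urllib.request

autoscaling = boto3.client("autoscaling")
ASG, HOOK = "web-asg", "wait-for-bootstrap"  # hypothetical names

# One-time setup: hold new instances in Pending:Wait for up to an hour.
autoscaling.put_lifecycle_hook(
    LifecycleHookName=HOOK,
    AutoScalingGroupName=ASG,
    LifecycleTransition="autoscaling:EC2_INSTANCE_LAUNCHING",
    HeartbeatTimeout=3600,
    DefaultResult="ABANDON",
)

# At the end of the bootstrap script, run on the instance itself:
instance_id = urllib.request.urlopen(
    "http://169.254.169.254/latest/meta-data/instance-id", timeout=2
).read().decode()
autoscaling.complete_lifecycle_action(
    LifecycleHookName=HOOK,
    AutoScalingGroupName=ASG,
    LifecycleActionResult="CONTINUE",
    InstanceId=instance_id,
)
```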

NEW QUESTION 8
You have been given a business requirement to retain log files for your application for 10 years. You need to regularly retrieve the most recent logs for troubleshooting. Your logging system must be cost-effective, given the large volume of logs. What technique should you use to meet these requirements?

A.    Store your logs in Amazon CloudWatch Logs.
B.    Store your logs in Amazon Glacier.
C.    Store your logs in Amazon S3, and use lifecycle policies to archive to Amazon Glacier.
D.    Store your logs in HDFS on an Amazon EMR cluster.
E.    Store your logs on Amazon EBS, and use Amazon EBS snapshots to archive them.

Answer: C

NEW QUESTION 9
You work for a startup that has developed a new photo-sharing application for mobile devices. Over recent months your application has increased in popularity; this has resulted in a decrease in the performance of the application due to the increased load. Your application has a two-tier architecture that is composed of an Auto Scaling PHP application tier and a MySQL RDS instance initially deployed with AWS CloudFormation. Your Auto Scaling group has a min value of 4 and a max value of 8. The desired capacity is now at 8 because of the high CPU utilization of the instances. After some analysis, you are confident that the performance issues stem from a constraint in CPU capacity, although memory utilization remains low. You therefore decide to move from the general-purpose M3 instances to the compute-optimized C3 instances. How would you deploy this change while minimizing any interruption to your end users?

A.    Sign into the AWS Management Console, copy the old launch configuration, and create a new launch configuration that specifies the C3 instances.
Update the Auto Scaling group with the new launch configuration.
Auto Scaling will then update the instance type of all running instances.
B.    Sign into the AWS Management Console, and update the existing launch configuration with the new C3 instance type.
Add an UpdatePolicy attribute to your Auto Scaling group that specifies AutoScalingRollingUpdate.
C.    Update the launch configuration specified in the AWS CloudFormation template with the new C3 instance type.
Run a stack update with the new template.
Auto Scaling will then update the instances with the new instance type.
D.    Update the launch configuration specified in the AWS CloudFormation template with the new C3 instance type.
Also add an UpdatePolicy attribute to your Auto Scaling group that specifies AutoScalingRollingUpdate.
Run a stack update with the new template.

Answer: D

NEW QUESTION 10
You’ve been tasked with implementing an automated data backup solution for your application servers that run on Amazon EC2 with Amazon EBS volumes. You want to use a distributed data store for your backups to avoid single points of failure and to increase the durability of the data. Daily backups should be retained for 30 days so that you can restore data within an hour. How can you implement this through a script that a scheduling daemon runs daily on the application servers?

A.    Write the script to call the ec2-create-volume API, tag the Amazon EBS volume with the current date-time group, and copy backup data to a second Amazon EBS volume.
Use the ec2-describe-volumes API to enumerate existing backup volumes.
Call the ec2-delete-volume API to prune backup volumes that are tagged with a date-time group older than 30 days.
B.    Write the script to call the Amazon Glacier upload archive API, and tag the backup archive with the current date-time group.
Use the list vaults API to enumerate existing backup archives.
Call the delete vault API to prune backup archives that are tagged with a date-time group older than 30 days.
C.    Write the script to call the ec2-create-snapshot API, and tag the Amazon EBS snapshot with the current date-time group.
Use the ec2-describe-snapshot API to enumerate existing Amazon EBS snapshots.
Call the ec2-delete-snapshot API to prune Amazon EBS snapshots that are tagged with a date-time group older than 30 days.
D.    Write the script to call the ec2-create-volume API, tag the Amazon EBS volume with the current date-time group, and use the ec2-copy-snapshot API to back up data to the new Amazon EBS volume.
Use the ec2-describe-snapshot API to enumerate existing backup volumes.
Call the ec2-delete-snapshot API to prune backup Amazon EBS volumes that are tagged with a date-time group older than 30 days.

Answer: C
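
The correct option describes a daily script; a minimal boto3 (Python) sketch of the same create/tag/prune cycle, with a hypothetical volume ID and tag key:

```python
import datetime
import boto3

ec2 = boto3.client("ec2")
VOLUME_ID = "vol-0123456789abcdef0"  # hypothetical volume ID

now = datetime.datetime.now(datetime.timezone.utc)

# Take today's backup and tag it with a date-time group.
snap = ec2.create_snapshot(VolumeId=VOLUME_ID, Description="daily backup")
ec2.create_tags(
    Resources=[snap["SnapshotId"]],
    Tags=[{"Key": "backup-dtg", "Value": now.isoformat()}],
)

# Prune tagged snapshots older than 30 days.
cutoff = now - datetime.timedelta(days=30)
pages = ec2.get_paginator("describe_snapshots").paginate(
    OwnerIds=["self"], Filters=[{"Name": "tag-key", "Values": ["backup-dtg"]}]
)
for page in pages:
    for s in page["Snapshots"]:
        if s["StartTime"] < cutoff:
            ec2.delete_snapshot(SnapshotId=s["SnapshotId"])
```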

NEW QUESTION 11
Your application uses CloudFormation to orchestrate your application’s resources. During your testing phase before the application went live, your Amazon RDS instance type was changed and caused the instance to be re-created, resulting in the loss of test data. How should you prevent this from occurring in the future?

A.    Within the AWS CloudFormation parameter with which users can select the Amazon RDS instance type, set AllowedValues to only contain the current instance type.
B.    Use an AWS CloudFormation stack policy to deny updates to the instance. Only allow UpdateStack permission to IAM principals that are denied SetStackPolicy.
C.    In the AWS CloudFormation template, set the AWS::RDS::DBInstance’s DBInstanceClass property to be read-only.
D.    Subscribe to the AWS CloudFormation notification “BeforeResourceUpdate,” and call CancelStackUpdate if the resource identified is the Amazon RDS instance.
E.    In the AWS CloudFormation template, set the AWS::RDS::DBInstance’s DeletionPolicy attribute to “Retain.”

Answer: E
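
A minimal sketch of where DeletionPolicy sits in a template, built as a Python dict and checked with boto3's validate_template (which verifies template syntax only); all property values here are hypothetical:

```python
import json
import boto3

# DeletionPolicy is a resource attribute, a sibling of Type and Properties.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "AppDatabase": {
            "Type": "AWS::RDS::DBInstance",
            "DeletionPolicy": "Retain",  # keep the old DB (and its data) if replaced or deleted
            "Properties": {
                "AllocatedStorage": "20",
                "DBInstanceClass": "db.m3.medium",
                "Engine": "MySQL",
                "MasterUsername": "admin",
                "MasterUserPassword": "change-me",  # hypothetical; use a parameter in practice
            },
        }
    },
}

boto3.client("cloudformation").validate_template(TemplateBody=json.dumps(template))
```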

NEW QUESTION 12
Your company develops a variety of web applications using many platforms and programming languages with different application dependencies. Each application must be developed and deployed quickly and be highly available to satisfy your business requirements. Which of the following methods should you use to deploy these applications rapidly?

A.    Develop the applications in Docker containers, and then deploy them to Elastic Beanstalk environments with Auto Scaling and Elastic Load Balancing.
B.    Use the AWS CloudFormation Docker import service to build and deploy the applications with high availability in multiple Availability Zones.
C.    Develop each application’s code in DynamoDB, and then use hooks to deploy it to Elastic Beanstalk environments with Auto Scaling and Elastic Load Balancing.
D.    Store each application’s code in a Git repository, develop custom package repository managers for each application’s dependencies, and deploy to AWS OpsWorks in multiple Availability Zones.

Answer: A

NEW QUESTION 13
You have a large number of web servers in an Auto Scaling group behind a load balancer. On an hourly basis, you want to filter and process the logs to collect data on unique visitors, and then put that data in a durable data store in order to run reports. Web servers in the Auto Scaling group are constantly launching and terminating based on your scaling policies, but you do not want to lose any of the log data from these servers during a stop/termination initiated by a user or by Auto Scaling. What two approaches will meet these requirements? (Choose two.)

A.    Install an Amazon CloudWatch Logs Agent on every web server during the bootstrap process.
Create a CloudWatch log group and define Metric Filters to create custom metrics that track unique visitors from the streaming web server logs.
Create a scheduled task on an Amazon EC2 instance that runs every hour to generate a new report based on the CloudWatch custom metrics.
B.    On the web servers, create a scheduled task that executes a script to rotate and transmit the logs to Amazon Glacier.
Ensure that the operating system shutdown procedure triggers a logs transmission when the Amazon EC2 instance is stopped/terminated.
Use Amazon Data Pipeline to process the data in Amazon Glacier and run reports every hour.
C.    On the web servers, create a scheduled task that executes a script to rotate and transmit the logs to an Amazon S3 bucket.
Ensure that the operating system shutdown procedure triggers a logs transmission when the Amazon EC2 instance is stopped/terminated.
Use AWS Data Pipeline to move log data from the Amazon S3 bucket to Amazon Redshift in order to process and run reports every hour.
D.    Install an AWS Data Pipeline Logs Agent on every web server during the bootstrap process.
Create a log group object in AWS Data Pipeline, and define Metric Filters to move processed log data directly from the web servers to Amazon Redshift and run reports every hour.

Answer: AC

NEW QUESTION 14
You have been tasked with deploying a scalable distributed system using AWS OpsWorks. Your distributed system is required to scale on demand. As it is distributed, each node must hold a configuration file that includes the hostnames of the other instances within the layer. How should you configure AWS OpsWorks to manage scaling this application dynamically?

A.    Create a Chef recipe to update this configuration file, configure your AWS OpsWorks stack to use custom cookbooks, and assign this recipe to the Configure lifecycle event of the specific layer.
B.    Update this configuration file by writing a script to poll the AWS OpsWorks service API for new instances.
Configure your base AMI to execute this script on Operating System startup.
C.    Create a Chef Recipe to update this configuration file, configure your AWS OpsWorks stack to use custom cookbooks, and assign this recipe to execute when instances are launched.
D.    Configure your AWS OpsWorks layer to use the AWS-provided recipe for distributed host configuration, and configure the instance hostname and file path parameters in your recipes settings.

Answer: A

NEW QUESTION 15
You have an application running on an Amazon EC2 instance and you are using IAM roles to securely access AWS Service APIs. How can you configure your application running on that instance to retrieve the API keys for use with the AWS SDKs?

A.    When assigning an EC2 IAM role to your instance in the console, in the “Chosen SDK” drop-down list, select the SDK that you are using, and the instance will configure the correct SDK on launch with the API keys.
B.    Within your application code, make a GET request to the IAM Service API to retrieve credentials for your user.
C.    When using AWS SDKs and Amazon EC2 roles, you do not have to explicitly retrieve API keys, because the SDK handles retrieving them from the Amazon EC2 MetaData service.
D.    Within your application code, configure the AWS SDK to get the API keys from environment variables, because assigning an Amazon EC2 role stores keys in environment variables on launch.

Answer: C
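
This is exactly what the SDK default credential provider chain does; a minimal boto3 (Python) illustration with no keys anywhere in code or configuration:

```python
import boto3

# With an IAM role attached to the instance, boto3's credential chain fetches
# (and automatically refreshes) temporary keys from the EC2 instance metadata
# service; nothing needs to be configured in the application.
s3 = boto3.client("s3")
for bucket in s3.list_buckets()["Buckets"]:
    print(bucket["Name"])
```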

NEW QUESTION 16
When an Auto Scaling group is running in Amazon Elastic Compute Cloud (EC2), your application rapidly scales up and down in response to load within a 10-minute window; however, after the load peaks, you begin to see problems in your configuration management system where previously terminated Amazon EC2 resources are still showing as active. What would be a reliable and efficient way to handle the cleanup of Amazon EC2 resources within your configuration management system? (Choose two.)

A.    Write a script that is run by a daily cron job on an Amazon EC2 instance and that executes API Describe calls of the EC2 Auto Scaling group and removes terminated instances from the configuration management system.
B.    Configure an Amazon Simple Queue Service (SQS) queue for Auto Scaling actions that has a script that listens for new messages and removes terminated instances from the configuration management system.
C.    Use your existing configuration management system to control the launching and bootstrapping of instances to reduce the number of moving parts in the automation.
D.    Write a small script that is run during Amazon EC2 instance shutdown to de-register the resource from the configuration management system.
E.    Use Amazon Simple Workflow Service (SWF) to maintain an Amazon DynamoDB database that contains a whitelist of instances that have been previously launched, and allow the Amazon SWF worker to remove information from the configuration management system.

Answer: AD
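
A minimal boto3 (Python) sketch of option B's event-driven variant for comparison, assuming the SQS queue is subscribed to the Auto Scaling group's SNS notification topic; the queue URL and the CMS de-registration call are hypothetical:

```python
import json
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/asg-events"  # hypothetical

def remove_from_cms(instance_id):
    print(f"removing {instance_id} from CMS")  # placeholder for the real CMS call

while True:  # long-poll loop for a daemon process
    resp = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=10, WaitTimeSeconds=20)
    for msg in resp.get("Messages", []):
        # Auto Scaling notifications arrive via SNS, so the payload is nested JSON.
        body = json.loads(json.loads(msg["Body"])["Message"])
        if body.get("Event") == "autoscaling:EC2_INSTANCE_TERMINATE":
            remove_from_cms(body["EC2InstanceId"])
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
```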

NEW QUESTION 17
You have enabled Elastic Load Balancing HTTP health checking. After looking at the AWS Management Console, you see that all instances are passing health checks, but your customers are reporting that your site is not responding. What is the cause?

A.    The HTTP health checking system is misreporting due to latency in inter-instance metadata synchronization.
B.    The health check in place is not sufficiently evaluating the application function.
C.    The application is returning a positive health check too quickly for the AWS Management Console to respond.
D.    Latency in DNS resolution is interfering with Amazon EC2 metadata retrieval.

Answer: B

NEW QUESTION 18
You use Amazon CloudWatch as your primary monitoring system for your web application. After a recent software deployment, your users are getting intermittent 500 Internal Server Errors when using the web application. You want to create a CloudWatch alarm, and notify an on-call engineer when these occur. How can you accomplish this using AWS services? (Choose three.)

A.    Deploy your web application as an AWS Elastic Beanstalk application.
Use the default Elastic Beanstalk CloudWatch metrics to capture 500 Internal Server Errors.
Set a CloudWatch alarm on that metric.
B.    Install a CloudWatch Logs Agent on your servers to stream web application logs to CloudWatch.
C.    Use Amazon Simple Email Service to notify an on-call engineer when a CloudWatch alarm is triggered.
D.    Create a CloudWatch Logs group and define metric filters that capture 500 Internal Server Errors.
Set a CloudWatch alarm on that metric.
E.    Use Amazon Simple Notification Service to notify an on-call engineer when a CloudWatch alarm is triggered.
F.    Use AWS Data Pipeline to stream web application logs from your servers to CloudWatch.

Answer: BDE
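
A boto3 (Python) sketch of answer D's metric filter plus the alarm that feeds answer E's SNS notification; the log group name, filter pattern, and topic ARN are hypothetical:

```python
import boto3

logs = boto3.client("logs")
cloudwatch = boto3.client("cloudwatch")
TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:oncall"  # hypothetical topic

# Turn matching log lines (shipped by the CloudWatch Logs Agent) into a metric.
logs.put_metric_filter(
    logGroupName="web/access",                # hypothetical group
    filterName="http-500s",
    filterPattern='"Internal Server Error"',  # quoted phrase match
    metricTransformations=[{
        "metricName": "500Errors",
        "metricNamespace": "WebApp",
        "metricValue": "1",
    }],
)

# Alarm when any 500s appear in a one-minute window; the action is the SNS topic.
cloudwatch.put_metric_alarm(
    AlarmName="web-500-errors",
    Namespace="WebApp",
    MetricName="500Errors",
    Statistic="Sum",
    Period=60,
    EvaluationPeriods=1,
    Threshold=0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=[TOPIC_ARN],
)
```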

NEW QUESTION 19
After a daily scrum with your development teams, you’ve agreed that using Blue/Green style deployments would benefit the team. Which technique should you use to deliver this new requirement?

A.    Re-deploy your application on AWS Elastic Beanstalk, and take advantage of Elastic Beanstalk deployment types.
B.    Using an AWS CloudFormation template, re-deploy your application behind a load balancer, launch a new AWS CloudFormation stack during each deployment, update your load balancer to send half your traffic to the new stack while you test, after verification update the load balancer to send 100% of traffic to the new stack, and then terminate the old stack.
C.    Re-deploy your application behind a load balancer that uses Auto Scaling groups, create a new identical Auto Scaling group, and associate it to the load balancer. During deployment, set the desired number of instances on the old Auto Scaling group to zero, and when all instances have terminated, delete the old Auto Scaling group.
D.    Using an AWS OpsWorks stack, re-deploy your application behind an Elastic Load Balancing load balancer and take advantage of OpsWorks stack versioning, during deployment create a new version of your application, tell OpsWorks to launch the new version behind your load balancer, and when the new version is launched, terminate the old OpsWorks stack.

Answer: C

NEW QUESTION 20
You have a complex system that involves networking, IAM policies, and multiple, three-tier applications. You are still receiving requirements for the new system, so you don’t yet know how many AWS components will be present in the final design. You want to start using AWS CloudFormation to define these AWS resources so that you can automate and version-control your infrastructure. How would you use AWS CloudFormation to provide agile new environments for your customers in a cost-effective, reliable manner?

A.    Manually create one template to encompass all the resources that you need for the system, so you only have a single template to version-control.
B.    Create multiple separate templates for each logical part of the system, create nested stacks in AWS CloudFormation, and maintain several templates to version-control.
C.    Create multiple separate templates for each logical part of the system, and provide the outputs from one to the next using an Amazon Elastic Compute Cloud (EC2) instance running the SDK for finer granularity of control.
D.    Manually construct the networking layer using Amazon Virtual Private Cloud (VPC) because this does not change often, and then use AWS CloudFormation to define all other ephemeral resources.

Answer: B

NEW QUESTION 21
Your development team wants account-level access to production instances in order to do live debugging of a highly secure environment. Which of the following should you do?

A.    Place the credentials provided by Amazon Elastic Compute Cloud (EC2) into a secure Amazon Simple Storage Service (S3) bucket with encryption enabled.
Assign AWS Identity and Access Management (IAM) users to each developer so they can download the credentials file.
B.    Place an internally created private key into a secure S3 bucket with server-side encryption using customer keys and configuration management, create a service account on all the instances using this private key, and assign IAM users to each developer so they can download the file.
C.    Place each developer’s own public key into a private S3 bucket, use instance profiles and configuration management to create a user account for each developer on all instances, and place the user’s public keys into the appropriate account.
D.    Place the credentials provided by Amazon EC2 onto an MFA encrypted USB drive, and physically share it with each developer so that the private key never leaves the office.

Answer: C

NEW QUESTION 22
As part of your continuous deployment process, your application undergoes an I/O load performance test before it is deployed to production using new AMIs. The application uses one Amazon Elastic Block Store (EBS) PIOPS volume per instance and requires consistent I/O performance. Which of the following must be carried out to ensure that I/O load performance tests yield the correct results in a repeatable manner?

A.    Ensure that the I/O block sizes for the test are randomly selected.
B.    Ensure that the Amazon EBS volumes have been pre-warmed by reading all the blocks before the test.
C.    Ensure that snapshots of the Amazon EBS volumes are created as a backup.
D.    Ensure that the Amazon EBS volume is encrypted.
E.    Ensure that the Amazon EBS volume has been pre-warmed by creating a snapshot of the volume before the test.

Answer: B

NEW QUESTION 23
After reviewing the last quarter’s monthly bills, management has noticed an increase in the overall bill from Amazon. After researching this increase in cost, you discovered that one of your new services is doing a lot of GET Bucket API calls to Amazon S3 to build a metadata cache of all objects in the application’s bucket. Your boss has asked you to come up with a new cost-effective way to help reduce the number of these new GET Bucket API calls. What process should you use to help mitigate the cost?

A.    Update your Amazon S3 buckets’ lifecycle policies to automatically push a list of objects to a new bucket, and use this list to view objects associated with the application’s bucket.
B.    Create a new DynamoDB table. Use the new DynamoDB table to store all metadata about all objects uploaded to Amazon S3.
Any time a new object is uploaded, update the application’s internal Amazon S3 object metadata cache from DynamoDB.
C.    Using Amazon SNS, create a notification on any new Amazon S3 objects that automatically updates a new DynamoDB table to store all metadata about the new object.
Subscribe the application to the Amazon SNS topic to update its internal Amazon S3 object metadata cache from the DynamoDB table.
D.    Upload all images to Amazon SQS, set up SQS lifecycles to move all images to Amazon S3, and initiate an Amazon SNS notification to your application to update the application’s internal Amazon S3 object metadata cache.
E.    Upload all images to an ElastiCache filecache server. Update your application to now read all file metadata from the ElastiCache filecache server, and configure the ElastiCache policies to push all files to Amazon S3 for long-term storage.

Answer: C
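
A boto3 (Python) sketch of the two halves of the correct option: wiring S3 event notifications to an SNS topic, and a subscriber that records metadata in DynamoDB. The bucket, topic, and table names are hypothetical, and the topic's access policy must already allow s3.amazonaws.com to publish.

```python
import boto3

s3 = boto3.client("s3")
BUCKET = "app-uploads"                                        # hypothetical
TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:new-objects"  # hypothetical

# Notify on every new object instead of repeatedly listing the bucket.
s3.put_bucket_notification_configuration(
    Bucket=BUCKET,
    NotificationConfiguration={
        "TopicConfigurations": [
            {"TopicArn": TOPIC_ARN, "Events": ["s3:ObjectCreated:*"]}
        ]
    },
)

# A subscriber to the topic records the object's metadata in DynamoDB.
def on_new_object(bucket, key, size):
    boto3.resource("dynamodb").Table("s3-object-metadata").put_item(
        Item={"bucket": bucket, "key": key, "size": size}
    )
```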

NEW QUESTION 24
Your current log analysis application takes more than four hours to generate a report of the top 10 users of your web application. You have been asked to implement a system that can report this information in real time, ensure that the report is always up to date, and handle increases in the number of requests to your web application. Choose the option that is cost-effective and can fulfill the requirements.

A.    Publish your data to CloudWatch Logs, and configure your application to autoscale to handle the load on demand.
B.    Publish your log data to an Amazon S3 bucket.
Use AWS CloudFormation to create an Auto Scaling group to scale your post-processing application, which is configured to pull down your log files stored in Amazon S3.
C.    Post your log data to an Amazon Kinesis data stream, and subscribe your log-processing application so that it is configured to process your logging data.
D.    Configure an Auto Scaling group to increase the size of your Amazon EMR cluster.
E.    Create a multi-AZ Amazon RDS MySQL cluster, post the logging data to MySQL, and run a map reduce job to retrieve the required information on user counts.

Answer: C
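
The producer side of the correct option is a single call per log event; a boto3 (Python) sketch with a hypothetical stream name. The consumer application would tail the stream (for example with get_records or the Kinesis Client Library) to keep the report current.

```python
import json
import time
import boto3

kinesis = boto3.client("kinesis")

# Each web server pushes log events as they happen, so the top-10 report can
# be computed continuously instead of from four-hour batch jobs.
kinesis.put_record(
    StreamName="weblogs",  # hypothetical stream
    Data=json.dumps({"user": "u-42", "path": "/home", "ts": time.time()}).encode(),
    PartitionKey="u-42",   # keeps one user's events on one shard
)
```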

NEW QUESTION 25
You are using Elastic Beanstalk to manage your e-commerce store. The store is based on an open source e-commerce platform and is deployed across multiple instances in an Auto Scaling group. Your development team often creates new “extensions” for the e-commerce store. These extensions include PHP source code as well as an SQL upgrade script used to make any necessary updates to the database schema. You have noticed that some extension deployments fail due to an error when running the SQL upgrade script. After further investigation, you realize that this is because the SQL script is being executed on all of your Amazon EC2 instances. How would you ensure that the SQL script is only executed once per deployment regardless of how many Amazon EC2 instances are running at the time?

A.    Use a “Container command” within an Elastic Beanstalk configuration file to execute the script, ensuring that the “leader only” flag is set to true.
B.    Make use of the Amazon EC2 metadata service to query whether the instance is marked as the “leader” in the Auto Scaling group.
Only execute the script if “true” is returned.
C.    Use a “Solo Command” within an Elastic Beanstalk configuration file to execute the script.
The Elastic Beanstalk service will ensure that the command is only executed once.
D.    Update the Amazon RDS security group to only allow write access from a single instance in the Auto Scaling group; that way, only one instance will successfully execute the script on the database.

Answer: A
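
A sketch of the kind of .ebextensions configuration file the correct option refers to, generated here in Python for illustration; the upgrade command itself is hypothetical:

```python
import os

# container_commands run after the new application version is extracted but
# before it is deployed; "leader_only: true" restricts the command to a
# single instance in the environment.
CONFIG = """\
container_commands:
  01_upgrade_database:
    command: "php scripts/upgrade_db.php"   # hypothetical upgrade script
    leader_only: true
"""

os.makedirs(".ebextensions", exist_ok=True)
with open(".ebextensions/01-db-upgrade.config", "w") as f:
    f.write(CONFIG)
```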

NEW QUESTION 26
You are administering a continuous integration application that polls version control for changes and then launches new Amazon EC2 instances for a full suite of build tests. What should you do to ensure the lowest overall cost while being able to run as many tests in parallel as possible?

A.    Perform syntax checking on the continuous integration system before launching a new Amazon EC2 instance for build, unit, and integration tests.
B.    Perform syntax and build tests on the continuous integration system before launching the new Amazon EC2 instance for unit and integration tests.
C.    Perform all tests on the continuous integration system, using AWS OpsWorks for unit, integration, and build tests.
D.    Perform syntax checking on the continuous integration system before launching a new AWS Data Pipeline for coordinating the output of unit, integration, and build tests.

Answer: B

NEW QUESTION 27
You are doing a load testing exercise on your application hosted on AWS. While testing your Amazon RDS MySQL DB instance, you notice that when you hit 100% CPU utilization on it, your application becomes non-responsive. Your application is read-heavy. What are methods to scale your data tier to meet the application’s needs? (Choose three.)

A.    Add Amazon RDS DB read replicas, and have your application direct read queries to them.
B.    Add your Amazon RDS DB instance to an Auto Scaling group and configure your CloudWatch metric based on CPU utilization.
C.    Use an Amazon SQS queue to throttle data going to the Amazon RDS DB instance.
D.    Use ElastiCache in front of your Amazon RDS DB to cache common queries.
E.    Shard your data set among multiple Amazon RDS DB instances.
F.    Enable Multi-AZ for your Amazon RDS DB instance.

Answer: ADE

NEW QUESTION 28
Your mobile application includes a photo-sharing service that is expecting tens of thousands of users at launch. You will leverage Amazon Simple Storage Service (S3) for storage of the user images, and you must decide how to authenticate and authorize your users for access to these images. You also need to manage the storage of these images. Which two of the following approaches should you use? (Choose two.)

A.    Create an Amazon S3 bucket per user, and use your application to generate the S3 URI for the appropriate content.
B.    Use AWS Identity and Access Management (IAM) user accounts as your application-level user database, and offload the burden of authentication from your application code.
C.    Authenticate your users at the application level, and use AWS Security Token Service (STS) to grant token-based authorization to S3 objects.
D.    Authenticate your users at the application level, and send an SMS token message to the user.
Create an Amazon S3 bucket with the same name as the SMS message token, and move the user’s objects to that bucket.
E.    Use a key-based naming scheme derived from the user IDs for all user objects in a single Amazon S3 bucket.

Answer: CE

NEW QUESTION 29
You have an Auto Scaling group of instances that processes messages from an Amazon Simple Queue Service (SQS) queue. The group scales on the size of the queue. Processing involves calling a third-party web service. The web service is complaining about the number of failed and repeated calls it is receiving from you. You have noticed that when the group scales in, instances are being terminated while they are processing. What cost-effective solution can you use to reduce the number of incomplete process attempts?

A.    Create a new Auto Scaling group with minimum and maximum of 2 and instances running web proxy software.
Configure the VPC route table to route HTTP traffic to these web proxies.
B.    Modify the application running on the instances to enable termination protection while it processes a task and disable it when the processing is complete.
C.    Increase the minimum and maximum size for the Auto Scaling group, and change the scaling policies so they scale less dynamically.
D.    Modify the application running on the instances to put itself into an Auto Scaling Standby state while it processes a task and return itself to InService when the processing is complete.

Answer: D
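
A boto3 (Python) sketch of the standby pattern from the correct option, run on the instance itself; the group name and the web service call are hypothetical placeholders:

```python
import boto3
import urllib.request

autoscaling = boto3.client("autoscaling")
ASG = "worker-asg"  # hypothetical group name

instance_id = urllib.request.urlopen(
    "http://169.254.169.254/latest/meta-data/instance-id", timeout=2
).read().decode()

def handle(task):
    pass  # placeholder for the real third-party web service call

def process_protected(task):
    # Standby instances are not candidates for termination during scale-in;
    # decrementing desired capacity stops Auto Scaling from launching a replacement.
    autoscaling.enter_standby(
        InstanceIds=[instance_id],
        AutoScalingGroupName=ASG,
        ShouldDecrementDesiredCapacity=True,
    )
    try:
        handle(task)
    finally:
        autoscaling.exit_standby(InstanceIds=[instance_id], AutoScalingGroupName=ASG)
```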

NEW QUESTION 30
The operations team and the development team want a single place to view both operating system and application logs. How should you implement this using AWS services? (Choose two.)

A.    Using AWS CloudFormation, create a CloudWatch Logs LogGroup and send the operating system and application logs of interest using the CloudWatch Logs Agent.
B.    Using AWS CloudFormation and configuration management, set up remote logging to send events via UDP packets to CloudTrail.
C.    Using configuration management, set up remote logging to send events to Amazon Kinesis and insert these into Amazon CloudSearch or Amazon Redshift, depending on available analytic tools.
D.    Using AWS CloudFormation, create a CloudWatch Logs LogGroup.
Because the CloudWatch Logs agent automatically sends all operating system logs, you only have to configure the application logs for sending off-machine.
E.    Using AWS CloudFormation, merge the application logs with the operating system logs, and use IAM Roles to allow both teams to have access to view console output from Amazon EC2.

Answer: AC

NEW QUESTION 31
The project you are working on currently uses a single AWS CloudFormation template to deploy its AWS infrastructure, which supports a multi-tier web application. You have been tasked with organizing the AWS CloudFormation resources so that they can be maintained in the future, and so that different departments such as Networking and Security can review the architecture before it goes to Production. How should you do this in a way that accommodates each department, using their existing workflows?

A.    Organize the AWS CloudFormation template so that related resources are next to each other in the template, such as VPC subnets and routing rules for Networking and security groups and IAM information for Security.
B.    Separate the AWS CloudFormation template into a nested structure that has individual templates for the resources that are to be governed by different departments, and use the outputs from the networking and security stacks for the application template that you control.
C.    Organize the AWS CloudFormation template so that related resources are next to each other in the template for each department’s use, leverage your existing continuous integration tool to constantly deploy changes from all parties to the Production environment, and then run tests for validation.
D.    Use a custom application and the AWS SDK to replicate the resources defined in the current AWS CloudFormation template, and use the existing code review system to allow other departments to approve changes before altering the application for future deployments.

Answer: B

NEW QUESTION 32
You currently run your infrastructure on Amazon EC2 instances behind an Auto Scaling group. All logs for your application are currently written to ephemeral storage. Recently your company experienced a major bug in code that made it through testing and was ultimately deployed to your fleet. This bug triggered your Auto Scaling group to scale up and back down before you could successfully retrieve the logs off your server to better assist you in troubleshooting the bug. Which technique should you use to make sure you are able to review your logs after your instances have shut down?

A.    Configure the ephemeral policies on your Auto Scaling group to back up on terminate.
B.    Configure your Auto Scaling policies to create a snapshot of all ephemeral storage on terminate.
C.    Install the CloudWatch Logs Agent on your AMI, and configure CloudWatch Logs Agent to stream your logs.
D.    Install the CloudWatch monitoring agent on your AMI, and set up new SNS alert for CloudWatch metrics that triggers the CloudWatch monitoring agent to backup all logs on the ephemeral drive.
E.    Install the CloudWatch monitoring agent on your AMI, and update your Auto Scaling policy to enable automated CloudWatch Logs copy.

Answer: C

NEW QUESTION 33
Management has reported an increase in the monthly bill from Amazon Web Services, and they are extremely concerned with this increased cost. Management has asked you to determine the exact cause of this increase. After reviewing the billing report, you notice an increase in the data transfer cost. How can you provide management with a better insight into data transfer use?

A.    Update your Amazon CloudWatch metrics to use five-second granularity, which will give better detailed metrics that can be combined with your billing data to pinpoint anomalies.
B.    Use Amazon CloudWatch Logs to run a map-reduce on your logs to determine high usage and data transfer.
C.    Deliver custom metrics to Amazon CloudWatch per application that breaks down application data transfer into multiple, more specific data points.
D.    Using Amazon CloudWatch metrics, pull your Elastic Load Balancing outbound data transfer metrics monthly, and include them with your billing report to show which application is causing higher bandwidth usage.

Answer: C
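
Publishing a per-application custom metric is one put_metric_data call; a minimal boto3 (Python) sketch where the namespace, dimension, and application name are hypothetical:

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Each application reports its own outbound bytes, so the bill's data
# transfer line can be broken down per application in CloudWatch.
def report_transfer(application, bytes_out):
    cloudwatch.put_metric_data(
        Namespace="Custom/DataTransfer",  # hypothetical namespace
        MetricData=[{
            "MetricName": "BytesOut",
            "Dimensions": [{"Name": "Application", "Value": application}],
            "Value": float(bytes_out),
            "Unit": "Bytes",
        }],
    )

report_transfer("image-resizer", 52_428_800)  # hypothetical application
```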

NEW QUESTION 34
During metric analysis, your team has determined that the company’s website is experiencing response times during peak hours that are higher than anticipated. You currently rely on Auto Scaling to make sure that you are scaling your environment during peak windows. How can you improve your Auto Scaling policy to reduce this high response time? (Choose two.)

A.    Push custom metrics to CloudWatch to monitor your CPU and network bandwidth from your servers, which will allow your Auto Scaling policy to have better fine-grain insight.
B.    Increase your Auto Scaling group’s number of max servers.
C.    Create a script that runs and monitors your servers; when it detects an anomaly in load, it posts to an Amazon SNS topic that triggers Elastic Load Balancing to add more servers to the load balancer.
D.    Push custom metrics to CloudWatch for your application that include more detailed information about your web application, such as how many requests it is handling and how many are waiting to be processed.
E.    Update the CloudWatch metric used for your Auto Scaling policy, and enable sub-minute granularity to allow auto scaling to trigger faster.

Answer: BD

NEW QUESTION 35
You are responsible for your company’s large multi-tiered Windows-based web application running on Amazon EC2 instances situated behind a load balancer. While reviewing metrics, you’ve started noticing an upwards trend for slow customer page load time. Your manager has asked you to come up with a solution to ensure that customer load time is not affected by too many requests per second. Which technique would you use to solve this issue?

A.    Re-deploy your infrastructure using an AWS CloudFormation template.
Configure Elastic Load Balancing health checks to initiate a new AWS CloudFormation stack when health checks return failed.
B.    Re-deploy your infrastructure using an AWS CloudFormation template.
Spin up a second AWS CloudFormation stack.
Configure Elastic Load Balancing SpillOver functionality to spill over any slow connections to the second AWS CloudFormation stack.
C.    Re-deploy your infrastructure using AWS CloudFormation, Elastic Beanstalk, and Auto Scaling.
Set up your Auto Scaling group policies to scale based on the number of requests per second as well as the current customer load time.
D.    Re-deploy your application using an Auto Scaling template.
Configure the Auto Scaling template to spin up a new Elastic Beanstalk application when the customer load time surpasses your threshold.

Answer: C

NEW QUESTION 36
Your company has multiple applications running on AWS. Your company wants to develop a tool that notifies on-call teams immediately via email when an alarm is triggered in your environment. You have multiple on-call teams that work different shifts, and the tool should handle notifying the correct teams at the correct times. How should you implement this solution?

A.    Create an Amazon SNS topic and an Amazon SQS queue.
Configure the Amazon SQS queue as a subscriber to the Amazon SNS topic.
Configure CloudWatch alarms to notify this topic when an alarm is triggered.
Create an Amazon EC2 Auto Scaling group with both minimum and desired instances configured to 0.
Worker nodes in this group spawn when messages are added to the queue.
Workers then use Amazon Simple Email Service to send messages to your on-call teams.
B.    Create an Amazon SNS topic and configure your on-call team email addresses as subscribers.
Use the AWS SDK tools to integrate your application with Amazon SNS and send messages to this new topic.
Notifications will be sent to on-call users when a CloudWatch alarm is triggered.
C.    Create an Amazon SNS topic and configure your on-call team email addresses as subscribers.
Create a secondary Amazon SNS topic for alarms and configure your CloudWatch alarms to notify this topic when triggered.
Create an HTTP subscriber to this topic that notifies your application via HTTP POST when an alarm is triggered.
Use the AWS SDK tools to integrate your application with Amazon SNS and send messages to the first topic so that on-call engineers receive alerts.
D.    Create an Amazon SNS topic for each on-call group, and configure each of these with the team member emails as subscribers.
Create another Amazon SNS topic and configure your CloudWatch alarms to notify this topic when triggered.
Create an HTTP subscriber to this topic that notifies your application via HTTP POST when an alarm is triggered.
Use the AWS SDK tools to integrate your application with Amazon SNS and send messages to the correct team topic when on shift.

Answer: D
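
A boto3 (Python) sketch of the shift-aware routing from the correct option: one topic per team, and the application publishes to whichever team is on shift. The shift boundaries and topic ARNs are hypothetical.

```python
import datetime
import boto3

sns = boto3.client("sns")

# Hypothetical mapping of shift start hours (UTC) to per-team topics.
SHIFTS = {
    0: "arn:aws:sns:us-east-1:123456789012:oncall-apac",
    8: "arn:aws:sns:us-east-1:123456789012:oncall-emea",
    16: "arn:aws:sns:us-east-1:123456789012:oncall-amer",
}

def notify_on_call(subject, message):
    hour = datetime.datetime.now(datetime.timezone.utc).hour
    topic = SHIFTS[max(h for h in SHIFTS if h <= hour)]  # current shift's topic
    sns.publish(TopicArn=topic, Subject=subject, Message=message)

notify_on_call("ALARM: api-5xx", "CloudWatch alarm triggered on production API")
```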

NEW QUESTION 37
Your company releases new features with high frequency while demanding high application availability. As part of the application’s A/B testing, logs from each updated Amazon EC2 instance of the application need to be analyzed in near real-time, to ensure that the application is working flawlessly after each deployment. If the logs show any anomalous behavior, then the application version of the instance is changed to a more stable one. Which of the following methods should you use for shipping and analyzing the logs in a highly available manner?

A.    Ship the logs to Amazon S3 for durability and use Amazon EMR to analyze the logs in a batch manner each hour.
B.    Ship the logs to Amazon CloudWatch Logs and use Amazon EMR to analyze the logs in a batch manner each hour.
C.    Ship the logs to an Amazon Kinesis stream and have the consumers analyze the logs in a live manner.
D.    Ship the logs to a large Amazon EC2 instance and analyze the logs in a live manner.
E.    Store the logs locally on each instance and then have an Amazon Kinesis stream pull the logs for live analysis.

Answer: C

NEW QUESTION 38
You have a code repository that uses Amazon S3 as a data store. During a recent audit of your security controls, some concerns were raised about maintaining the integrity of the data in the Amazon S3 bucket. Another concern was raised around securely deploying code from Amazon S3 to applications running on Amazon EC2 in a virtual private cloud. What are some measures that you can implement to mitigate these concerns? (Choose two.)

A.    Add an Amazon S3 bucket policy with a condition statement to allow access only from Amazon EC2 instances with RFC 1918 IP addresses and enable bucket versioning.
B.    Add an Amazon S3 bucket policy with a condition statement that requires multi-factor authentication in order to delete objects and enable bucket versioning.
C.    Use a configuration management service to deploy AWS Identity and Access Management user credentials to the Amazon EC2 instances.
Use these credentials to securely access the Amazon S3 bucket when deploying code.
D.    Create an Amazon Identity and Access Management role with authorization to access the Amazon S3 bucket, and launch all of your application’s Amazon EC2 instances with this role.
E.    Use AWS Data Pipeline to lifecycle the data in your Amazon S3 bucket to Amazon Glacier on a weekly basis.
F.    Use AWS Data Pipeline with multi-factor authentication to securely deploy code from the Amazon S3 bucket to your Amazon EC2 instances.

Answer: BD
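
A boto3 (Python) sketch of answer B: enable versioning, and attach a bucket policy that denies deletes from sessions that did not authenticate with MFA. The bucket name is hypothetical.

```python
import json
import boto3

s3 = boto3.client("s3")
BUCKET = "code-repo-bucket"  # hypothetical

# Versioning preserves object history, so overwritten or deleted code is recoverable.
s3.put_bucket_versioning(
    Bucket=BUCKET, VersioningConfiguration={"Status": "Enabled"}
)

# Deny object deletion unless the caller's session used multi-factor authentication.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:DeleteObject",
        "Resource": f"arn:aws:s3:::{BUCKET}/*",
        "Condition": {"BoolIfExists": {"aws:MultiFactorAuthPresent": "false"}},
    }],
}
s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))
```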

NEW QUESTION 39
You have an application consisting of a stateless web server tier running on Amazon EC2 instances behind a load balancer, and are using Amazon RDS with read replicas. Which of the following methods should you use to implement a self-healing and cost-effective architecture? (Choose two.)

A.    Set up a third-party monitoring solution on a cluster of Amazon EC2 instances in order to emit custom CloudWatch metrics to trigger the termination of unhealthy Amazon EC2 instances.
B.    Set up scripts on each Amazon EC2 instance to frequently send ICMP pings to the load balancer in order to determine which instance is unhealthy and replace it.
C.    Set up an Auto Scaling group for the web server tier along with an Auto Scaling policy that uses the Amazon RDS DB CPU utilization CloudWatch metric to scale the instances.
D.    Set up an Auto Scaling group for the web server tier along with an Auto Scaling policy that uses the Amazon EC2 CPU utilization CloudWatch metric to scale the instances.
E.    Use a larger Amazon EC2 instance type for the web server tier and a larger DB instance type for the data storage layer to ensure that they don’t become unhealthy.
F.    Set up an Auto Scaling group for the database tier along with an Auto Scaling policy that uses the Amazon RDS read replica lag CloudWatch metric to scale out the Amazon RDS read replicas.
G.    Use an Amazon RDS Multi-AZ deployment.

Answer: AD

NEW QUESTION 40
Your application is currently running on Amazon EC2 instances behind a load balancer. Your management has decided to use a Blue/Green deployment strategy. How should you implement this for each deployment?

A.    Set up Amazon Route 53 health checks to fail over from any Amazon EC2 instance that is currently being deployed to.
B.    Using AWS CloudFormation, create a test stack for validating the code, and then deploy the code to each production Amazon EC2 instance.
C.    Create a new load balancer with new Amazon EC2 instances, carry out the deployment, and then switch DNS over to the new load balancer using Amazon Route 53 after testing.
D.    Launch more Amazon EC2 instances to ensure high availability, de-register each Amazon EC2 instance from the load balancer, upgrade it, and test it, and then register it again with the load balancer.

Answer: C
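
The DNS cutover in the correct option is a single Route 53 change; a boto3 (Python) sketch where the zone ID, record name, and load balancer details are hypothetical:

```python
import boto3

route53 = boto3.client("route53")

# After the green stack passes testing, repoint the record at the new load balancer.
route53.change_resource_record_sets(
    HostedZoneId="Z1234567890ABC",  # hypothetical hosted zone
    ChangeBatch={
        "Comment": "cut over to green deployment",
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "www.example.com.",
                "Type": "A",
                "AliasTarget": {
                    "HostedZoneId": "Z35SXDOTRQ7X7K",  # the ELB's own hosted zone ID
                    "DNSName": "green-lb-1234.us-east-1.elb.amazonaws.com.",
                    "EvaluateTargetHealth": False,
                },
            },
        }],
    },
)
```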

NEW QUESTION 41
……


Download the newest PassLeader AWS Certified DevOps Engineer – Professional dumps from passleader.com now! 100% Pass Guarantee!

AWS Certified DevOps Engineer – Professional PDF dumps & AWS Certified DevOps Engineer – Professional VCE dumps: http://www.passleader.com/aws-devops-engineer-professional.html (190 Q&As) (New Questions Are 100% Available and Wrong Answers Have Been Corrected! Free VCE simulator!)

p.s. Free AWS Certified DevOps Engineer – Professional dumps download from Google Drive: https://drive.google.com/open?id=0B-ob6L_QjGLpblF1NzNWWjFiRGc
