After presenting a working proof of concept for a new application that uses AWS API Gateway, a Developer must set up a team development environment for the project. Due to a tight timeline, the Developer wants to minimize time spent on infrastructure setup and would like to reuse the code repository created for the proof of concept. Currently, all source code is stored in AWS CodeCommit. Company policy mandates having alpha, beta, and production stages, with separate Jenkins servers to build code and run tests for every stage. The Development Manager must have the ability to block code propagation between stages at any time. The Security team wants to make sure that users will not be able to modify the environment without permission. How can this be accomplished?
A. Create API Gateway alpha, beta, and production stages. Create a CodeCommit trigger to deploy code to the different stages using an AWS Lambda function.
B. Create API Gateway alpha, beta, and production stages. Create an AWS CodePipeline that pulls code from the CodeCommit repository. Create CodePipeline actions to deploy code to the API Gateway stages.
C. Create Jenkins servers for the alpha, beta, and production stages on Amazon EC2 instances. Create multiple CodeCommit triggers to deploy code to different stages using an AWS Lambda function.
D. Create an AWS CodePipeline pipeline that pulls code from the CodeCommit repository. Create alpha, beta, and production stages with Jenkins servers on CodePipeline.
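For context on the options above: in AWS CodePipeline, blocking code propagation between stages is typically done with a manual approval action that a manager can reject. A minimal sketch of such a stage, expressed as a boto3-style pipeline stage declaration (the stage and action names are illustrative):

    approval_stage = {
        "name": "ApproveBetaPromotion",      # illustrative stage name
        "actions": [{
            "name": "ManagerApproval",       # illustrative action name
            "actionTypeId": {
                "category": "Approval",      # built-in manual approval action type
                "owner": "AWS",
                "provider": "Manual",
                "version": "1",
            },
            "runOrder": 1,
        }],
    }

Until the approval is granted, the pipeline will not promote the build to the following stage, which gives the Development Manager a standing way to block propagation.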
Your DevOps team is responsible for a multi-tier, Windows-based web application consisting of web servers, Amazon RDS database instances, and a load balancer behind Amazon Route 53. You have been asked by your manager to build a cost-effective rolling deployment solution for this web application. What method should you use?
A. Re-deploy your application on an AWS OpsWorks stack. Use the AWS OpsWorks clone stack feature to allow updates between duplicate stacks.
B. Re-deploy your application on Elastic Beanstalk and take advantage of Elastic Beanstalk rolling updates.
C. Re-deploy your application using an AWS CloudFormation template, launch a new AWS CloudFormation stack during each deployment, and then tear down the old stack.
D. Re-deploy your application using an AWS CloudFormation template. Use AWS CloudFormation rolling deployment policies, create a new policy for your AWS CloudFormation stack, and initiate an update stack operation to deploy new code.
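As background for option B, Elastic Beanstalk rolling deployments are driven by option settings in the aws:elasticbeanstalk:command namespace. A minimal boto3 sketch, assuming a hypothetical environment name:

    import boto3

    eb = boto3.client("elasticbeanstalk")

    # Switch the environment to rolling deployments in batches of 25% of the instances.
    eb.update_environment(
        EnvironmentName="windows-web-env",  # hypothetical environment name
        OptionSettings=[
            {"Namespace": "aws:elasticbeanstalk:command",
             "OptionName": "DeploymentPolicy", "Value": "Rolling"},
            {"Namespace": "aws:elasticbeanstalk:command",
             "OptionName": "BatchSizeType", "Value": "Percentage"},
            {"Namespace": "aws:elasticbeanstalk:command",
             "OptionName": "BatchSize", "Value": "25"},
        ],
    )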
You manage a web advertising platform on a single AWS account. This platform produces real-time ad-click data that you store as objects in an Amazon S3 bucket called "click-data." Your advertising partners want to use Amazon Elastic MapReduce in their own AWS accounts to do analytics on the ad-click data. They have asked for immediate access to the ad-click data so that they can run analytics. Which two choices are required to facilitate secure access to this data? (Choose two.)
A. Create a cross-account IAM role with a trust policy that contains partner AWS account IDs and a unique external ID.
B. Create a new IAM group for AWS Data Pipeline users with a trust policy that contains partner AWS account IDs.
C. Configure an Amazon S3 bucket policy for the "click-data" bucket that allows Read-Only access to the objects, and associate this policy with an IAM role.
D. Configure the Amazon S3 bucket access control list to allow access to the partners' Amazon Elastic MapReduce clusters.
E. Configure AWS Data Pipeline in the partner AWS accounts to use the web Identity Federation API to access data in the "click-data" bucket.
F. Configure AWS Data Pipeline to transfer the data from the "click-data" bucket to the partner's Amazon Elastic MapReduce cluster.
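For reference on option A above, a cross-account role is defined by a trust policy that names the partner account as the principal and requires a shared external ID. A minimal boto3 sketch; the account ID, external ID, and role name are placeholders:

    import json
    import boto3

    iam = boto3.client("iam")

    trust_policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111122223333:root"},  # partner account (placeholder)
            "Action": "sts:AssumeRole",
            "Condition": {"StringEquals": {"sts:ExternalId": "partner-external-id"}},
        }],
    }

    iam.create_role(
        RoleName="ClickDataReadOnlyRole",  # hypothetical role name
        AssumeRolePolicyDocument=json.dumps(trust_policy),
    )

The role would still need a permissions policy granting read-only access to the "click-data" objects, which is what option C describes.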
You set up a scalable continuous integration platform on AWS. The platform consists of a master node that can delegate project build jobs to multiple slave nodes, all running on Amazon EC2. The build output will be stored in Amazon S3. You always have five slave nodes deployed, and each slave node can handle 10 build jobs simultaneously. Your master node publishes a custom Amazon CloudWatch metric named "RunningBuildJobs" that allows you to programmatically track how many build jobs are running across your platform.
Which two configuration options will allow you to flexibly scale your platform to support more than 50 simultaneous build jobs while minimizing costs? (Choose two.)
A. Place your fleet of slave nodes in an Auto Scaling group. Configure a CloudWatch alarm that triggers an Auto Scaling policy to launch Amazon EC2 Instances when "RunningBuildJobs" is greater than 45 for more than five minutes.
B. Configure a CloudWatch alarm that sends an alert when "RunningBuildJobs" is greater than 45 for more than five minutes. Use Amazon Simple Queue Service to process additional build jobs when the CloudWatch alarm is triggered.
C. Configure your fleet of slave nodes to fully utilize all of your purchased Amazon EC2 Heavy Utilization Reserved Instances. Configure a CloudWatch alarm that launches new Amazon EC2 instances when "RunningBuildJobs" is less than 40 for more than five minutes.
D. Run your fleet of slave nodes in an Auto Scaling group. Configure a CloudWatch alarm that launches new Amazon EC2 Dedicated Instances when "RunningBuildJobs" is less than 40 for more than five minutes.
E. Place your fleet of slave nodes in an Auto Scaling group. Configure a CloudWatch alarm that triggers an Auto Scaling policy to terminate Amazon EC2 instances when "RunningBuildJobs" is less than 40 for more than five minutes.
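To illustrate the scale-out half of option A, the sketch below wires a CloudWatch alarm on the custom RunningBuildJobs metric to a simple Auto Scaling policy; the group name, policy name, and metric namespace are assumptions:

    import boto3

    autoscaling = boto3.client("autoscaling")
    cloudwatch = boto3.client("cloudwatch")

    # Scale-out policy: add one slave node at a time.
    policy = autoscaling.put_scaling_policy(
        AutoScalingGroupName="ci-slave-nodes",   # hypothetical group name
        PolicyName="add-build-capacity",
        AdjustmentType="ChangeInCapacity",
        ScalingAdjustment=1,
    )

    # Alarm: fire when RunningBuildJobs stays above 45 for five minutes.
    cloudwatch.put_metric_alarm(
        AlarmName="RunningBuildJobsHigh",
        Namespace="CIPlatform",                  # hypothetical custom namespace
        MetricName="RunningBuildJobs",
        Statistic="Maximum",
        Period=300,
        EvaluationPeriods=1,
        Threshold=45,
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=[policy["PolicyARN"]],
    )

A mirror-image alarm and scale-in policy, as in option E, would terminate instances when the metric drops below 40 so that idle capacity is not left running.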
A DevOps engineer is implementing governance controls for a company that requires its infrastructure to be housed within the United States. The engineer must restrict which Regions can be used, and ensure an alert is sent as soon as possible if any activity outside the governance policy takes place. The controls should be automatically enabled on any new Region outside the United States.
Which combination of actions will meet these requirements? (Choose two.)
A. Create an AWS Organizations SCP that denies access to all non-global services in non-US Regions. Attach the policy to the root of the organization.
B. Configure AWS CloudTrail to send logs to Amazon CloudWatch Logs and enable it for all Regions. Use a CloudWatch Logs metric filter to send an alert on any service activity in non-US Regions.
C. Use an AWS Lambda function that checks for AWS service activity and deploy it to all Regions. Write an Amazon CloudWatch Events rule that runs the Lambda function every hour, sending an alert if activity is found in a non-US Region.
D. Use an AWS Lambda function to query Amazon Inspector to look for service activity in non-US Regions and send alerts if any activity is found.
E. Write an SCP using the aws:RequestedRegion condition key limiting access to US Regions. Apply the policy to all users, groups, and roles.
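As a reference for the SCP approach in the options above, here is a hedged sketch of a policy document that denies activity outside US Regions using the aws:RequestedRegion condition key; the exempted global services and the Region list are illustrative:

    scp_document = {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "DenyNonUSRegions",
            "Effect": "Deny",
            # Exempt global services (illustrative subset) so they keep working.
            "NotAction": ["iam:*", "organizations:*", "route53:*", "support:*"],
            "Resource": "*",
            "Condition": {
                "StringNotEquals": {
                    "aws:RequestedRegion": ["us-east-1", "us-east-2",
                                            "us-west-1", "us-west-2"]
                }
            },
        }],
    }

Because the statement denies everything outside the allowed list, it applies automatically to any new Region AWS launches outside the United States.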
For auditing, analytics, and troubleshooting purposes, a DevOps Engineer for a data analytics application needs to collect all of the application and Linux system logs from the Amazon EC2 instances before termination. The company, on average, runs 10,000 instances in an Auto Scaling group. The company requires the ability to quickly find logs based on instance IDs and date ranges.
Which is the MOST cost-effective solution?
A. Create an EC2 Instance-terminate Lifecycle Action on the group, write a termination script for pushing logs into Amazon S3, and trigger an AWS Lambda function based on S3 PUT to create a catalog of log files in an Amazon DynamoDB table with the primary key being Instance ID and sort key being Instance Termination Date.
B. Create an EC2 Instance-terminate Lifecycle Action on the group, write a termination script for pushing logs into Amazon CloudWatch Logs, create a CloudWatch Events rule to trigger an AWS Lambda function to create a catalog of log files in an Amazon DynamoDB table with the primary key being Instance ID and sort key being Instance Termination Date.
C. Create an EC2 Instance-terminate Lifecycle Action on the group, create an Amazon CloudWatch Events rule based on it to trigger an AWS Lambda function for storing the logs in Amazon S3, and create a catalog of log files in an Amazon DynamoDB table with the primary key being Instance ID and sort key being Instance Termination Date.
D. Create an EC2 Instance-terminate Lifecycle Action on the group, push the logs into Amazon Kinesis Data Firehose, and select Amazon ES as the destination for providing storage and search capability.
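To illustrate the log catalog referenced in options A through C, a minimal boto3 sketch of a DynamoDB table keyed by instance ID (partition key) and termination date (sort key); the table name is a placeholder:

    import boto3

    dynamodb = boto3.client("dynamodb")

    dynamodb.create_table(
        TableName="instance-log-catalog",  # hypothetical table name
        AttributeDefinitions=[
            {"AttributeName": "InstanceId", "AttributeType": "S"},
            {"AttributeName": "TerminationDate", "AttributeType": "S"},
        ],
        KeySchema=[
            {"AttributeName": "InstanceId", "KeyType": "HASH"},        # partition key
            {"AttributeName": "TerminationDate", "KeyType": "RANGE"},  # sort key
        ],
        BillingMode="PAY_PER_REQUEST",
    )

This key design directly supports the required lookups by instance ID and date range.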
A company is running an application on Amazon EC2 instances behind an ELB Application Load Balancer. The instances run in an EC2 Auto Scaling group across multiple Availability Zones.
After a recent application update, users are getting HTTP 502 Bad Gateway errors from the application URL. The DevOps Engineer cannot analyze the problem because Auto Scaling is terminating all EC2 instances shortly after launch for being unhealthy.
What steps will allow the DevOps Engineer access to one of the unhealthy instances to troubleshoot the deployed application?
A. Create an image from the terminated instance and create a new instance from that image. The Application team can then log into the new instance.
B. As soon as a new instance is created by Auto Scaling, put the instance into a Standby state, as this will prevent the instance from being terminated.
C. Add a lifecycle hook to your Auto Scaling group to move instances in the Terminating state to the Terminating:Wait state.
D. Edit the Auto Scaling group to enable termination protection as this will protect unhealthy instances from being terminated.
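For reference on option C, a terminating lifecycle hook pauses instances in the Terminating:Wait state so an engineer can connect to them and troubleshoot before termination completes. A minimal boto3 sketch with placeholder names:

    import boto3

    autoscaling = boto3.client("autoscaling")

    autoscaling.put_lifecycle_hook(
        AutoScalingGroupName="web-app-asg",               # hypothetical group name
        LifecycleHookName="hold-unhealthy-for-debugging",
        LifecycleTransition="autoscaling:EC2_INSTANCE_TERMINATING",
        HeartbeatTimeout=3600,   # keep the instance in Terminating:Wait for up to an hour
        DefaultResult="CONTINUE",
    )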
A company uses Amazon S3 to store proprietary information. The Development team creates buckets for new projects on a daily basis. The Security team wants to ensure that all existing and future buckets have encryption, logging, and versioning enabled. Additionally, no buckets should ever be publicly read or write accessible.
What should a DevOps Engineer do to meet these requirements?
A. Enable AWS CloudTrail and configure automatic remediation using AWS Lambda.
B. Enable AWS Config rules and configure automatic remediation using AWS Systems Manager documents.
C. Enable AWS Trusted Advisor and configure automatic remediation using Amazon CloudWatch Events.
D. Enable AWS Systems Manager and configure automatic remediation using Systems Manager documents.
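As background for option B, AWS Config can evaluate every existing and newly created bucket against managed rules and remediate non-compliant ones with a Systems Manager Automation document. A hedged boto3 sketch covering the versioning requirement only; the remediation role ARN is a placeholder:

    import boto3

    config = boto3.client("config")

    # Managed rule: flag buckets that do not have versioning enabled.
    config.put_config_rule(
        ConfigRule={
            "ConfigRuleName": "s3-bucket-versioning-enabled",
            "Source": {"Owner": "AWS",
                       "SourceIdentifier": "S3_BUCKET_VERSIONING_ENABLED"},
        }
    )

    # Automatic remediation through an SSM Automation document.
    config.put_remediation_configurations(
        RemediationConfigurations=[{
            "ConfigRuleName": "s3-bucket-versioning-enabled",
            "TargetType": "SSM_DOCUMENT",
            "TargetId": "AWS-ConfigureS3BucketVersioning",
            "Automatic": True,
            "MaximumAutomaticAttempts": 3,
            "RetryAttemptSeconds": 60,
            "Parameters": {
                "BucketName": {"ResourceValue": {"Value": "RESOURCE_ID"}},
                "AutomationAssumeRole": {"StaticValue": {"Values": [
                    "arn:aws:iam::123456789012:role/ConfigRemediationRole"]}},  # placeholder role
            },
        }]
    )

Similar rule and remediation pairs would cover the encryption, logging, and public-access requirements.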
A company is deploying a container-based application using AWS CodeBuild. The Security team mandates that all containers are scanned for vulnerabilities prior to deployment using a password-protected endpoint. All sensitive information must be stored securely.
Which solution should be used to meet these requirements?
A. Encrypt the password using AWS KMS. Store the encrypted password in the buildspec.yml file as an environment variable under the variables mapping. Reference the environment variable to initiate scanning.
B. Import the password into an AWS CloudHSM key. Reference the CloudHSM key in the buildspec.yml file as an environment variable under the variables mapping. Reference the environment variable to initiate scanning.
C. Store the password in the AWS Systems Manager Parameter Store as a secure string. Add the Parameter Store key to the buildspec.yml file as an environment variable under the parameter-store mapping. Reference the environment variable to initiate scanning.
D. Use the AWS Encryption SDK to encrypt the password and embed it in the buildspec.yml file as a variable under the secrets mapping. Attach a policy to CodeBuild to enable access to the required decryption key.
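For context on option C, the password would be created as a SecureString parameter and then pulled into the build through the buildspec's parameter-store mapping. A minimal sketch; the parameter name and value are placeholders:

    import boto3

    ssm = boto3.client("ssm")

    ssm.put_parameter(
        Name="/codebuild/scanner-password",   # hypothetical parameter name
        Value="example-password",             # placeholder value, never a real secret in code
        Type="SecureString",
        Overwrite=True,
    )

    # The buildspec.yml would then expose it to build commands, for example:
    #   env:
    #     parameter-store:
    #       SCANNER_PASSWORD: /codebuild/scanner-password
    # so the build can call the password-protected scanning endpoint using $SCANNER_PASSWORD.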
A DevOps engineer is developing an application for a company. The application needs to persist files to Amazon S3. The application needs to upload files with different security classifications that the company defines. These classifications include confidential, private, and public. Files that have a confidential classification must not be viewable by anyone other than the user who uploaded them. The application uses the IAM role of the user to call the S3 API operations.
The DevOps engineer has modified the application to add a DataClassification tag with the value of confidential and an Owner tag with the uploading user's ID to each confidential object that is uploaded to Amazon S3.
Which set of additional steps must the DevOps engineer take to meet the company's requirements?
A. Modify the S3 bucket's ACL to grant bucket-owner-read access to the uploading user's IAM role. Create an IAM policy that grants s3:GetObject operations on the S3 bucket when aws:ResourceTag/DataClassification equals confidential, and s3:ExistingObjectTag/Owner equals ${aws:userid}. Attach the policy to the IAM roles for users who require access to the S3 bucket.
B. Modify the S3 bucket policy to allow the s3:GetObject action when aws:ResourceTag/DataClassification equals confidential, and s3:ExistingObjectTag/Owner equals ${aws:userid}. Create an IAM policy that grants s3:GetObject operations on the S3 bucket. Attach the policy to the IAM roles for users who require access to the S3 bucket.
C. Modify the S3 bucket policy to allow the s3:GetObject action when aws:ResourceTag/DataClassification equals confidential, and aws:RequestTag/Owner equals ${aws:userid}. Create an IAM policy that grants s3:GetObject operations on the S3 bucket. Attach the policy to the IAM roles for users who require access to the S3 bucket.
D. Modify the S3 bucket's ACL to grant authenticated-read access when aws:ResourceTag/DataClassification equals confidential, and s3:ExistingObjectTag/Owner equals ${aws:userid}. Create an IAM policy that grants s3:GetObject operations on the S3 bucket. Attach the policy to the IAM roles for users who require access to the S3 bucket.
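To make the tag conditions in the options above concrete, here is a hedged sketch of a policy statement that allows s3:GetObject on confidential objects only when the requester matches the Owner tag, keyed off the DataClassification and Owner object tags. The bucket name is a placeholder, and whether the statement lives in the bucket policy (which would also need a Principal element) or an identity policy depends on the option chosen:

    confidential_read_statement = {
        "Sid": "OwnerOnlyReadForConfidentialObjects",
        "Effect": "Allow",
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::example-classified-bucket/*",  # placeholder bucket
        "Condition": {
            "StringEquals": {
                "s3:ExistingObjectTag/DataClassification": "confidential",
                "s3:ExistingObjectTag/Owner": "${aws:userid}",
            }
        },
    }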