What worked for me passing the AWS Certified DevOps Professional Exam

Ryan Rafferty
6 min read · Dec 10, 2020



First things first: go through the Linux Academy and A Cloud Guru courses (I did these separately, back before the two companies merged). Watch each course in full, and shop around; you might find them cheaper elsewhere.


Reading Stamina & Time Management. Before getting into what to learn, an important note on time management. Anyone who has already sat these exams knows a lot of time and sacrifice is required, and with a little kid, a full-time job, and home-life responsibilities, most of the day gets swallowed up. What worked for me was getting up early and doing an hour and a half of studying online before starting my actual work. Find a time that works for you and stick with it!

Now that I’ve got that out of the way, let’s get into the content and what you can expect to be tested on in the exam. Knowing the majority of the services at a high level is absolutely 1000000% recommended, as it will also help you rule out incorrect answers in the exam.

Read the AWS whitepapers and understand the concepts — get cheap highlighters and notepads from Officeworks.


Next, PRACTICE PRACTICE PRACTICE: sit as many practice exams online as you can, and pay attention to your concentration. When I failed the exam previously, I found myself drifting after question 37, periodically wondering whether I’d remembered to bring the washing in from outside, and literally working off intuition for the remaining questions. Plus I got hungry.

Topics you will be tested on.

Literally KNOW the difference between when to use services such as OPSWORKS and ELASTIC BEANSTALK.
Understand Blue/Green deployment scenarios for CodeDeploy, Elastic Beanstalk, Lambda, API Gateway, and Auto Scaling Groups. Most of the questions I saw were around high availability & DR.

Go through this tutorial in your own AWS account — https://aws.amazon.com/quickstart/architecture/blue-green-deployment/

Understand the deployment strategies below and which of them cause downtime. One of the questions on the exam was along the lines of: “Your company wants to do a deployment with no downtime whatsoever while shifting to an updated set of EC2 instances; they don’t want subsequent deployments and want to gradually test traffic”. Eliminating the obviously wrong answers by quickly identifying the options that don’t match these requirements will be key.

  • All at Once
  • Immutable
  • Rolling
  • Rolling with Additional Batch
  • Canary
  • A/B testing

All at Once — Deploys to every instance at once and is unsuitable for workloads that cannot afford downtime. (If the company doesn’t care about downtime and wants things done super fast, choose this one!)

Immutable — Starts by deploying your application code to a single newly created EC2 instance. Once the deployment succeeds on that first instance, the remaining instances needed for a parallel fleet are created and the application code is deployed to them. Once deployment to the new fleet succeeds, the old fleet is removed.

Rolling with additional batch — Starts by deploying your application code to a single batch of newly created EC2 instances. Once the deployment succeeds on the first batch, the application code is deployed to the remaining instances in batches until only the last batch remains, at which point that extra batch of instances is terminated. This policy ensures that the impact of a failed deployment is limited to a single batch of instances and lets your application serve traffic at full capacity during an ongoing deployment.

Rolling — Starts by deploying your application code to a single batch of existing EC2 instances. Once the deployment succeeds on the first batch, the application code is deployed to the remaining instances in batches. This policy ensures that the impact of a failed deployment is limited to a single batch. However, since no new instances are created during the deployment, your application serves traffic at reduced capacity (i.e. a single batch of instances is out of service at any given time during the deployment).
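As a self-check on the strategies above, here is a tiny illustrative helper (my own, not any AWS API) for the minimum serving capacity each policy leaves you with during a deployment:

```python
# Illustrative sketch (not an AWS API): minimum in-service capacity
# during a deployment, as a fraction of the original fleet size.

def min_capacity(policy: str, fleet_size: int, batch_size: int) -> float:
    """Lowest fraction of fleet_size serving traffic at any point."""
    if policy == "all_at_once":
        return 0.0                       # whole fleet replaced at once
    if policy == "rolling":
        # one batch of existing instances is out of service at a time
        return (fleet_size - batch_size) / fleet_size
    if policy in ("rolling_with_additional_batch", "immutable"):
        # new instances are added before old ones are taken out,
        # so capacity never drops below the original fleet
        return 1.0
    raise ValueError(f"unknown policy: {policy}")

print(min_capacity("rolling", fleet_size=4, batch_size=1))  # 0.75
```

If a question says “no downtime whatsoever”, any policy whose minimum capacity dips below full strength is an answer you can cross out straight away.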

Canary — Read this https://engineering.klarna.com/simple-canary-releases-in-aws-how-and-why-bf051a47fb3f

A/B Testing — Read this https://medium.com/buildit/a-b-testing-on-aws-cloudfront-with-lambda-edge-a22dd82e9d12
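For Lambda specifically, canarying is usually done with a weighted alias. A sketch, with hypothetical function names, versions, and weights; the RoutingConfig shape follows the Lambda UpdateAlias API:

```python
# Sketch of Lambda canarying via a weighted alias. Function name, alias
# and version numbers below are hypothetical examples.

def canary_alias_params(function_name, alias, stable_version,
                        canary_version, canary_weight):
    """Build UpdateAlias parameters that send canary_weight of traffic
    to the canary version and the remainder to the stable version."""
    if not 0.0 <= canary_weight <= 1.0:
        raise ValueError("canary_weight must be between 0 and 1")
    return {
        "FunctionName": function_name,
        "Name": alias,
        "FunctionVersion": stable_version,   # gets 1 - canary_weight
        "RoutingConfig": {
            "AdditionalVersionWeights": {canary_version: canary_weight},
        },
    }

# With boto3 this would be applied as:
#   boto3.client("lambda").update_alias(**canary_alias_params(
#       "checkout", "live", "7", "8", 0.05))
params = canary_alias_params("checkout", "live", "7", "8", 0.05)
print(params["RoutingConfig"])
```

Ramping the canary is then just calling the same update with a larger weight, and rolling back is setting the weight to zero.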

Understand High Availability and Disaster Recovery Strategies

You will be asked on the exam about S3 and storing files that need to be “geographically separated by more than 500 miles” — this means a different region. A good point someone recently made, though: what about the Paris and London regions? Those are technically “geographically separated” too, which was a very good point lol! But nevertheless, in this question’s case the answer was region, not AZ.

Understand Route 53 health checks and alias records, and how infrastructure can be self-healing.
(For example: database read replicas stored in another region, a periodic Lambda refreshing the read replica every hour, and Route 53 set to failover routing so traffic switches automatically.)

Failover routing = fast cutover
Latency routing = slower cutover
Geolocation routing = specify countries
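A minimal sketch of what such a failover pair looks like as a Route 53 change batch (the hostnames and health-check ID below are made up; the dictionary shape follows the ChangeResourceRecordSets API):

```python
# Sketch of a Route 53 failover record pair. Hostnames, targets and the
# health-check ID are hypothetical examples.

def failover_record(name, target, role, health_check_id=None):
    """One UPSERT for a failover CNAME; role is PRIMARY or SECONDARY."""
    record = {
        "Name": name,
        "Type": "CNAME",
        "TTL": 60,
        "SetIdentifier": f"{name}-{role.lower()}",
        "Failover": role,
        "ResourceRecords": [{"Value": target}],
    }
    if health_check_id:          # the PRIMARY record needs a health check
        record["HealthCheckId"] = health_check_id
    return {"Action": "UPSERT", "ResourceRecordSet": record}

changes = [
    failover_record("db.example.com", "primary.eu-west-1.example.com",
                    "PRIMARY", health_check_id="hc-1234"),
    failover_record("db.example.com", "replica.eu-west-2.example.com",
                    "SECONDARY"),
]
# Applied with boto3 as:
#   boto3.client("route53").change_resource_record_sets(
#       HostedZoneId="Z...", ChangeBatch={"Changes": changes})
```

When the primary’s health check fails, Route 53 starts answering with the secondary record on its own; no Lambda is needed for the traffic switch itself.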

Understand CloudFormation patterns and Blue/Green deployment approaches — (take a copy of everything, including a new Auto Scaling group with a launch configuration in the other AZ, with user data bootstrapped and set in the ASG; store it all in the other region, and have a CloudWatch Events rule monitoring it, connected with Route 53 for failover).

Understand RPO and RTO — if the question asks that everything be restored within 2 hours, choose the answer that works within this timeframe (and if a database is involved, allow a little more time).


I cannot stress enough the importance of paying attention to the last part of the question: “which is the MOST COST EFFECTIVE” versus “which is the MOST EFFICIENT” versus “which requires the LEAST AMOUNT OF OPERATIONAL EFFORT”.


Most cost effective = serverless / Lambda
Least amount of operational management = AWS Systems Manager (SSM), OpsWorks, SSM Run Command
Most efficient = take the AMI user data and put it in an Auto Scaling group


Read CodeBuild FAQ + Developer Documentation



  • CloudFormation Updates — Stack Updates
  • Stack Sets
  • Intrinsic Functions
  • Change Sets
  • Removing hardcoded values from templates and utilising SSM Parameter Store
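On that last point, here is a hypothetical template fragment (the parameter names and paths are my own examples) that pulls a value from SSM Parameter Store instead of hardcoding it, alongside a DeletionPolicy:

```yaml
# Hypothetical CloudFormation fragment; names and paths are examples.
Parameters:
  AmiId:
    # The value is resolved from SSM Parameter Store at create/update
    # time, so the AMI ID is never hardcoded in the template.
    Type: AWS::SSM::Parameter::Value<AWS::EC2::Image::Id>
    Default: /myapp/prod/ami-id

Resources:
  DataBucket:
    Type: AWS::S3::Bucket
    # Retain keeps the bucket (and its data) when the stack is deleted.
    DeletionPolicy: Retain
```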

Read Troubleshooting CloudFormation

The exam asks many CloudFormation questions on Blue/Green deployment with stacks. One of the questions I got was what to do if a stack cannot be deleted (most likely because an S3 bucket the stack created is not empty, and you must empty it — for example with a Lambda — before the delete can succeed).
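The emptying step itself is only a few lines. A sketch, assuming something with S3 permissions (such as a Lambda custom resource) and a hypothetical bucket name; the client is passed in, so in real use you would hand it boto3.client("s3"):

```python
# Sketch: delete every object in a bucket so CloudFormation can then
# delete the bucket (and the stack). Bucket name is hypothetical.

def empty_bucket(s3, bucket):
    """Delete all objects in the bucket, one page at a time."""
    deleted = 0
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=bucket):
        objects = [{"Key": o["Key"]} for o in page.get("Contents", [])]
        if objects:
            s3.delete_objects(Bucket=bucket, Delete={"Objects": objects})
            deleted += len(objects)
    return deleted

# In a Lambda: empty_bucket(boto3.client("s3"), "my-stack-artifacts")
```

Note this simple version only handles current objects; a versioned bucket would also need its object versions and delete markers removed.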




Scaling RDS


SAM — (Basic understanding)


You 100000% will need to know what all these are at a high level.

CFN — DeletionPolicy: Retain / input parameters
Elastic Beanstalk
CodeDeploy — CodeDeploy agent for on-premises servers, etc.
Auto Scaling Groups — scheduled scaling, etc.
ECS and Docker containers
AWS Config
CloudWatch Alarms + Metrics + CloudWatch Events + filters + CloudWatch Logs agent
Trusted Advisor + Personal Health Dashboard
Secrets Manager
SSM Parameter Store — SecureString
IAM (cross-account access, Git credentials how-to)
S3 bucket access policies and IAM roles
AWS Systems Manager — agent on on-premises servers + EC2 instances
Scheduled scaling actions on Auto Scaling Groups
Route 53 health checks
Private VPC endpoints
SNS alerting — on the exam it was literally always the answer rather than SES
Kinesis Firehose, Streams, Analytics + the Kinesis Agent — when to use each
Athena — cheapest querying option
Observation of monthly AWS billing for data transfers
DynamoDB — primary keys, sort keys and LSIs
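For the DynamoDB item, here is a sketch of what a table with a local secondary index looks like as CreateTable parameters (the table and attribute names are made up; the shape follows the DynamoDB CreateTable API):

```python
# Hypothetical DynamoDB table: partition key + sort key, plus an LSI
# giving an alternative sort key over the same partition key.

table_params = {
    "TableName": "Orders",
    "KeySchema": [
        {"AttributeName": "CustomerId", "KeyType": "HASH"},   # partition key
        {"AttributeName": "OrderDate", "KeyType": "RANGE"},   # sort key
    ],
    "AttributeDefinitions": [
        {"AttributeName": "CustomerId", "AttributeType": "S"},
        {"AttributeName": "OrderDate", "AttributeType": "S"},
        {"AttributeName": "OrderTotal", "AttributeType": "N"},
    ],
    "LocalSecondaryIndexes": [{
        # An LSI shares the table's partition key and must be defined
        # at table-creation time; only the sort key differs.
        "IndexName": "ByTotal",
        "KeySchema": [
            {"AttributeName": "CustomerId", "KeyType": "HASH"},
            {"AttributeName": "OrderTotal", "KeyType": "RANGE"},
        ],
        "Projection": {"ProjectionType": "KEYS_ONLY"},
    }],
    "BillingMode": "PAY_PER_REQUEST",
}
# boto3.client("dynamodb").create_table(**table_params)
```

The key thing the exam likes: an LSI keeps the table’s partition key and cannot be added after the table exists, unlike a global secondary index.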

That’s it. With all this, I wish you the best of luck — don’t give up! Thanks for reading, and if you have any questions, please reach out.


