Free exam guide: AWS Certified Developer Associate


This exam guide provides the steps to help you pass the AWS Developer certification (DVA-C02) exam.

I’ve mostly used the content that was provided for free by AWS using their AWS Skillbuilder program, AWS Whitepapers, and the official AWS documentation.

I’ve curated the things that you should know for the exam, which means that the technical notes in this blog post are very dense and to the point.

If you wish to dive deeper, then you can always read further in the links that I’ve provided throughout the guide.

So let’s get started! Here are the detailed steps to help you pass the AWS Developer Associate exam.

Who should take the AWS Certified Developer – Associate exam?

According to AWS, you should have the following experience and skills before taking the exam:

  • At least 1 year of hands-on experience developing and maintaining an AWS-based application
  • Know how to write code for serverless applications, e.g. AWS Lambda
  • Understand the application lifecycle, including how code is deployed
  • Understand how to use containers in the development process
  • Be able to use CI/CD tooling, e.g. AWS CodePipeline, to deploy applications on AWS

How to prepare for the AWS Developer Associate certification exam

In this guide, we’ll follow the domains and topics that are provided in the content outline of the official AWS Certified Developer – Associate (DVA-C02) Exam Guide.

For each domain, I’ll let you know what AWS expects from you (knowledge-wise) and then provide the technical notes that help you prepare and meet those expectations.

Exam overview

This is what you can expect when you schedule the AWS Developer Associate exam:

  • Consists of 65 multiple-choice and multiple-response questions.
  • The exam needs to be completed within 130 minutes (Note: follow this advice to permanently receive 30 minutes of extra time for your AWS exams)
  • Costs $150
  • The minimum passing score is 720 points (max: 1,000)
  • The exam is available in English, Japanese, Korean, and Simplified Chinese.

Content outline

The content outline of the exam consists of five separate domains, each with its own weighting.

The table below lists the domains with their weightings:

| Domain | % of exam |
| --- | --- |
| Domain 1: Deployment | 22% |
| Domain 2: Security | 26% |
| Domain 3: Development with AWS Services | 30% |
| Domain 4: Refactoring | 10% |
| Domain 5: Monitoring and Troubleshooting | 12% |
| Total | 100% |
AWS Certified Developer Associate (DVA-C02) content outline

Later in this guide, a more detailed explanation is added to each domain to give you a rough idea of what you should know.

AWS Certified Developer Associate (DVA-C02) exam detailed content outline

Technical preparation notes

In this section, I’ve bundled up my notes which you can use when you’re preparing for the AWS Certified Developer Associate exam.

Prior to this blog post, I also released a guide with technical preparation notes for the AWS Cloud Practitioner exam.

That guide contains foundational information that also helps for this exam, so I highly recommend reading those notes as well.

Moving on to the preparation, I’ve written technical notes which highlight all the important details regarding developing on AWS that are worth remembering for the exam.

To simplify the learning process, I’ve categorized my technical notes into domain sections, as displayed in the content outline.

Domain 1: Deployment – 22%

You should be comfortable knowing the following in this domain:

  • How to use AWS Elastic Beanstalk to deploy applications
  • Deploy serverless applications, e.g. on AWS Lambda
  • Deploy code via AWS CodePipeline, AWS CodeBuild, AWS CodeDeploy, etc.

There is a lot of overlap between this domain and the content that I’ve provided in my AWS DevOps Engineer Professional exam guide.

I’d suggest giving that a read since it expands further on the developer automation skills that are needed to further improve your development in AWS.

AWS CodeBuild
  • A fully managed build service: Build your application from sources like AWS CodeCommit, S3, Bitbucket, and GitHub
  • Build and test code: Debugging locally with an AWS CodeBuild agent is possible
  • To configure build steps, you create a buildspec.yml file in the root of your repository’s source code.

This is what a typical AWS CodeBuild buildspec.yml looks like:

AWS CodeBuild buildspec.yml build steps with an explanation
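
Since the structure is easier to grasp in code, here’s a minimal buildspec.yml sketch, assuming a hypothetical Node.js project; the phase names and the artifacts section follow the standard buildspec layout, while the commands and paths are placeholders:

```yaml
version: 0.2

phases:
  install:
    runtime-versions:
      nodejs: 18              # runtime to provision in the build container
  pre_build:
    commands:
      - npm ci                # install dependencies before the build
  build:
    commands:
      - npm run build         # compile/bundle the application
      - npm test              # run the test suite as part of the build
  post_build:
    commands:
      - echo "Build completed on $(date)"

artifacts:
  files:
    - '**/*'                  # package everything in the output directory
  base-directory: dist        # hypothetical build output directory
```
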
AWS CodeDeploy
  • Minimizes downtime because of a controlled deployment strategy
  • Centralized control
  • Iteratively release new features
  • Two deployment types: in-place and blue/green deployments
  • Three predefined deployment configurations: OneAtATime, HalfAtATime, and AllAtOnce
  • Ability to install the CodeDeploy agent on EC2 instances and on-premises servers to perform deployments
  • To specify which commands to run during each phase of the deployment, you use an AppSpec configuration file.
  • The appspec.yml should be placed in the root of the application source code directory.

This is what a typical AWS CodeDeploy appspec.yml looks like:

AWS CodeDeploy appspec.yml deployment steps with an explanation
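
Here’s a minimal appspec.yml sketch for an EC2/on-premises deployment; the destination path and lifecycle scripts are hypothetical:

```yaml
version: 0.0
os: linux
files:
  - source: /                          # copy the whole revision...
    destination: /var/www/my-app       # ...to this hypothetical path
hooks:
  BeforeInstall:                       # lifecycle hooks run the scripts below
    - location: scripts/stop_server.sh # hypothetical script paths
      timeout: 300
      runas: root
  AfterInstall:
    - location: scripts/install_dependencies.sh
      timeout: 300
  ApplicationStart:
    - location: scripts/start_server.sh
      timeout: 300
  ValidateService:
    - location: scripts/validate.sh
      timeout: 120
```
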
AWS CodePipeline
  • Gives you the ability to add a manual approval step
  • Pipeline actions look like this:
      • Source: CodeCommit, S3, GitHub
      • Build & Test: CodeBuild, Jenkins, TeamCity
      • Deploy: AWS CodeDeploy / AWS CloudFormation / AWS Elastic Beanstalk / AWS OpsWorks
      • Invoke: Specify a custom function to invoke, e.g. AWS Lambda
      • Approval: Publishes to an SNS topic for manual approval
AWS Serverless Application Model (SAM)

AWS SAM is an open-source framework for building serverless applications. Under the hood it uses AWS CloudFormation, and just like CloudFormation, you can use it to deploy a template to AWS.

In order to do so, you need to call sam package to create the deployment package and use sam deploy to deploy to AWS. This is similar to running aws cloudformation package and aws cloudformation deploy.
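
As a sketch, assuming a template.yaml in the current directory and an existing S3 bucket for the artifacts (the bucket and stack names are placeholders):

```bash
# Package: upload local artifacts to S3 and rewrite the template to reference them
sam package \
  --template-file template.yaml \
  --s3-bucket my-deployment-bucket \
  --output-template-file packaged.yaml

# Deploy: create or update the CloudFormation stack from the packaged template
sam deploy \
  --template-file packaged.yaml \
  --stack-name my-serverless-app \
  --capabilities CAPABILITY_IAM
```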

AWS Lambda

You can deploy your Lambda function in two ways:

  • A .zip file that includes your application code and its dependencies. The archive can be uploaded to S3, and this can be done using various tools such as AWS SAM, the AWS CDK, or the AWS CLI.
  • A container image built with Docker. You upload the image to Amazon ECR for deployment. (Both options are sketched below.)
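
A sketch with the AWS CLI, where the function names, zip archive, and image URI are placeholders:

```bash
# Option 1: update a function's code from a local .zip archive
aws lambda update-function-code \
  --function-name my-function \
  --zip-file fileb://function.zip

# Option 2: update a function's code from a container image in Amazon ECR
aws lambda update-function-code \
  --function-name my-container-function \
  --image-uri 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-image:latest
```
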
AWS Elastic Beanstalk

There are three ways to configure environments in Elastic Beanstalk:

  1. .ebextensions lets you add configuration files (in an .ebextensions folder) to the root directory of your source code package (see the sketch after this list).
  2. Saved configurations let you generate a template from an existing Elastic Beanstalk environment by using the EB CLI, AWS CLI, or Elastic Beanstalk console.
  3. The EB CLI configuration file is a separate environment configuration file that the EB CLI can use to deploy directly from your local machine to AWS.
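
For option 1, a hypothetical .ebextensions config file could look like this; the file name, environment variable, and scaling values are placeholders:

```yaml
# .ebextensions/app.config (hypothetical file name)
option_settings:
  aws:elasticbeanstalk:application:environment:
    APP_ENV: production        # hypothetical environment variable for the app
  aws:autoscaling:asg:
    MinSize: 2                 # keep at least two instances running
    MaxSize: 4
```
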
Deployment strategies

For services like AWS CodeDeploy, CloudFormation, AWS Elastic Beanstalk, and AWS OpsWorks, you can apply several deployment strategies. Each has its pros and cons.

The cheat sheet below shows the types of deployments and shows how well they rank on these columns: impact, deployment time, zero downtime, rollback process, and deploy target.

AWS DevOps Deployment strategies cheat sheet

Domain 2: Security – 26%

You should be comfortable knowing the following in this domain:

  • Making secure, authenticated calls to different AWS services. You should know how to assume IAM roles and apply least privilege when creating services that communicate with other services.
  • Know how to encrypt data at rest and in transit, most importantly for services such as DynamoDB, CloudFront, and ALB.
  • Be able to implement application authentication and authorization using Amazon Cognito.

If you want to dive deeper into security on AWS and take your skills to the next level, I’d recommend giving my AWS Security Specialty exam preparation guide a read!

Amazon Cognito

User pools are user directories that provide sign-up and sign-in options for your app users. The user pool provides:

  • Sign-up and sign-in services
  • A built-in, customizable web UI to sign in users
  • Social sign-in with Facebook, Google, and Login with Amazon, and through SAML and OIDC identity providers from your user pool
  • User directory management and user profiles
  • Security features such as multi-factor authentication (MFA), checks for compromised credentials, account takeover protection, and phone and email verification
  • Customized workflows and user migration through AWS Lambda triggers
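
As a sketch of the sign-up service above, here’s what registering and confirming a user could look like with the AWS CLI; the app client ID, username, password, and confirmation code are placeholders:

```bash
# Register a new user in a user pool app client
aws cognito-idp sign-up \
  --client-id 1example23456789 \
  --username jane \
  --password 'S3cure!Passw0rd' \
  --user-attributes Name=email,Value=jane@example.com

# Confirm the account with the verification code that was emailed to the user
aws cognito-idp confirm-sign-up \
  --client-id 1example23456789 \
  --username jane \
  --confirmation-code 123456
```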

Identity pools enable you to grant your users access to other AWS services by providing temporary AWS credentials for users who are guests (unauthenticated/anonymous) as well as the following identity providers:

  • Amazon Cognito user pools
  • Social sign-in with Facebook, Google, and Login with Amazon
  • OpenID Connect (OIDC) providers
  • SAML identity providers
  • Developer authenticated identities

What’s the difference between Amazon Cognito user pools and identity pools?

Use Amazon Cognito user pools when you need to authenticate and manage the users of your app: sign-up, sign-in, and a user directory.

Use Amazon Cognito identity pools when you need to authorize users: exchange their identity for temporary AWS credentials so they can access other AWS services.

Amazon API Gateway

There are several ways to control and manage access to a REST API in Amazon API Gateway:

  • Resource policies
  • IAM permissions (SigV4-signed requests)
  • Lambda authorizers
  • Amazon Cognito user pool authorizers
  • API keys combined with usage plans

API Gateway applies throttling limits to API requests in the following order, to protect the availability of your APIs:

  1. Per-client throttling limits that you set in usage plans and apply to API keys, so a single client cannot overwhelm the API (a sketch of this follows the list).
  2. Per-method throttling limits that you set for an API stage.
  3. Account-level throttling per AWS Region.
  4. AWS Regional throttling.
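
A sketch of per-client limits via a usage plan; all names, IDs, and limit values are placeholders:

```bash
# Create a usage plan with throttle and quota limits
aws apigateway create-usage-plan \
  --name basic-plan \
  --throttle burstLimit=200,rateLimit=100 \
  --quota limit=10000,period=MONTH

# Attach an existing API key to the plan so the limits apply to that client
aws apigateway create-usage-plan-key \
  --usage-plan-id abc123 \
  --key-id key456 \
  --key-type API_KEY
```
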
Amazon CloudFront

You can create a CloudFront function that validates JSON Web Tokens (JWTs) in the headers of incoming requests.

Amazon DynamoDB

DynamoDB supports client-side encryption and server-side encryption.

Server-side encryption provides:

  • Encryption by default. All tables (including global tables), streams, and backups are encrypted by default, and there is no way to turn this off.
  • By default, data is encrypted with an AWS-owned key. You can choose an AWS managed key or a customer managed key (CMK) in AWS KMS to encrypt your tables instead (see the sketch below).
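
As a sketch of switching to a customer managed key, assuming a hypothetical table named Orders and a KMS alias my-cmk:

```bash
# Switch a table from the default AWS-owned key
# to a customer managed key referenced by a KMS alias
aws dynamodb update-table \
  --table-name Orders \
  --sse-specification Enabled=true,SSEType=KMS,KMSMasterKeyId=alias/my-cmk
```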

Client-side encryption is done via the DynamoDB Encryption Client and provides the following:

  • End-to-end encryption, so your data is protected in transit and at rest
  • Table items are signed with a calculated signature, so you can detect unauthorized changes
  • Choose your own cryptographic keys from AWS CloudHSM or AWS KMS (CMKs)
  • It won’t encrypt the entire table: attribute names, primary key attribute names, and primary key values remain unencrypted.

Domain 3: Development with AWS Services – 30%

You should be comfortable knowing the following in this domain:

  • Writing code for serverless applications
  • Write code that interacts with AWS Services by using APIs, SDKs, and AWS CLI
  • Implement application design into application code
  • Translate functional requirements into application design
Amazon DynamoDB

Amazon DynamoDB supports two types of primary keys:

  1. Partition key (also called the “hash attribute”): a simple primary key that is composed of a single attribute. The attribute must be a scalar type (string, number, or binary), and the key is used to uniquely identify an item in the table.
  2. Composite primary key (also called a “hash and range” key): a primary key that is composed of two attributes. The first attribute is the partition key, and the second attribute is the sort key (also called the “range attribute”). The combination of these two attributes is used to uniquely identify an item in the table.

You can choose which primary key to use depending on the access patterns of your application.

If you need to retrieve items based on the value of a single attribute, a partition key is sufficient. If you need to retrieve items based on the value of a combination of attributes, a composite primary key is required.
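
For example, a query against a hypothetical Orders table with partition key userId and sort key orderDate could look like this with the AWS CLI:

```bash
# Fetch all 2024 orders for one user
# (orderDate is assumed to be stored as an ISO-8601 string)
aws dynamodb query \
  --table-name Orders \
  --key-condition-expression "userId = :uid AND begins_with(orderDate, :yr)" \
  --expression-attribute-values '{":uid":{"S":"user-123"},":yr":{"S":"2024"}}'
```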

Best practices and recommendations for setting up a partition key schema:

  • Choose a partition key that has a large number of unique values and is evenly distributed. This will ensure that your data is distributed evenly across multiple partition key values, which can help to improve the performance of your DynamoDB table.
  • Consider using a composite primary key (a partition key combined with a sort key) if you need to store multiple items with the same partition key value and retrieve them in a specific order.
  • Choose a partition key that is relevant to your use case. For example, if you are building a user profile database, the user ID might be a good partition key.

To calculate read capacity units (RCUs) for DynamoDB:

  • Reads are measured in 4 KB blocks (item sizes are rounded up to the next 4 KB)
  • One RCU supports 2 eventually consistent reads per second
  • One RCU supports 1 strongly consistent read per second

Example: items being stored in DynamoDB will be 7 KB in size, and reads are strongly consistent with a maximum read rate of 3 items per second. How many read capacity units are needed?

  • First we divide the item size by the block size: 7 KB / 4 KB = 1.75, rounded up to 2 RCUs per item
  • Next, since the reads are strongly consistent, one RCU covers 1 read per second, so we multiply: 3 * 2 = 6 read capacity units

To calculate write capacity units (WCUs) for DynamoDB provisioned throughput:

  • Writes are measured in 1 KB blocks (item sizes are rounded up to the next 1 KB)
  • One WCU supports 1 write per second

Example: items being stored in DynamoDB will be 7 KB in size, and the maximum write rate is 10 items per second. How many write capacity units are needed?

  • We multiply the write rate by the per-item cost: 10 * 7 = 70 write capacity units (provisioning both results is sketched below)
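
Putting the two example calculations together, provisioning this capacity on a hypothetical table could look like this:

```bash
# Provision the capacity computed above: 6 RCUs and 70 WCUs
aws dynamodb create-table \
  --table-name Items \
  --attribute-definitions AttributeName=itemId,AttributeType=S \
  --key-schema AttributeName=itemId,KeyType=HASH \
  --provisioned-throughput ReadCapacityUnits=6,WriteCapacityUnits=70
```
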
Amazon SQS

Amazon SQS provides two types of queues:

  1. Standard queues are the traditional queueing service and provide at-least-once delivery: a message is delivered at least once, but occasionally more than one copy may arrive, and ordering is best-effort. Standard queues support a nearly unlimited number of transactions per second (TPS).
  2. FIFO (First-In, First-Out) queues are designed to guarantee that messages are processed exactly once, in the order that they are sent. FIFO queues support up to 300 requests per second (without batching) and are ideal for applications that require strict message sequencing and in-order processing.

SQS can do short polling or long polling to retrieve messages from the queue:

  • Short polling is the default behavior of SQS: the service queries only a subset of its servers and responds immediately, even if the queue is empty. It gives the lowest response latency, but it can return empty responses and may not include all available messages in a single response.
  • Long polling sends a request to the queue and waits for a specified amount of time (up to 20 seconds) for messages to become available. If no messages arrive within that window, an empty response is returned; if messages are found, they are returned in the response. Long polling reduces the number of empty responses and can improve the overall performance and cost-efficiency of your application.

Visibility timeout is an important feature that allows you to make a message invisible to other consumers while it’s being processed, for up to 12 hours.

This can be useful if you have multiple consumers reading from the same queue, as it prevents other consumers from reading and processing the same message.

Amazon SQS message lifecycle:

  • Messages are kept in the queue for a maximum of 14 days (default: 4 days)
  • Messages can contain up to 256 KB of text in any format
  • When a consumer picks up a message, the visibility timeout kicks in and the message is locked for further processing (default: 30 seconds, max: 12 hours)
  • The consumer deletes the message once it has been processed (a sketch of this lifecycle follows)
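
Here’s a sketch of that lifecycle with the AWS CLI; the queue URL and receipt handle are placeholders:

```bash
# Placeholder queue URL
QUEUE_URL=https://sqs.us-east-1.amazonaws.com/123456789012/my-queue

# Send a message to the queue
aws sqs send-message --queue-url "$QUEUE_URL" --message-body "hello"

# Receive with long polling (wait up to 20s) and a 60-second visibility timeout
aws sqs receive-message \
  --queue-url "$QUEUE_URL" \
  --wait-time-seconds 20 \
  --visibility-timeout 60

# Delete the message once processed, using the receipt handle
# returned by receive-message (placeholder value here)
aws sqs delete-message \
  --queue-url "$QUEUE_URL" \
  --receipt-handle "AQEB...example"
```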

Domain 4: Refactoring – 10%

You should be comfortable knowing the following in this domain:

  • Refactor applications so they can use other AWS services, e.g. move sessions from the server to Amazon ElastiCache.
  • Migrate existing application code to run on AWS
Amazon ElastiCache

There are two engine options for Amazon ElastiCache. Option 1 is to use Memcached to cache data for your application if:

  • You want the simplest model possible
  • You need to run large nodes with multiple cores or threads
  • You need the ability to scale out
  • You want to shard data across multiple nodes
  • You need to cache objects, such as database query results

Option 2 is to use Redis for your application if:

  • You need rich data types such as strings, hashes, lists, and sets
  • You need to sort or rank in-memory datasets
  • You want persistence of your key store
  • You want to replicate data from a primary to read replicas for availability
  • You need automatic failover if any of your primary nodes fail
  • You want publish/subscribe (pub/sub) capabilities
  • You want backup and restore capabilities

Domain 5: Monitoring and Troubleshooting – 12%

You should be comfortable knowing the following in this domain:

  • Write code that can be monitored; for serverless applications it’s important to look into AWS X-Ray and Amazon CloudWatch
  • Perform root cause analysis on faults in test or production environments, e.g. look into your CI/CD pipelines (AWS CodeBuild, AWS CodeDeploy, and AWS CodePipeline) to identify issues. For serverless applications and microservices, look into Amazon CloudWatch and AWS X-Ray.

AWS Certified Developer Associate study material

You’ll find a lot of study material on the internet for the AWS Certified Developer Associate certification exam, so searching for high-quality material can be really overwhelming.

Lucky for you, I’ve spent some time curating the available study material and highlighting some of the stuff worth reading.

AWS Study guides

If you’re into books, I’d highly recommend giving the official AWS Certified Developer study guide a go.

AWS Certified Developer (DVA-C01) - official study guide

Conclusion

In conclusion, this guide provided the technical notes that I created which helped me pass the AWS Certified Developer Associate exam.

The AWS Certification exam covers a range of topics like deployment and development with AWS Services, refactoring applications to use AWS Services, and monitoring and troubleshooting.

AWS recommends having at least one year of hands-on experience in developing and maintaining an AWS-based application.

You should be familiar with developing and deploying code for serverless applications such as AWS Lambda.

AWS Certified Developer Associate exam – FAQ

  1. Is the AWS Developer Associate harder than the Solutions Architect Associate exam?

    If you already have a background in developing applications and are familiar with deploying and testing code, then I’d suggest you take the Developer Associate exam.

    If you have less practical experience with AWS services, then I’d suggest taking the AWS Solutions Architect Associate exam, because its questions focus more on building efficient solutions while keeping costs in mind.

  2. How difficult is the AWS Certified Developer Associate exam?

    The exam is not difficult if you’re already comfortable using the AWS CLI and AWS SDKs to develop and deploy applications on AWS.

    However, if you lack knowledge or experience developing applications on AWS, then I’d suggest familiarizing yourself with services such as AWS Lambda, Elastic Beanstalk, API Gateway, and Cognito.



