AWS CDK Best Practices: The Complete Guide [2026]

Master AWS CDK best practices with Projen setup, testing, cdk-nag compliance, GitHub Actions CI/CD, and anti-patterns to avoid.

January 6th, 2026
22 min read

Most CDK tutorials teach you how to deploy your first Lambda function. Few prepare you for what happens when your team grows to 10 engineers, your stacks multiply to 50, and a refactoring mistake deletes your production database.

This guide covers the AWS CDK best practices that prevent those disasters. By the end, you'll understand AWS's official best practice categories, know how to structure projects with Projen (yes, it's a necessity, not optional), test your infrastructure code, implement security guardrails with cdk-nag, and avoid the anti-patterns that cause production incidents.

I've based these recommendations on AWS's official documentation, the AWS re:Invent 2023 advanced CDK session (embedded below), and the patterns I've implemented in the aws-cdk-starter-kit that you can use as a reference implementation.

If you're new to CDK, start with our beginner's guide to AWS CDK and install AWS CDK before diving into best practices.

Why CDK Best Practices Matter

Before diving into tactical advice, let me explain why investing time in best practices pays dividends.

The AWS CDK was built around a model where your entire application is defined in code, not just business logic but also infrastructure and configuration. At deployment time, CDK synthesizes a cloud assembly containing CloudFormation templates for all target environments plus file assets like Lambda code and Docker images.

This everything-in-code model is powerful but dangerous without guardrails. Every commit in your main branch represents a complete, deployable version of your application. That's great for automation but means mistakes propagate quickly.

The difference between a working CDK app and a production-ready CDK app comes down to following proven patterns that prevent:

  • Data loss from logical ID changes that accidentally replace databases
  • Security incidents from overly permissive IAM policies
  • Cost explosions from untagged resources and orphaned assets
  • Deployment failures from circular dependencies and oversized stacks
  • Team friction from unclear ownership boundaries

The Cost of Ignoring Best Practices

Let me give you some concrete examples of what goes wrong without best practices.

Logical ID changes destroy stateful resources. In CDK, every resource gets a logical ID derived from its construct ID and position in the tree. Change either one, and CloudFormation sees it as a new resource, replacing the old one. For a DynamoDB table or RDS database, "replace" means "delete the old one and create a new empty one." I've seen teams lose production data this way.

Hardcoded names prevent multi-environment deployment. If you hardcode bucketName: 'my-app-assets', you can't deploy that stack a second time: not for dev/staging/prod environments in the same account, not even for testing, and S3 bucket names must be globally unique across all AWS accounts. Names are precious resources in AWS.

Untested infrastructure breaks production. Without tests asserting your Lambda has the right memory configuration or your security group allows the correct ports, you're deploying blind. CloudFormation will happily deploy misconfigured infrastructure.

AWS's Four Categories of Best Practices

AWS organizes CDK best practices into four broad categories that I'll cover throughout this guide:

  1. Organization best practices: How to structure teams and adoption at the organizational level
  2. Coding best practices: How to organize CDK code, repositories, and packages
  3. Construct best practices: How to develop reusable, composable constructs
  4. Application best practices: How to combine constructs and make architectural decisions

Now that you understand why best practices matter, let's start with the foundation: how you structure your CDK project.

Project Structure and Projen (Essential Tooling)

Project structure might seem like a minor concern, but it determines how maintainable your CDK application becomes as it grows. And how you manage that structure matters even more than the structure itself.

I'm going to make a strong statement here: Projen is a necessity for CDK projects, not an optional nice-to-have. If you're still manually maintaining package.json, tsconfig.json, and other configuration files, you're creating technical debt that will slow you down.

Why Projen is a Necessity, Not Optional

Let me explain why manual configuration management is an anti-pattern.

When you run cdk init, you get a basic project with configuration files you're expected to maintain by hand. This works fine for a weekend project. But in a team environment, those files drift. Someone updates a dependency in package.json but forgets to update tsconfig.json. Someone else copies configuration from Stack Overflow without understanding it. Six months later, you have a mess of conflicting settings that nobody fully understands.

Projen solves this by treating configuration as code. Instead of editing package.json directly, you define your project in a .projenrc.ts file:

import { awscdk } from 'projen';

const project = new awscdk.AwsCdkTypeScriptApp({
  cdkVersion: '2.175.0',
  defaultReleaseBranch: 'main',
  name: 'my-cdk-app',
  projenrcTs: true,

  // Dependencies
  deps: ['@aws-cdk/aws-lambda-python-alpha'],
  devDeps: ['cdk-nag'],

  // Testing
  jest: true,
  jestOptions: {
    jestConfig: {
      testMatch: ['**/*.test.ts'],
    },
  },

  // Linting
  eslint: true,
  prettier: true,
});

project.synth();

Run npx projen, and Projen generates all your configuration files consistently. The generated files include a warning not to edit them manually, because Projen owns them.

This approach provides several benefits:

  • No configuration drift: Every team member gets identical configuration
  • Automated dependency management: Projen keeps dependencies compatible
  • Built-in best practices: ESLint, Jest, and TypeScript are pre-configured correctly
  • Easy upgrades: Update the Projen version to get the latest recommended settings

Projen isn't some obscure tool either. It's the underlying technology for blueprint synthesis in Amazon CodeCatalyst. AWS uses it internally because manual configuration doesn't scale.

For a deep dive on project organization patterns, see my detailed guide on structuring your CDK project.

The aws-cdk-starter-kit: Your Reference Implementation

Rather than just telling you what best practices look like, I've created a reference implementation you can clone and study: the aws-cdk-starter-kit.

This repository demonstrates:

  • Projen configuration for CDK TypeScript projects
  • Project structure that scales from starter to enterprise
  • Testing setup with Jest and CDK assertions
  • Security validation with cdk-nag integration
  • CI/CD workflows using GitHub Actions with OpenID Connect

You can use it as a starting point for new projects or as a reference when restructuring existing ones. The starter kit implements every best practice covered in this guide.

Repository Organization Patterns

AWS recommends that every CDK application start with a single package in a single repository. This keeps things simple and avoids premature complexity.

Resist the urge to put multiple applications in the same repository, especially if you're using automated pipelines. Here's why:

  • Changes to one application trigger deployment of all applications
  • A broken build in one app prevents deployment of others
  • The "blast radius" of any change increases dramatically

When you need to share code between applications, move shared constructs to their own repository and publish them as packages via CodeArtifact or npm. Shared packages require their own testing strategy because they must be validated independently from the applications that consume them.
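As a sketch of what a shared construct library looks like with Projen (author, package name, versions, and repository URL are placeholders), it gets its own project type with a build, test, and release workflow built in:

```typescript
import { awscdk } from 'projen';

// Hypothetical shared construct library, publishable to CodeArtifact or npm
const project = new awscdk.AwsCdkConstructLibrary({
  author: 'Platform Team',                  // placeholder
  authorAddress: 'platform@example.com',    // placeholder
  cdkVersion: '2.175.0',
  defaultReleaseBranch: 'main',
  name: 'my-shared-constructs',             // placeholder package name
  repositoryUrl: 'https://github.com/my-org/my-shared-constructs.git', // placeholder
  jsiiVersion: '~5.5.0',                    // placeholder version
});

project.synth();
```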

Infrastructure and Runtime Code Colocation

One of CDK's superpowers is bundling runtime code (Lambda functions, Docker images) alongside infrastructure definitions. Embrace this by colocating related code in self-contained constructs.

For example, a construct that creates a Lambda function should include the Lambda's source code in the same directory:

lib/
├── api-handler/
│   ├── api-handler-construct.ts    # CDK construct
│   ├── handler.ts                  # Lambda runtime code
│   └── handler.test.ts             # Lambda unit tests

This colocation enables:

  • Synchronized versioning: Infrastructure and code evolve together
  • Isolated testing: Test the construct and its runtime code independently
  • Easy sharing: Publish the entire construct as a reusable package

With your project properly structured, let's dive into how to design constructs effectively.

Construct Design Best Practices

Constructs are the building blocks of every CDK application. Understanding how to design and use them correctly determines whether your infrastructure code is reusable and maintainable or a tangled mess.

Understanding L1, L2, and L3 Constructs

AWS CDK constructs come in three levels of abstraction, and knowing when to use each is fundamental to CDK development.

L1 Constructs (CFN Resources) map directly to single CloudFormation resources. They're named with a Cfn prefix (like CfnBucket) and offer no abstraction. You get complete control over every property, but you're responsible for all the configuration. New CloudFormation resources become available as L1 constructs within about a week.

L2 Constructs (Curated Constructs) provide a higher-level, intent-based API with sensible defaults. Compared to CfnBucket (L1), Bucket (L2) applies best-practice security policies by default, offers helper methods for permissions (.grantRead()), and generates boilerplate automatically. Most of your CDK code should use L2 constructs.

L3 Constructs (Patterns) combine multiple resources into complete architectures. ApplicationLoadBalancedFargateService creates a Fargate service with a load balancer, target groups, security groups, and IAM roles, all properly configured to work together.

For a comprehensive explanation of construct levels with code examples, see my deep dive on CDK constructs.

Model with Constructs, Deploy with Stacks

This principle from AWS is worth memorizing: Model your application with constructs, but use stacks only for deployment.

Everything in a CDK stack deploys together. If you model your website as a Stack containing S3, API Gateway, Lambda, and RDS, you can't reuse that website in another context without copying code.

Instead, model the website as a Construct containing those resources. Then instantiate that construct in stacks for different deployment scenarios:

// The construct models the application
class WebsiteConstruct extends Construct {
  constructor(scope: Construct, id: string, props: WebsiteProps) {
    super(scope, id);
    // S3, API Gateway, Lambda, RDS defined here
  }
}

// Stacks define deployment boundaries
class DevStack extends Stack {
  constructor(scope: Construct, id: string) {
    super(scope, id);
    new WebsiteConstruct(this, 'Website', { environment: 'dev' });
  }
}

class ProdStack extends Stack {
  constructor(scope: Construct, id: string) {
    super(scope, id);
    new WebsiteConstruct(this, 'Website', { environment: 'prod' });
  }
}

This separation improves reusability, testing, and modularity.

Protecting Logical IDs for Stateful Resources

Every resource in CDK gets a logical ID derived from its construct ID and position in the construct tree. Changing a logical ID causes CloudFormation to replace the resource, which for stateful resources like databases means data loss.

Protect yourself by writing unit tests that assert logical IDs remain stable:

test('database logical ID remains stable', () => {
  const app = new App();
  const stack = new MyStack(app, 'TestStack');
  const template = Template.fromStack(stack);

  // Logical IDs are the keys of the findResources() result.
  // This test fails if you accidentally rename the database construct.
  const logicalIds = Object.keys(template.findResources('AWS::DynamoDB::Table'));
  expect(logicalIds[0]).toContain('UserTable');
});

If you need to rename constructs or move resources between stacks, use the new CDK Refactor feature (available since September 2025) which safely reorganizes infrastructure without replacing resources.

Constructs Aren't Enough for Compliance

Many enterprises create wrapper constructs (sometimes called L2+ constructs) that enforce security policies like encryption or specific IAM configurations. While useful for surfacing guidance early in development, don't rely on wrapper constructs as your sole compliance mechanism.

Developers can bypass your wrappers by using L1 constructs directly or third-party constructs from Construct Hub. Instead, enforce compliance at multiple levels:

  • Service Control Policies (SCPs) and permission boundaries at the AWS Organizations level
  • cdk-nag to validate constructs before deployment (covered below)
  • CloudFormation Guard for template-level validation
  • Aspects to apply cross-cutting validations to all constructs

Now that you understand construct design, let's look at the coding patterns that keep your CDK code maintainable.

Coding Best Practices

These coding patterns apply to all CDK projects regardless of size. Following them from the start saves significant refactoring later.

Start Simple, Add Complexity Only When Needed

The guiding principle from AWS is simple: keep things as simple as possible, but no simpler. Don't architect for every possible scenario upfront. CDK enables refactoring, so you can add complexity when requirements actually demand it.

If you're building a single Lambda function, don't create an abstract "FunctionFactory" pattern on day one. Start with the simplest thing that works. Add abstraction when you have a second, third, or fourth function that genuinely needs it.

Make Decisions at Synthesis Time

Although CloudFormation supports deploy-time decisions using Conditions, Fn::If, and Parameters, AWS recommends against using them with CDK. The types of values and operations available in CloudFormation conditions are limited compared to TypeScript or Python.

Instead, make all decisions in your CDK code using programming language features:

// Good: Decision at synthesis time
if (props.environment === 'prod') {
  new Alarm(this, 'HighErrorAlarm', {
    threshold: 10,
    evaluationPeriods: 3,
  });
}

// Avoid: CloudFormation conditions
const isProd = new CfnCondition(this, 'IsProd', {
  expression: Fn.conditionEquals(props.environment, 'prod'),
});

Treat CloudFormation as an implementation detail for reliable deployments, not as a programming language.

Use Generated Resource Names

Hardcoding resource names like bucketName: 'my-app-data' creates several problems:

  • You can't deploy the stack twice in the same account (dev and prod environments)
  • You can't replace the resource if an immutable property changes
  • CloudFormation can't safely replace resources (old and new need the same name)

Let CDK generate names instead. Pass generated names to consumers through:

  • Environment variables for Lambda functions
  • AWS Systems Manager Parameter Store
  • References between stacks in the same CDK app
  • Static from methods like Table.fromTableArn() for cross-app references

Define Removal Policies for Stateful Resources

By default, CDK's stateful L2 constructs are retained when you delete a stack, leaving orphaned S3 buckets and DynamoDB tables in your account. CDK provides explicit removal policies:

const bucket = new Bucket(this, 'DataBucket', {
  removalPolicy: RemovalPolicy.RETAIN, // Default: keeps bucket on stack delete
  // RemovalPolicy.DESTROY, // Deletes bucket when stack is deleted
  // RemovalPolicy.SNAPSHOT, // For databases: snapshot before delete
});

For L1 resources, or constructs that don't expose a removalPolicy prop, use the escape hatch to reach the underlying CfnResource:

const cfnBucket = bucket.node.defaultChild as CfnBucket;
cfnBucket.applyRemovalPolicy(RemovalPolicy.DESTROY);

Configure with Properties, Not Environment Variables

Environment variable lookups inside constructs are a common anti-pattern:

// Anti-pattern: Creates machine dependency
class MyConstruct extends Construct {
  constructor(scope: Construct, id: string) {
    super(scope, id);
    const env = process.env.ENVIRONMENT; // Don't do this
  }
}

This creates a dependency on the machine where synthesis runs and introduces configuration that lives outside your codebase.

Instead, accept configuration through a properties object:

// Good: Full configurability in code
interface MyConstructProps {
  environment: string;
}

class MyConstruct extends Construct {
  constructor(scope: Construct, id: string, props: MyConstructProps) {
    super(scope, id);
    const env = props.environment; // Configuration explicit in code
  }
}

Environment variables should be limited to the top-level of your CDK app for development convenience, never inside constructs or stacks.

Writing good code is only half the battle. Next, let's ensure your CDK code actually works as expected through testing.

Testing Your CDK Code

Testing infrastructure code might seem unusual if you're used to only testing application logic. But CDK makes infrastructure testable, and you should take advantage of that.

Untested CDK code is a liability. You're trusting that your Lambda has the right memory, your security group allows the correct ports, and your IAM policies follow least privilege, all without verification.

Two Testing Approaches: Assertions vs Snapshots

The CDK assertions module supports two complementary testing approaches:

Fine-grained assertions test specific aspects of your synthesized CloudFormation templates. They're useful for verifying critical properties and catching regressions:

test('Lambda function has correct memory', () => {
  const app = new App();
  const stack = new MyStack(app, 'TestStack');
  const template = Template.fromStack(stack);

  template.hasResourceProperties('AWS::Lambda::Function', {
    MemorySize: 1024,
    Timeout: 30,
  });
});

Snapshot tests compare your entire synthesized template against a stored baseline:

test('stack matches snapshot', () => {
  const app = new App();
  const stack = new MyStack(app, 'TestStack');
  const template = Template.fromStack(stack);

  expect(template.toJSON()).toMatchSnapshot();
});

Snapshots enable confident refactoring because any template change triggers a test failure. However, CDK version upgrades can change generated templates, so don't rely solely on snapshots.

Setting Up Your Testing Framework

For TypeScript projects, the de facto standard test framework is Jest. If you're using Projen (which you should be), testing is already configured. Otherwise, add these dependencies:

{
  "devDependencies": {
    "jest": "^29.0.0",
    "@types/jest": "^29.0.0",
    "ts-jest": "^29.0.0"
  }
}

Create tests following the naming convention *.test.ts in your test/ directory.
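If you're wiring Jest by hand, a minimal jest.config.ts matching that convention might look like this (a sketch, assuming the ts-jest preset):

```typescript
import type { Config } from 'jest';

const config: Config = {
  preset: 'ts-jest',          // compiles .ts test files on the fly
  testEnvironment: 'node',
  testMatch: ['<rootDir>/test/**/*.test.ts'],
};

export default config;
```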

Fine-Grained Assertion Examples

Here are practical examples of assertions you should write:

import { App } from 'aws-cdk-lib';
import { Template, Match } from 'aws-cdk-lib/assertions';
import { MyStack } from '../lib/my-stack';

describe('MyStack', () => {
  let template: Template;

  beforeAll(() => {
    const app = new App();
    const stack = new MyStack(app, 'TestStack');
    template = Template.fromStack(stack);
  });

  test('S3 bucket has encryption enabled', () => {
    template.hasResourceProperties('AWS::S3::Bucket', {
      BucketEncryption: {
        ServerSideEncryptionConfiguration: Match.arrayWith([
          Match.objectLike({
            ServerSideEncryptionByDefault: {
              SSEAlgorithm: 'aws:kms',
            },
          }),
        ]),
      },
    });
  });

  test('Lambda function uses correct runtime', () => {
    template.hasResourceProperties('AWS::Lambda::Function', {
      Runtime: 'nodejs20.x',
    });
  });

  test('IAM role follows least privilege', () => {
    template.hasResourceProperties('AWS::IAM::Policy', {
      PolicyDocument: {
        Statement: Match.arrayWith([
          Match.objectLike({
            Effect: 'Allow',
            Action: Match.arrayWith(['s3:GetObject']),
            // Not s3:* or s3:GetObject*
          }),
        ]),
      },
    });
  });
});

Integration Testing with integ-tests-alpha

For testing that requires actual AWS resources, use the @aws-cdk/integ-tests-alpha module. Integration tests deploy real infrastructure and verify behavior:

import { IntegTest, ExpectedResult } from '@aws-cdk/integ-tests-alpha';

const integ = new IntegTest(app, 'ApiIntegTest', {
  testCases: [stack],
});

// Make assertions against deployed resources
integ.assertions
  .httpApiCall('https://api.example.com/health')
  .expect(ExpectedResult.objectLike({ statusCode: 200 }));

Integration tests are slower and cost money (they deploy real resources), so use them sparingly for critical paths.

With tested infrastructure, let's ensure it's also secure with CDK's security features.

Security and Compliance Best Practices

Security in CDK goes beyond writing secure code. It includes managing permissions for deployments, enforcing least privilege between resources, and validating infrastructure before it reaches production.

IAM Best Practices and Grant Methods

L2 constructs provide grant methods that create least-privilege IAM policies automatically. Always prefer these over manual policy definitions:

// Good: Uses grant method for least privilege
bucket.grantRead(lambdaFunction);

// Also good: Specific permissions when needed
bucket.grantReadWrite(lambdaFunction);

// Avoid: Overly permissive manual policies
lambdaFunction.addToRolePolicy(new PolicyStatement({
  actions: ['s3:*'], // Too permissive
  resources: ['*'],   // Too broad
}));

Each grant method creates a unique IAM role with only the permissions needed. If you need custom permissions, be explicit about actions and resources:

lambdaFunction.addToRolePolicy(new PolicyStatement({
  actions: ['s3:GetObject', 's3:ListBucket'],
  resources: [bucket.bucketArn, bucket.arnForObjects('*')],
}));

Validating with cdk-nag

cdk-nag is an open-source tool that checks your CDK applications against compliance rule packs. It uses CDK Aspects to validate every construct in your application.

Available rule packs include:

  • AWS Solutions: General AWS best practices
  • HIPAA Security: Healthcare compliance rules
  • NIST 800-53 rev 4/5: Government security controls
  • PCI DSS 3.2.1: Payment card industry standards

Add cdk-nag to your application:

import { AwsSolutionsChecks, HIPAASecurityChecks } from 'cdk-nag';
import { Aspects } from 'aws-cdk-lib';

const app = new App();

// Apply compliance checks to all stacks
Aspects.of(app).add(new AwsSolutionsChecks({ verbose: true }));
Aspects.of(app).add(new HIPAASecurityChecks({ verbose: true }));

cdk-nag reports violations during synthesis, catching issues before deployment:

[Error at /MyStack/Bucket/Resource] AwsSolutions-S1: The S3 Bucket does not have server access logging enabled.

You can suppress specific rules when you have a valid reason:

import { NagSuppressions } from 'cdk-nag';

NagSuppressions.addResourceSuppressions(bucket, [
  {
    id: 'AwsSolutions-S1',
    reason: 'Access logging handled by CloudTrail data events',
  },
]);

For detailed implementation steps, see the AWS Prescriptive Guidance on cdk-nag implementation.

Using Aspects for Cross-Cutting Concerns

Aspects use the visitor pattern to apply operations across all constructs in your application. They're powerful for enforcing standards that apply everywhere.

Read-only aspects validate without modifying:

class BucketVersioningChecker implements IAspect {
  public visit(node: IConstruct): void {
    // Check the underlying L1 resource, where versioning is a plain property
    if (node instanceof CfnBucket) {
      if (!node.versioningConfiguration) {
        Annotations.of(node).addError('Bucket must have versioning enabled');
      }
    }
  }
}

Aspects.of(app).add(new BucketVersioningChecker());

Mutating aspects modify constructs during synthesis:

class TaggingAspect implements IAspect {
  public visit(node: IConstruct): void {
    // Tags.of() returns a no-op manager for constructs that aren't
    // taggable, so it's safe to call on every node
    Tags.of(node).add('Environment', 'production');
    Tags.of(node).add('ManagedBy', 'CDK');
  }
}

Aspects.of(app).add(new TaggingAspect());

With secure, tested code, let's set up continuous delivery with GitHub Actions.

CI/CD with GitHub Actions

For CDK deployments, I recommend GitHub Actions over CDK Pipelines. While CDK Pipelines is a valid option, GitHub Actions provides a simpler, more widely understood approach that integrates better with existing workflows.

The critical piece is using OpenID Connect (OIDC) for AWS authentication instead of long-lived credentials. Storing AWS access keys in GitHub secrets is a security anti-pattern.

Why GitHub Actions Over CDK Pipelines

CDK Pipelines adds significant complexity to your CDK application. Your pipeline becomes CDK code that deploys itself, which can be confusing to debug and requires understanding both CodePipeline and CDK internals.

GitHub Actions, by contrast, is platform-agnostic and widely understood. Most developers already know how to work with GitHub Actions workflows. When something fails, you're debugging a YAML workflow, not synthesized CodePipeline resources.

Additionally, GitHub Actions with OIDC provides better security than CDK Pipelines' default bootstrap roles. You have explicit control over the IAM trust policy.

Secure AWS Access with OpenID Connect

OpenID Connect (OIDC) lets GitHub Actions assume an IAM role without storing long-lived credentials. GitHub acts as an identity provider, and AWS trusts tokens from specific repositories and branches.

Setting up OIDC requires:

  1. Creating a GitHub OIDC provider in your AWS account
  2. Creating an IAM role with a trust policy for GitHub
  3. Configuring your workflow to use OIDC authentication

For step-by-step instructions, see my detailed guide on setting up OpenID Connect with GitHub.

GitHub Actions Workflow Structure

Here's a production-ready workflow structure for CDK deployments:

name: CDK Deploy

on:
  push:
    branches: [main]
  pull_request:
    branches: [main]

permissions:
  id-token: write
  contents: read

jobs:
  build-and-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '20'
          cache: 'npm'

      - name: Install dependencies
        run: npm ci

      - name: Run linting
        run: npm run lint

      - name: Run tests
        run: npm test

      - name: Synthesize CDK
        run: npx cdk synth

  deploy-dev:
    needs: build-and-test
    if: github.ref == 'refs/heads/main'
    runs-on: ubuntu-latest
    environment: development
    steps:
      - uses: actions/checkout@v4

      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: arn:aws:iam::123456789012:role/GitHubDeployRole
          aws-region: us-east-1

      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '20'
          cache: 'npm'

      - name: Install dependencies
        run: npm ci

      - name: Deploy to dev
        run: npx cdk deploy --all --require-approval never

  deploy-prod:
    needs: deploy-dev
    runs-on: ubuntu-latest
    environment: production
    steps:
      - uses: actions/checkout@v4

      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: arn:aws:iam::987654321098:role/GitHubDeployRole
          aws-region: us-east-1

      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '20'
          cache: 'npm'

      - name: Install dependencies
        run: npm ci

      - name: Deploy to production
        run: npx cdk deploy --all --require-approval never

For a complete CI/CD implementation, check the aws-cdk-starter-kit CI/CD documentation.

Multi-Environment Deployment Patterns

For multi-environment deployments, use GitHub environments with protection rules:

  • Development: Deploys automatically on push to main
  • Staging: Requires manual approval before deployment
  • Production: Requires manual approval and passes all staging tests

Configure environment protection in GitHub repository settings. The environment key in your workflow jobs enforces these rules.

Before bootstrapping each environment, ensure you've configured CDK bootstrap with appropriate trust relationships for your GitHub OIDC role.

Now that you know the best practices, let's look at what NOT to do: the anti-patterns that cause production incidents.

Common Anti-Patterns to Avoid

Knowing what to avoid is as important as knowing what to do. These anti-patterns cause real problems in production CDK applications.

Hardcoding Physical Resource Names

// Anti-pattern: Hardcoded names
new Bucket(this, 'DataBucket', {
  bucketName: 'my-company-data-bucket', // Don't do this
});

Consequence: You can't deploy this stack a second time. S3 bucket names are globally unique, and most other hardcoded names collide within an account and Region. Want dev and prod in the same account? Want to test a feature branch? Can't do it. The name is taken.

Solution: Let CDK generate names. Pass them to consumers through environment variables, Parameter Store, or stack references.

Environment Variable Lookups in Constructs

// Anti-pattern: Environment variable lookup in construct
class MyConstruct extends Construct {
  constructor(scope: Construct, id: string) {
    super(scope, id);
    const region = process.env.AWS_REGION; // Machine dependency
  }
}

Consequence: Your infrastructure depends on whatever machine runs synthesis. Works on your laptop, fails in CI. Works today, fails tomorrow when someone changes their shell profile.

Solution: Accept all configuration through properties. Environment variables belong only at the top level of your app for development convenience.

Manual Infrastructure Modifications

Anti-pattern: Making changes through the AWS Console or CLI after CDK deployment.

Consequence: Configuration drift. Your CDK code says one thing, AWS has something different. Next deployment either fails mysteriously or overwrites your manual changes.

Solution: All infrastructure changes go through CDK code in version control. If you need a quick fix, make it in code and deploy, even for "temporary" changes.

Bypassing Code Review and Testing

Anti-pattern: Running cdk deploy directly from your laptop to production.

Consequence: Untested, unreviewed changes reach production. No audit trail. When something breaks, nobody knows what changed.

Solution: All production deployments go through CI/CD with pull request reviews and automated tests. No exceptions, not even for "quick fixes."

Multiple Applications in One Repository

Anti-pattern: Multiple CDK apps sharing a single repository with a shared pipeline.

Consequence: Changing one application triggers deployment of all applications. A bug in App A prevents deployment of App B. The blast radius of any change is massive.

Solution: One application per repository. Share code through published packages, not filesystem proximity.

Checking In Secrets

Anti-pattern: Committing credentials, API keys, database passwords, or sensitive configuration to version control.

// Anti-pattern: Secrets in code
new Function(this, 'MyFunction', {
  environment: {
    DB_PASSWORD: 'super-secret-password', // Never do this
    API_KEY: 'sk-1234567890abcdef',        // This either
  },
});

Consequence: Security breach waiting to happen. Secrets in Git history persist even after deletion. Automated scanners constantly search public repositories for exposed credentials. Even in private repositories, anyone with read access sees your secrets.

Solution: Use AWS Secrets Manager or Systems Manager Parameter Store for all secrets:

import { Code, Function, Runtime } from 'aws-cdk-lib/aws-lambda';
import { Secret } from 'aws-cdk-lib/aws-secretsmanager';

const dbSecret = Secret.fromSecretNameV2(this, 'DbSecret', 'prod/db/password');

const fn = new Function(this, 'MyFunction', {
  runtime: Runtime.NODEJS_20_X,
  handler: 'index.handler',
  code: Code.fromAsset('lambda'),
  environment: {
    // Pass only the ARN; the function resolves the value at runtime
    DB_SECRET_ARN: dbSecret.secretArn,
  },
});

// Grant the function permission to read the secret
dbSecret.grantRead(fn);

Additionally:

  • Add secret patterns to .gitignore (.env, *.pem, credentials.json)
  • Use pre-commit hooks like git-secrets to scan for accidentally committed credentials
  • Rotate any secrets that were ever committed, even briefly
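On the runtime side, the function resolves the ARN it received into the actual secret value. A sketch using the AWS SDK for JavaScript v3 (`@aws-sdk/client-secrets-manager`); caching the value outside the handler is an optimization assumption on my part, and the environment variable name matches the CDK example above:

```typescript
import {
  SecretsManagerClient,
  GetSecretValueCommand,
} from '@aws-sdk/client-secrets-manager';

const client = new SecretsManagerClient({});

// Cache the value across warm invocations to avoid one API call per request
let cachedPassword: string | undefined;

async function getDbPassword(): Promise<string> {
  if (!cachedPassword) {
    const response = await client.send(
      new GetSecretValueCommand({ SecretId: process.env.DB_SECRET_ARN }),
    );
    cachedPassword = response.SecretString;
  }
  if (!cachedPassword) {
    throw new Error('Secret has no string value');
  }
  return cachedPassword;
}

export const handler = async (): Promise<void> => {
  const password = await getDbPassword();
  // ... connect to the database using the resolved password
};
```

This keeps the secret out of the synthesized template entirely; CloudWatch Logs and `cdk diff` only ever see the ARN.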

Beyond avoiding mistakes, there are also opportunities to optimize for performance and cost.

Performance and Cost Optimization

CDK applications can accumulate waste over time if you're not proactive about optimization. Here are the key areas to address.

Tagging for Cost Allocation

Tags propagate from parent constructs to all taggable children. Apply tags at the app or stack level to ensure every resource is tagged:

import { App, Tags } from 'aws-cdk-lib';

const app = new App();
Tags.of(app).add('Project', 'MyProject');
Tags.of(app).add('Environment', 'production');
Tags.of(app).add('CostCenter', 'engineering');

These tags appear in your AWS Cost Explorer, enabling cost allocation by project, environment, or team.

Managing CDK Assets with Garbage Collection

Over time, CDK bootstrap buckets and ECR repositories accumulate old assets from previous deployments. This increases storage costs unnecessarily.

CDK Garbage Collection (currently in preview) cleans up isolated assets:

cdk gc --unstable=gc

This command works at the environment level, identifying and deleting assets that are no longer referenced by any deployed stack. Configure safety buffers to avoid deleting assets needed for rollbacks:

cdk gc --unstable=gc --rollback-buffer-days 14 --created-buffer-days 1

Warning: Don't run garbage collection during active deployments. Wait until all deployments are complete.

Finally, let's look at the newest CDK features you should know about.

Recent CDK Features (2024-2025)

CDK continues to evolve rapidly. These recent features address longstanding pain points and open new possibilities.

CDK Refactor for Safe Infrastructure Changes

Released in September 2025, CDK Refactor enables safe refactoring of infrastructure by renaming constructs, moving resources between stacks, and reorganizing CDK applications without replacing existing resources.

Previously, renaming a construct or moving it to a different stack caused CloudFormation to delete the old resource and create a new one. For stateful resources like databases, this was catastrophic.

CDK Refactor uses CloudFormation's refactor capabilities to compute the mapping between old and new logical IDs automatically:

cdk refactor

Use cases include:

  • Breaking monolithic stacks into domain-specific stacks
  • Renaming constructs to follow naming conventions
  • Reorganizing applications after team structure changes

This feature is available in all AWS Regions where CDK is supported.

CLI and Library Split

Starting February 2025, the CDK CLI and CDK Construct Library have independent release cadences:

  • CDK CLI versions: 2.1000.0, 2.1001.0, etc.
  • CDK Construct Library versions: 2.175.0, 2.176.0, etc.

The CLI source code moved to a new GitHub repository: github.com/aws/aws-cdk-cli.

Practical impact: Keep your CDK CLI at the latest version. The CLI is backward-compatible with all Construct Library versions released before it. Update your CI/CD pipelines to install the latest 2.x CLI rather than pinning specific versions.

New L2 Constructs

Several services gained L2 constructs in 2024-2025:

Amazon Data Firehose L2 (February 2025): Define streaming data infrastructure with familiar programming patterns. Programmatically configure delivery streams to S3, Redshift, and other destinations.

AWS AppSync Events L2 (February 2025): Create WebSocket APIs for real-time applications. Define event APIs and channel namespaces, grant access to specific channels, and integrate with Lambda functions.

Amazon EKS v2 L2 (alpha): Uses native CloudFormation resources and modern Access Entry-based authentication. Supports EKS Auto Mode and multiple clusters per stack.

These new L2 constructs replace verbose L1 configurations with intent-based APIs that include sensible defaults.
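As a sketch of what these intent-based APIs look like, here is a minimal Data Firehose delivery stream writing to S3. The construct and property names follow my reading of the stable `aws-cdk-lib/aws-kinesisfirehose` module; verify them against your `aws-cdk-lib` version:

```typescript
import { Stack } from 'aws-cdk-lib';
import { Construct } from 'constructs';
import * as firehose from 'aws-cdk-lib/aws-kinesisfirehose';
import * as s3 from 'aws-cdk-lib/aws-s3';

// A delivery stream buffering records into an S3 bucket.
// The L2 construct creates the IAM role and applies sensible
// buffering defaults; the equivalent L1 requires both by hand.
class StreamingStack extends Stack {
  constructor(scope: Construct, id: string) {
    super(scope, id);

    const bucket = new s3.Bucket(this, 'DataBucket');

    new firehose.DeliveryStream(this, 'DeliveryStream', {
      destination: new firehose.S3Bucket(bucket),
    });
  }
}
```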

For even more advanced patterns, check out this AWS re:Invent session from experts with 4+ years of CDK experience.

Advanced Patterns: Learning from 4 Years of CDK (Video)

This AWS re:Invent 2023 session (COM302) covers advanced CDK patterns from engineers who have used CDK in production for over four years. The talk goes deeper into topics like testing strategies, organizational patterns, and lessons learned from large-scale deployments.

AWS re:Invent 2023 - Advanced AWS CDK: Lessons learned from 4 years of use (COM302)

Key topics covered include:

  • Organizational patterns for multi-team CDK adoption
  • Advanced testing strategies beyond unit tests
  • Performance optimization for large CDK applications
  • Governance and compliance at scale
  • Real-world failure scenarios and how to avoid them

The session complements this guide by providing visual explanations and live demonstrations of advanced patterns.

Your CDK Best Practices Checklist

Here's what to remember from this guide:

  1. Use Projen for project configuration management, not manual file editing
  2. Model with constructs, deploy with stacks to maximize reusability
  3. Test your infrastructure with fine-grained assertions and snapshots
  4. Validate with cdk-nag before deployment to catch compliance issues
  5. Use GitHub Actions with OIDC for secure CI/CD without long-lived credentials
  6. Avoid anti-patterns like hardcoded names, environment variable lookups, and manual changes
  7. Let CDK generate resource names and pass them to consumers explicitly
  8. Define removal policies for all stateful resources
  9. Tag everything for cost allocation and governance
  10. Keep CDK CLI updated to get the latest features and compatibility

Your next step: Clone the aws-cdk-starter-kit and explore how it implements these best practices. Use it as a starting point for your next project or as a reference when restructuring existing applications.

What best practices have made the biggest difference in your CDK projects? I'd love to hear about patterns that have worked well for your team.

Need Help with Your AWS CDK Implementation?

We review CDK codebases for best practices, security issues, and architectural improvements. Get expert guidance on testing, CI/CD, and organizational patterns that scale.
