How to set a CloudWatch Logs Retention Policy for all log groups


Amazon CloudWatch Logs is a great tool to help you collect, monitor, and analyze your logs.

When you create a log group, it is important to consider how long you need to retain the log data for compliance reasons.

In this blog post, we will look at how to set a CloudWatch Logs Retention Policy for all log groups in an AWS region using Python and Boto3.

How to set a CloudWatch Logs Retention Policy to x number of days for all log groups

Before you can run the Python script on your AWS account, you need to complete the following prerequisites:

  1. Install the AWS CLI and configure an AWS profile
  2. Setting up the Python Environment

If you’ve already done this, you can proceed to step 3.

1. Install AWS CLI and configure an AWS profile

The AWS CLI is a command line tool that allows you to interact with AWS services in your terminal.

Depending on whether you’re running Linux, macOS, or Windows, the installation goes like this:

# macOS install method:
brew install awscli

# Windows install method:
wget https://awscli.amazonaws.com/AWSCLIV2.msi
msiexec.exe /i AWSCLIV2.msi

# Linux (Ubuntu) install method:
sudo apt install awscli

In order to access your AWS account with the AWS CLI, you first need to configure an AWS Profile. There are 2 ways of configuring a profile:

  • Access and secret key credentials from an IAM user
  • AWS Single Sign-on (SSO) user

In this article, I’ll briefly explain how to configure the first method so that you can proceed with running the Python script on your AWS account.

If you wish to set up the AWS profile more securely, then I’d suggest you read and apply the steps described in setting up AWS CLI with AWS Single Sign-On (SSO).

In order to configure the AWS CLI with your IAM user’s access and secret key credentials, you need to log in to the AWS Console.

Go to IAM > Users, select your IAM user, and click on the Security credentials tab to create an access and secret key.

Then configure the AWS profile on the AWS CLI as follows:

➜ aws configure
AWS Access Key ID [None]: <insert_access_key>
AWS Secret Access Key [None]: <insert_secret_key>
Default region name [None]: <insert_aws_region>
Default output format [json]: json

Your AWS credentials are stored in ~/.aws/credentials, and you can validate that your AWS profile is working by running the following command:

➜ aws sts get-caller-identity
{
    "UserId": "AIDA5BRFSNF24CDMD7FNY",
    "Account": "012345678901",
    "Arn": "arn:aws:iam::012345678901:user/test-user"
}
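
If you prefer to validate the profile from Python as well, the same identity check can be done with Boto3. Here’s a minimal sketch (the profile name "my-profile" is a placeholder; omit profile_name to use your default profile):

# Verify that Boto3 can authenticate with the configured profile
import boto3

session = boto3.Session(profile_name="my-profile")  # placeholder profile name
identity = session.client("sts").get_caller_identity()
print(identity["Account"], identity["Arn"])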

2. Setting up the Python Environment

To be able to run the Python boto3 script, you will need to have Python installed on your machine.

Depending on whether you’re running Linux, macOS, or Windows, the installation goes like this:

# macOS install method:
brew install python

# Windows install method:
wget https://www.python.org/ftp/python/3.11.2/python-3.11.2-amd64.exe
.\python-3.11.2-amd64.exe  # run the installer (msiexec only handles .msi packages)

curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py
python get-pip.py

# Linux (Ubuntu) install method:
sudo apt install python3 python3-pip

Once you have installed Python, you will need to install the Boto3 library.

You can install Boto3 using pip, the Python package manager, by running the following command in your terminal:

pip install boto3
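
To quickly confirm that Boto3 is installed and picks up the AWS profile and region you configured in step 1, a small check like this should print the Boto3 version and your default region:

# Sanity check: Boto3 is importable and a CloudWatch Logs client can be created
import boto3

print(boto3.__version__)
client = boto3.client("logs")
print(client.meta.region_name)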

3. Create the Python Script to set the CloudWatch Logs Retention Policy on all log groups in a single AWS Region

Once you have your environment set up, you can create the Python script.

Copy the following code into a new file in the desired location and name it set_cloudwatch_logs_retention.py.

#  https://github.com/dannysteenman/aws-toolbox
#
#  License: MIT
#
# This script sets a CloudWatch Logs retention policy of x number of days for all log groups in the AWS region configured in your CLI.

import argparse
import boto3

cloudwatch = boto3.client("logs")


def get_cloudwatch_log_groups():
    kwargs = {"limit": 50}
    cloudwatch_log_groups = []

    while True:  # Paginate
        response = cloudwatch.describe_log_groups(**kwargs)

        cloudwatch_log_groups.extend(response["logGroups"])

        if "NextToken" in response:
            kwargs["NextToken"] = response["NextToken"]
        else:
            break

    return cloudwatch_log_groups


def cloudwatch_set_retention(args):
    retention = vars(args)["retention"]
    cloudwatch_log_groups = get_cloudwatch_log_groups()

    for group in cloudwatch_log_groups:
        print(group)
        if "retentionInDays" not in group or group["retentionInDays"] != retention:
            print(f"Retention needs to be updated for: {group['logGroupName']}")
            cloudwatch.put_retention_policy(
                logGroupName=group["logGroupName"], retentionInDays=retention
            )
        else:
            print(
                f"CloudWatch Loggroup: {group['logGroupName']} already has the specified retention of {group['retentionInDays']} days."
            )


if __name__ == "__main__":
    parser = argparse.ArgumentParser(
        description="Set a retention in days for all your CloudWatch Logs in a single region."
    )
    parser.add_argument(
        "retention",
        metavar="RETENTION",
        type=int,
        choices=[
            1,
            3,
            5,
            7,
            14,
            30,
            60,
            90,
            120,
            150,
            180,
            365,
            400,
            545,
            731,
            1827,
            3653,
        ],
        help="Enter the retention in days for the CloudWatch Logs.",
    )
    args = parser.parse_args()
    cloudwatch_set_retention(args)

This script sets a retention policy for all your CloudWatch logs in an AWS region.

You can set the retention period by passing the number of days as a parameter when running the script.

First, the script gets all the log groups in the region using the describe_log_groups method of the CloudWatch client.

Since AWS limits the number of log groups returned to 50 per API call, the script paginates through the results.
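
As an alternative to the manual while-loop, Boto3 also ships a built-in paginator for this API that handles the nextToken bookkeeping for you. A drop-in sketch of the same helper could look like this:

# Same result as get_cloudwatch_log_groups(), using Boto3's built-in paginator
import boto3

cloudwatch = boto3.client("logs")

def get_cloudwatch_log_groups():
    log_groups = []
    paginator = cloudwatch.get_paginator("describe_log_groups")
    for page in paginator.paginate():
        log_groups.extend(page["logGroups"])
    return log_groups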

The cloudwatch_set_retention function sets the retention policy by using the put_retention_policy method of the CloudWatch client.

It takes the retention period as a parameter, which is obtained from the command-line arguments using the argparse library.
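
If you only need to update a single log group rather than every log group in the region, the same put_retention_policy call can be made directly. A minimal example (the log group name below is just a placeholder):

# One-off: set a 30-day retention on a single, hypothetical log group
import boto3

logs = boto3.client("logs")
logs.put_retention_policy(logGroupName="/aws/lambda/my-function", retentionInDays=30)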

4. Run the python script on your AWS account

To run the script, simply execute the following command in your terminal or command prompt:

python set_cloudwatch_logs_retention.py <retention period in days>

For example, if you want to set the retention period to 30 days, you should run the following command:

➜ python set_cloudwatch_logs_retention.py 30

{'logGroupName': 'CloudTrail/audit-log', 'creationTime': 1677752758182, 'retentionInDays': 14, 'metricFilterCount': 0, 'arn': 'arn:aws:logs:eu-central-1:123456789012:log-group:CloudTrail/audit-log:*', 'storedBytes': 7537107}
Retention needs to be updated for: CloudTrail/audit-log
{'logGroupName': 'log-group-1', 'creationTime': 1678716652351, 'metricFilterCount': 0, 'arn': 'arn:aws:logs:eu-central-1:123456789012:log-group:log-group-1:*', 'storedBytes': 0}
Retention needs to be updated for: log-group-1
{'logGroupName': 'log-group-2', 'creationTime': 1678716660646, 'metricFilterCount': 0, 'arn': 'arn:aws:logs:eu-central-1:123456789012:log-group:log-group-2:*', 'storedBytes': 0}
Retention needs to be updated for: log-group-2

Once you execute the command, the script prints each log group along with a message indicating whether its retention period needs to be updated or it already has the specified retention period.
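
If you want to double-check the result afterwards, a short snippet like this lists every log group together with its current retention setting:

# List each log group and its current retention (if any)
import boto3

logs = boto3.client("logs")
paginator = logs.get_paginator("describe_log_groups")
for page in paginator.paginate():
    for group in page["logGroups"]:
        print(group["logGroupName"], group.get("retentionInDays", "no retention set"))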

Conclusion

In conclusion, setting a CloudWatch Logs retention policy is essential for managing log data and keeping costs under control.

With the help of the Python script provided in this article, setting a retention policy for all log groups in a single region can be done quickly and easily.

Keep in mind that the appropriate retention period varies based on your business requirements and industry regulations.

So it’s important to regularly review and adjust your retention policy as needed.



