Managing AWS Snapshots using AWS Lambda & CloudWatch

This is the era of automation. The process may be slow, but it is surely the unavoidable future. With this flow of technology, AWS cannot remain untouched. Managing AWS resources manually can be time-consuming and unfeasible if you ignore this trend. Therefore, following the proverb "Work smarter, not harder", I will explain a few steps to achieve automatic snapshot management in AWS using AWS Lambda.

If you are asking: why AWS snapshots at all? Isn't everything fully secured from Amazon's side? The answer is yes and no. Yes, because you don't have to worry about the hardware or any underlying software except your OS. No, because Amazon will not take care of your OS management. That means you are your own master, and it is your responsibility to handle OS-level security, including backups. In technical terms, this is called the Shared Responsibility Model.

Log in to your AWS console and go to the Lambda service.

Select Create Function.

AWS Lambda Create Snapshot Script

Open the created function and adjust the code accordingly.

Python Code for Creating Snapshot

Please be sure to keep the same name in all fields as shown above. For example, if your function name is Snapshot_erstellen15Days, you have to use the same name for the file name and for the module part of the handler (Snapshot_erstellen15Days.lambda_handler).
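To illustrate the naming convention (the module name below is just the example name from above), the handler setting Snapshot_erstellen15Days.lambda_handler means "module name, dot, function name": Lambda imports that module and calls that function. A minimal sketch:

```python
# Snapshot_erstellen15Days.py (illustrative sketch)
# The handler setting "Snapshot_erstellen15Days.lambda_handler" resolves to
# <module name>.<function name>: Lambda imports this file and calls the function.
def lambda_handler(event, context):
    # Lambda passes in the triggering event and a runtime context object
    return 'handler called'

print(lambda_handler({}, None))  # handler called
```

If the file name or the function name does not match the handler string, Lambda fails with an import error at invocation time.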

# Backup SiemensDC in-use volumes in all regions
import boto3

# Enter your volume ID here, e.g. 'vol-XXXXXXXX'
volume = 'vol-01XXXXXXXXXXXXdd'
# Set the region
region = 'eu-central-1'

def lambda_handler(event, context):
    # Connect to the region
    ec2 = boto3.client('ec2', region_name=region)
    print("Backing up %s in %s" % (volume, region))
    # Create snapshot
    ec2.create_snapshot(
        VolumeId=volume,
        Description='Created by Lambda backup function ebs-create-snapshots-every-15days'
    )

This script creates a snapshot of the volume vol-01XXXXXXXXXXXXdd; the 15-day interval will be handled later by a CloudWatch rule. Save the file.

Only half of the job is done. It is now time to create another function, which deletes old snapshots. For this, we will use a retention period of 30 days, meaning any snapshot older than 30 days will be deleted automatically to save space as well as reduce cost.
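The retention check boils down to a simple timedelta comparison. Here is a minimal standalone sketch (my own helper for illustration, not part of the Lambda script itself):

```python
from datetime import datetime, timedelta

def is_expired(start_time, retention_days=30, now=None):
    """Return True if a snapshot taken at start_time is older than retention_days."""
    if now is None:
        now = datetime.utcnow()
    # A snapshot is a delete candidate once its age exceeds the retention period
    return (now - start_time) > timedelta(days=retention_days)

now = datetime(2020, 1, 31)
print(is_expired(datetime(2019, 12, 1), 30, now))   # True: 61 days old
print(is_expired(datetime(2020, 1, 20), 30, now))   # False: 11 days old
```

The deletion function below applies exactly this comparison to every snapshot's StartTime.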

For this, create a similar function "Snapshot_loeschen_Jeden_30Days". Feel free to choose another name, but be careful with the file name and the handler setting.

Automatic Snapshot delete
# Delete snapshots older than retention period
import boto3
from botocore.exceptions import ClientError
from datetime import datetime, timedelta

def delete_snapshot(snapshot_id, reg):
    print("Deleting snapshot %s" % snapshot_id)
    try:
        ec2resource = boto3.resource('ec2', region_name=reg)
        snapshot = ec2resource.Snapshot(snapshot_id)
        snapshot.delete()
    except ClientError as e:
        print("Caught exception: %s" % e)

def lambda_handler(event, context):
    # Get current timestamp in UTC
    now = datetime.utcnow()

    # AWS Account ID
    account_id = '394845505142'
    # Define retention period in days
    retention_days = 30
    # Create EC2 client
    ec2 = boto3.client('ec2')
    # Get list of regions
    regions = ec2.describe_regions().get('Regions', [])

    # Iterate over regions
    for region in regions:
        reg = region['RegionName']
        print("Checking region %s" % reg)
        # Connect to region
        ec2 = boto3.client('ec2', region_name=reg)
        # Filtering by snapshot timestamp comparison is not supported,
        # so we grab all snapshots owned by this account
        result = ec2.describe_snapshots(OwnerIds=[account_id])
        for snapshot in result['Snapshots']:
            print("Checking snapshot %s which was created on %s" % (snapshot['SnapshotId'], snapshot['StartTime']))
            # Remove timezone info from snapshot in order for comparison to work below
            snapshot_time = snapshot['StartTime'].replace(tzinfo=None)
            # Subtracting snapshot time from now returns a timedelta;
            # check if the timedelta is greater than the retention period
            if (now - snapshot_time) > timedelta(days=retention_days):
                print("Snapshot is older than configured retention of %d days" % retention_days)
                delete_snapshot(snapshot['SnapshotId'], reg)
            else:
                print("Snapshot is newer than configured retention of %d days so we keep it" % retention_days)

We have now created two functions, one for taking snapshots of a specific volume and another for deleting old snapshots. The work is not yet done: the scripts are ready, but they must be triggered to produce results. This is done with the AWS service CloudWatch, which you can compare to the Windows Task Scheduler. You basically tell AWS when to run these scripts.

Since we take a snapshot twice a month and remove snapshots older than one month, we can tell AWS CloudWatch to run the snapshot-creating script every 15 days and the snapshot-removing script at a monthly interval.

For AWS CloudWatch, go to the CloudWatch service in the console.

We need to create rules here. These rules act like cron jobs, running at a specific time. Let's say we want to run the backup script every 15 days; then we create a rule that fires every 15 days with a Lambda action.
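CloudWatch rules take a schedule expression; a rate expression such as rate(15 days) covers our backup case. The helper below is my own small sketch (not an AWS API) that just builds such strings, taking care of the singular unit CloudWatch requires for a value of 1:

```python
def rate_expression(value, unit='days'):
    """Build a CloudWatch Events rate expression, e.g. 'rate(15 days)'."""
    # CloudWatch requires the singular unit form for a value of 1: rate(1 day)
    unit = unit.rstrip('s')
    if value != 1:
        unit += 's'
    return 'rate(%d %s)' % (value, unit)

print(rate_expression(15))         # rate(15 days)
print(rate_expression(30))         # rate(30 days)
print(rate_expression(1, 'day'))   # rate(1 day)
```

rate(15 days) would go into the backup rule and rate(30 days) into the cleanup rule; cron expressions are also accepted if you need an exact day of the month.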

Create a new rule

On the right-hand side, click Add target. You can add at most 5 targets. Since we have already written our Python program in Lambda, we select the Lambda function and click Configure details.

Adding Lambda function as target
Configure rule details

Follow the same steps to create the rule that deletes older snapshots. Be careful to choose the right Lambda function.

Once the rules are defined and enabled, you should see something like this:

Rules for creating automatic Snapshot
Rules for deleting older Snapshot

In summary, the Lambda functions say what to do and the CloudWatch rules say when to execute the Lambda functions. It is as simple as that. 🙂

Good Luck !!!


Anup Chhetri

IT system administrator

