Tags: AWS, AWS Lambda, DevOps, Python, Terraform

Optimizing Image Processing in AWS Lambda: A Deep Dive into Using ImageMagick with Docker

In this post I share what I learned, along with the final code for generating optimized thumbnails for images uploaded to an S3 bucket.

The project demanded support for a range of image formats beyond the standard ones like PNG, JPG, GIF, and WebP. Therefore Pillow, a popular and easy-to-use Python imaging library, was not suitable for the requirements.

Enter ImageMagick, the open-source library renowned for its extensive support of file formats, making it the ideal tool for our requirements.

Throughout the project, my approach was methodical:

  1. I started by creating a prototype using Pillow.
  2. Next, I set up an AWS Lambda function, compiled ImageMagick, and packaged it as a layer for the Lambda function.
  3. As the project evolved, I continued adding new image formats to the ImageMagick layer and updating the AWS Lambda function accordingly.
  4. Eventually, I transitioned to using a compiled release of ImageMagick, packaging it as a Lambda layer.
  5. For the sake of simplifying orchestration, I ultimately migrated everything to Docker, creating a self-contained image. This approach proved to be more straightforward for development, testing, and deployment.

Below are the results and insights from this journey.
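For step 1, the Pillow prototype needed very little code. This is a minimal sketch of the idea (not the project's actual code; the thumbnail size and JPEG output are my choices for illustration):

```python
from io import BytesIO

from PIL import Image

def make_thumbnail(data: bytes, size=(256, 256)) -> bytes:
    """Return JPEG thumbnail bytes for the given source image bytes."""
    img = Image.open(BytesIO(data))
    img.thumbnail(size)  # resizes in place, preserving aspect ratio
    out = BytesIO()
    img.convert("RGB").save(out, format="JPEG", quality=85)
    return out.getvalue()

# Quick local check with a generated image instead of an S3 download
src = BytesIO()
Image.new("RGB", (1024, 512), "red").save(src, format="PNG")
thumb = make_thumbnail(src.getvalue())
print(Image.open(BytesIO(thumb)).size)  # aspect ratio preserved: (256, 128)
```

This works fine for the common formats, but Pillow simply cannot open many of the formats the project required, which is what forced the move to ImageMagick.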

Dockerfile for imagemagick/lambda

FROM ubuntu:23.10

RUN apt-get update
RUN apt-get install -y python3.11 python3-pip curl libharfbuzz-dev libfribidi-dev

# Download the ImageMagick AppImage and extract it so the bundled
# binaries and libraries land under /usr
WORKDIR /tmp
RUN curl -sL --output ImageMagick.AppImage
RUN chmod a+x ImageMagick.AppImage
RUN ./ImageMagick.AppImage --appimage-extract
RUN cp -a /tmp/squashfs-root/usr/* /usr
RUN rm -rf /tmp/squashfs-root /tmp/ImageMagick.AppImage

# Install the AWS Lambda Python runtime interface client and the function code
RUN mkdir -p /task
RUN pip install --target /task awslambdaric
COPY requirements.txt /task
RUN pip install -r requirements.txt --target /task
COPY src/* /task
WORKDIR /task

ENTRYPOINT [ "/usr/bin/python3", "-m", "awslambdaric" ]
CMD [ "main.handler" ]
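The Dockerfile expects a handler at src/main.py exposing main.handler. The project's actual handler is not shown here, but a sketch of the approach looks like this; the key naming, thumbnail size, and `convert` invocation are my assumptions, and the boto3 transfer calls are left as comments:

```python
import os
import subprocess
import urllib.parse

def thumbnail_key(key: str) -> str:
    """Derive a destination key, e.g. photos/cat.heic -> thumbnails/cat.jpg."""
    base, _ = os.path.splitext(os.path.basename(key))
    return f"thumbnails/{base}.jpg"

def convert_cmd(src: str, dst: str, size: str = "256x256") -> list:
    """Build the ImageMagick command that writes a bounded-size thumbnail."""
    return ["convert", src, "-thumbnail", size, dst]

def handler(event, context):
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        src = f"/tmp/{os.path.basename(key)}"
        dst = f"/tmp/thumb-{os.path.splitext(os.path.basename(key))[0]}.jpg"
        # s3.download_file(bucket, key, src)   # boto3 download omitted in this sketch
        subprocess.run(convert_cmd(src, dst), check=True)
        # s3.upload_file(dst, bucket, thumbnail_key(key))  # boto3 upload omitted
```

Shelling out to `convert` is what makes the Docker image approach pay off: the binary and every format delegate it needs travel with the function.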

Makefile for docker/lambda

LAMBDA_FUNCTION_NAME ?= imagemagick-lambda
ECR ?=
BUILD = build --pull --push -t $(ECR):latest .
BUILDX = buildx build --pull --platform linux/x86_64 --push -t $(ECR):latest .

.PHONY: all buildx build install venv run shell

all: build install

buildx:
	aws ecr get-login-password --region eu-west-1 | docker login --username AWS --password-stdin $(ECR)
	docker buildx create --use
	docker $(BUILDX)

build:
	aws ecr get-login-password --region eu-west-1 | docker login --username AWS --password-stdin $(ECR)
	docker $(BUILD)

install:
	aws --no-cli-pager lambda update-function-code --function-name $(LAMBDA_FUNCTION_NAME) --image-uri $(ECR):latest

venv:
	deactivate || true
	python3 -m venv venv
	venv/bin/pip install -r requirements.txt

run:
	docker run -p 9000:9000 $(ECR):latest

shell:
	docker run --entrypoint /bin/bash -ti --rm $(ECR):latest



You will need:

  • An ECR repository set up in AWS
  • The Lambda function set up in AWS
  • Your code in src/

Update LAMBDA_FUNCTION_NAME and ECR in the Makefile to match your project.


Then build the image and deploy it:

make build
make install
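You can also exercise the function locally with `make run` before deploying. Assuming the container is fronted by the Lambda Runtime Interface Emulator on port 9000, you can POST a fake S3 event to it (the bucket and key below are placeholders):

```shell
# Invoke the locally running container through the Runtime Interface
# Emulator endpoint, simulating the S3 notification the function
# receives in production.
curl -s -X POST \
  "http://localhost:9000/2015-03-31/functions/function/invocations" \
  -d '{"Records":[{"s3":{"bucket":{"name":"my-bucket"},"object":{"key":"photo.heic"}}}]}'
```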

Terraform module and code

The Terraform code is designed to live in a module and be instantiated like this:

module "s3_thumbnails" {
  source     = "./s3-thumbnails"
  bucket_arn = aws_s3_bucket.bucket.arn
  bucket     = aws_s3_bucket.bucket.id
  project    = var.project
}

The module code to be placed in ./s3-thumbnails:

resource "aws_ecr_repository" "docker_registry" {
  name                 = "${var.project}-thumbnailer"
  image_tag_mutability = "MUTABLE"
  image_scanning_configuration {
    scan_on_push = true
  }
  tags = {
    project = var.project
    stage   = "prod"
  }
}

data "aws_iam_policy_document" "assume_role" {
  statement {
    actions = ["sts:AssumeRole"]

    principals {
      type        = "Service"
      identifiers = ["lambda.amazonaws.com"]
    }
  }
}

resource "aws_iam_role_policy" "Thumbnailer" {
  name   = "${var.project}_Thumbnailer"
  policy = data.aws_iam_policy_document.lambda_policy.json
  role   = aws_iam_role.thumbnailer.id
}

resource "aws_iam_role" "thumbnailer" {
  assume_role_policy = data.aws_iam_policy_document.assume_role.json
  name               = "${var.project}-Thumbnailer"
}

data "aws_iam_policy_document" "lambda_policy" {
  # Assumed permissions: the function reads the uploaded originals and
  # writes the generated thumbnails back to the same bucket.
  statement {
    actions = [
      "s3:GetObject",
    ]
    resources = [
      "${var.bucket_arn}/*",
    ]
  }

  statement {
    actions = [
      "s3:PutObject",
    ]
    resources = [
      "${var.bucket_arn}/*",
    ]
  }

  # CloudWatch Logs permissions so the function can write its logs.
  statement {
    actions = [
      "logs:CreateLogGroup",
      "logs:CreateLogStream",
      "logs:PutLogEvents",
    ]
    resources = ["*"]
  }
}

resource "aws_lambda_function" "thumbnailer" {
  function_name = "${var.project}-thumbnailer"
  image_uri     = "${aws_ecr_repository.docker_registry.repository_url}:latest"
  package_type  = "Image"
  role          = aws_iam_role.thumbnailer.arn
  memory_size   = 6000
  timeout       = 300

  ephemeral_storage {
    size = 512 # MB (assumed value); raise it if you process very large images
  }

  environment {
    variables = {
#      MAGICK_HOME = "/opt/imagemagick"
    }
  }
  tags = {
    project = var.project
    stage   = "prod"
  }
}

resource "aws_lambda_permission" "allow_bucket" {
  statement_id  = "AllowExecutionFromS3Bucket"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.thumbnailer.function_name
  principal     = "s3.amazonaws.com"
  source_arn    = var.bucket_arn
}

resource "aws_s3_bucket_notification" "s3-notification" {
  bucket = var.bucket

  lambda_function {
    lambda_function_arn = aws_lambda_function.thumbnailer.arn
    events              = ["s3:ObjectCreated:*"]
  }

  depends_on = [aws_lambda_permission.allow_bucket]
}

variable "project" {
  description = "The project name. It will be used with the stage to create the resource names and tag them."
}

variable "bucket_arn" {}

variable "bucket" {}

variable "thumbnail-runtime" {
  default = "python3.11"
}
