
Running LocalStack using Docker-Compose

Prerequisites

  • Have docker and docker-compose installed on your machine, and make sure both commands are executable in your terminal (see the quick check after this list).
  • Have at least 500 MB of memory available for running the service.
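
To verify the tools are available, you can print the versions of both; the exact output depends on your installation:

docker --version
docker-compose --version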

Getting the Docker-Compose file

You can get the latest docker-compose.yml from the official LocalStack GitHub repository. Here is the content of the file as of Aug 20, 2022:

version: "3.8"

services:
  localstack:
    container_name: "${LOCALSTACK_DOCKER_NAME-localstack_main}"
    image: localstack/localstack
    ports:
      - "127.0.0.1:4566:4566"            # LocalStack Gateway
      - "127.0.0.1:4510-4559:4510-4559"  # external services port range
      - "127.0.0.1:53:53"                # DNS config (only required for Pro)
      - "127.0.0.1:53:53/udp"            # DNS config (only required for Pro)
      - "127.0.0.1:443:443"              # LocalStack HTTPS Gateway (only required for Pro)
    environment:
      - DEBUG=${DEBUG-}
      - PERSISTENCE=${PERSISTENCE-}
      - LAMBDA_EXECUTOR=${LAMBDA_EXECUTOR-}
      - LOCALSTACK_API_KEY=${LOCALSTACK_API_KEY-}  # only required for Pro
      - DOCKER_HOST=unix:///var/run/docker.sock
    volumes:
      - "${LOCALSTACK_VOLUME_DIR:-./volume}:/var/lib/localstack"
      - "/var/run/docker.sock:/var/run/docker.sock"

You can create a directory/folder anywhere on your machine, name it anything you want, and put the docker-compose.yml in that folder.
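
For example, the setup might look like the following; the raw GitHub URL is an assumption, so adjust it if the file has moved in the repository:

mkdir localstack-demo
cd localstack-demo
curl -L -o docker-compose.yml https://raw.githubusercontent.com/localstack/localstack/master/docker-compose.yml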

Running LocalStack

Open your command prompt or terminal and go to the directory containing the docker-compose file. Execute the docker-compose command:

docker-compose up -d
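
The -d flag runs the container in the background. If you want to watch the startup progress, you can follow the container logs (press Ctrl+C to stop following):

docker-compose logs -f localstack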

Once the image has been pulled and the container is running, you can check the service status using:

docker-compose ps

If the output is similar to the following, the service is running correctly.

NAME                COMMAND                  SERVICE             STATUS              PORTS
localstack_main     "docker-entrypoint.sh"   localstack          running (healthy)   127.0.0.1:53->53/tcp, 127.0.0.1:443->443/tcp, 127.0.0.1:4510-4559->4510-4559/tcp, 127.0.0.1:4566->4566/tcp, 127.0.0.1:53->53/udp, 5678/tcp
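
You can also query the gateway directly from your host to see which services are available. The health endpoint path below is the one used by recent LocalStack releases and may differ on older versions:

curl http://localhost:4566/_localstack/health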

Using AWS CLI inside the container

The AWS CLI is available inside the LocalStack container, so you only need to open a bash shell in it:

docker-compose exec localstack bash
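
After the shell opens, you can confirm the CLI tooling is present; awslocal is a small wrapper that points the AWS CLI at the local LocalStack endpoint:

which awslocal
aws --version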

From here you can run AWS CLI commands against LocalStack. For example, create an S3 bucket named sample-bucket:

awslocal s3api create-bucket --bucket sample-bucket

Then confirm it by listing the buckets:

awslocal s3api list-buckets
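
You can continue with other S3 operations, for example uploading and listing an object (still from inside the container; the file name is just an example):

echo "hello from localstack" > hello.txt
awslocal s3 cp hello.txt s3://sample-bucket/hello.txt
awslocal s3 ls s3://sample-bucket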

You can exit the container at any time by typing exit and pressing Enter.
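
You also don't have to keep an interactive shell open; the same commands can be run as one-offs from your host through docker-compose exec, for example:

docker-compose exec localstack awslocal s3api list-buckets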

Note

All the resources you have created (such as S3 buckets) will be deleted when you restart the LocalStack service/container. LocalStack is a service that mimics the AWS APIs; it does not create real AWS resources.
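
The compose file above exposes DEBUG and PERSISTENCE variables. Whether persistence actually retains state across restarts depends on your LocalStack version and edition, so treat this only as a sketch: docker-compose reads a .env file placed next to docker-compose.yml, and you could set the variables there.

# .env (picked up automatically by docker-compose from the same directory)
DEBUG=1
PERSISTENCE=1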

Resources Stats

Running the service with Docker consumes some of your machine's resources. Here are approximate stats for the LocalStack container:

CONTAINER ID   NAME              CPU %     MEM USAGE / LIMIT     MEM %     NET I/O         BLOCK I/O   PIDS
bc38a1a738f1   localstack_main   0.02%     319.1MiB / 11.97GiB   2.60%     53kB / 25.9kB   0B / 0B     11
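
These numbers were captured with docker stats; you can take a similar one-off snapshot with the command below (the container name must match LOCALSTACK_DOCKER_NAME, which defaults to localstack_main):

docker stats --no-stream localstack_main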