# Docker images

## Pull images

Docker images are stored in GitHub Container Registry (GHCR), which is a Docker registry like Docker Hub. Public Docker images can be pulled anonymously from `ghcr.io`. The inboard images are based on the official Python Docker images.

Simply running `docker pull ghcr.io/br3ndonland/inboard` will pull the latest FastAPI image (Docker uses the `latest` tag by default). If specific versions of inboard or Python are desired, specify the version number at the beginning of the Docker tag as shown below (new in inboard version 0.6.0). All the available images are also provided with Alpine Linux builds, available by appending `-alpine`, and Debian "slim" builds, available by appending `-slim` (new in inboard version 0.11.0). Alpine and Debian slim users should be aware of their limitations.

Please see inboard Git tags, inboard PyPI release history, and inboard Docker images on GHCR for the latest version numbers and available Docker tags.
Example Docker tags:

```sh
# Pull latest FastAPI image (Docker automatically appends the `latest` tag)
docker pull ghcr.io/br3ndonland/inboard

# Pull latest version of each image
docker pull ghcr.io/br3ndonland/inboard:base
docker pull ghcr.io/br3ndonland/inboard:fastapi
docker pull ghcr.io/br3ndonland/inboard:starlette

# Pull image from specific release
docker pull ghcr.io/br3ndonland/inboard:0.38.0-fastapi

# Pull image from latest minor version release (new in inboard 0.22.0)
docker pull ghcr.io/br3ndonland/inboard:0.38-fastapi

# Pull image with specific Python version
docker pull ghcr.io/br3ndonland/inboard:fastapi-python3.11

# Pull image from latest minor release and with specific Python version
docker pull ghcr.io/br3ndonland/inboard:0.38-fastapi-python3.11

# Append `-alpine` to image tags for Alpine Linux (new in inboard 0.11.0)
docker pull ghcr.io/br3ndonland/inboard:latest-alpine
docker pull ghcr.io/br3ndonland/inboard:0.38-fastapi-alpine

# Append `-slim` to any of the above for Debian slim (new in inboard 0.11.0)
docker pull ghcr.io/br3ndonland/inboard:latest-slim
docker pull ghcr.io/br3ndonland/inboard:0.38-fastapi-slim
```
## Use images in a Dockerfile

For a Hatch project with the following directory structure:

```
repo/
    package_name/
        __init__.py
        main.py
        prestart.py
    tests/
    Dockerfile
    pyproject.toml
    README.md
```

The `pyproject.toml` could look like this:
Example `pyproject.toml` for Hatch project:

```toml
[build-system]
build-backend = "hatchling.build"
requires = ["hatchling"]

[project]
authors = [{email = "you@example.com", name = "Your Name"}]
dependencies = [
  "inboard[fastapi]",
]
description = "Your project description here."
dynamic = ["version"]
license = "MIT"
name = "package-name"
readme = "README.md"
requires-python = ">=3.8.1,<4"

[project.optional-dependencies]
checks = [
  "black",
  "flake8",
  "isort",
  "mypy",
  "pre-commit",
]
docs = [
  "mkdocs-material",
]
tests = [
  "coverage[toml]",
  "httpx",
  "pytest",
  "pytest-mock",
  "pytest-timeout",
]

[tool.coverage.report]
exclude_lines = ["if TYPE_CHECKING:", "pragma: no cover"]
fail_under = 100
show_missing = true

[tool.coverage.run]
command_line = "-m pytest"
source = ["package_name", "tests"]

[tool.hatch.build.targets.sdist]
include = ["/package_name"]

[tool.hatch.envs.ci]
dev-mode = false
features = [
  "checks",
  "tests",
]
path = ".venv"

[tool.hatch.envs.default]
dev-mode = true
features = [
  "checks",
  "docs",
  "tests",
]
path = ".venv"

[tool.hatch.envs.production]
dev-mode = false
features = []
path = ".venv"

[tool.hatch.version]
path = "package_name/__init__.py"

[tool.isort]
profile = "black"
src_paths = ["package_name", "tests"]

[tool.mypy]
files = ["**/*.py"]
plugins = "pydantic.mypy"
show_error_codes = true
strict = true

[tool.pytest.ini_options]
addopts = "-q"
minversion = "6.0"
testpaths = ["tests"]
```
The `Dockerfile` could look like this:

Example Dockerfile for Hatch project:

```dockerfile
FROM ghcr.io/br3ndonland/inboard:fastapi

# Set environment variables
ENV APP_MODULE=package_name.main:app

# Install Python requirements
COPY pyproject.toml README.md /app/
WORKDIR /app
RUN hatch env prune && hatch env create production

# Install Python app
COPY package_name /app/package_name

# RUN command already included in base image
```
Syncing dependencies with Hatch

Hatch does not have a direct command for syncing dependencies, and `hatch env create` won't always sync dependencies if they're being installed into the same virtual environment directory (as they would be in a Docker image). Running `hatch env prune && hatch env create <env_name>` should do the trick.
For a standard `pip` install:

```
repo/
    package_name/
        __init__.py
        main.py
        prestart.py
    tests/
    Dockerfile
    requirements.txt
    README.md
```

Packaging would be set up separately as described in the Python packaging user guide.
The `requirements.txt` could look like this:

Example `requirements.txt` for `pip` project:

```
inboard[fastapi]
```
The `Dockerfile` could look like this:

Example Dockerfile for `pip` project:

```dockerfile
FROM ghcr.io/br3ndonland/inboard:fastapi

# Set environment variables
ENV APP_MODULE=package_name.main:app

# Install Python requirements
COPY requirements.txt /app/
WORKDIR /app
RUN python -m pip install -r requirements.txt

# Install Python app
COPY package_name /app/package_name

# RUN command already included in base image
```
Organizing the Dockerfile this way helps leverage the Docker build cache. Files and commands that change most frequently are added last to the Dockerfile. Next time the image is built, Docker will skip any layers that didn't change, speeding up builds.
The image could then be built with:

```sh
cd /path/to/repo
docker build . -t imagename:latest
```

The final argument is the Docker image name (`imagename` in this example). Replace it with your image name.
## Run containers

Run container:

```sh
docker run -d -p 80:80 imagename
```

Run container with mounted volume and Uvicorn reloading for development:

```sh
cd /path/to/repo
docker run -d -p 80:80 \
  -e "LOG_LEVEL=debug" -e "PROCESS_MANAGER=uvicorn" -e "WITH_RELOAD=true" \
  -v $(pwd)/package:/app/package imagename
```
Details on the `docker run` command:

- `-e "PROCESS_MANAGER=uvicorn" -e "WITH_RELOAD=true"` will instruct `start.py` to run Uvicorn with reloading and without Gunicorn. The Gunicorn configuration won't apply, but these environment variables will still work as described: `APP_MODULE`, `HOST`, `PORT`, `LOG_COLORS`, `LOG_FORMAT`, `LOG_LEVEL`, `RELOAD_DIRS`, `WITH_RELOAD`.
- `-v $(pwd)/package:/app/package`: the specified directory (`/path/to/repo/package` in this example) will be mounted as a volume inside the container at `/app/package`. When files in the working directory change, Docker and Uvicorn will sync the files to the running Docker container.
## Docker and Hatch

This project uses Hatch for Python dependency management and packaging, and uses `pipx` to install Hatch in Docker:

- `ENV PATH=/opt/pipx/bin:/app/.venv/bin:$PATH` is set first to prepare the `$PATH`.
- `pip` is used to install `pipx`.
- `pipx` is used to install Hatch, with `PIPX_BIN_DIR=/opt/pipx/bin` used to specify the location where `pipx` installs the Hatch command-line application, and `PIPX_HOME=/opt/pipx/home` used to specify the location for `pipx` itself.
- `hatch env create` is used with `HATCH_ENV_TYPE_VIRTUAL_PATH=.venv` and `WORKDIR /app` to create the virtualenv at `/app/.venv` and install the project's packages into the virtualenv.

With this approach:

- Subsequent `python` commands use the executable at `/app/.venv/bin/python`.
- As long as `HATCH_ENV_TYPE_VIRTUAL_PATH=.venv` and `WORKDIR /app` are retained, subsequent Hatch commands use the same virtual environment at `/app/.venv`.
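The steps above could be sketched in a Dockerfile like this. This is a hypothetical illustration, not the exact Dockerfile used to build the inboard base images; the `python:3.11` base image and the `production` environment name are assumptions:

```dockerfile
# Hypothetical sketch of the pipx + Hatch installation approach described above
FROM python:3.11

# Prepare $PATH and pipx/Hatch locations before installing anything
ENV PATH=/opt/pipx/bin:/app/.venv/bin:$PATH \
    PIPX_BIN_DIR=/opt/pipx/bin \
    PIPX_HOME=/opt/pipx/home \
    HATCH_ENV_TYPE_VIRTUAL_PATH=.venv

# pip installs pipx, then pipx installs the Hatch command-line application
RUN python -m pip install --no-cache-dir pipx && pipx install hatch

# Hatch creates the virtualenv at /app/.venv and installs the project into it
COPY pyproject.toml README.md /app/
WORKDIR /app
RUN hatch env create production
```

Because `/app/.venv/bin` is at the front of `$PATH`, subsequent `python` commands in later layers resolve to the virtualenv's interpreter.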
## Docker and Poetry

This project now uses Hatch for Python dependency management and packaging. Poetry 1.1 was used before Hatch. If you have a downstream project using the inboard Docker images with Poetry, you can add `RUN pipx install poetry` to your Dockerfile to install Poetry for your project.

As explained in python-poetry/poetry#1879, there were two conflicting conventions to consider when working with Poetry in Docker:

- Docker's convention is to not use virtualenvs, because containers themselves provide sufficient isolation.
- Poetry's convention is to always use virtualenvs, because of the reasons given in python-poetry/poetry#3209.

This project used `pipx` to install Poetry in Docker:

- `ENV PATH=/opt/pipx/bin:/app/.venv/bin:$PATH` was set first to prepare the `$PATH`.
- `pip` was used to install `pipx`.
- `pipx` was used to install Poetry.
- `poetry install` was used with `POETRY_VIRTUALENVS_CREATE=true`, `POETRY_VIRTUALENVS_IN_PROJECT=true`, and `WORKDIR /app` to install the project's packages into the virtualenv at `/app/.venv`.

With this approach:

- Subsequent `python` commands used the executable at `/app/.venv/bin/python`.
- As long as `POETRY_VIRTUALENVS_IN_PROJECT=true` and `WORKDIR /app` were retained, subsequent Poetry commands used the same virtual environment at `/app/.venv`.
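For a downstream project still using Poetry, a hypothetical Dockerfile sketch might look like the following. The file names and `poetry install` flags are assumptions and depend on your Poetry version; adjust as needed:

```dockerfile
# Hypothetical sketch for a downstream project using Poetry with inboard
FROM ghcr.io/br3ndonland/inboard:fastapi

# Configure Poetry to create the virtualenv inside the project at /app/.venv
ENV POETRY_VIRTUALENVS_CREATE=true POETRY_VIRTUALENVS_IN_PROJECT=true

# Install Poetry with pipx (available in the image, as described above)
RUN pipx install poetry

# Install the project's dependencies into /app/.venv
COPY pyproject.toml poetry.lock /app/
WORKDIR /app
RUN poetry install --no-root --no-interaction
```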
## Linux distributions

### Alpine

The official Python Docker image is built on Debian Linux by default, with Alpine Linux builds also provided. Alpine is known for its security and small Docker image sizes.

Runtime determination of the Linux distribution

To determine the Linux distribution at runtime, it can be helpful to source `/etc/os-release`, which contains an `ID` variable specifying the distribution (`alpine`, `debian`, etc.).
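As a sketch, a POSIX shell snippet could branch on the distribution like this. The `unknown` fallback is an assumption for systems without `/etc/os-release`:

```shell
#!/bin/sh
# Source /etc/os-release to read the ID variable, falling back if the file is absent
if [ -f /etc/os-release ]; then
    . /etc/os-release
else
    ID="unknown"
fi

# Branch on the distribution ID
case "$ID" in
    alpine) echo "Alpine Linux: use apk for packages" ;;
    debian) echo "Debian Linux: use apt-get for packages" ;;
    *) echo "Other distribution: $ID" ;;
esac
```

The same pattern is used inside the Dockerfile `RUN` heredocs shown later on this page.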
Alpine differs from Debian in some important ways, including:

- Shell (Alpine does not use Bash by default)
- Packages (Alpine uses `apk` as its package manager, and does not include some common packages like `curl` by default)
- C standard library (Alpine uses `musl` instead of `glibc`)
The different C standard library is of particular note for Python packages, because binary package distributions may not be available for Alpine Linux. To work with these packages, their build dependencies must be installed, then the packages must be built from source. Users will typically then delete the build dependencies to keep the final Docker image size small.
The basic build dependencies used by inboard include `gcc`, `libc-dev`, and `make`. These may not be adequate to build all packages. For example, to install `psycopg`, it may be necessary to add more build dependencies, build the package, optionally delete the build dependencies, and then include its `libpq` runtime dependency in the final image. A set of build dependencies for this scenario might look like the following:
Example Alpine Linux Dockerfile for PostgreSQL project:

```dockerfile
# syntax=docker/dockerfile:1
ARG INBOARD_DOCKER_TAG=fastapi-alpine
FROM ghcr.io/br3ndonland/inboard:${INBOARD_DOCKER_TAG}
ENV APP_MODULE=mypackage.main:app
COPY pyproject.toml README.md /app/
WORKDIR /app
RUN <<HEREDOC
. /etc/os-release
if [ "$ID" = "alpine" ]; then
  apk add --no-cache --virtual .build-project \
    build-base freetype-dev gcc libc-dev libpng-dev make openblas-dev postgresql-dev
fi
hatch env create production
if [ "$ID" = "alpine" ]; then
  apk del .build-project
  apk add --no-cache libpq
fi
HEREDOC
COPY mypackage /app/mypackage
```
Alpine Linux virtual packages

Adding `--virtual .build-project` creates a "virtual package" named `.build-project` that groups the rest of the dependencies listed. All of the dependencies can then be deleted as a set by simply referencing the name of the virtual package, like `apk del .build-project`.
Python packages with Rust extensions on Alpine Linux

As described above, Python packages can have C extensions. In addition, an increasing number of packages also feature Rust extensions. Building Python packages with Rust extensions will typically require installation of Rust and Cargo (`apk add --no-cache rust cargo`), as well as installation of a Python plugin like `maturin` or `setuptools-rust` (`python3 -m pip install --no-cache-dir setuptools-rust`). Remember to uninstall afterward (`python3 -m pip uninstall -y setuptools-rust`). The installed `rust` package should be retained.
In addition to build dependencies, Rust also has runtime dependencies, which are satisfied by the `rust` package installed with `apk`. The addition of the Rust runtime dependencies bloats Docker image sizes, and may make it impractical to work with Python packages that have Rust extensions on Alpine Linux. For related discussion, see rust-lang/rust#88221 and rust-lang/rustup#2213.

The good news: Python now supports binary package distributions built for `musl`-based Linux distributions like Alpine Linux. See PEP 656 and `cibuildwheel` for details.
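The Rust build steps described above could be consolidated into a Dockerfile `RUN` heredoc like this. This is an illustrative sketch only; `some-rust-extension-package` is a placeholder, not a real package name:

```dockerfile
# Hypothetical sketch: building a Python package with Rust extensions on Alpine
FROM ghcr.io/br3ndonland/inboard:fastapi-alpine
RUN <<HEREDOC
# install Rust and Cargo (the rust package is also a runtime dependency)
apk add --no-cache rust cargo
# install the build plugin
python3 -m pip install --no-cache-dir setuptools-rust
# build and install the package from source
python3 -m pip install --no-cache-dir some-rust-extension-package
# remove the build-only plugin, retaining the rust runtime package
python3 -m pip uninstall -y setuptools-rust
HEREDOC
```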
### Debian slim

The official Python Docker image provides "slim" variants of the Debian base images. These images are built on Debian, but have the build dependencies removed after Python is installed. As with Alpine Linux, there are some caveats:

- Commonly-used packages are removed, requiring reinstallation in downstream images.
- The overall number of security vulnerabilities will be reduced as compared to the Debian base images, but vulnerabilities inherent to Debian will still remain.
- If `/etc/os-release` is sourced, the `$ID` will still be `debian`, so custom environment variables or other methods must be used to identify images as "slim" variants.
A Dockerfile equivalent to the Alpine Linux example might look like the following:

Example Debian Linux slim Dockerfile for PostgreSQL project:

```dockerfile
# syntax=docker/dockerfile:1
ARG INBOARD_DOCKER_TAG=fastapi-slim
FROM ghcr.io/br3ndonland/inboard:${INBOARD_DOCKER_TAG}
ENV APP_MODULE=mypackage.main:app
COPY pyproject.toml README.md /app/
WORKDIR /app
ARG INBOARD_DOCKER_TAG
RUN <<HEREDOC
. /etc/os-release
if [ "$ID" = "debian" ] && echo "$INBOARD_DOCKER_TAG" | grep -q "slim"; then
  apt-get update -qy
  apt-get install -qy --no-install-recommends \
    gcc libc-dev make wget
fi
hatch env create production
if [ "$ID" = "debian" ] && echo "$INBOARD_DOCKER_TAG" | grep -q "slim"; then
  apt-get purge --auto-remove -qy \
    gcc libc-dev make wget
fi
HEREDOC
COPY mypackage /app/mypackage
```
Redeclaring Docker build arguments

Why is `ARG INBOARD_DOCKER_TAG` repeated in the example above? To understand this, it is necessary to understand how `ARG` and `FROM` interact. Any `ARG` declared before `FROM` is outside the build stage, so it is only usable in `FROM` lines. In order to use such an argument again inside the build stage, it must be redeclared.
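A minimal sketch of the redeclaration pattern (the image and tag here are illustrative, not part of inboard):

```dockerfile
# Declared before FROM: usable only in FROM lines
ARG TAG=3.19
FROM alpine:${TAG}
# Redeclared inside the build stage; without this line, ${TAG} below would be empty
ARG TAG
RUN echo "Built from alpine:${TAG}"
```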
Here-documents in Dockerfiles

The `RUN` commands in the Dockerfiles above use a special syntax called a here-document, or "heredoc". This syntax allows multiple lines of text to be passed into a shell command, enabling Dockerfile `RUN` commands to be written like shell scripts, instead of having to jam commands into long run-on lines. Heredoc support was added to Dockerfiles in the 1.4.0 release of the Dockerfile syntax.
For more info, see: