In a previous post, I wrote about how to use uv to manage a python project's dependencies. In this post, we extend that setup to use uv in docker image builds and CI/CD pipelines.
First, I modified the Dockerfile to use a multi-stage build consisting of a builder stage and a subsequent deployment stage. This is the recommended approach for producing smaller images: we create a virtualenv and install the dependencies in a temporary builder image, then copy only the resulting artifacts into the deployment stage. The builder stage uses the official uv image to install the dependencies.
The Dockerfile for the builder stage is shown below:
FROM ghcr.io/astral-sh/uv:python3.13-trixie-slim AS builder
SHELL [ "/bin/bash", "-c" ]
# SET ENV
ENV SHELL=/bin/bash
ENV LIBRARY_PATH=/lib:/usr/lib
ENV UV_PYTHON_INSTALL_DIR=/opt/uv/python
# Enable bytecode compilation
ENV UV_COMPILE_BYTECODE=1
ENV UV_LINK_MODE=copy
ENV UV_PYTHON_DOWNLOADS=0
WORKDIR /app
RUN --mount=type=cache,target=/root/.cache/uv \
--mount=type=bind,source=uv.lock,target=uv.lock \
--mount=type=bind,source=pyproject.toml,target=pyproject.toml \
uv sync --locked --no-dev --no-install-project
COPY . .
RUN --mount=type=cache,target=/root/.cache/uv \
uv sync --locked --no-dev
We use the debian trixie slim variant of the uv image with python 3.13 preinstalled as the base image. We set the default shell to bash and define the following environment variables:
- UV_LINK_MODE=copy to silence warnings about hardlinking from the cache mount.
- UV_COMPILE_BYTECODE=1 to compile python modules to bytecode at install time, which speeds up application startup.
- UV_PYTHON_INSTALL_DIR to install python into a fixed location so it can be referenced in the subsequent stage.
- UV_PYTHON_DOWNLOADS=0 to skip downloading python and use the interpreter preinstalled in the base image.
Next, we create two types of mounts in the RUN instruction: bind mounts for the uv.lock and pyproject.toml files, which are needed to install the dependencies, and a cache mount for the uv cache, which is reused across builds to speed them up. We install the dependencies using uv sync --locked --no-dev --no-install-project. The --no-install-project flag installs only the dependencies and not the project itself, so this layer is invalidated only when the lock file or pyproject.toml changes rather than on every source change, which is what makes the multi-stage, layer-cached build effective. By default, uv creates the virtualenv in .venv inside the project directory when calling uv sync.
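Note that --mount cache and bind mounts are BuildKit features, so the image has to be built with BuildKit, which is the default builder in current Docker releases. Building the image then looks like any other build (flask-uv-app is just an example tag):
docker build -t flask-uv-app .
On older Docker versions, prefix the command with DOCKER_BUILDKIT=1 to enable BuildKit explicitly.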
Next, we copy the remainder of the project files across and call uv sync --locked --no-dev to install the project.
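Because COPY . . pulls in the entire build context, it is worth adding a .dockerignore file so local artifacts such as a host-side .venv or bytecode caches never reach the image. A minimal sketch (the exact entries will vary per project):
.venv/
__pycache__/
*.pyc
.git/
Keeping the host .venv out matters because the container's virtualenv is created by uv sync inside the image.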
The deployment stage copies the artifacts from the builder stage and sets up the rest of the application:
FROM python:3.13-slim-trixie AS deployment
SHELL [ "/bin/bash", "-c" ]
ENV SHELL=/bin/bash
ENV PYTHONUNBUFFERED=1
ENV FLASK_APP=application.py
ENV LIBRARY_PATH=/lib:/usr/lib
ENV VIRTUAL_ENV=/app/.venv
ENV PATH="/app/.venv/bin:$PATH"
# Creates non-root user
RUN groupadd --system --gid 1000 appuser \
&& useradd --system -ms /bin/bash --gid 1000 --uid 1000 --create-home appuser \
&& usermod -aG sudo appuser \
&& mkdir -p /app && mkdir -p /app/logs \
&& chown -R appuser:appuser /app \
&& chown -R appuser:appuser /app/logs
WORKDIR /app
# Copy files from builder
COPY --from=builder --chown=appuser:appuser /app /app
EXPOSE 5000
USER appuser
ENTRYPOINT ["/app/boot.sh"]We use the same base image as the builder stage. We set the default shell to be bash and set the following environment variables:
- PYTHONUNBUFFERED=1 so output is flushed straight to STDOUT without buffering
- VIRTUAL_ENV to point to the virtualenv copied from the builder stage
- PATH updated to include the virtualenv's bin directory so its interpreter and scripts are found first (a quick check is shown below)
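Because the virtualenv's bin directory comes first on PATH, python inside the container resolves to the interpreter in /app/.venv. One way to verify this once the image is built (the flask-uv-app tag is just an example name):
docker run --rm --entrypoint python flask-uv-app -c "import sys; print(sys.executable)"
This should print /app/.venv/bin/python.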
We create a non-root user (uid/gid 1000) to run the application and create the application and log directories with ownership scoped to that user. The COPY --from=builder instruction copies the application artifacts from the builder stage, with --chown changing the owner to the non-root user.
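The image's entrypoint is /app/boot.sh, which is not shown in this post. As a rough sketch, assuming gunicorn is declared as a project dependency and Flask-Migrate manages the migrations directory, it could look something like this:
#!/bin/bash
# Hypothetical entrypoint sketch; adapt to your application.
# The virtualenv is already on PATH via the ENV settings above.
set -e
# Apply database migrations before serving (assumes Flask-Migrate).
flask db upgrade
# Hand over to gunicorn on the exposed port.
exec gunicorn --bind 0.0.0.0:5000 --access-logfile - --error-logfile - application:app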
Below is the complete Dockerfile for a sample Flask application using uv:
FROM ghcr.io/astral-sh/uv:python3.13-trixie-slim AS builder
SHELL [ "/bin/bash", "-c" ]
# SET ENV
ENV SHELL=/bin/bash
ENV LIBRARY_PATH=/lib:/usr/lib
ENV UV_PYTHON_INSTALL_DIR=/opt/uv/python
# Enable bytecode compilation
ENV UV_COMPILE_BYTECODE=1
ENV UV_LINK_MODE=copy
ENV UV_PYTHON_DOWNLOADS=0
WORKDIR /app
RUN --mount=type=cache,target=/root/.cache/uv \
--mount=type=bind,source=uv.lock,target=uv.lock \
--mount=type=bind,source=pyproject.toml,target=pyproject.toml \
uv sync --locked --no-dev --no-install-project
COPY . .
RUN --mount=type=cache,target=/root/.cache/uv \
uv sync --locked --no-dev
FROM python:3.13-slim-trixie AS deployment
SHELL [ "/bin/bash", "-c" ]
ENV SHELL=/bin/bash
ENV PYTHONUNBUFFERED=1
ENV FLASK_APP=application.py
ENV LIBRARY_PATH=/lib:/usr/lib
ENV VIRTUAL_ENV=/app/.venv
ENV PATH="/app/.venv/bin:$PATH"
# Creates non-root user
RUN groupadd --system --gid 1000 appuser \
&& useradd --system -ms /bin/bash --gid 1000 --uid 1000 --create-home appuser \
&& usermod -aG sudo appuser \
&& mkdir -p /app && mkdir -p /app/logs \
&& chown -R appuser:appuser /app \
&& chown -R appuser:appuser /app/logs
WORKDIR /app
# Copy files from builder
COPY --from=builder --chown=appuser:appuser /app /app
EXPOSE 5000
USER appuser
ENTRYPOINT ["/app/boot.sh"]To support running the application in docker compose locally, we need to register a new watch on changes to uv.lock and pyproject.toml which would trigger a container rebuild. We also need to ignore the .venv directory:
...
develop:
watch:
- action: sync
path: ./board
target: /app/board
ignore:
- __pycache__
- .venv/
- action: rebuild
path: uv.lock
- action: rebuild
path: pyproject.toml
- action: rebuild
path: ./migrations
Within the GitHub Actions workflow, we can use the astral-sh/setup-uv@v7 action to install a specific version of uv and use it in subsequent steps:
- name: Install uv
uses: astral-sh/setup-uv@v7
with:
# Install a specific version of uv.
version: "0.9.8"
- name: Setup python
uses: actions/setup-python@v5
with:
python-version-file: ".python-version"
- name: Install project
run: |
uv sync --locked
The official uv docker integration guide provides more information, along with a uv docker example project for reference.
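For reference, here is a sketch of how these steps might sit in a complete workflow; the trigger, job name, and the final pytest step (which assumes pytest is declared as a dev dependency) are assumptions and will differ per project:
name: CI
on: [push, pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Install uv
        uses: astral-sh/setup-uv@v7
        with:
          version: "0.9.8"
      - name: Setup python
        uses: actions/setup-python@v5
        with:
          python-version-file: ".python-version"
      - name: Install project
        run: uv sync --locked
      - name: Run tests
        run: uv run pytest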
With this setup, I was able to use uv consistently in both development and deployment.