Solving the FastAPI, Alembic, Docker Problem

Written by sirstuffy | Published 2025/12/05
Tech Story Tags: fastapi | alembic-migrations | sqlalchemy | docker-compose | python-tutorials | python | sqlachemy | docker-problem

TL;DR: Dockerize your FastAPI application that uses Alembic and SQLAlchemy, and solve the permission issue between the container and the host for the migrations.

I was working on a FastAPI application and wanted to dockerize it for development: the usual Dockerfile, Docker Compose and the lot.

It was meant to be a straightforward process until I ran my Alembic migration command and realized the generated migration version did not show up in my local directory. The natural solution is to use a bind mount, which, once more, should have been an easy fix, until it wasn't.

I checked out different solutions and found one that proposed using a bind mount but giving up Compose's watch feature. I'm just too stubborn to accept that solution. I wanted to have my cake and eat it! I wanted the versions on my host after they are generated in the container, and I also wanted the "watch" feature from Compose. The major problem was permission issues between the host and the container.

This write-up is my fix and should also work for folks using Flask. The brilliant uv (which I highly recommend) is used for package and dependency management. Django users can check this out. Enough story, let's set up.

Dockerfile

# 1. Use Python 3.13 bookworm as the base
FROM ghcr.io/astral-sh/uv:python3.13-bookworm

# 2. Set the working directory
WORKDIR /app

# 3. Set environment variables for UV
# UV_COMPILE_BYTECODE: Compiles python files to .pyc for faster startup
# UV_LINK_MODE: copy (safer for docker layers than hardlinks)
ENV UV_COMPILE_BYTECODE=1
ENV UV_LINK_MODE=copy

# 4. Create a non-root user for security
RUN useradd -u 1000 app

# 5. Set HOME explicitly to /app
# This prevents the "/nonexistent/.cache/uv" error by telling tools 
# to look for their config/cache in /app instead of /nonexistent
ENV HOME=/app

# 6. Ensures the /app directory (and everything inside) is owned by the app user
# We run this BEFORE switching users
RUN chown -R app:app /app

# 7. Copy pyproject.toml and uv.lock with correct ownership
# changing ownership during COPY is more efficient than running chown later
COPY --chown=app:app pyproject.toml uv.lock ./

# 8. Switch to the non-root user BEFORE installing dependencies
USER app

# 9. Install dependencies
# Since we are now the 'app' user, the .venv will be owned by 'app'
# and the cache will be written to /app/.cache/uv.
RUN uv sync --frozen --no-install-project --no-dev


# 10. Copy the rest of the application code
COPY --chown=app:app . .
RUN uv sync --frozen --no-dev

# 11. Expose the port
EXPOSE 8000


# 12. Command to run the application
# For production: CMD ["uv", "run", "fastapi", "run", "main.py" ...
# This assumes your main.py is in root, I usually have mine in src/main.py
# so change it accordingly
CMD ["uv", "run", "fastapi", "dev", "main.py", "--host", "0.0.0.0", "--port", "8000"]

I will explain some of my choices here; the comments should suffice for the rest.

In (4.) above, we create a user and assign it a UID (user ID) of 1000. When files are created, they are assigned a UID and a GID (group ID). We specifically prefer 1000 because many Linux distributions assign that value to the first regular user created on a system, so matching it aligns the container UID with the host UID for easier file-permission mapping. Feel free to switch things if this is not your use case, since you now understand the rationale.
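If you want to sanity-check that mapping, here is a quick sketch using Python's standard library (the migrations path is simply the bind-mounted directory that appears later in this article):

import os
import pwd

# UID of the current process: typically 1000 for the first regular user on the
# host, and 1000 inside the container because of `useradd -u 1000 app`.
uid = os.getuid()
print(f"current UID: {uid} ({pwd.getpwuid(uid).pw_name})")

# Ownership of a bind-mounted path, e.g. the migrations directory.
info = os.stat("migrations")
print(f"migrations owned by UID {info.st_uid}, GID {info.st_gid}")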

In (5.) above, I am using uv for package and project management on my host, and it just makes sense to continue that in the container. When you install a dependency with uv, it keeps a global cache of that dependency (i.e. in a system directory), which pays off when you re-install it or use it in another project. The cache is usually stored in $HOME/.cache/uv, so by setting HOME to /app we make sure it ends up in /app/.cache/uv. If we don't do that, uv tries to use /nonexistent/.cache/uv, which is either unwritable or absent and causes errors. You can also choose not to keep a cache at all, which greatly reduces your final image size, especially for production; for development, I choose to keep it. You can use the UV_NO_CACHE environment variable for that.
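To make the path logic concrete, a tiny illustration of where that cache directory ends up depending on HOME (this just mirrors the $HOME/.cache/uv convention described above; it is not uv's API):

import os

# With HOME=/app (as set in the Dockerfile), the cache lands at /app/.cache/uv.
# Without it, HOME may resolve to /nonexistent for the `app` user, hence the error.
home = os.environ.get("HOME", "/nonexistent")
print(os.path.join(home, ".cache", "uv"))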

Let us also look at (9.) above. Continuing from (5.), you could also include the --no-cache flag here.

--frozen: installs exactly what is in uv.lock, ensuring consistent installs across environments.
--no-install-project: avoids installing the project itself here, for optimal layer caching. By separating the dependency install from the project install, a change to your source code does not invalidate this layer; only the project install layer in (10.) is redone, making your image rebuilds even faster.
--no-dev: avoids installing development dependencies.

Lovely! Congratulations if you made it through that. It gets easier from here!

Docker Compose

Let's write the docker-compose.yaml (yes, the Docker docs prefer .yaml over .yml). The aim is to build our FastAPI application and connect it to a Postgres database. Let us write it and explain a few things:

services:

  db:
    image: postgres
    env_file:
      - .env
    ports:
      - "5432:5432"
    volumes:
      - postgres_data:/var/lib/postgresql
    restart: always
  api:
    build: .
    env_file:
      - .env

    depends_on:
      - db
  
    ports:
      - "8000:8000"

    develop:
      watch:
        - path: ./src
          action: sync
          target: /app/src
      
        - path: ./pyproject.toml
          action: rebuild
    working_dir: /app
    volumes:
      - ./migrations:/app/migrations:z
      - ./migrations/versions:/app/migrations/versions:z



volumes:
  postgres_data:

I love my Docker Compose file looking neat and simple, and I typically avoid unnecessary variables. For example, I simply cannot wrap my head around why people choose to use environment and create a long list of environment variables in the YAML file, excessively lengthening it. Having a .env file and just pointing env_file at it works a treat. You should follow these best practices for a smooth output.

Pro tip: Just use os.environ to read them in your Python file.
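For instance, a minimal sketch of reading settings straight from the environment (the variable names here are only examples; match them to whatever your .env defines):

import os

# Compose injects everything from .env into the container's environment,
# so plain os.environ is enough, no extra loader needed.
DATABASE_URL = os.environ["DATABASE_URL"]            # raises KeyError if missing
DEBUG = os.environ.get("DEBUG", "false") == "true"   # optional, with a default

print(f"Connecting to {DATABASE_URL} (debug={DEBUG})")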

Pro tip #2: You can view the resolved variables with docker compose config.

The key part to explain, and largely the reason I wrote this, is the "api.volumes" section. We create a bind mount separately for the migrations directory and for versions under migrations. The important bit is in fact the "z" that follows. Without it, we keep getting permission errors. What does it do? In our context, it tells Docker to relabel the SELinux security context of the host path so container processes can access it. Thus, I am able to use the bind mount for my migrations and see my versions on my host each time I run a migration command in the container, while still using the watch feature of Compose. Finally, we eat our cake and still have it!

PS: For users on macOS or Windows (both common Docker development environments), this flag may be unnecessary and will simply be ignored. You can test that out!

.dockerignore

__pycache__/
.venv
.ruff_cache
./src/email_templates/
.env

Finally...

I suggest creating a directory for your Alembic migrations, say "migrations", with a sub-directory "versions", and running your alembic init there. Do not forget to change the script_location variable in your alembic.ini, as well as edit the appropriate part of your env.py.
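For reference, here is a minimal sketch of the env.py part I mean, assuming your SQLAlchemy Base lives in a hypothetical src.models module and your .env provides DATABASE_URL:

# migrations/env.py (excerpt) -- only the bits that usually need editing.
import os

from alembic import context

from src.models import Base  # hypothetical import; point this at your own Base

config = context.config

# Pull the database URL from the environment instead of hard-coding it in alembic.ini.
config.set_main_option("sqlalchemy.url", os.environ["DATABASE_URL"])

# Autogenerate compares this metadata against the live database.
target_metadata = Base.metadata

In alembic.ini, script_location should then point at the migrations directory. With that in place, bring everything up with Compose's watch feature: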

docker compose up --watch

In a second terminal,

docker compose exec api uv run alembic revision --autogenerate -m "Initial"
docker compose exec api uv run alembic upgrade head
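For the autogenerate step to actually produce a revision, target_metadata in env.py needs to know about your models. A minimal, hypothetical model (names are purely illustrative) might look like this:

# src/models.py -- a hypothetical minimal model so autogenerate has something to detect.
from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column


class Base(DeclarativeBase):
    """Shared declarative base; env.py imports Base.metadata from here."""


class User(Base):
    __tablename__ = "users"

    id: Mapped[int] = mapped_column(primary_key=True)
    email: Mapped[str] = mapped_column(unique=True, index=True)

After the upgrade, the generated revision file should appear under migrations/versions on your host, not just inside the container.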

Conclusion

If you completed this, I hope you have learnt a thing or two. A lot of this is my personal opinion and solution, so you may find yourself not agreeing with some of it, and that's okay. Feel free to send me a message on LinkedIn. Thanks!!


Written by sirstuffy | A Dentist and Software Developer. I enjoy writing code in Python, Golang and TypeScript.
Published by HackerNoon on 2025/12/05