One of the neat features of the Docker-Compose file syntax is that it allows for variable substitution, where you can put in a placeholder that will get filled in from your .env file. This is both useful and often necessary: you need to keep your docker-compose.yml checked into Git / version control, but you don't want to store passwords directly in the code. Variable substitution lets you do exactly that:
docker-compose.yml:
version: '3.1'
services:
  db:
    image: mysql:5.7
    environment:
      MYSQL_PASSWORD: ${PASSWORD}
.env (not checked into version control):
PASSWORD='abc123def456'
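A quick way to sanity-check that the substitution is actually happening is docker-compose config, which prints the resolved file with variables filled in:
# From the directory containing docker-compose.yml and .env
# the output should show MYSQL_PASSWORD with the real value substituted in
docker-compose config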
Problem: Docker-Compose does not read .env files outside of the working directory
Here is the issue: all of the above works fine, except if your .env file lives outside of the directory where the docker-compose.yml file resides. For example, let's say our scenario is that we have a .env file in the project root that we want to share with multiple compose files in subdirectories, like this:
Project Root:
    .env                     (contains variables)
    .gitignore
    /backend
        docker-compose.yml   (consumes variables via substitution)
    /frontend
        docker-compose.yml   (consumes variables via substitution)
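To make the failure concrete: if you cd into backend and run docker-compose up at this point, the substitution silently falls back to an empty value, and Compose prints a warning along these lines:
WARNING: The PASSWORD variable is not set. Defaulting to a blank string.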
Now, ignoring that there might be ways to optimize our Docker setup here and consolidate, how can we get both docker-compose files to use the top-level .env file?
Best Option (if it works for you) – Official CLI Arguments
This issue / feature has been requested a lot by users of Docker, and it looks like the Docker dev team has taken a few passes at implementing the ability to use variable substitution with an env file outside of the working directory.
--project-directory
One way that this is now supposed to be possible is through the --project-directory argument to docker-compose. The way it is supposed to work is that you can use it to "specify an alternate working directory" by passing in a PATH string (CLI ref page here). For example, if I wanted to start the backend while using the parent project directory, theoretically this should work:
cd backend
docker-compose --project-directory ./../ up
However, you can find many instances (1, 2, 3) where users are saying this straight up does not work, and I would have to agree from my personal experience. And the official response is essentially “yes, this is usually broken”, despite the argument staying in the docs since Compose 3.2 (PR here).
--env-file
Another option that is supposed to be viable for this purpose is the --env-file argument. However, like the env_file option, this appears to only work for passing environment values directly to the container, and does not work for docker-compose variable substitution – in fact, it appears to only work with the run commands, which would support this hypothesis. This is despite the fact that an issue requesting that --env-file be usable with the compose commands was closed – you can see comments on the relevant pull request that show it still not working (this is my experience as well). If you can get this to work, it would probably be the cleanest solution to this issue.
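For reference, here is what the invocation would look like if --env-file does handle substitution in your version of Compose (worth a quick test before reaching for the workarounds below):
cd backend
docker-compose --env-file ../.env up -d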
Workarounds
Since neither of the above "official" approaches actually works for me, or for many other users, I want to cover some common workarounds that do work – albeit not the most optimal solutions.
Execute commands where the .env is located
Rather than running docker-compose where the YML file is located and getting Docker to pull in a .env file from a different directory, you can actually do the reverse: execute the command where the .env file is located, and tell Docker the location of a compose file in a different directory. We can do this with the -f or --file option.
Going back to our original example, here is the file structure:
.env                     (contains variables)
.gitignore
/backend
    docker-compose.yml   (consumes variables)
/frontend
    docker-compose.yml   (consumes variables)
And here is how we could start up both Docker configs:
# In the project root dir
docker-compose -f ./backend/docker-compose.yml up -d
docker-compose -f ./frontend/docker-compose.yml up -d
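One thing to keep in mind with this approach: every follow-up command needs the same -f flag so Compose finds the right file, while you keep running it from the directory that holds the .env. For example, to tear the stacks back down:
# Still in the project root dir
docker-compose -f ./backend/docker-compose.yml down
docker-compose -f ./frontend/docker-compose.yml down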
Symlink the .env file in
This is a really common solution to this issue, and a good trick to remember in general when a program requires that files be in the same directory. Instead of copying the file to the directory as part of a build script, we can create a symbolic link, which we only have to create once, and which will make Docker think that the .env file really is in the same folder as the compose file.
For our example scenario, I just have to create the two symlinks in a one-time process (Bash example):
ln -s ../.env backend/.env
ln -s ../.env frontend/.env
Although version control, like Git, can actually handle storing symlinks, if you are not comfortable with this, you could always have the symlinks generated with a one-time script.
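As a rough sketch, such a script could look something like this (the file name and the list of service directories are just placeholders from our example):
#!/usr/bin/env bash
# setup-env-links.sh – one-time helper to link the shared .env into each service dir
set -euo pipefail
for dir in backend frontend; do
  # -s: symbolic link, -f: replace an existing link if the script is re-run
  ln -sf ../.env "$dir/.env"
done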
Copy the env file around
If you are opposed to using symlinks, you could simply copy the top-level .env file around to the various spots using some standard shell scripting. I find this method a little messy and less optimal.
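A minimal sketch of what that could look like (again, the directory names are just from our example, and you have to re-run it whenever the .env changes):
# Copy the shared .env into each service directory
cp .env backend/.env
cp .env frontend/.env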
Simply passing env variable values from host to container
If you don’t care about variable substitution, and all you’re looking to do is simply pass a bunch of values through to the container, then most of this post is extra information that you don’t need. Passing environment values to the container itself is pretty simple, and has multiple easy-to-use options:
- Full details
- You can use the env_file option to pass an entire .env file's contents to the container
- Put the variable under environment: but leave off the value
- Pass via docker-compose run -e {VAR}={VAL} {imageName}
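To tie those together, here is a minimal sketch of how the first two options look in a compose file (the service is from our earlier example; the MYSQL_USER variable is just for illustration):
version: '3.1'
services:
  db:
    image: mysql:5.7
    # Option 1: hand the entire .env file's contents to the container
    env_file:
      - .env
    environment:
      # Option 2: no value given, so MYSQL_USER is passed through from the host environment
      - MYSQL_USER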