Jul 31st, 2019 - written by Kimserey
Docker images package all the necessary pieces to run an application, the operating system, the application runtime, the environment and the application binaries, into a reusable snapshot. To create a Docker image, we use a Dockerfile which specifies the instructions to build the image. Docker images can be based on other Docker images, which makes them reusable and allows us to delegate the tedious setup of an operating system with an application runtime to others. Today we will explore the composition of a Dockerfile and look into examples in order to get an understanding of the set of instructions at our disposal to build a Docker image.
Let’s take the example of building a Python Flask application. We start from an empty folder where we create a virtual environment:
```shell
py -m venv venv
```
We can install Flask under our virtual environment:
```shell
py -m pip install flask
```
We create a simple hello world application under the app folder:
1 2 3 4 5 6 7 8 9 10 11 12 """Hello World example""" from flask import Flask app = Flask(__name__) @app.route("/") def hello(): return "Hello World" if __name__ == "main": # For debugging app.run(host="0.0.0.0", debug=True, port=80)
We then create the Dockerfile:
```dockerfile
FROM tiangolo/uwsgi-nginx-flask:python3.7

COPY ./app /app
```
The first line `FROM [...]` creates a stage using the `tiangolo/uwsgi-nginx-flask` image tagged `python3.7`. Public images like `tiangolo/uwsgi-nginx-flask` are downloaded from Docker Hub at hub.docker.com. The next instruction, `COPY`, copies the `./app` folder from the build context into the image.
The build context is the folder specified by the `docker build` command:
```shell
docker build -t hello-world ./
```
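Note that everything under the build context folder is sent to the Docker daemon before the build starts, so it is worth excluding files the image does not need. Although not required, a `.dockerignore` file placed at the root of the context does exactly that; a minimal sketch for this project (the entries are assumptions based on a typical Python setup):

```
venv/
__pycache__/
*.pyc
```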
And that’s it, the image will build and be tagged `hello-world`. To run a container we can use the tag:
```shell
docker run -d --name [my-container] hello-world
```
`-d` is used to run the container detached and `--name` names the container for easier interaction.
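Since the container is named, the usual lifecycle commands can target it directly; for example (a sketch, assuming the container was named `my-container` and a Docker daemon is running):

```shell
# follow the container logs
docker logs -f my-container

# stop and remove the container
docker stop my-container
docker rm my-container
```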
As part of the image build, we can have multiple stages, where each stage can be named using `FROM [...] AS [name]`.
For example, a Dockerfile for a dotnet core application created from the default ASP.NET Core template would be:
```dockerfile
FROM mcr.microsoft.com/dotnet/core/sdk:2.2-stretch AS build
WORKDIR /src
COPY HelloWorld/HelloWorld.csproj HelloWorld/
RUN dotnet restore HelloWorld/HelloWorld.csproj
COPY . .
WORKDIR /src/HelloWorld
RUN dotnet publish HelloWorld.csproj -c Release -o /app

FROM mcr.microsoft.com/dotnet/core/aspnet:2.2-stretch-slim
WORKDIR /app
COPY --from=build /app .
ENTRYPOINT ["dotnet", "HelloWorld.dll"]
```
In contrast to Python, dotnet code needs to be compiled, which requires the dotnet SDK, while running the application only requires the ASP.NET Core runtime. In that case we make two stages: the first one, dedicated to building the application, bases its image on `mcr.microsoft.com/dotnet/core/sdk`, and the second stage is based on the ASP.NET Core runtime image `mcr.microsoft.com/dotnet/core/aspnet`. For Microsoft images, the image URI specifies the Microsoft Container Registry `mcr.microsoft.com` instead of the default Docker Hub.
In the first stage, we use `WORKDIR` to set the working directory within the image so that relative paths like `HelloWorld/` will resolve.
Similarly to the Python Dockerfile, we use `COPY` to copy files from the build context into the image. As a first step, we copy the csproj, for which we can then restore packages. As a second step, we publish the application into a folder.
In both cases, the `RUN` instruction is used to run a command as part of the build of the image.
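Each `RUN` executes in a new layer on top of the previous one, which is why related commands are often chained into a single `RUN`; a sketch (the package is a placeholder):

```dockerfile
# one layer: update, install, and clean up together so the
# intermediate apt cache never ends up in the final image
RUN apt-get update \
    && apt-get install -y --no-install-recommends curl \
    && rm -rf /var/lib/apt/lists/*
```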
In the second stage, we start from the `dotnet/core/aspnet` runtime image, set the working directory to `/app`, and use `COPY --from=[stage name] [folder from stage] .` to copy from the first stage, `build`, the content of the `/app` folder which, as we just saw, contains the published application.
`--from` is the keyword used to target the copy from a specific stage; it can even be used to copy from an external image, for example:
```dockerfile
COPY --from=nginx:latest /etc/nginx/nginx.conf /nginx.conf
```
At this point, we have all the binaries necessary to run the application in the `/app` folder and we can then define our entrypoint with `ENTRYPOINT`. When the container runs, it will execute `dotnet HelloWorld.dll`. We can then build the image:
```shell
docker build -f ./HelloWorld/HelloWorld/Dockerfile -t dotnet-hello-world ./
```
`-f` is used to specify the path to the Dockerfile, as in dotnet it is common to have a solution folder containing a project folder. We can then run the container:
```shell
docker run -p 5000:80 --name [my-container] dotnet-hello-world
```
You may have noticed that the Python Dockerfile does not define any `ENTRYPOINT`. We simply copied the `/app` folder and, when we run the container, the app starts as expected. The reason is that the base image, `tiangolo/uwsgi-nginx-flask:python3.7`, already defines an `ENTRYPOINT` and a `CMD`. We can find them in the implementation of the Dockerfile at https://github.com/tiangolo/uwsgi-nginx-flask-docker/blob/master/python3.7/Dockerfile#L30.
```dockerfile
ENTRYPOINT ["/entrypoint.sh"]
CMD ["/start.sh"]
```
Compared to the dotnet `ENTRYPOINT`, this construct separates the executable from its arguments using `CMD`. This allows us to override the arguments by passing them as part of the run command: `docker run -t [image] [command arguments]`. We could have done the same for our dotnet image:
```dockerfile
ENTRYPOINT [ "dotnet" ]
CMD [ "HelloWorld.dll" ]
```
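With the executable in `ENTRYPOINT` and the argument in `CMD`, the argument can then be swapped at run time by appending it after the image name; a sketch (`OtherApp.dll` is a hypothetical assembly):

```shell
# the trailing argument replaces CMD while ENTRYPOINT stays,
# so the container effectively runs: dotnet OtherApp.dll
docker run dotnet-hello-world OtherApp.dll
```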
But that would not have provided much benefit, as the goal of the image was only to execute `HelloWorld.dll`.
In contrast, for Python, we can leverage the entrypoint by not adding any in our own Dockerfile; the last defined `ENTRYPOINT` and last defined `CMD` will constitute the command run when the container starts. Or, if we wanted to, we could write a different `start.sh` script and provide it inside our own Dockerfile by simply specifying `CMD ["./my-own-start.sh"]`, and the resulting command will be `/entrypoint.sh ./my-own-start.sh`.
Another interesting point is that the `ENTRYPOINT` can be defined anywhere within the stage, but as a convention, it is good to define it last.
In the Dockerfile, we can also specify which port our container will expose. This can be done using `EXPOSE`. For example, in our dotnet Dockerfile we can specify the URL to be used by our application and the exposed port:
```dockerfile
FROM mcr.microsoft.com/dotnet/core/aspnet:2.2-stretch-slim

ENV ASPNETCORE_URLS=http://+:5000
EXPOSE 5000

WORKDIR /app
COPY --from=build /app .
ENTRYPOINT ["dotnet", "HelloWorld.dll"]
```
`ASPNETCORE_URLS` is the environment variable used to set the URL for Kestrel, while `EXPOSE` tells the image to expose port 5000. When we run the application, we can then use `-P` to map the exposed ports to our local ports.
```shell
docker run -P --name dotnet-hello-world dotnet-hello-world
```
`-P` instructs the host to scan all exposed ports and map them to an available port locally. We can then find the port by listing the containers:
```shell
$ docker container ls
CONTAINER ID   IMAGE                COMMAND                  CREATED          STATUS          PORTS                     NAMES
5f545d8361c3   dotnet-hello-world   "dotnet WebApplicati…"   12 seconds ago   Up 10 seconds   0.0.0.0:32769->5000/tcp   dotnet-hello-world
```
http://localhost:32769 will be forwarded to the container on port 5000.
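Instead of scanning the `docker container ls` output, the mapping can also be queried directly with `docker port`; a sketch (assuming the container above is running):

```shell
# list the port mappings of the named container
docker port dotnet-hello-world
```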
If we exposed more ports, for example:
```dockerfile
EXPOSE 5000
EXPOSE 5100
EXPOSE 5200
```
We would have had the following ports randomly allocated:
```
0.0.0.0:32772->5000/tcp, 0.0.0.0:32771->5100/tcp, 0.0.0.0:32770->5200/tcp
```
Publishing ports from the Dockerfile is not mandatory; we could remove the exposed port:
```dockerfile
FROM mcr.microsoft.com/dotnet/core/aspnet:2.2-stretch-slim

ENV ASPNETCORE_URLS=http://+:5000

WORKDIR /app
COPY --from=build /app .
ENTRYPOINT ["dotnet", "HelloWorld.dll"]
```
The `docker run` command allows us to override any port by directly specifying the port mapping from the command line:
```shell
docker run -p 5500:5000 --name dotnet-hello-world dotnet-hello-world
```
`-p 5500:5000` specifies that the host port 5500 will map to the container port 5000.
In fact, in our earlier example, we specified neither `ENV ASPNETCORE_URLS=[...]` nor `EXPOSE`. That’s because in the base image, the ASP.NET Core URLs environment variable is already set to `http://+:80`, which is carried over to all child images. We only needed to specify the port mapping in the `docker run` command:
```shell
docker run -p 5000:80 --name dotnet-hello-world dotnet-hello-world
```
Or we could also have overridden the port by providing an environment variable:
```shell
docker run -p 5000:5000 -e ASPNETCORE_URLS="http://+:5000" --name dotnet-hello-world dotnet-hello-world
```
We saw how our image can be built based on another image. But if our image is based on another image, that other image might itself be based on yet another image. The question then is: what is the root image, where does it all start?
To answer this question we can trace back the image chain. For the dotnet image, we saw that we based our image on `dotnet/core/aspnet`. We can see from its Dockerfile that it is based on `dotnet/core/runtime-deps`, which installs the dotnet core runtime dependencies:
```dockerfile
ARG REPO=mcr.microsoft.com/dotnet/core/runtime-deps
FROM $REPO:2.2-stretch-slim
```
Looking at `dotnet/core/runtime-deps`, we can see that it is based on Alpine:
```dockerfile
FROM alpine:3.8
```
Alpine is a lightweight Linux distribution and, following its Docker image, we can see that it is based on scratch:
```dockerfile
FROM scratch
ADD alpine-minirootfs-3.10.1-x86_64.tar.gz /
CMD ["/bin/sh"]
```
Scratch is an explicit empty image which is used to build base images. So if we trace back the steps taken to build our dotnet core image: we start from `scratch`, add the Alpine mini root filesystem, `apk add` the necessary dependencies for the dotnet core runtime, and finally install the ASP.NET Core runtime.
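To make the bottom of the chain concrete, this is roughly what an image built directly on `scratch` looks like: a minimal sketch wrapping a statically linked binary (the `hello` binary is an assumption, built beforehand outside the image):

```dockerfile
FROM scratch
# the image contains nothing but the binary itself
COPY hello /hello
ENTRYPOINT ["/hello"]
```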
And that concludes today’s post on the exploration of Dockerfiles!
Today we looked at how we could create a Dockerfile used to build a Docker image. We started by looking at a simple Python example with a uWSGI Nginx Flask application, which, due to the way Python works, was very simple. We then moved on to see how a Dockerfile for a dotnet core application could be set up, which was more involved, and explored each instruction used. We then dived into the differences, mainly around setting up entrypoints, and we finally looked into how publishing ports can be done. I hope you liked this post and see you on the next one!