Using Docker Containers for Yocto Builds

We want to build a custom Linux image with Yocto for the Raspberry Pi 3 model B (BCM2837). The Linux image contains a very simple Internet radio application using Qt 5.11 and the eglfs graphics backend. Our colleagues shall be able to repeat the build easily – now, in three years and even in ten years.

I’ll explain why Docker is an excellent choice for building custom Linux images and give you a step-by-step guide on how to do it. At the end of the post, you will be able to run a simple Internet radio on a custom Linux image on a Raspberry Pi 3.

Why Docker?

Traditionally, we would sit down at our desktop computer, install the Linux distribution best suited at the moment, figure out all the steps needed for the Yocto build and document all these steps. Our colleagues would later sit down, repeat all the documented steps on their computer, adapt some steps to their slightly different environment and finally build the Linux image.

After three years of smooth operation, we become aware of a serious security flaw in a library. In these three years, we and our colleagues have updated our desktop Linux systems, replaced our desktop computers, or cleaned up our computers because we needed space for other projects. The old Yocto version used for the original build fails in many different ways on the newer Linux systems. After three weeks in limbo, we finally manage to perform the build with the original setup and fix the security flaw within half a day.

We were extremely lucky. Many teams fail in similar situations or spend even more time. Docker containers help us to avoid such catastrophes.

We define all the installation steps in the so-called Dockerfile, which is an executable version of the traditional setup document. We build the Docker image from the Dockerfile. The Docker image provides the same Linux environment that we set up for our very first Yocto build in the traditional approach. The Docker image also contains all the inputs – all the Yocto meta layers with their recipes – needed to build the Linux image for the target device.

We build the root file system for the target device by running bitbake cuteradio-image in the execution environment provided by the Docker image. Similarly, we can build an SDK. A Docker container is a running instance of a Docker image and provides an execution environment that is strictly defined in the Dockerfile.

How does Docker solve the problems of the traditional approach?

We can pass the Docker image to our colleagues instead of the setup document. They build the root file system or the SDK by running the Docker container with the respective bitbake commands. The Docker container ensures that the Yocto builds are performed in exactly the same execution environment by everyone. It doesn’t matter which Linux distribution and Linux version we use and whether we still have access to the meta layers. The build result is always the same.

The situation doesn’t change in three or five years, when we have newer Linux versions on our computers or when the build experts have left the team. We spin up the Docker container and build exactly the same Linux image that we built three or five years earlier.

Docker makes the maintenance of devices with long product lifetimes so much easier than with the traditional approach. Docker enables every developer to build the Linux image for the target device – not just the build guru.

Installing Docker

We’ll go through the steps to install Docker Community Edition (CE) on a computer running Ubuntu 16.04 LTS (xenial) natively. Installation information for other Linux distributions, Windows and macOS can be found in the section Get Docker > Docker CE of the official documentation.

We first make sure that no old Docker installations are hanging around in our Ubuntu system.

$ sudo apt-get remove docker docker-engine \
containerd runc

The apt-get command may complain that some or all of the old Docker packages are not installed. We can safely ignore these complaints.

Docker requires a few additional Linux packages to function properly.

$ sudo apt-get update
$ sudo apt-get install apt-transport-https ca-certificates curl \
gnupg-agent software-properties-common

We add Docker’s official GPG key.

$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg \
| sudo apt-key add -

We make Docker’s Linux package repository known to our computer. The command lsb_release -cs returns the codename of the Linux distribution, which yields xenial for Ubuntu 16.04.

$ sudo add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) stable"

We update the package index on our computer to see the Docker repository and install Docker CE.

$ sudo apt-get update
$ sudo apt-get -y install docker-ce

The last command installs the server and client of Docker CE, starts the server and creates a group docker. If we add our user name to the docker group, we don’t have to run each Docker command as root.

$ sudo usermod -a -G docker "$(whoami)"

We must log out and log in to activate our membership in the docker group.
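
Whether the group membership is already active in the current session can be checked with a small helper. This is just a convenience sketch, not part of the official Docker instructions:

```shell
#!/bin/sh
# Check whether a group name appears in the space-separated group list,
# as printed by `id -nG`.
in_group() {
    echo " $1 " | grep -q " $2 "
}

if in_group "$(id -nG)" docker; then
    echo "docker group active"
else
    echo "log out and back in to activate the docker group"
fi
```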

We test our Docker installation with the following command.

$ docker run --rm -ti ubuntu:latest /bin/bash
Unable to find image 'ubuntu:latest' locally
latest: Pulling from library/ubuntu
6cf436f81810: Pull complete
987088a85b96: Pull complete
b4624b3efe06: Pull complete
d42beb8ded59: Pull complete
Digest: sha256:7a47ccc3bbe8a451b500d2b53104868b46d60ee8f5b35a24b41a86077c650210
Status: Downloaded newer image for ubuntu:latest

The option --rm automatically removes the container and all its data on exit. The option -i keeps the standard input open, such that we can enter text at the command prompt. The option -t provides a terminal for the interaction with the container.

The command docker run tries to find the latest Ubuntu image locally on the host computer. As this fails, it pulls the latest Ubuntu image from Docker Hub, a repository for Docker images. Finally, it executes the Linux command /bin/bash in the context of the latest Ubuntu version ubuntu:latest, which is Ubuntu 18.10 at the time of this writing.

Thanks to the options -ti, this starts a fully functioning bash shell. We can execute the usual bash commands like ls, rm, cd and mkdir in the Docker container. We exit the Docker container by typing exit on the container’s command line.

If we execute bash --version at the command prompt of the Docker container, we get version 4.4.19. If we execute the same command on the host computer, we get an older version like 4.3.48. The different versions show that the container and the host run on different Ubuntu versions.

Although the Docker container and the host computer have different Linux images, they share the same Linux kernel. Running the command uname -r in the container and on the host computer yields the same result: 4.15.0-43-generic. The Docker container is just another process on the host computer. As a normal process, it can access the host file system directly.
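
We can verify the shared kernel ourselves. The sketch below prints the kernel release on the host and, if Docker is available in the current shell, inside a container as well:

```shell
#!/bin/sh
# Containers share the host kernel, so `uname -r` prints the same
# release inside the container and on the host.
host_kernel="$(uname -r)"
echo "host:      $host_kernel"
if command -v docker >/dev/null 2>&1; then
    container_kernel="$(docker run --rm ubuntu:latest uname -r)"
    echo "container: $container_kernel"
fi
```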

Defining the Docker Image in the Dockerfile

A Docker image is an execution environment, in which an application runs. In our case, the Yocto build is the application. The Docker image consists of a basic Ubuntu 16.04 image, Linux packages needed by the Yocto build with bitbake, a build user and the Yocto meta layers with all the recipes.

We define a Docker image in a Dockerfile. The Dockerfile contains all the installation steps to set up a computer for building a Linux image with Yocto. Traditionally, we write a document listing all these steps. Our fellow developers must work through this document step by step to bring their computer in the same state. The Dockerfile is an executable version of this document.

The complete Dockerfile can be found here. Let us go through it line by line.

The line

FROM ubuntu:16.04

installs the basic image of Ubuntu 16.04 LTS into our Docker image. The basic image does not contain all the packages needed for a Yocto build. We install the missing packages with the next line.

RUN apt-get update && apt-get -y install gawk wget git-core \
diffstat unzip texinfo gcc-multilib build-essential \
chrpath socat cpio python python3 python3-pip \
python3-pexpect xz-utils debianutils iputils-ping \
libsdl1.2-dev xterm tar locales

In Ubuntu, /bin/sh is a link to /bin/dash. The dash shell does not support the source command. However, we need the source command in the very last line of the Dockerfile. We replace dash by bash with

RUN rm /bin/sh && ln -s bash /bin/sh

The Yocto build fails if the Linux system does not have a UTF-8-capable locale configured. The next three lines catch up on this omission.

RUN locale-gen en_US.UTF-8 && update-locale LC_ALL=en_US.UTF-8 \
    LANG=en_US.UTF-8
ENV LANG en_US.UTF-8
The name of our example project is cuteradio. We use the same name for the build user and build group out of convenience. The next two lines set environment variables, which are used later in the Dockerfile.

ENV USER_NAME cuteradio
ENV PROJECT cuteradio

When we run the Docker image, it will write all the build artefacts to a host directory – outside the container. We don’t want to run the Yocto build as root but as the user $USER_NAME or cuteradio. The user cuteradio can only access the host directory if cuteradio's user ID is the same as the user ID of the host directory’s owner. The user name doesn’t matter. It can differ in the container and on the host computer. Only the user IDs must be equal.
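
A quick way to verify that the IDs line up is to compare the owner of the shared output directory with our own IDs. A sketch, with an illustrative path:

```shell
#!/bin/sh
# Compare the host user/group IDs with the owner of the shared output
# directory. The path yocto/output is illustrative.
dir=yocto/output
mkdir -p "$dir"
if [ "$(stat -c %u "$dir")" = "$(id -u)" ] && \
   [ "$(stat -c %g "$dir")" = "$(id -g)" ]; then
    echo "IDs match - the container user can write to $dir"
else
    echo "IDs differ - pass host_uid=$(stat -c %u "$dir") and" \
         "host_gid=$(stat -c %g "$dir") to docker build"
fi
```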

We’ll pass the host user ID host_uid and the host group ID host_gid to the docker-build command through the option --build-arg. The docker-build command assigns the option values to the ARG variables with the same name. If we don’t pass the host IDs, the docker-build command will default to 1001 for both host_uid and host_gid. Then, we create a group $USER_NAME with host_gid, add the user $USER_NAME with host_uid to this group, create the home directory (option -m) and set bash as the standard shell (option -s).

ARG host_uid=1001
ARG host_gid=1001
RUN groupadd -g $host_gid $USER_NAME && \
useradd -g $host_gid -m -s /bin/bash -u $host_uid $USER_NAME

The command

USER $USER_NAME

switches the user from root to $USER_NAME. From this point, the Docker build runs as user $USER_NAME. When we run the Docker image, it will execute commands like CMD as user $USER_NAME as well.

We define two build directories $BUILD_INPUT_DIR and $BUILD_OUTPUT_DIR inside the Docker image through environment variables.

ENV BUILD_INPUT_DIR /home/$USER_NAME/yocto/input
ENV BUILD_OUTPUT_DIR /home/$USER_NAME/yocto/output

The docker-run command – the Yocto build – writes all the build artefacts into the directory $BUILD_OUTPUT_DIR. It also maps this directory to the host directory $PWD/yocto/output. This mapping makes $BUILD_OUTPUT_DIR visible on the host. We can easily copy the Linux image to an SD card and run the Linux image on the Raspberry Pi board.

The use of $BUILD_INPUT_DIR becomes clear with the next two commands.

WORKDIR $BUILD_INPUT_DIR
RUN git clone --recurse-submodules <repository-URL>/$PROJECT.git

These commands set the working directory to $BUILD_INPUT_DIR and clone the repository $PROJECT.git with all its submodules into $BUILD_INPUT_DIR. Each submodule corresponds to a Yocto meta layer.

By storing the project repository in the Docker image, we ensure that all the data needed to build the Linux image with Yocto is available in the Docker image. If we want to build the Linux image in, say, ten years, we don’t have to search for the Yocto sources, a dusty Ubuntu image and the installation information. We simply run the Docker image, which produces the Linux image for our target device.

The final three lines set $BUILD_OUTPUT_DIR as the working directory, set up the development environment and build the Linux image $PROJECT-image.

WORKDIR $BUILD_OUTPUT_DIR
CMD source $BUILD_INPUT_DIR/$PROJECT/sources/poky/oe-init-build-env \
    build && bitbake $PROJECT-image

The command CMD in the last line is only executed when the Docker image is run and not when the Docker image is built. The script oe-init-build-env initialises the Yocto build environment. It sets some environment variables, copies some configuration files, creates the build directory $BUILD_OUTPUT_DIR/build and changes to the build directory.

If this script is called for the first time, it copies two configuration files – bblayers.conf and local.conf – from the directory $TEMPLATECONF to the directory $BUILD_OUTPUT_DIR/build/conf. On subsequent calls, the script does not copy the configuration files any more. It only sets the environment variables and changes to the build directory.
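
This copy-once behaviour can be pictured with a small shell sketch. It is not the real oe-init-build-env, just the relevant logic with illustrative paths:

```shell
#!/bin/sh
# Sketch of oe-init-build-env's copy-once behaviour: the configuration
# files are copied only when the build directory is still fresh.
TEMPLATECONF=${TEMPLATECONF:-templates}
conf_dir=build/conf
if [ ! -f "$conf_dir/local.conf" ]; then
    mkdir -p "$conf_dir"
    cp "$TEMPLATECONF/local.conf" "$conf_dir/"
    cp "$TEMPLATECONF/bblayers.conf" "$conf_dir/"
    echo "configuration files copied"
else
    echo "configuration files already present"
fi
```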

Building the Docker Image

Now, it’s high time to build the Docker image from the Dockerfile. We create a directory, say, docker-cuteradio, at any location on our host computer where we have non-root access. We change into this new directory and download the Dockerfile into this directory.

$ mkdir docker-cuteradio
$ cd docker-cuteradio
$ wget <URL-of-the-Dockerfile>

The command to build the Docker image from scratch looks like this.

$ docker build --no-cache --build-arg "host_uid=$(id -u)" \
--build-arg "host_gid=$(id -g)" --tag "cuteradio-image:latest" .

The option --no-cache tells the docker-build command not to use any cached results from previous runs. With the two --build-arg options, we pass the effective host user ID host_uid and the effective host group ID host_gid to the Dockerfile. The docker-build command sets the values of the ARG variables host_uid and host_gid to the effective host user ID and the host group ID, respectively.

The option --tag assigns the name cuteradio-image and the version latest to the Docker image. When we release a Docker image, we’ll assign a proper version like 1.1 or 3.2 to it. When we are at version 7.4 five years from now, we can still build the Linux image of our project for version 1.1.
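
Releasing a version then amounts to adding a tag on top of latest. A small sketch with illustrative version numbers; the docker call is guarded so the snippet also runs in a shell without Docker:

```shell
#!/bin/sh
# Give the freshly built image an additional release tag next to "latest".
image=cuteradio-image
version=1.1
if command -v docker >/dev/null 2>&1; then
    docker tag "$image:latest" "$image:$version"
    docker image ls "$image"
else
    echo "would run: docker tag $image:latest $image:$version"
fi
```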

Here is the abridged output of the docker-build command.

Sending build context to Docker daemon  5.632kB
Step 1/20 : FROM ubuntu:16.04
---> 7e87e2b3bf7a
Step 2/20 : RUN apt-get update && apt-get -y install gawk wget git-core diffstat unzip texinfo gcc-multilib build-essential chrpath socat cpio python python3 python3-pip python3-pexpect xz-utils debianutils iputils-ping libsdl1.2-dev xterm tar locales
---> Running in 12023c8783fe
Get:1 xenial-security InRelease [109 kB]
---> Running in 5a354702cb4a
Removing intermediate container 5a354702cb4a
---> db705d850538
---> Running in bc08a33a53a1
Removing intermediate container bc08a33a53a1
---> 3b0038534468
Step 17/20 : RUN git clone --recurse-submodules$PROJECT.git
---> Running in 040d2108b307
Cloning into 'cuteradio'…
Submodule 'sources/meta-cuteradio' ( registered for path 'sources/meta-cuteradio'
Cloning into 'sources/meta-cuteradio'…
Submodule path 'sources/meta-cuteradio': checked out '515cecfb22d076b3e5928c0bc33b4e25fbdcc120'
Removing intermediate container 040d2108b307
---> fe95d32add9b
---> Running in 224518613c13
Removing intermediate container 224518613c13
---> 8e426608d1ad
---> Running in 4227f20fbcfa
Removing intermediate container 4227f20fbcfa
---> 2478897bf11c
Step 20/20 : CMD source $BUILD_INPUT_DIR/$PROJECT/sources/poky/oe-init-build-env build && bitbake $PROJECT-image
---> Running in 09691cd68eca
Removing intermediate container 09691cd68eca
---> 1e8e6fe5fc82
Successfully built 1e8e6fe5fc82
Successfully tagged cuteradio-image:latest

Every non-comment line of the Dockerfile corresponds to a build step. Every step creates a layer in the Docker image. A layer is a snapshot of the execution environment (directory structure, files, environment variables, etc.) after the execution of the corresponding step.

The layers are arranged in a stack. The first layer is at the bottom of the stack, whereas the last layer is on the top of the stack. If we change the command of step 15, for example, running the docker-build command without the option --no-cache will reuse the cached layers of steps 1 to 14 and rebuild only steps 15 to 20. This is useful while creating the Dockerfile incrementally.

The command

$ docker image ls
REPOSITORY        TAG      IMAGE ID       CREATED         SIZE
cuteradio-image   latest   1e8e6fe5fc82   2 minutes ago   1.03GB
<none>            <none>   66605cb98bff   2 days ago      971MB
ubuntu            16.04    7e87e2b3bf7a   2 weeks ago     117MB

lists all the Docker images we have built so far. The first image cuteradio-image is the one we just built.

We remove redundant images like the second one with the command

$ docker image rm -f 66605cb98bff

We create many redundant images while developing the Dockerfile incrementally. Removing redundant images saves disk space: more than 1 GB for a full cuteradio-image.
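
Instead of deleting dangling images one by one, Docker can prune all of them at once. The call is guarded so the snippet also runs in a shell without Docker:

```shell
#!/bin/sh
# Remove all dangling images (the <none>:<none> entries) in one go.
if command -v docker >/dev/null 2>&1; then
    docker image prune -f || echo "docker daemon not reachable"
else
    echo "docker not available in this shell"
fi
```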

Running the Yocto Build with Docker

The Docker image cuteradio-image provides an execution environment for the Yocto build. It provides a basic Ubuntu 16.04 LTS and all the additional Linux packages needed for the Yocto build. It sets up a UTF-8-capable locale, creates a build user and group, creates a directory structure, downloads the Yocto meta layers and recipes from a git repository and sets some environment variables.

A running instance of a Docker image with its well-defined execution environment is called a container. We can run multiple containers in parallel to build multiple versions of the Linux image. Containers run in isolation and don’t influence each other.

All is set for running the Yocto build. We execute the following two commands in the same directory docker-cuteradio as the docker-build command.

$ mkdir -p yocto/output
$ docker run -it --rm \
-v $PWD/yocto/output:/home/cuteradio/yocto/output \
cuteradio-image:latest
We know the options -it and --rm from testing the Docker installation. The option -v mounts the host directory $PWD/yocto/output at the container directory /home/cuteradio/yocto/output. It makes all the Yocto build artefacts available on the host computer and makes them persistent between two Docker runs.

The argument cuteradio-image:latest tells the docker-run command to run the version latest of the Docker image cuteradio-image. The docker-run command shows the output familiar from Yocto builds.

$ docker run ...
You had no conf/local.conf file. This configuration file has therefore been
created for you with some default values. You may wish to edit it to, for
example, select a different MACHINE (target hardware). See conf/local.conf
for more information as common configuration options are commented.

You had no conf/bblayers.conf file. This configuration file has therefore been
created for you with some default values. To add additional metadata layers
into your configuration please add entries to conf/bblayers.conf.
WARNING: Layer cuteradio should set LAYERSERIES_COMPAT_cuteradio in its conf/layer.conf file to list the core layer names it is compatible with.
Parsing recipes: 100% |#########################################################################################################################|
Time: 0:00:19
Parsing of 1995 .bb files complete (0 cached, 1995 parsed). 2934 targets, 308 skipped, 0 masked, 0 errors.
NOTE: Resolving any missing task queue dependencies

Build Configuration:
BB_VERSION = "1.40.0"
BUILD_SYS = "x86_64-linux"
NATIVELSBSTRING = "ubuntu-16.04"
TARGET_SYS = "arm-poky-linux-gnueabi"
MACHINE = "raspberrypi3"
DISTRO = "poky"
TUNE_FEATURES = "arm armv7ve vfp thumb neon vfpv4 callconvention-hard cortexa7"
TARGET_FPU = "hard"
meta-poky = "HEAD:3541f019a505d18263fad0b46b88d470e3fd9d62"
meta-python = "HEAD:cca27b5ea7569d2730ee5da7ee7f47b39d775d89"
meta-raspberrypi = "HEAD:a48743dc36e31170cf737e200cc88f273e13611a"
meta-qt5 = "HEAD:201fcf27cf07d60b7d6ab89c7dcefe2190217745"
meta-cuteradio = "HEAD:515cecfb22d076b3e5928c0bc33b4e25fbdcc120"

NOTE: Fetching uninative binary shim from;sha256sum=c6954563dad3c95608117c6fc328099036c832bbd924ebf5fdccb622fc0a8684
Initialising tasks: 100% |######################################################################################################################|
Time: 0:00:03
Sstate summary: Wanted 1336 Found 0 Missed 1336 Current 0 (0% match, 0% complete)
NOTE: Executing SetScene Tasks
NOTE: Executing RunQueue Tasks
Currently 16 running tasks (1743 of 3800) 45%
0: binutils-cross-arm-2.31-r0 do_fetch (pid 173) 11% |##########

With the docker-exec command, we can break into a running container similar to breaking into an embedded device with a serial console or ssh. First, we find out the container ID. Then, we start a bash shell in the container, which allows us to explore the container like any Linux system.

$ docker ps
CONTAINER ID   IMAGE                    COMMAND                  CREATED          STATUS          NAMES
97586d8ac8d5   cuteradio-image:latest   "/bin/sh -c 'source …"   22 minutes ago   Up 22 minutes   vigilant_poitras

$ docker exec -it 97586d8ac8d5 /bin/bash
cuteradio@97586d8ac8d5:~/yocto/output$ ls
build downloads sstate-cache

The two Docker commands are executed on the host computer, whereas the ls command is executed in the running container.

Running the Linux Image on a Raspberry Pi 3

Now that we have a Linux image, we want to try it out on a Raspberry Pi 3. The Linux image is stored in the file yocto/output/build/tmp/deploy/images/raspberrypi3/cuteradio-image-raspberrypi3.rpi-sdimg. We copy this image to an SD card with at least 1 GB capacity.

We plug the SD card in our host computer and determine the device files of the SD card with the Linux command df. The name of the device file differs from host computer to host computer. If the name is /dev/sdd1, for example, we unmount the SD card with the following command.

$ sudo umount /dev/sdd1

If we copied a Linux image to the SD card before, we would see two device files: /dev/sdd1 and /dev/sdd2, for example. Then, we must unmount both device files.
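
A small loop covers both cases by unmounting every numbered partition of the card. The device name is illustrative; adjust it to your SD card:

```shell
#!/bin/sh
# Unmount every numbered partition of the SD card before writing the
# image. /dev/sdd is illustrative - adjust it to your card.
device=/dev/sdd
for part in "$device"[0-9]*; do
    [ -e "$part" ] || continue     # glob did not match anything
    sudo umount "$part" 2>/dev/null || true
done
```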

We change to the directory containing the Linux image and use the command dd to copy the image to the SD card.

$ cd yocto/output/build/tmp/deploy/images/raspberrypi3
$ sudo dd if=cuteradio-image-raspberrypi3.rpi-sdimg of=/dev/sdd \
bs=1M status=progress

Note that the output file of is the device file of the SD card but without the partition number. So, it’s /dev/sdd and not /dev/sdd1.
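
Since dd overwrites the target device without asking, it is worth double-checking the device name first. A sketch with an illustrative device:

```shell
#!/bin/sh
# Show the size and partitions of the target device before running dd.
# Writing the image to the wrong device destroys its contents.
device=/dev/sdd   # illustrative - adjust to your SD card
if [ -b "$device" ]; then
    lsblk -o NAME,SIZE,TYPE,MOUNTPOINT "$device"
else
    echo "$device is not a block device - check the device name"
fi
```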

We plug the SD card into the slot of the Raspberry Pi 3, connect the Pi with our local area network using an Ethernet cable and power up the Pi. We see a lot of messages from booting the Linux system including the IP address of the Pi close to the end.

udhcp:  sending select for

Of course, your IP address will differ. We use ssh to log into the Pi and start the Internet radio. If we connect the Pi with a loudspeaker using a classic 3.5 mm audio cable, the Internet radio will play my favourite radio station Antenne Bayern.

$ ssh root@
root@raspberrypi3:~# /usr/local/bin/cuteradio -platform eglfs

The Linux image that we just built using Docker works fine.

3 thoughts on “Using Docker Containers for Yocto Builds”

  1. Nice write up!

    I would give a small warning though about trusting this approach too much, since upstream repositories have a tendency to move around etc. So in order to be sure that the build can be reproduced later in time I’d also make sure releases are built with:
    BB_GENERATE_MIRROR_TARBALLS = "1"
    and that the resulting “mirror” is stored.

    The paranoid would then also make sure that the build can be reproduced using this mirror, by trying it out in a build which disables network access using:
    BB_NO_NETWORK = "1"

  2. Erik, you are absolutely right. The source code of each package must be stored in the container as well – like the recipes. And, thanks for the tips about BB_GENERATE_MIRROR_TARBALLS and BB_NO_NETWORK.
