Five EmbedDev

An Embedded RISC-V Blog

Simulating and Debugging with a Docker Containerized Toolchain

March 25, 2022 (toolchain,docker)

Containerized Development

In a previous post I used docker to containerize building for a RISC-V target.

In this post I’ll simulate that target’s ELF file in a containerized RISC-V ISA simulator.

Spike ISA Sim

The official reference ISA simulator for RISC-V is spike. There are other open source and commercial simulators that are more featureful and performant; however, the aim of this post is to describe a quick and easy way to simulate and debug the low-level examples on this blog.

Spike implements a functional model of the RISC-V hart(s) and the debug interface. GDB must connect to OpenOCD, which in turn connects to Spike’s model of the RISC-V serial debug interface. Spike’s Readme file describes how to do this, but manually loading an example program is pretty tedious.
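For reference, the manual sequence from Spike’s Readme looks roughly like the following; the port number and the OpenOCD configuration shown here are illustrative sketches, not the exact files from the repository (the container automates all of this).

```shell
# Terminal 1: start spike with its remote-bitbang debug server enabled.
# (--rbb-port is spike's JTAG remote-bitbang port; 9824 is an arbitrary choice.)
spike --rbb-port=9824 --isa=rv32imac \
    -m0x8000000:0x2000,0x80000000:0x4000 \
    /project/build/main.elf

# Terminal 2: OpenOCD bridges GDB's remote protocol to spike's bitbang port.
# A minimal spike.cfg might contain (see Spike's Readme for the full version):
#   adapter driver remote_bitbang
#   remote_bitbang host localhost
#   remote_bitbang port 9824
openocd -f spike.cfg

# Terminal 3: connect GDB to OpenOCD's default GDB server port (3333).
riscv32-unknown-elf-gdb /project/build/main.elf \
    -ex 'target extended-remote localhost:3333'
```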

The container example downloads and builds the appropriate source code and manages running the independent tools.

Docker Images

Assume the repository containing the docker files is checked out to build-and-verify.

Firstly, the docker images in these folders should be built:

cd build-and-verify/
make -C docker/riscv-openocd
make -C docker/riscv-spike
make -C docker/riscv-xpack-gcc
make -C examples/debug-sim

Running a RISC-V Target Elf File on Desktop OS Host

Let’s assume we’ve already compiled an ELF file, build/main.elf.

The Docker image is tagged as five_embeddev/examples/riscv_spike_debug_env_sim_gdb, built in the folder examples/debug-sim. In the command line below, docker arguments are placed before the image name and spike simulator arguments are after the image name.

The docker arguments need to mount the local build directory as a volume inside the container, in this case at /project/build. They also need to enable an interactive TTY for debugging.

The spike arguments need to configure the ISA, the memory map and the target image.

docker \
  run \
     -i --rm --tty \
     -v `pwd`:/project \
     five_embeddev/examples/riscv_spike_debug_env_sim_gdb \
     --priv=m \
     --isa=rv32imac \
     -m0x8000000:0x2000,0x80000000:0x4000,0x20010000:0x6a120 \
     /project/build/main.elf

Once GDB has attached it’s pretty easy to step through the code.

tui enable
b main
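A slightly fuller session might look like this; it assumes OpenOCD is exposing GDB’s default server port 3333, and the path is illustrative.

```shell
riscv32-unknown-elf-gdb build/main.elf
# Then, inside GDB:
#   (gdb) target extended-remote localhost:3333   # attach to OpenOCD
#   (gdb) load                                    # write the ELF to target memory
#   (gdb) tui enable                              # source-level TUI view
#   (gdb) b main                                  # break at main
#   (gdb) c                                       # continue to the breakpoint
```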


The compose.yml file captures this configuration: the mapping of local files to /project in the container, and the simulator arguments.

services:
  spike_openocd_gdb:
    image: five_embeddev/examples/riscv_spike_debug_env_sim_gdb
    command: ["--priv=m", "--isa=rv32imac", "-m0x8000000:0x2000,0x80000000:0x4000,0x20010000:0x6a120", "/project/build/main.elf"]
    tty: true
    volumes:
      - .:/project
docker-compose run  spike_openocd_gdb


I’d like to customize spike for development-flow and architecture experimentation, so having an automated path from source to a single run command is convenient.

A few general advantages are:

  • Containerized tools can be used to integrate with modern development environments, like VSCode.
  • The latest tools can be downloaded and utilized using automated scripts.
  • This can be built on for automated testing and cloud based CI.

Building with a Docker Container Toolchain

March 01, 2022 (toolchain,docker)

Containerized Development

Modern development is moving towards packaging tools in containers. There are several benefits to containerizing development tools:

  • A simpler tool deployment process: the binary image with all dependencies can simply be pulled from a server and run.
  • An automated and consistent deployment process that can be shared between development machines and cloud based build servers.
  • A consistent set of tools for all team members.
  • The ability to build with general-purpose IDEs rather than whatever tools a device vendor may provide.
  • Isolation of dependencies used for different devices and build targets.

For RISC-V there are a few more benefits as well.

I’ve put together a set of Docker images and Docker Compose build and run configurations with the aim of deploying them with GitHub’s workflows.

They are located here:

Image Design

Each image creates a default WORKDIR location /project that is owned by the docker user. The local development directory can be bind mounted to that path.

The path to the tools is always added to the PATH environment variable.

The container runs everything as user docker_user with a UID of 1000 by default.
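To sanity-check the user inside a container, or to override it at run time, something like the following can be used. The --user override is a standard docker run option rather than anything specific to these images.

```shell
# Print the UID/GID in effect inside the container.
docker run --rm five_embeddev/riscv_gnu_toolchain_dev_env id

# Force the container user to match the current host user, so files
# written to the bind mount are owned by you.
docker run --rm --user "$(id -u):$(id -g)" \
    -v "$(pwd)":/project \
    five_embeddev/riscv_gnu_toolchain_dev_env \
    riscv32-unknown-elf-gcc --version
```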


To build a target binary, call docker directly or set up a docker-compose configuration, compose.yaml, with the container environment. The command line to be run in the container is passed as the arguments that follow the image name.

e.g. Compile test.c in the current directory with the riscv-gnu-toolchain and delete the container after running:

docker \
 run \
     --rm \
     -v `pwd`:/project \
     five_embeddev/riscv_gnu_toolchain_dev_env \
     riscv32-unknown-elf-gcc test.c -o test.elf

Or use docker-compose with all of that captured in a compose.yaml file:

version: "3"

services:
  build_gnu_toolchain:
    image: five_embeddev/riscv_gnu_toolchain_dev_env
    volumes:
      - .:/project
    command: ["riscv32-unknown-elf-gcc", "test.c", "-o", "test.elf"]

Then docker-compose can simply run the build.

docker-compose run build_gnu_toolchain
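If the host has the file utility installed, the cross-compiled output can be sanity-checked without entering the container:

```shell
file test.elf
# Typically reports something like:
#   test.elf: ELF 32-bit LSB executable, UCB RISC-V, ...
```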

Image Details

The images have been set up with a docker user (by default UID 1000), and the tools are installed to /opt or to the docker user’s home directory and added to the PATH environment variable. There are Makefiles to build the images; if those are used, the image’s default user is created with the current user’s UID. That allows the user’s file system to be mounted within the container and file ownership to be handled correctly. (By default docker containers run as root, which can result in root-owned target binaries.)
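Without the Makefiles, the same effect can usually be achieved by passing the host UID as a build argument. The argument name UID here is an assumption; check the build-arg name the repository’s Dockerfile actually declares.

```shell
# Run from the image's folder. The UID build-arg name is hypothetical;
# confirm it against the Dockerfile before relying on it.
docker build --build-arg UID="$(id -u)" \
    -t five_embeddev/riscv_gnu_toolchain_dev_env .
```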

When tools are compiled from source, the default method is to shallow clone the source code within the container and build there. This captures the build process in the Dockerfile, but is inefficient due to the docker layer caching mechanism. The riscv-gnu-toolchain build script does this, but requires a large amount of resources.

Due to the resource requirements of building the entire toolchain, the riscv-gnu-toolchain-2 build script stores the source and build files in the host file system and mounts them into the container via docker volumes. Those can’t be used by a docker build process, so the build is more complex and relies on the Makefile. This build script should be preferred.

Extending PlatformIO

November 05, 2021 (toolchain,baremetal,C)

Information about extending cross compilation with PlatformIO has been added.


Machine Readable Specification Data

September 10, 2021 (registers,spec,interrupts,opcodes)

As RISC-V is a new architecture, there will be new development at all layers of the software and hardware stack. Rather than writing code from scratch based on human-language specifications, a smarter way to work can be to translate a machine-readable specification to code.

“Machine readable” does not need to mean an all-encompassing formal model of the architecture; there are many convenient formats such as CSV, YAML, XML and JSON that can be parsed and transformed using the packages available in most scripting languages.
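As a toy illustration, a CSV fragment can be turned into C header defines with nothing more than awk. The CSV content and naming scheme here are made up for the example, not taken from any official RISC-V specification file.

```shell
# A hypothetical CSV fragment describing two machine-mode CSRs.
cat > csr.csv <<EOF
name,addr
mstatus,0x300
mtvec,0x305
EOF

# Transform it into C preprocessor defines.
awk -F, 'NR > 1 { printf("#define CSR_%s %s\n", toupper($1), $2) }' csr.csv
# Prints:
#   #define CSR_MSTATUS 0x300
#   #define CSR_MTVEC 0x305
```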


CMake Cross Compilation for RISC-V Targets

March 20, 2021 (toolchain,baremetal)

An example of cross compiling a baremetal program to RISC-V with CMake.


RISC-V Compile Targets, GCC

February 09, 2021 (gcc,base_isa,extensions,abi)

Note to self: When compiling the riscv-toolchain for embedded systems, set the configure options!

The toolchain can be cloned from the official RISC-V GitHub. Once the dependencies are installed it’s straightforward to compile.

$ git clone --recursive


RISC-V Tools Quick Reference

October 29, 2019 (toolchain,quickref)

An initial toolchain quick reference.


RISC-V Compile Targets, GCC

June 26, 2019 (gcc,base_isa,extensions,abi)

NOTE: Since this was written the riscv-toolchain-conventions document has been released.

Getting started with RISC-V. Compiling for the RISC-V target. This post covers the GCC machine architecture (-march), ABI (-mabi) options and how they relate to the RISC-V base ISA and extensions. It also looks at the multilib configuration for GCC.
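For example, with a riscv32 cross toolchain installed, the options look like the following; the specific -march/-mabi values here are common rv32 choices used for illustration, not the full set the post covers.

```shell
# Explicitly select the base ISA + extensions and the matching ABI.
riscv32-unknown-elf-gcc -march=rv32imac -mabi=ilp32 -c test.c -o test.o

# List the multilib variants this toolchain build supports.
riscv32-unknown-elf-gcc --print-multi-lib
```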