
OBuilder on macOS

ME

System Administrator

Posted on Wed, 02 Aug 2023

The CI team at Tarides provides critical infrastructure to support the OCaml community. At the heart of that infrastructure is a cluster of machines for running jobs. This blog post details how we improved our support for macOS and moved closer to our goal of supporting all Tier 1 OCaml platforms.

In 2022, Patrick Ferris of Tarides successfully implemented a macOS worker for OBuilder. The workers were added to opam-repo-ci and OCaml CI, and this work was presented at the OCaml Workshop in 2022 (video).

Since then, I have taken over the day-to-day responsibility. This work builds upon those foundations to achieve a greater throughput of jobs on the existing Apple hardware. Originally, we launched macOS support using rsync for snapshots and user accounts for sandboxing and process isolation. At the time, we identified that this architecture was likely to be relatively slow[1], given the overhead of using rsync rather than native file system snapshots.

This post describes how we switched the snapshots over to use ZFS, which has improved the I/O throughput, leading to more jobs built per hour. We also removed our use of macFUSE, which both simplified the setup and further improved the I/O throughput.

OBuilder

The OBuilder library is the core of Tarides' CI Workers [2]. OCaml CI, opam-repo-ci, OCurrent Deployer, OCaml Docs CI, and the Base Image Builder all generate jobs which need to be executed by OBuilder across a range of platforms. A central scheduler accepts job submissions and passes them off to individual workers running on physical servers. These jobs are described in a build script similar to a Dockerfile.

OBuilder takes a build script and performs its steps in a sandboxed environment. After each step, OBuilder uses the snapshot feature of the filesystem (ZFS or Btrfs) to store the state of the build. There is also an rsync backend that copies the build state. On Linux, it uses runc to sandbox the build steps, but any system that can run a command safely in a chroot could be used. Repeating a build will reuse the cached results.

It is worth briefly expanding upon this description to understand the typical steps OBuilder takes. Upon receiving a job, OBuilder loads the base image as the starting point for the build process. A base image contains an opam switch with an OCaml compiler installed and a Git clone of opam-repository. These base images are built periodically into Docker images using the Base Image Builder and published to Docker Hub. Steps within the job specification can install operating system packages and opam libraries before finally building the test package and executing any tests. A filesystem snapshot of the working folder is taken between each build step. These snapshots allow each step to be cached and reused if the same job is executed again or if identical steps are shared between jobs. Additionally, the opam package download folder is shared between all jobs.
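To make the shape of a job concrete, here is a minimal, hypothetical job specification in OBuilder's s-expression format. The base image tag, package names, and paths are illustrative only, and the exact set of supported operations may vary between OBuilder versions:

```
cat > example.spec <<'EOF'
; Hypothetical OBuilder job specification (illustrative, not a real CI job)
((from ocaml/opam:debian-12-ocaml-4.14)              ; base image: opam switch plus opam-repository clone
 (workdir /src)                                      ; later steps run in this directory
 (run (shell "sudo apt-get install -y libgmp-dev"))  ; install an operating system package
 (run (shell "opam install -y zarith"))              ; install opam dependencies
 (copy (src .) (dst /src))                           ; copy the project into the sandbox
 (run (shell "opam exec -- dune build")))            ; build; a snapshot follows each step
EOF
```

Each `(run ...)` line corresponds to one cached step, so a second job that starts from the same base image and shares the same initial steps can begin from the existing snapshots rather than repeating the work.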

On Linux-based systems, the file system snapshots are performed by Btrfs and process isolation is performed via runc. A ZFS implementation of file system snapshots and a pseudo implementation using rsync are also available. Given sufficient system resources, tens or hundreds of jobs can be executed concurrently.
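For reference, the store backend is selected when the worker invokes OBuilder. The commands below are a sketch based on the OBuilder README at the time of writing; the exact flag syntax may have changed, and the rsync store may require additional options:

```
# Illustrative only: selecting a store backend for the example spec above
obuilder build -f example.spec --store=btrfs:/btrfs .    # Linux: Btrfs snapshots
obuilder build -f example.spec --store=rsync:/rsync .    # portable: plain file copies
obuilder build -f example.spec --store=zfs:obuilder .    # ZFS snapshots (used on macOS)
```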

The macOS Challenges

macOS is a challenging system for OBuilder because there is no native container support. We must manually recreate the sandboxing needed for the build steps using user isolation. Furthermore, macOS operating system packages are installed via Homebrew, and the Homebrew installation folder is not relocatable. It is either /usr/local on Intel x86_64 or /opt/homebrew on Apple silicon (ARM64). The Homebrew documentation includes the warning "Pick another prefix at your peril!", and the internet is littered with bug reports from those who have ignored this warning. For building OCaml, the per-user ~/.opam folder is relocatable by setting the environment variable OPAMROOT=/path; however, once set it cannot be changed, as the full path is embedded in the built artefacts.
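The following sketch illustrates the opam side of this constraint. The user name, paths, and compiler version are examples only:

```
# The opam root can be placed anywhere *before* it is created...
export OPAMROOT=/Users/mac1000/.opam
opam init -y
opam switch create 4.14.1

# ...but once a switch has been built, the absolute path is recorded inside the
# installed artefacts, so the root cannot simply be moved elsewhere and reused:
grep -rl "/Users/mac1000/.opam" "$OPAMROOT/4.14.1/lib" | head
```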

We need a sandbox that includes the user's home directory and the Homebrew folder.

Initial Solution

The initial macOS solution used dummy users for the base images, user isolation for the sandbox, a FUSE file system driver to redirect the Homebrew installation, and rsync to create file system snapshots.

For each step, OBuilder used rsync to copy the required snapshot from the store to the user’s home directory. The FUSE file system driver redirected file system access to /usr/local into the user’s home directory. This allowed the state of the Homebrew installation to be captured along with the opam switch held within the home directory. Once the build step was complete, rsync copied the current state back to the OBuilder store. The base images existed in dummy users' home directories and were copied to the active user when needed.

The implementation was reliable but hampered by I/O bottlenecks, and the lack of opam download caching meant we quickly hit GitHub's download rate limit.

A New Implementation

OBuilder already supported ZFS, which could be used on macOS through the OpenZFS on OS X project. The ZFS and other store implementations hold a single working directory as the root for the runc container. On macOS, we need the sandbox to contain both the user’s home directory and the Homebrew installation, and these locations must appear at their fixed paths within the file system. This was achieved by adding two ZFS subvolumes mounted on these paths.

ZFS Volume                    Mount point                      Usage
obuilder/result/<id>          /Volumes/obuilder/result/<id>    Job log
obuilder/result/<id>/home     /Users/mac1000                   User’s home directory
obuilder/result/<id>/brew     /opt/homebrew or /usr/local      Homebrew installation

The ZFS implementation was extended to work recursively on the result folder, thereby including the subvolumes in the snapshot and clone operations. The sandbox is passed the ZFS root path and can mount the subvolumes to the appropriate mount points within the file system. The build step is then executed as a local user.
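In terms of raw ZFS commands, the pattern is roughly as follows. The dataset names and mount points mirror the table above and are illustrative; OBuilder drives the equivalent operations through its ZFS store rather than shelling out exactly like this:

```
# Mount a result's subvolumes at the paths the build step expects
zfs set mountpoint=/Users/mac1000 obuilder/result/<id>/home
zfs set mountpoint=/opt/homebrew  obuilder/result/<id>/brew

# ...run the build step as the local user, then snapshot recursively so the
# home and brew subvolumes are captured together with the result:
zfs snapshot -r obuilder/result/<id>@snap

# A later step that shares this prefix starts from clones of those snapshots
# (zfs clone is not recursive, so each dataset is cloned individually):
zfs clone obuilder/result/<id>@snap      obuilder/result/<new-id>
zfs clone obuilder/result/<id>/home@snap obuilder/result/<new-id>/home
zfs clone obuilder/result/<id>/brew@snap obuilder/result/<new-id>/brew
```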

The ZFS store and the OBuilder job specification already included support for caching arbitrary folders. The sandbox was updated to use this feature to cache both the opam and the Homebrew download folders.

To create the initial base image, empty ZFS volumes are mounted over the user's home directory and the Homebrew folder, and then a shell script installs opam, OCaml, and a Git clone of opam-repository. When a base image is first needed, the ZFS volume can be cloned as the basis of the first step. This replaces the Docker base images, with OCaml and opam preinstalled, used by the Linux OBuilder implementation.

ZFS Volumes for macOS Homebrew Base Image for OCaml 4.14
obuilder/base-image/macos-homebrew-ocaml-4.14
obuilder/base-image/macos-homebrew-ocaml-4.14/brew
obuilder/base-image/macos-homebrew-ocaml-4.14/home
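A rough sketch of how such a base image might be prepared is given below. The dataset names follow the listing above; the installation script is only an outline, and the real setup handles users, permissions, and compiler versions more carefully:

```
# Create the base-image datasets and mount them where the build expects them
zfs create -p obuilder/base-image/macos-homebrew-ocaml-4.14
zfs create obuilder/base-image/macos-homebrew-ocaml-4.14/home
zfs create obuilder/base-image/macos-homebrew-ocaml-4.14/brew
zfs set mountpoint=/Users/mac1000 obuilder/base-image/macos-homebrew-ocaml-4.14/home
zfs set mountpoint=/opt/homebrew  obuilder/base-image/macos-homebrew-ocaml-4.14/brew

# Populate them: install Homebrew and opam, build an OCaml switch, and clone
# opam-repository (outline only)
sudo -u mac1000 /bin/bash -c '
  /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)" &&
  /opt/homebrew/bin/brew install opam &&
  opam init -y --bare &&
  opam switch create 4.14.1 &&
  git clone https://github.com/ocaml/opam-repository ~/opam-repository
'

# Snapshot recursively; jobs then start from clones of this snapshot
zfs snapshot -r obuilder/base-image/macos-homebrew-ocaml-4.14@base
```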

Performance Improvements

The rsync store was written for portability, not efficiency, and copying the files between each step quickly becomes the bottleneck. ZFS significantly improves efficiency through native snapshots and mounting the data at the appropriate point within the file system. However, this is not without cost, as unmounting a file system causes the disk-write cache to be flushed.

The ZFS store originally kept all of the cached steps mounted. With a large cache disk (>100 GB), the store could reach several thousand result steps. As the number of mounted volumes increases, macOS’s disk arbitration service takes exponentially longer to mount and unmount the file systems. At first, the number of cached steps was artificially limited to keep the mount/unmount times within acceptable limits. Later, the ZFS store was updated to unmount unused volumes between each step.

The rsync store did not support caching of the opam downloads folder, which quickly led us to hit the download rate limits imposed by GitHub. Homebrew is also hosted on GitHub, so those steps were affected as well. The list of folders shared between jobs is part of the job specification and was already passed to the sandbox, but the macOS sandbox did not act on it. The job specification was updated to include the Homebrew downloads folder, and the shared cache folders are now mounted within the sandbox.
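In the job specification, shared cache folders are declared per run step. A hypothetical macOS step might look like the spec below; the cache names and target paths are illustrative and may not match the exact identifiers used by the CI services:

```
cat > cached-step.spec <<'EOF'
; Hypothetical spec showing shared download caches (names and paths illustrative)
((from macos-homebrew-ocaml-4.14)
 (run
  (cache (opam-archives (target /Users/mac1000/.opam/download-cache))
         (homebrew      (target /Users/mac1000/Library/Caches/Homebrew)))
  (network host)
  (shell "brew install pkg-config && opam install -y ssl")))
EOF
```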

Throughput has improved approximately fourfold. The rsync backend gave a typical performance of four jobs per hour. With ZFS, we typically see 16 jobs per hour, and the best recorded rate is over 100 jobs per hour!

Multi-User Considerations

The rsync and ZFS implementations can only run one job at a time, which limits the throughput of jobs on macOS. It would be ideal if the implementation could be extended to support concurrent jobs; however, with user isolation, it is unclear how this could be achieved, as the full path of the OCaml installation is included in numerous binary files within the ~/.opam directory. Thus, an opam root installed in /Users/foo/.opam could not be mounted as /Users/bar/.opam. The other issue with supporting multiple users is that Homebrew is not designed to be shared between Unix users: a given Homebrew installation is only meant to be used by a single non-root user.

Summary

With this work adding macOS support to OBuilder using ZFS, the cluster provides workers for macOS on both x86_64 and ARM64. This capability is available to all CI systems managed by Tarides. Initial support has been added to opam-repo-ci to provide builds for the opam repository, allowing us to check that packages build on macOS. We have also added support to OCaml CI to provide builds for GitHub- and GitLab-hosted projects, and there is work in progress to provide macOS builds for testing OCaml's Multicore support. macOS builds are an important part of our goal to provide builds on all Tier 1 OCaml platforms. We hope you find them useful too.

All the code is open source and available on github.com/ocurrent.

  1. Compared to other workers where native snapshots are available, such as Btrfs on Linux.

  2. In software development, a "Continuous Integration (CI) worker" is a computing resource responsible for automating the process of building, testing, and deploying code changes in Continuous Integration systems.


Open-Source Development

Tarides champions open-source development. We create and maintain key features of the OCaml language in collaboration with the OCaml community. To learn more about how you can support our open-source work, discover our page on GitHub.

Explore Commercial Opportunities

We are always happy to discuss commercial opportunities around OCaml. We provide core services, including training, tailor-made tools, and secure solutions. Tarides can help your teams realise their vision.