ARM64 Support

ARM64 support and why we introduced it

In TerminusDB version 10.1.8 we introduced containers with ARM64 support, which greatly improves performance when running TerminusDB on ARM64 machines. Previously, TerminusDB users on ARM64 hardware had to use QEMU to emulate the AMD64 architecture, and this caused serious performance problems: users on MacBook M1 machines, for instance, suffered terrible performance even though the chip itself is very capable.

Why didn't we implement ARM64 support earlier?

In our early years, ARM64 was still mostly limited to the SoCs found in smartphones and single-board computers. These were not the intended target devices for TerminusDB, as we expected most users to run the database on servers. Our developers at the time mostly worked on dedicated AMD64 machines. We also did not want users of single-board computers to get a bad impression of the database simply because their hardware was not up to the task.

The M1 ARM64 processors were also still limited in use, and a lot of software had yet to be updated to support them properly. We also believed the M1's AMD64 emulation was performant, which is why we did not see a need to implement native support at the time.

Development of a headless CMS and bad performance on macOS

Then we found ourselves in a different situation. M1 MacBooks became more popular. So popular, in fact, that some of our own developers got them! Things ran relatively smoothly for a while: a bit slower than on native architectures, but nothing terrible. We convinced ourselves that even though it was slightly slower, the developer experience was not bad enough to warrant implementing ARM64 support.

However, this changed with the development of TerminusDB as a headless CMS and the ingestion of large datasets. To showcase TerminusDB's abilities as a CMS, we built a Lego demo that displays all the Lego sets, their parts, and their respective themes. Previously, our developers on Macs worked with relatively small datasets where the performance penalties were not noticeable. Now they had to ingest a very large dataset.

My own computer ran this ingest in 15 minutes. Our Mac developers had to wait almost half a day for the same result! This was unacceptable. Not only was ingestion slow, query speed suffered too.

The reason AMD64 Docker containers are so slow on macOS is that Docker itself already runs inside a Linux virtual machine on macOS. That alone is not slow, as macOS has good virtualization support. But when you run an AMD64 container on an ARM64 machine, Docker runs it through QEMU inside that Linux virtual machine, and that, of course, is slow.
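A quick way to see this emulation in action is to compare the architecture reported by the host with what a container reports. This is a minimal sketch; the `alpine` image is used purely as an example of a small multi-arch image, and the Docker commands are shown as comments since they need a Docker daemon:

```shell
# Architecture of the host: an Apple silicon Mac reports arm64,
# a typical Intel/AMD machine reports x86_64.
uname -m

# Inside a container, the same call shows what is really running:
#   docker run --rm --platform linux/arm64 alpine uname -m   # native on an M1 host
#   docker run --rm --platform linux/amd64 alpine uname -m   # QEMU-emulated on an M1 host
```

When the container's reported architecture differs from the host's, QEMU user-mode emulation is in play, and every instruction pays the translation cost.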

Recent developments in CPU land and our community

Around the same time that we discovered the poor performance of AMD64 Docker containers on ARM64 macOS, we also noticed community members asking for ARM64 support. GitHub user blueforesticarus contributed a good PR that fixed ARM64 support in swipl-rs, the Rust crate we use to call Rust code from SWI-Prolog.

If the problems our own developers faced weren't enough of a wake-up call, the demand for support from the community meant we had no choice but to act.

On top of the community and internal issues, we have seen that many more servers now run ARM. ARM is definitely no longer exclusive to embedded devices and Macs. Looking at the Amazon EC2 instance types for general-purpose computing, more and more are available on ARM: Mac, T4g, M6g, and A1.

ARM is a serious server candidate too!


We simply had to add ARM64 support, as the world is moving toward it. We could not keep offering a terrible developer experience on macOS and potential performance issues on ARM servers.

Staying behind is not an option!
