# Overview

## Ociregistry
Ociregistry is a pull-only, pull-through, caching OCI Distribution server. That means:
- It exclusively provides pull capability. It does this by implementing a subset of the OCI Distribution Spec.
- It provides caching pull-through capability for multiple upstream registries: internal, air-gapped, or public. It supports the following types of upstream access: anonymous, basic auth, HTTP, HTTPS (secure and insecure), one-way TLS, and mTLS. In other words, one running instance of this server can simultaneously pull from docker.io, quay.io, registry.k8s.io, ghcr.io, your air-gapped registries, in-house corporate mirrors, and so on. (A sketch of a pull through the server follows this list.)
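
For illustration, a pull through the server can use the standard OCI Distribution Spec endpoints that the pull subset implements. The hostname, port, and the convention of prefixing the repository with the upstream registry name are assumptions for this sketch and may differ in a real deployment:

```shell
# Hypothetical host and port for a running Ociregistry instance.
REGISTRY=ociregistry.example:8080

# OCI Distribution Spec API version check.
curl -i http://$REGISTRY/v2/

# Fetch a manifest. The "registry.k8s.io/" prefix assumes the upstream
# registry is encoded in the repository path - an assumption for this example.
curl -i http://$REGISTRY/v2/registry.k8s.io/pause/manifests/3.9 \
  -H "Accept: application/vnd.oci.image.index.v1+json, application/vnd.oci.image.manifest.v1+json"
```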
## Goals
The goal of the project is to build a performant, simple, reliable edge OCI Distribution server for Kubernetes. Simplicity is an overriding concern: the server runs from a single binary, and all state is persisted as plain files on the file system under one subdirectory. This supports the following use cases for running Kubernetes:
- Edge clusters.
- Air-gapped clusters: pre-load the server's cache in a connected environment, then serve a cluster inside the air gap.
- Offloading corporate OCI Registries to minimize load / dependency on corporate resources.
- Running the registry in-cluster as a Kubernetes workload, and then mirroring containerd to the registry within the cluster or across clusters (see the containerd mirroring sketch after this list).
- Supporting development environments. For example, I do a lot of Kubernetes experimentation at home, where I run small multi-VM Kubernetes clusters on my desktop. I run Ociregistry on my desktop as a systemd service and mirror containerd in my dev Kubernetes clusters to that service.
- Avoiding rate limiting, while also being a good internet citizen by lightening the load on the large (free) distribution servers provided to us by DockerHub and many others.
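
As a sketch of the containerd mirroring mentioned above: recent containerd releases support per-registry host configuration when the CRI registry `config_path` points at a directory such as `/etc/containerd/certs.d`. A `hosts.toml` like the one below directs docker.io pulls at the cache first, falling back to the upstream if the mirror cannot satisfy the request. The Ociregistry host and port are hypothetical and only for illustration:

```toml
# /etc/containerd/certs.d/docker.io/hosts.toml
# Assumes containerd's CRI registry config_path is set to /etc/containerd/certs.d.

# Upstream fallback if the mirror is unavailable.
server = "https://registry-1.docker.io"

# Hypothetical Ociregistry instance; adjust host and port for your deployment.
[host."http://ociregistry.example:8080"]
  capabilities = ["pull", "resolve"]
```

Repeating this pattern for quay.io, registry.k8s.io, and so on extends the same mirroring to other upstream registries.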
Other distribution servers are designed for availability and fault tolerance at the cost of increased complexity. This is not a criticism; it is simply a design trade-off. This server is designed for simplicity and accepts the availability and fault tolerance provided by the host.
## Summary
The goals of the project are:
- Implement a narrow set of use cases, mostly around serving as a caching mirror for Kubernetes clusters.
- Be simple, performant, and reliable.