Varnish Orca

Varnish Orca is a Virtual Registry Manager that consolidates and accelerates registries for build & runtime artifacts.

By caching Docker Images, NPM Packages, Helm Charts, Go Modules, and many more artifact types, Orca speeds up CI/CD pipelines, reduces developer friction, and lowers operational costs.

Deploy Varnish Orca:

docker pull varnish/orca --platform linux/amd64

Orca is also available on other platforms; see the Getting started section below to learn more.

How it works

Varnish Orca speeds up the retrieval of build & runtime artifacts by caching the registries' HTTP responses.

Whether you're running a docker pull or performing an npm install, Orca caches the responses in an implementation-native way.

By positioning Orca instances close to where they are needed, you are not only offloading pressure from your Artifact Repository Managers, you are also reducing network latency.

Examples of supported artifact types:

  • NPM packages
  • Go modules
  • Rust crates
  • Maven artifacts
  • Python packages
  • Docker images
  • Helm charts
  • Debian packages
  • RPM packages
  • Git repositories

Varnish Orca diagram

Varnish Orca sits in front of your registries and repository managers, and assumes the role of a Virtual Registry Manager.

Benefits

CI/CD pipeline acceleration

Faster dependency fetches and reduced CI/CD wait time, resulting in faster builds and faster deployments.

Cost reduction

Cut backend requests and data transfer by 75%+ with programmable caching. Reduce infrastructure and license spend.

Reduce developer friction

Meet the latency and toolchain needs of localized development teams.

Remove vendor lock-in

Swap out registries and route artifact requests across vendor platforms, removing vendor lock-in.

Better observability

OpenTelemetry metrics and traces provide visibility across all registry types.

Resilience and security

Keep serving artifacts even during origin downtime, with zero-error failover and RBAC mirroring for private artifacts.

Modern DevOps Bottlenecks

Whether you are developing code or deploying applications, there are dependencies that need to be resolved.

These dependencies range from modules and packages for programming languages and development frameworks to runtime packages for Linux, Docker images, and artifacts for Kubernetes orchestration.

They are collectively referred to as artifacts.

These artifacts are needed throughout the workflow:

  • Developers pull in artifacts on their computer during development.
  • Continuous Integration workers need access to these artifacts to build and test the software.
  • Continuous Delivery workers need access to these artifacts to deploy the software.

Repository Limits

CPU/RAM constraints, SaaS egress fees, public repo rate limits.

Expanding Artifact Sizes

Increasing build/fetch times as dependencies grow.

Distributed Teams

Localized teams with unique latency and toolchain needs.

Vendor Lock-In

Rising costs, forced upgrades, limited flexibility.

The higher the concurrency and the bigger the artifacts, the greater the pressure on the artifact registries. At scale, dependency resolution becomes a bottleneck that slows down CI/CD pipelines, increases overall bandwidth consumption, and adds developer friction.

Getting started

Getting started with Varnish Orca is easy: we provide a Docker image, a Helm chart, and DEB and RPM packages for Linux. Varnish Orca is configured through a config.yaml configuration file that describes where to find the registries for each artifact type.

Installing Varnish Orca

Pull in the Docker image:

docker pull varnish/orca --platform linux/amd64

Run the Docker container with the standard configuration:

docker run --rm -p 80:80 --name orca --platform linux/amd64 varnish/orca

Run the Docker container with a custom configuration file:

docker run --rm -p 80:80 --name orca --platform linux/amd64 -v $(pwd)/example-config.yaml:/app/config.yaml:ro varnish/orca --config /app/config.yaml

This mounts example-config.yaml into the Docker container to set up and configure Orca, and makes a Virtual Registry Manager available on port 80 of your local machine.
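
As a quick smoke test, and assuming *.localhost hostnames resolve to 127.0.0.1 and your mounted configuration routes docker.* requests to Docker Hub like the standard config.yaml does, you can pull an image through the local endpoint:

docker pull docker.localhost/library/ubuntu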

Learn more about deploying Orca with Docker on the documentation site.

Pull in the Helm chart:

helm pull oci://docker.io/varnish/orca-chart

Create a values.yaml file:

orca:
  varnish:
    http:
    - port: 80
  virtual_registry:
    registries:
    - name: dockerhub
      default: true
      remotes:
      - url: https://docker.io
      - url: https://mirror.gcr.io
    - name: quay
      remotes:
      - url: https://quay.io
    - name: ghcr
      remotes:
      - url: https://ghcr.io
    - name: k8s
      remotes:
      - url: https://registry.k8s.io
    - name: npmjs
      remotes:
      - url: https://registry.npmjs.org
    - name: go
      remotes:
      - url: https://proxy.golang.org
    - name: github
      remotes:
      - url: https://github.com
    - name: gitlab
      remotes:
      - url: https://gitlab.com

Install the Helm chart:

helm install varnish-orca -f values.yaml oci://docker.io/varnish/orca-chart

Run kubectl get svc varnish-orca-orca-chart to get the cluster IP address of the Orca service. Other service types and Ingress are also supported.
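
For a quick test from outside the cluster, you can forward the service to a local port with kubectl (a sketch, assuming the default service name shown above); in-cluster clients can simply use the service's cluster DNS name as their registry endpoint:

kubectl port-forward svc/varnish-orca-orca-chart 8080:80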

Learn more about deploying Orca on Kubernetes on the documentation site.

Update the package list:

sudo apt-get update

Configure the package registry to install Orca on Debian and Ubuntu Linux servers:

curl -s https://packagecloud.io/install/repositories/varnishplus/60-premium/script.deb.sh | sudo bash

Install Varnish Supervisor:

sudo apt-get install -y varnish-supervisor

Edit /etc/varnish-supervisor/default.yaml and configure the virtual_registry section, as shown in the sketch below:

sudo vim /etc/varnish-supervisor/default.yaml
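
A minimal sketch of the virtual_registry section, assuming default.yaml follows the same structure as the standard config.yaml shown in the Configuration file section below:

virtual_registry:
  registries:
  # Default registry, used for Docker image requests
  - name: dockerhub
    default: true
    remotes:
    - url: https://docker.io
    - url: https://mirror.gcr.io
  # NPM packages
  - name: npmjs
    remotes:
    - url: https://registry.npmjs.org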

Restart Varnish Supervisor:

sudo systemctl restart varnish-supervisor

Learn more about deploying Orca on Debian or Ubuntu on the documentation site.

Configure the package registry to install Orca on RHEL servers:

curl -s https://packagecloud.io/install/repositories/varnishplus/60-premium/script.rpm.sh | sudo bash

Install Varnish Supervisor:

sudo yum install -y varnish-supervisor

Edit /etc/varnish-supervisor/default.yaml and configure the virtual_registry section (the same structure as in the Debian and Ubuntu steps above):

sudo vim /etc/varnish-supervisor/default.yaml

Restart Varnish Supervisor:

sudo systemctl restart varnish-supervisor

Learn more about deploying Orca on RHEL servers on the documentation site.

Configuration file

The standard config.yaml configuration file for Orca:

varnish:
  http:
  - port: 80
virtual_registry:
  registries:
  - name: dockerhub
    default: true
    remotes:
    - url: https://docker.io
    - url: https://mirror.gcr.io
  - name: quay
    remotes:
    - url: https://quay.io
  - name: ghcr
    remotes:
    - url: https://ghcr.io
  - name: k8s
    remotes:
    - url: https://registry.k8s.io
  - name: npmjs
    remotes:
    - url: https://registry.npmjs.org
  - name: go
    remotes:
    - url: https://proxy.golang.org
  - name: github
    remotes:
    - url: https://github.com
  - name: gitlab
    remotes:
    - url: https://gitlab.com

Using Varnish Orca

When you deploy Orca on one of the supported platforms, you can pull your dependencies through Orca by pointing your package manager at a Varnish Orca endpoint.

The usage examples below assume that *.localhost hostnames are automatically resolved to 127.0.0.1. Because each subdomain identifies the registry type, Orca knows which artifact type is requested and where the request should be proxied.
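
If your system does not resolve *.localhost hostnames automatically, one option is to map the subdomains used in the examples below to 127.0.0.1 in /etc/hosts, for example:

echo "127.0.0.1 docker.localhost npmjs.localhost go.localhost ghcr.localhost" | sudo tee -a /etc/hosts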

Accelerate Docker images

Here's an example of pulling the ubuntu Docker image through Orca:

docker pull docker.localhost/library/ubuntu

Accelerate NPM packages

Here's an example of pulling the express NPM package through Orca:

npm install express --registry=http://npmjs.localhost
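
To avoid passing --registry on every install, you can also persist the setting in your npm configuration:

npm config set registry http://npmjs.localhost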

Accelerate Go modules

This example pulls the rsc.io/quote Go module through Orca from within an existing Go module:

GOPROXY=http://go.localhost go mod tidy
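
To make the proxy setting persistent for every Go command instead of setting it per invocation, you can write it to your Go environment:

go env -w GOPROXY=http://go.localhost,direct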

Accelerate Helm charts (OCI)

The following example pulls the prometheus Helm Chart (hosted on the GitHub Container Registry) through Orca using OCI:

helm pull oci://ghcr.localhost/prometheus-community/charts/prometheus --plain-http
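
You can also install the chart directly from its OCI reference through Orca, assuming a Helm version that supports OCI references and the --plain-http flag:

helm install prometheus oci://ghcr.localhost/prometheus-community/charts/prometheus --plain-http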

More tutorials

Want to learn more about using Orca for your specific package types? Have a look at our tutorials.

Features

Orca is packaged as a free product and comes with a built-in license that offers a basic set of features. The Premium version of Orca unlocks the rest of the feature set; upgrading only requires a Premium license.

Here's an overview of the feature set for both Orca Free and Orca Premium:

Feature                      Orca Free    Orca Premium
Public registry cache            ✓             ✓
End-to-end TLS                   ✓             ✓
In-memory cache                  ✓             ✓
OpenTelemetry metrics            ✓             ✓
OpenTelemetry tracing            ✓             ✓
Private registry caching                       ✓
Persistent caching                             ✓
Support                                        ✓
Programmability                                ✓
Git mirror                                     ✓

Request an Orca Premium trial license

Want to cache private packages while maintaining the registries’ access controls? Looking to extend your cache storage and persist cached artifacts on disk?

Test the extra features and take Varnish Orca Premium for a spin by requesting a trial license.

Fill in the form and you'll receive an e-mail with your trial license shortly.