Setting up Varnish Orca as a bootstrap cache for Kubernetes

Configure Varnish Orca as a pull-through cache for Kubernetes clusters, allowing nodes and Pods to fetch container images and packages on demand, including when scaling.

Prerequisites

  • A Kubernetes cluster (this guide uses kind)
  • A default Varnish Orca instance with an active license

Step 1: Node-Level Mirroring (Container Runtime Config)

Configure the container runtime (containerd) on your Kubernetes nodes to use Orca as a pull-through mirror for Docker Hub.

Create the hosts.toml Configuration

On your host (or the machine where you build your cluster), create the registry configuration at /etc/containerd/certs.d/docker.io/hosts.toml. Replace <ORCA_TARGET> with the IP or hostname of your Varnish Orca instance:

server = "https://registry-1.docker.io"

# You can use a raw IP (e.g., 172.17.0.1) or a resolvable hostname (e.g., orca.internal)
[host."http://<ORCA_TARGET>"]
  capabilities = ["pull", "resolve"]
  skip_verify = true

[host."http://<ORCA_TARGET>".header]
  Host = ["dockerhub.localhost"]
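
If you script your node setup, the file above can be generated in one step. This is a sketch: the ORCA_TARGET value is a placeholder, and the local certs.d directory stands in for /etc/containerd/certs.d on a real node.

```shell
#!/bin/sh
# Sketch: generate hosts.toml for the Docker Hub mirror.
# ORCA_TARGET is a placeholder -- set it to your Orca IP or hostname.
ORCA_TARGET="172.17.0.1"
# Local staging directory; on a real node this is /etc/containerd/certs.d/docker.io
CONF_DIR="certs.d/docker.io"

mkdir -p "$CONF_DIR"
cat > "$CONF_DIR/hosts.toml" <<EOF
server = "https://registry-1.docker.io"

[host."http://${ORCA_TARGET}"]
  capabilities = ["pull", "resolve"]
  skip_verify = true

[host."http://${ORCA_TARGET}".header]
  Host = ["dockerhub.localhost"]
EOF

grep -q "$ORCA_TARGET" "$CONF_DIR/hosts.toml" && echo "hosts.toml written"
```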

Note on Hostnames: If you use a hostname instead of an IP, ensure your Kubernetes nodes can resolve it. For local kind clusters, you may need to manually add an entry to the node’s /etc/hosts using docker exec <node-name> sh -c "echo '<IP> <HOSTNAME>' >> /etc/hosts".
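
For multi-node clusters, the same /etc/hosts entry can be added to every kind node in a loop. The cluster name (orca-lab) and the address values below are assumptions to adapt:

```shell
# Add a hosts entry on every node of the "orca-lab" kind cluster.
# 172.17.0.1 and orca.internal are placeholder values.
for node in $(kind get nodes --name orca-lab); do
  docker exec "$node" sh -c "echo '172.17.0.1 orca.internal' >> /etc/hosts"
done
```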

Provision the Cluster (Kind Example)

We use kind to create and provision a local Kubernetes cluster.

Create a kind-config.yaml to ensure your nodes recognize the configuration directory created above.

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
containerdConfigPatches:
- |-
  [plugins."io.containerd.grpc.v1.cri".registry]
    config_path = "/etc/containerd/certs.d"  
nodes:
- role: control-plane
  extraMounts:
  - hostPath: /etc/containerd/certs.d
    containerPath: /etc/containerd/certs.d

Create a new local Kubernetes cluster and apply these settings:

kind create cluster --name orca-lab --config kind-config.yaml
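
Once the cluster is up, you can confirm the mirror configuration actually landed inside the node. The node name below assumes the orca-lab cluster created above:

```shell
# kind names the control-plane container <cluster>-control-plane.
docker exec orca-lab-control-plane \
  cat /etc/containerd/certs.d/docker.io/hosts.toml
```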

Step 2: Cluster-Level Routing (Service & Endpoint Bridge)

To allow Pods to use friendly names (like http://npmjs) without special headers, we create a Kubernetes Service and Endpoint for each registry.

The Multi-Registry Bridge

Create orca-bridge.yaml. Replace <ORCA_IP> with your Orca machine’s IP.

# Standard pattern: one Service + one Endpoints object per registry
---
apiVersion: v1
kind: Service
metadata:
  name: npmjs
spec:
  ports: [{port: 80}]
---
apiVersion: v1
kind: Endpoints
metadata:
  name: npmjs
subsets:
  - addresses: [{ip: <ORCA_IP>}]
    ports: [{port: 80}]
---
apiVersion: v1
kind: Service
metadata:
  name: dockerhub
spec:
  ports: [{port: 80}]
---
apiVersion: v1
kind: Endpoints
metadata:
  name: dockerhub
subsets:
  - addresses: [{ip: <ORCA_IP>}]
    ports: [{port: 80}]
---
apiVersion: v1
kind: Service
metadata:
  name: github
spec:
  ports: [{port: 80}]
---
apiVersion: v1
kind: Endpoints
metadata:
  name: github
subsets:
  - addresses: [{ip: <ORCA_IP>}]
    ports: [{port: 80}]

Then apply this with:

kubectl apply -f orca-bridge.yaml
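
Before moving on, it can help to confirm that each Service has a matching Endpoints object pointing at the Orca IP:

```shell
# Each entry should list <ORCA_IP>:80 under ENDPOINTS.
kubectl get services,endpoints npmjs dockerhub github
```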

Step 3: Verification

Once the configurations are applied, we must verify both the Node-Level (Infrastructure) and Cluster-Level (Application) caching paths.

1. Verify Node-Level Mirroring (Container Images)

This test confirms that the Kubernetes Node is successfully using Orca as a mirror when pulling container images.

  1. On the Orca Server, monitor the real-time logs for Docker traffic:

    sudo varnishlog -g request -q "ReqHeader ~ 'Host: dockerhub'"
    
  2. From your local machine, deploy a test pod using an Nginx image (pick a tag not already present on the node, so a pull actually occurs):

    kubectl run infra-test --image=nginx
    
  3. The Success Signal: Check your varnishlog output. You should see Orca intercepting the request.

2. Verify Cluster-Level Routing (Application Packages)

This test confirms that Pods can resolve external registries via internal Kubernetes DNS names (the “Bridge”) and receive cached content via the Service/Endpoint.

  1. Launch a temporary tester pod:

    kubectl run tester --image=alpine -- sleep infinity
    
  2. Enter the pod’s shell:

    kubectl exec -it tester -- sh
    
  3. Test a GitHub fetch: GitHub is frequently used for scripts and source archives. Test the route by fetching a source archive through the Orca bridge:

    # Inside the pod:
    apk add curl
    curl -I http://github/varnish/varnish-modules/archive/master.tar.gz
    
  4. The Success Signal: The response should include headers such as:

    • HTTP/1.1 200 OK
    • Via: 1.1 varnish
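
The same bridge can be exercised for npm. Assuming your Orca instance routes the npmjs service name to the public npm registry (registry.npmjs.org), a package metadata fetch from inside the tester pod might look like:

```shell
# Inside the tester pod (curl was installed in the step above).
# /lodash mirrors the public npm registry's package metadata endpoint.
curl -sI http://npmjs/lodash | head -n 1
```

A 200 status here, together with a Via: 1.1 varnish header, indicates the request was served through Orca.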