Varnish built-in VCL

The built-in VCL contains a set of rules that will be executed by default, even if they are not specified in your own VCL file. It is possible to bypass the built-in VCL by issuing a return statement in your VCL code.

The built-in VCL provides much of the safe-by-default behavior of Varnish, so be very careful if you decide to skip it.
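As a minimal sketch (the backend address is a made-up assumption), here's a VCL file whose vcl_recv ends in a return statement, which skips the built-in vcl_recv entirely:

```vcl
vcl 4.1;

# Hypothetical origin server; adjust to your setup.
backend default {
    .host = "127.0.0.1";
    .port = "8080";
}

sub vcl_recv {
    # Because this subroutine returns, the built-in vcl_recv never
    # runs: requests carrying cookies or an Authorization header
    # will now be looked up in cache as well. Handle with care.
    return (hash);
}
```

Without the return statement, your custom vcl_recv would fall through to the built-in logic, which is usually the safer choice.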

The built-in VCL defines the behavior for various states of the Varnish Finite State Machine, each of which comes with its own subroutine. This tutorial will explain what the behavior in each of these subroutines means.

1. vcl_recv

The vcl_recv subroutine is executed when Varnish receives a request from the client. Its VCL logic decides whether or not the request is considered cacheable.

Here’s a breakdown of that logic:

Don’t allow PRI requests

When you receive a request that has the PRI request method, it means that an HTTP/2 connection preface was received while Varnish wasn’t configured to handle HTTP/2. This is not supposed to happen, and an HTTP 405 Method Not Allowed error is synthetically returned:

if (req.method == "PRI") {
    /* This will never happen in properly formed traffic (see: RFC7540) */
    return (synth(405));
}

Enforce the host header

When a top-level HTTP/1.1 request is received that does not have a Host header, an HTTP 400 Bad Request error is returned. The rules of the protocol state that every HTTP/1.1 request must have a Host header:

if (!req.http.host &&
    req.esi_level == 0 &&
    req.proto ~ "^(?i)HTTP/1.1") {
    /* In HTTP/1.1, Host is required. */
    return (synth(400));
}
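A common custom extension at this point is normalizing the Host header so equivalent hostnames share one set of cached objects. This is a sketch, not part of the built-in VCL; the www-stripping rule is just an illustration:

```vcl
sub vcl_recv {
    if (req.http.host) {
        # Strip an explicit port and a leading "www." so that
        # www.example.com:80 and example.com hit the same objects.
        set req.http.host = regsub(req.http.host, ":[0-9]+$", "");
        set req.http.host = regsub(req.http.host, "^(?i)www\.", "");
    }
}
```

Because this subroutine doesn't return, the built-in vcl_recv still runs afterwards with the normalized header in place.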

Invalid request methods

There is a series of HTTP request methods that Varnish accepts. If the request method doesn’t match this list, return(pipe) is executed:

if (req.method != "GET" &&
    req.method != "HEAD" &&
    req.method != "PUT" &&
    req.method != "POST" &&
    req.method != "TRACE" &&
    req.method != "OPTIONS" &&
    req.method != "DELETE" &&
    req.method != "PATCH") {
    /* Non-RFC2616 or CONNECT which is weird. */
    return (pipe);
}

Uncacheable request methods

Varnish follows HTTP best practices. When it comes to caching, only safe request methods may be cached: methods that don’t change the state of the resource on the origin.

As a result, GET and HEAD are the only two cacheable request methods:

if (req.method != "GET" && req.method != "HEAD") {
    /* We only deal with GET and HEAD by default */
    return (pass);
}

So if the request method is, for example, POST, the return(pass) logic kicks in and you’ll be sent to the vcl_pass subroutine. Requests that end up in vcl_pass bypass the cache and result in an uncacheable backend fetch.

Authorization headers and cookies are not cacheable

Stateful content is hard to cache: because of personalization, a stateful request may cause too many cache variations of an object.

Varnish’s approach is cautious and conservative, and that is reflected in the built-in VCL:

if (req.http.Authorization || req.http.Cookie) {
    /* Not cacheable by default */
    return (pass);
}

This means that any request containing a Cookie header, or an Authorization header will result in a return(pass), which bypasses the cache.
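If you know certain requests don't need cookies, you can remove the header before the built-in logic sees it, making those requests cacheable again. A hypothetical sketch for static assets (the extension list is an assumption):

```vcl
sub vcl_recv {
    # Static files are the same for every user, so the Cookie
    # header only hurts cacheability here.
    if (req.url ~ "\.(css|js|png|jpg|svg|woff2?)(\?.*)?$") {
        unset req.http.Cookie;
    }
}
```

Since this subroutine doesn't return, the built-in vcl_recv still runs afterwards; with the Cookie header gone, these requests now reach return (hash).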

Cacheable content

If your request uses a valid and cacheable request method, contains a Host header, and carries no Cookie or Authorization headers, the request is cacheable and we can look it up in cache:

return (hash);

By performing return (hash);, the logic transitions to the vcl_hash subroutine where the object’s hash is composed and looked up in the cache.

2. vcl_pipe

The vcl_pipe subroutine is reached when return (pipe) is called from another subroutine.

Piping means that Varnish no longer considers this an HTTP request. Instead, it treats the connection as an opaque TCP stream and shuffles the payload over the wire without further interference. If dealing with HTTP requests, always consider using a pass instead of a pipe: piping relinquishes your ability to manipulate the transaction in later steps, and your logs will be blind to the backend response.

Here’s the built-in VCL for vcl_pipe, which honestly isn’t that exciting:

sub vcl_pipe {
    # By default Connection: close is set on all piped requests, to stop
    # connection reuse from sending future requests directly to the
    # (potentially) wrong backend. If you do want this to happen, you can undo
    # it here.
    # unset bereq.http.connection;
    return (pipe);
}

3. vcl_pass

When you reach the vcl_pass subroutine, it means that the requested content shouldn’t be served from cache. You reach this subroutine by calling return (pass);.

As you can see in the VCL code below, content will be directly fetched from the origin server:

sub vcl_pass {
    return (fetch);
}

The response of a passed request will not be stored in the cache, whereas a regular cache miss would attempt to store the object in the cache.

4. vcl_hash

When return (hash); is called in vcl_recv, the object’s hash key is composed in vcl_hash and eventually the hash is used to look the object up in the cache:

sub vcl_hash {
    hash_data(req.url);
    if (req.http.host) {
        hash_data(req.http.host);
    } else {
        hash_data(server.ip);
    }
    return (lookup);
}

This subroutine uses the hash_data() function to add data to the hash of the object that is requested.

By default a cached object is identified by the request URL and the Host header. This is reflected in the VCL code. The hash_data(req.url); function will add the request URL to the hash and if a Host header is found, the hash_data(req.http.host); function will ensure that the Host header is also added to the hash.

In the unlikely case that no Host header is found, the hash_data(server.ip); function will add the server IP to the hash instead.

Once the hash is composed, return (lookup) is used to look the object up in the cache. When the result is a cache hit, we transition to vcl_hit. In case of a cache miss, we transition to vcl_miss.
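You can extend the cache key with extra dimensions of your own. Because the custom subroutine below doesn't call return (lookup), the built-in vcl_hash still runs afterwards and adds the URL and Host header; the Accept-Language variation is just an illustrative assumption:

```vcl
sub vcl_hash {
    # Hypothetical extra variation: serve a different cached copy
    # per language. Consider normalizing the header value first,
    # or the number of variations may explode.
    if (req.http.Accept-Language) {
        hash_data(req.http.Accept-Language);
    }
}
```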

5. vcl_purge

When content needs to be removed from the cache, you can call return (purge); in your VCL code to make it happen. However, the built-in VCL doesn’t provide a standard way to trigger a purge.

Unless you implement custom purging logic that performs a return (purge);, the following VCL code will never be reached:

sub vcl_purge {
    return (synth(200, "Purged"));
}

What we learn from the vcl_purge implementation is that a synthetic HTTP 200 Purged response is returned to the client. The output template for this synthetic output is defined in vcl_synth.
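A conventional way to wire this up, closely following the example in the Varnish documentation, is to accept a custom PURGE request method in vcl_recv and restrict it with an ACL (the address in the ACL is an assumption):

```vcl
# Only trusted hosts may purge.
acl purgers {
    "127.0.0.1";
}

sub vcl_recv {
    if (req.method == "PURGE") {
        if (client.ip !~ purgers) {
            return (synth(405, "Purging not allowed"));
        }
        # Transitions to vcl_purge, which answers "200 Purged".
        return (purge);
    }
}
```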

6. vcl_hit

The use case for the vcl_hit subroutine is pretty obvious: dealing with a cache hit.

Here’s the VCL code:

sub vcl_hit {
    if (obj.ttl >= 0s) {
        // A pure unadulterated hit, deliver it
        return (deliver);
    }
    if (obj.ttl + obj.grace > 0s) {
        // Object is in grace, deliver it
        // Automatically triggers a background fetch
        return (deliver);
    }
    // fetch & deliver once we get the result
    return (miss);
}

What we learn from this code is that the built-in VCL doesn’t return a cache hit unconditionally. When an object is found in the cache, Varnish will check the freshness and decide how to deliver the object.

Fresh content

As long as the remaining TTL of an object, represented by the obj.ttl variable, is zero or greater, the content is considered fresh:

if (obj.ttl >= 0s) {
    // A pure unadulterated hit, deliver it
    return (deliver);
}

This means the cached object can be immediately delivered to the requesting client. The return (deliver); statement will trigger this behavior and will cause a transition to the vcl_deliver subroutine.

Stale content

When the content has expired and the remaining TTL is no longer greater than zero, the obvious outcome would be a cache miss. But if the object has grace time left, Varnish will serve the stale content while asynchronously fetching the content from the origin web server:

if (obj.ttl + obj.grace > 0s) {
    // Object is in grace, deliver it
    // Automatically triggers a background fetch
    return (deliver);
}

And when the object has run out of grace as well, a synchronous fetch is done through return (miss):

// fetch & deliver once we get the result
return (miss);
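The amount of grace an object carries is decided on the backend side. As a sketch (the one-hour value is an arbitrary assumption), you could extend it in vcl_backend_response:

```vcl
sub vcl_backend_response {
    # Keep objects for an hour past their TTL so stale content
    # can be served while a background fetch revalidates them.
    set beresp.grace = 1h;
}
```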

7. vcl_miss

The vcl_miss subroutine is pretty straightforward: it is accessed when the requested object couldn’t be found in the cache or when the object has expired and is out of grace.

When a cache miss occurs, the content will be fetched from the origin server through the return(fetch) statement. This results in a transition to the vcl_backend_fetch subroutine.

Here’s the built-in VCL code for the vcl_miss subroutine:

sub vcl_miss {
    return (fetch);
}

8. vcl_deliver

The vcl_deliver subroutine is used to return the HTTP response to the requesting client.

Whether a cache hit, cache miss, cache pass or synthetic response takes place, vcl_deliver is responsible for delivering the content to the client.

Here’s the built-in VCL code for the vcl_deliver subroutine:

sub vcl_deliver {
    return (deliver);
}

9. vcl_synth

Not all content returned by Varnish originates from the origin server. In certain cases Varnish will return output that it generated itself. We call these synthetic responses.

Whenever you call return(synth(...));, you transition directly to the vcl_synth subroutine where the response template is composed.

The return(synth(...)); statement requires a status code and an optional response reason.

In the case of return (synth(405)); a synthetic HTTP 405 Method Not Allowed response is returned. If you add a second argument to synth(...), you can override the reason phrase.
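As a hypothetical illustration (the URL and status are made up), a custom vcl_recv could trigger such a synthetic response like this:

```vcl
sub vcl_recv {
    if (req.url == "/ping") {
        # The second argument overrides the reason phrase:
        # the client receives "200 OK from Varnish".
        return (synth(200, "OK from Varnish"));
    }
}
```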

Here’s the built-in VCL code for the vcl_synth subroutine:

sub vcl_synth {
    set resp.http.Content-Type = "text/html; charset=utf-8";
    set resp.http.Retry-After = "5";
    set resp.body = {"<!DOCTYPE html>
<html>
  <head>
    <title>"} + resp.status + " " + resp.reason + {"</title>
  </head>
  <body>
    <h1>Error "} + resp.status + " " + resp.reason + {"</h1>
    <p>"} + resp.reason + {"</p>
    <h3>Guru Meditation:</h3>
    <p>XID: "} + req.xid + {"</p>
    <hr>
    <p>Varnish cache server</p>
  </body>
</html>
"};
    return (deliver);
}

The standard output template that is created in vcl_synth sets the following headers:

  • Content-Type: text/html; charset=utf-8
  • Retry-After: 5

The resp.status variable is based on the first argument of synth(...). When return(synth(405)); is called, resp.status will equal 405.

When return(synth(200, "Purged")); is called from the vcl_purge subroutine the resp.status variable is 200 and the resp.reason variable is Purged. The HTML output for this call would be the following:

<!DOCTYPE html>
<html>
  <head>
    <title>200 Purged</title>
  </head>
  <body>
    <h1>Error 200 Purged</h1>
    <p>Purged</p>
    <h3>Guru Meditation:</h3>
    <p>XID: 123456</p>
    <hr>
    <p>Varnish cache server</p>
  </body>
</html>

10. vcl_backend_fetch

The vcl_backend_fetch subroutine is a backend-side subroutine that is only called when the object cannot be served from the cache. This happens when a cache miss occurs or when a request bypasses the cache.

This is reflected in the built-in VCL: both vcl_miss and vcl_pass will perform a return(fetch), which triggers vcl_backend_fetch.

vcl_backend_fetch will convert the request, represented by the req object, into a backend request, represented by the bereq object.

Here’s the built-in VCL code for vcl_backend_fetch:

sub vcl_backend_fetch {
    if (bereq.method == "GET") {
        unset bereq.body;
    }
    return (fetch);
}

Besides the return(fetch); there’s not a lot happening. The only notable detail is that the request body is removed for GET requests.

11. vcl_backend_response

When your origin server responds to a backend request issued by vcl_backend_fetch, you reach the vcl_backend_response subroutine. This backend side subroutine provides access to the backend response through the beresp object and also contains rules to decide whether or not the backend response should be stored in the cache.

Here’s the built-in VCL code for the vcl_backend_response subroutine:

sub vcl_backend_response {
    if (bereq.uncacheable) {
        return (deliver);
    } else if (beresp.ttl <= 0s ||
      beresp.http.Set-Cookie ||
      beresp.http.Surrogate-control ~ "(?i)no-store" ||
      (!beresp.http.Surrogate-Control &&
        beresp.http.Cache-Control ~ "(?i:no-cache|no-store|private)") ||
      beresp.http.Vary == "*") {
        # Mark as "Hit-For-Miss" for the next 2 minutes
        set beresp.ttl = 120s;
        set beresp.uncacheable = true;
    }
    return (deliver);
}

We will break down the code and explain the various aspects in the next sections.

Hit-for-miss

A key aspect of vcl_backend_response is deciding whether or not HTTP responses can be stored in the cache. Depending on specific HTTP response headers, we can decide not to cache.

Instead of just deciding not to store the object in the cache, we actually cache the decision not to cache. We keep track of uncacheable objects to ensure that the next request for this HTTP resource is not added to the waiting list for request coalescing purposes.

Because we know ahead of time that the response is not cacheable, the request will never be satisfied by request coalescing. Adding requests for uncacheable content to the waiting list will trigger serialization, which means waiting list items are processed serially rather than in parallel. This can result in a major performance degradation as the waiting list grows.

By storing uncacheable responses as Hit-For-Miss objects, we can bypass the waiting list and immediately fetch the content from the origin server. We do this through set beresp.uncacheable = true;.

And because set beresp.ttl = 120s; is executed, a Hit-For-Miss object will be available for 2 minutes. After that, the cacheability of the response is re-evaluated. Hit-For-Miss objects are also invalidated when the next backend response is considered cacheable.
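Recent Varnish versions also offer a related mechanism you can opt into from your own VCL: returning pass(duration) from vcl_backend_response creates a Hit-For-Pass object, which likewise bypasses the waiting list but keeps treating subsequent fetches as passes rather than misses. A sketch:

```vcl
sub vcl_backend_response {
    if (beresp.http.Set-Cookie) {
        # Remember for 2 minutes that this resource is a pass;
        # subsequent requests skip the waiting list entirely.
        return (pass(120s));
    }
}
```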

Immediately deliver uncacheable content

The built-in VCL for vcl_backend_response checks the bereq.uncacheable variable. If a return(pass) occurred or the object was marked as Hit-For-Miss, the backend response is immediately delivered to the client, without evaluating the cacheability of the response.

Here’s the VCL code that illustrates this:

if (bereq.uncacheable) {
    return (deliver);
}

Zero TTL

When Varnish receives a response from the backend and it receives headers that would imply a zero TTL, the built-in VCL will decide to create a Hit-For-Miss object for this with a lifetime of 2 minutes. As mentioned earlier, this will ensure the waiting list is bypassed and the request is directly sent to the origin server.

Here’s a simplified version of the VCL that illustrates this:

if (beresp.ttl <= 0s) {
    # Mark as "Hit-For-Miss" for the next 2 minutes
    set beresp.ttl = 120s;
    set beresp.uncacheable = true;
}

Any Cache-Control header containing a zero max-age or s-maxage, or an Expires header with a timestamp in the past, will result in beresp.ttl being zero. Here are some examples of such headers:

  • Cache-Control: max-age=0
  • Cache-Control: s-maxage=0
  • Expires: Wed, 29 Sep 2021 15:36:22 GMT

Set-Cookie headers

When a Set-Cookie header is set by the origin, it implies a state change. This means we should not cache this response, because we would risk serving this cookie value to all users hitting the cached object.

Here’s the simplified VCL code that also creates a Hit-For-Miss object:

if (beresp.http.Set-Cookie) {
    # Mark as "Hit-For-Miss" for the next 2 minutes
    set beresp.ttl = 120s;
    set beresp.uncacheable = true;
}

Surrogate-Control no-store

Because Varnish supports HTTP surrogates, the built-in VCL respects the Surrogate-Control header. If it contains a no-store value, Varnish will not cache that response and will create a Hit-For-Miss object.

Here’s the simplified VCL code for this:

if (beresp.http.Surrogate-control ~ "(?i)no-store") {
    # Mark as "Hit-For-Miss" for the next 2 minutes
    set beresp.ttl = 120s;
    set beresp.uncacheable = true;
}

Cache-Control no-cache, no-store & private

If no Surrogate-Control header is set, the built-in VCL will consider the Cache-Control header and look for directives that imply uncacheable behavior.

Directives like no-cache, no-store or private would indicate that the resource is not cacheable.

Here’s the simplified VCL code that checks for these directives:

if (!beresp.http.Surrogate-Control && 
  beresp.http.Cache-Control ~ "(?i:no-cache|no-store|private)") {
    # Mark as "Hit-For-Miss" for the next 2 minutes
    set beresp.ttl = 120s;
    set beresp.uncacheable = true;
}

So if the following HTTP response header appears, the built-in VCL will not cache the response and will create a Hit-For-Miss object:

Cache-Control: private, no-cache, no-store

Variations of this header that include either no-cache, no-store or private will result in the same behavior.
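You can overrule this behavior in your own VCL when you know better than the origin. A hypothetical sketch (the URL and 30-second TTL are assumptions):

```vcl
sub vcl_backend_response {
    # Cache this endpoint briefly even though the origin says
    # "no-cache"; returning deliver here skips the built-in checks.
    if (bereq.url == "/api/feed" &&
        beresp.http.Cache-Control ~ "no-cache") {
        set beresp.ttl = 30s;
        return (deliver);
    }
}
```

The explicit return (deliver); matters: without it, the built-in vcl_backend_response would still run and turn the response into a Hit-For-Miss object.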

Don’t perform wildcard cache variations

The Vary HTTP response header instructs caches to create cache variations of an object based on the values of the request headers it names. The Vary header’s value must be the name of a request header.

If for example Varnish receives a response containing the Vary: Accept-Language header, it will create a cache variation for this based on the value of the Accept-Language header.

This is a delicate balancing act between offering enough cache variations per cached resource and maintaining a high enough hit rate.

However, if your origin server returns Vary: * to create cache variations based on every request header, the built-in VCL will decide not to store this resource in the cache. A Hit-For-Miss object will be created to mark it as uncacheable.

Here’s the simplified VCL code for this behavior:

if (beresp.http.Vary == "*") {
    # Mark as "Hit-For-Miss" for the next 2 minutes
    set beresp.ttl = 120s;
    set beresp.uncacheable = true;
}

Cached content

If after all these checks your response didn’t result in the creation of a Hit-For-Miss object, we can conclude that the response is cacheable.

The object will be stored in the cache and will contain either the standard TTL, or a TTL that was assigned by the origin server through a Cache-Control or Expires header.

If the next client request takes place within the lifetime of the object, what was originally a cache miss, will now become a cache hit.

12. vcl_backend_error

The vcl_backend_error subroutine is reached when an attempt to fetch content in vcl_backend_fetch fails, or when the response from vcl_backend_response is considered erroneous.

Just like vcl_synth, the vcl_backend_error subroutine will prepare a synthetic output template using HTTP status information.

Here’s what the VCL code looks like and it’s remarkably similar to vcl_synth:

sub vcl_backend_error {
    set beresp.http.Content-Type = "text/html; charset=utf-8";
    set beresp.http.Retry-After = "5";
    set beresp.body = {"<!DOCTYPE html>
<html>
  <head>
    <title>"} + beresp.status + " " + beresp.reason + {"</title>
  </head>
  <body>
    <h1>Error "} + beresp.status + " " + beresp.reason + {"</h1>
    <p>"} + beresp.reason + {"</p>
    <h3>Guru Meditation:</h3>
    <p>XID: "} + bereq.xid + {"</p>
    <hr>
    <p>Varnish cache server</p>
  </body>
</html>
"};
    return (deliver);
}

This VCL code sets the following response headers:

  • Content-Type: text/html; charset=utf-8
  • Retry-After: 5

The beresp.status variable contains the HTTP status code that was returned by the origin server when the error was triggered. When a Varnish backend fetch fails, the beresp.status value will be 503.

The beresp.reason variable contains the reason phrase for the corresponding error. It is either returned by the origin server, or it is Backend fetch failed when Varnish failed to fetch the content.

The bereq.xid variable contains the transaction ID, which can be looked up in the Varnish Shared Memory Log.

Here’s an example of HTML output that is returned by vcl_backend_error:

<!DOCTYPE html>
<html>
  <head>
    <title>503 Backend fetch failed</title>
  </head>
  <body>
    <h1>Error 503 Backend fetch failed</h1>
    <p>Backend fetch failed</p>
    <h3>Guru Meditation:</h3>
    <p>XID: 123456</p>
    <hr>
    <p>Varnish cache server</p>
  </body>
</html>

13. vcl_init

The vcl_init subroutine is called when the VCL code is loaded. This subroutine is typically used to initialize Varnish Modules (VMODs).

Here’s the built-in VCL code for vcl_init:

sub vcl_init {
    return (ok);
}
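As an example of what typically goes here, a sketch initializing the bundled directors VMOD (the backend addresses are made up):

```vcl
vcl 4.1;

import directors;

backend web1 { .host = "192.0.2.11"; .port = "8080"; }
backend web2 { .host = "192.0.2.12"; .port = "8080"; }

sub vcl_init {
    # Build a round-robin director over both origin servers.
    new cluster = directors.round_robin();
    cluster.add_backend(web1);
    cluster.add_backend(web2);
}

sub vcl_recv {
    # Route every request through the director.
    set req.backend_hint = cluster.backend();
}
```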

14. vcl_fini

When your VCL configuration is discarded, the vcl_fini subroutine is called, allowing you to clean up your VMODs.

Here’s the built-in VCL code for vcl_fini:

sub vcl_fini {
    return (ok);
}

15. Built-in VCL code

Now that you have familiarized yourself with the built-in VCL for the various subroutines, here’s the full built-in VCL code:

vcl 4.1;

#######################################################################
# Client side

sub vcl_recv {
    if (req.method == "PRI") {
        /* This will never happen in properly formed traffic (see: RFC7540) */
        return (synth(405));
    }
    if (!req.http.host &&
      req.esi_level == 0 &&
      req.proto ~ "^(?i)HTTP/1.1") {
        /* In HTTP/1.1, Host is required. */
        return (synth(400));
    }
    if (req.method != "GET" &&
      req.method != "HEAD" &&
      req.method != "PUT" &&
      req.method != "POST" &&
      req.method != "TRACE" &&
      req.method != "OPTIONS" &&
      req.method != "DELETE" &&
      req.method != "PATCH") {
        /* Non-RFC2616 or CONNECT which is weird. */
        return (pipe);
    }

    if (req.method != "GET" && req.method != "HEAD") {
        /* We only deal with GET and HEAD by default */
        return (pass);
    }
    if (req.http.Authorization || req.http.Cookie) {
        /* Not cacheable by default */
        return (pass);
    }
    return (hash);
}

sub vcl_pipe {
    # By default Connection: close is set on all piped requests, to stop
    # connection reuse from sending future requests directly to the
    # (potentially) wrong backend. If you do want this to happen, you can undo
    # it here.
    # unset bereq.http.connection;
    return (pipe);
}

sub vcl_pass {
    return (fetch);
}

sub vcl_hash {
    hash_data(req.url);
    if (req.http.host) {
        hash_data(req.http.host);
    } else {
        hash_data(server.ip);
    }
    return (lookup);
}

sub vcl_purge {
    return (synth(200, "Purged"));
}

sub vcl_hit {
    if (obj.ttl >= 0s) {
        // A pure unadulterated hit, deliver it
        return (deliver);
    }
    if (obj.ttl + obj.grace > 0s) {
        // Object is in grace, deliver it
        // Automatically triggers a background fetch
        return (deliver);
    }
    // fetch & deliver once we get the result
    return (miss);
}

sub vcl_miss {
    return (fetch);
}

sub vcl_deliver {
    return (deliver);
}

/*
 * We can come here "invisibly" with the following errors: 500 & 503
 */
sub vcl_synth {
    set resp.http.Content-Type = "text/html; charset=utf-8";
    set resp.http.Retry-After = "5";
    set resp.body = {"<!DOCTYPE html>
<html>
  <head>
    <title>"} + resp.status + " " + resp.reason + {"</title>
  </head>
  <body>
    <h1>Error "} + resp.status + " " + resp.reason + {"</h1>
    <p>"} + resp.reason + {"</p>
    <h3>Guru Meditation:</h3>
    <p>XID: "} + req.xid + {"</p>
    <hr>
    <p>Varnish cache server</p>
  </body>
</html>
"};
    return (deliver);
}

#######################################################################
# Backend Fetch

sub vcl_backend_fetch {
    if (bereq.method == "GET") {
        unset bereq.body;
    }
    return (fetch);
}

sub vcl_backend_response {
    if (bereq.uncacheable) {
        return (deliver);
    } else if (beresp.ttl <= 0s ||
      beresp.http.Set-Cookie ||
      beresp.http.Surrogate-control ~ "(?i)no-store" ||
      (!beresp.http.Surrogate-Control &&
        beresp.http.Cache-Control ~ "(?i:no-cache|no-store|private)") ||
      beresp.http.Vary == "*") {
        # Mark as "Hit-For-Miss" for the next 2 minutes
        set beresp.ttl = 120s;
        set beresp.uncacheable = true;
    }
    return (deliver);
}

sub vcl_backend_error {
    set beresp.http.Content-Type = "text/html; charset=utf-8";
    set beresp.http.Retry-After = "5";
    set beresp.body = {"<!DOCTYPE html>
<html>
  <head>
    <title>"} + beresp.status + " " + beresp.reason + {"</title>
  </head>
  <body>
    <h1>Error "} + beresp.status + " " + beresp.reason + {"</h1>
    <p>"} + beresp.reason + {"</p>
    <h3>Guru Meditation:</h3>
    <p>XID: "} + bereq.xid + {"</p>
    <hr>
    <p>Varnish cache server</p>
  </body>
</html>
"};
    return (deliver);
}

#######################################################################
# Housekeeping

sub vcl_init {
    return (ok);
}

sub vcl_fini {
    return (ok);
}