Varnish Cache is a web application accelerator, also known as a caching HTTP reverse proxy. You install it in front of any HTTP server and configure it to cache the contents. Varnish Cache is really, really fast: it typically speeds up delivery by a factor of 300 - 1000x, depending on your architecture.
You can read a general overview of The Big Varnish Picture in the official Varnish documentation.
It’s insanely flexible!
One of the key features of Varnish Cache, in addition to its performance, is the flexibility of its configuration language, VCL. VCL enables you to write policies on how incoming requests should be handled. In such a policy you can decide what content you want to serve, from where you want to fetch said content and whether and how the request or response should be altered. You can extend Varnish’s VCL with modules (VMODs). You can read more about this in the official documentation tutorial at varnish-cache.org.
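As a small sketch of the kind of policy VCL lets you express (the paths and file extensions here are hypothetical examples, and the backend address is an assumption; adjust both to your setup), the following VCL 4.0 snippet bypasses the cache for an admin area and strips cookies from static assets so they become cacheable:

```vcl
vcl 4.0;

# A minimal backend so the file loads on its own; point this at your web server.
backend default {
    .host = "127.0.0.1";
    .port = "8080";
}

sub vcl_recv {
    # Hypothetical policy: never cache anything under /admin.
    if (req.url ~ "^/admin") {
        return (pass);
    }
    # Static assets rarely vary per user: drop cookies so Varnish can cache them.
    if (req.url ~ "\.(css|js|png|jpg|gif)$") {
        unset req.http.Cookie;
    }
}
```

Everything not matched by these rules simply falls through to Varnish's built-in default behavior.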
Varnish is designed with security, performance and flexibility in mind. For an in-depth look at this you can read The Design Principle of Varnish chapter in the Varnish Book.
Varnish Software has recently sponsored the O’Reilly book Getting started with Varnish Cache written by Thijs Feryn. You can download it for free from the company’s website.
If you are wondering why you are on our website reading about our product, you are in the right section. We'll help you answer the "why Varnish" question.
Most probably you need to handle a lot of traffic. Caching is one of the best ways to maximize the throughput and performance of your website!
The main idea behind making your website fly is to reduce the load on your web infrastructure (web server, database, application) and to make optimal use of your network capacity. Put simply, your frontend should not have to ask the backend for the same dynamic content every time a client requests it.
To save your resources, placing a caching reverse proxy such as Varnish Cache right in front of your web application can accelerate the responses to almost all of your HTTP requests and thus reduce server workload.
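To make this concrete, putting Varnish in front of a single web server takes only a backend declaration in VCL; the address below is an assumption for illustration (a typical setup has the application server moved to an internal port, with Varnish listening on port 80 in its place):

```vcl
vcl 4.0;

# The origin web server Varnish fetches from on a cache miss.
# Assumed here to be the application server, moved to port 8080.
backend default {
    .host = "127.0.0.1";
    .port = "8080";
}
```

With nothing else configured, Varnish applies its built-in caching rules to everything this backend serves.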
Congratulations! You are making a great choice because Varnish does exactly that. And more! Varnish works by managing client requests BEFORE they make it to your web application server. Varnish not only reduces your web server load, but by being fast it offers DDoS protection to your web servers, making them more resilient and secure.
There is a good article describing Varnish Cache on Wikipedia.
Memcache is a key-value store, in essence a rather simple database. It stores data only in memory, does not persist it, and does not mind throwing data out when it needs the space. The natural use for Memcache is to cache things internally in your application or between your application and your database. Memcache uses its own specific protocol to store and fetch content.
Varnish, on the other hand, stores rendered pages. It talks HTTP, so it will typically talk directly to an HTTP client and deliver pages from its cache whenever the requested page is stored there, which is commonly called a cache hit. When an object (any kind of content, e.g. an image or a page) is not stored in the cache, we have what is commonly known as a cache miss, in which case Varnish will fetch the content from the web server, deliver a copy to the user, and retain it in cache to serve future requests.
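A common way to observe hits and misses in practice is to expose a diagnostic response header from VCL. Sketched here in VCL 4.0 (the header name `X-Cache` is our choice, and the backend address is an assumption):

```vcl
vcl 4.0;

# Assumed origin server for illustration.
backend default {
    .host = "127.0.0.1";
    .port = "8080";
}

sub vcl_deliver {
    # obj.hits counts how many times this cached object has been delivered;
    # zero means the response was just fetched from the backend (a miss).
    if (obj.hits > 0) {
        set resp.http.X-Cache = "HIT";
    } else {
        set resp.http.X-Cache = "MISS";
    }
}
```

Requesting the same URL twice and comparing the `X-Cache` header shows the first response as a miss and the second as a hit.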
These are two pretty different pieces of software. The end goal of both is the same, though, and most sites would likely use both technologies in order to speed up delivery. They deploy Varnish to speed up delivery of cache hits, and on a cache miss the application server might find some of its data in Memcache, where it is available to the application faster than the database can deliver it.
The performance characteristics are pretty different. Varnish will start delivering a cache hit in a matter of microseconds, whereas a PHP page that gets rendered content from Memcache will likely spend somewhere around 15-30 milliseconds doing so. Varnish can do it faster because it has the content in local memory, whereas the PHP script needs to get on the network and fetch the content over a TCP connection, with the overhead cost of the interpreter on top. It is not simply that Varnish is better; Varnish has a much easier job to do, and it is faster because of it.
There are no good reasons not to use both.
The most fundamental difference between Squid and Varnish is that Squid is a forward proxy that can be configured as a reverse proxy, whereas Varnish is built from the ground up to be a reverse proxy.
So, in principle Varnish is better suited than Squid to do reverse proxy HTTP. However, Squid is a very mature product and has had time to accumulate many features that are still not available in Varnish. Both projects are used by huge websites, and both of them can do almost anything.
The main advantages of Squid over Varnish are:
On the other hand, Varnish has:
Memoization is a way of caching the results of a function to avoid recalculating them the next time the function is called with the same arguments. The technique works as follows: the function is executed and its result is added to an object holding the calculated results; when the function is called again, that object is checked first to see if it already contains the result.
Caching, on the other hand, is about storing reusable web traffic responses in order to make subsequent requests faster.
In this wiki you can find other resources; read on in the Understand your Website section.
Useful external resources:
If you want to help fix our bugs or want to know about bugs in the project, check out: