Hammerora TPM – System Comparison overview
Amazon Web Services (AWS) is a fantastic offering for those seeking to host their applications.
It is easy and fast:
- to set up (lots of templates, web interface, Elastic Load Balancing, API)
- to change (changing the EC2 instance type is easy and fast)
- to secure (backups in S3, integrated firewall, Virtual Private Cloud)
- to monitor (CloudWatch)
- to manage (web interface, API, Auto Scaling, ready-to-use databases, …)
And you pay only for what you use.
But what about performance?
Concerning compute performance, I wasn't worried. After all, AWS EC2 is based on Xen virtualization, and compute performance is mainly defined by the share of host CPU and the amount of memory allocated to the instance. So it is simply determined by the instance type you choose (micro, large, extra large, …).
But in June 2012 (before the launch of the new High I/O EC2 instance type), I was more concerned about I/O performance. EBS (Elastic Block Store) is amazing; it is so easy to create, to change and to back up. But can we use it for any workload, even a heavy database workload?
So I decided to run some tests with SQL Server, SQLIO (a disk subsystem benchmark tool) and Hammerora (an open-source database load testing tool).
I also used the PerformanceTest tool from PassMark Software to get some standard benchmark results.
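To give an idea of what such a disk test looks like, here is an illustrative SQLIO invocation; the parameter values below are examples, not the exact ones used in these tests:

```
rem 60-second random write test, 8 KB blocks, 8 outstanding I/Os,
rem with latency statistics; param.txt lists the target test file(s)
rem on the EBS volume
sqlio -kW -s60 -frandom -o8 -b8 -LS -Fparam.txt
```

Varying the block size (-b), the access pattern (-f) and the number of outstanding I/Os (-o) lets you map out the throughput and latency profile of the volume.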
As we know the number of requests we can expect from a single WordPress front server, it's time to try to scale out.
Load balancer configuration
I will use HAProxy and, as the post HAProxy – Experimental evaluation of the performance shows, I can set up a small VM with 1 vCPU and 512 MB.
As WordPress is a stateless product, I don't need to manage session persistence, so a very basic configuration can be used (see Software Installation).
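Without session persistence, such a configuration boils down to plain round-robin balancing. A minimal sketch could look like the following (the backend names and addresses are placeholders, not the actual setup):

```
# haproxy.cfg -- minimal round-robin setup, no session persistence
frontend wordpress_front
    mode http
    bind *:80
    default_backend wordpress_servers

backend wordpress_servers
    mode http
    balance roundrobin
    server web1 192.168.1.11:80 check
    server web2 192.168.1.12:80 check
```

Adding a front is then just a matter of appending one more server line to the backend.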
In a following post, I will show how to share sessions on an NFS server. To create the need for sessions, I will install an e-commerce plugin: WooCommerce.
I also deactivated the cache but kept APC.
There is a classical method to increase the performance of a website that consists in caching the HTML pages that are rendered to the client.
That saves the server from executing the code that queries the database and renders the page. The pre-formatted HTML page is generally stored in the file system.
I would like to try it with this workload (which is perfect for caching because there is no specific information linked to the visitor).
I could use many methods:
- caching with Apache
- caching with Varnish
- caching with a specialized PHP code
I decided to use the third method as there are a lot of existing WordPress plugins that do the job.
I installed the WP Super Cache plugin and ran a new series of tests.
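If WP-CLI is available on the front server (an assumption; the plugin can of course also be installed from the admin dashboard), the installation is a one-liner:

```
# install and activate the WP Super Cache page-caching plugin
wp plugin install wp-super-cache --activate
```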
Raw throughput of a WordPress website and influence of CPU and memory on the front
In order to evaluate the expected performance of a WordPress website, I first decided to focus on a single front and to play with the CPU and memory resources of the front.
To be sure that no other bottleneck appeared (such as the database server resources – CPU, memory, I/O – or the network), I continually checked the performance counters on the database server and on the front server during the workload, with commands such as iostat, vmstat, top, and iftop for the network. After a while, I found the perfect command, dstat, which shows the CPU, I/O and memory counters from a single command line.
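For reference, a dstat call along these lines combines the CPU, disk, network and memory counters in one output (the exact options used during the tests are not recorded, so this is just an illustrative invocation):

```
# refresh CPU, disk, network and memory counters every 5 seconds
dstat -c -d -n -m 5
```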
To quickly evaluate the speed (throughput) of the website, I decided to use the Apache tool ab, which gives a very useful indicator in req/s.
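A typical run could look like this (the URL and the request counts are placeholders for the actual front server and load levels):

```
# 1000 requests, 10 concurrent, against the WordPress home page;
# the summary includes a "Requests per second" line
ab -n 1000 -c 10 http://front-server/
```

Increasing -c until the req/s figure stops growing gives a quick estimate of the saturation point of the front.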
This post is the first one of a series about scaling out WordPress.
The purpose of this series of posts can be summarized as follows:
- Experiment with the different building blocks (load balancer HAProxy, MySQL replicas, session sharing)
- What kind of performance increase can we expect? Is the performance increase linear with the number of web fronts?
- What will be the bottlenecks?
- What kind of tools can we easily use to stress the application?
- What kind of tools can we easily use to measure the performance metrics?
To come: