AWS EBS IO performance evaluation

Hammerora TPM – System Comparison overview


AWS is a fantastic offering for those seeking to host their applications.

It is easy and fast:

  • to set up (lots of templates, web interface, Elastic Load Balancing, API)
  • to change (changing the EC2 instance type is quick and easy)
  • to secure (backups in S3, integrated firewall, Virtual Private Cloud)
  • to monitor (CloudWatch)
  • to manage (web interface, API, Auto Scaling, ready-to-use databases, …)
And you pay only for what you use.

But what about performance?

Concerning compute performance, I wasn’t worried. After all, AWS EC2 is based on Xen virtualization, so compute performance is mainly determined by the share of host CPU and the amount of memory you get, which is simply defined by the instance type you choose (micro, large, extra large, …).

But in June 2012 (before the launch of the new High I/O EC2 instance type), I was more concerned about I/O performance. EBS (Elastic Block Store) is amazing; it is so easy to create, resize and back up. But can it handle any workload, even a heavy database workload?

So I decided to run some tests with SQL Server, SQLIO (a disk subsystem benchmark tool) and Hammerora (an open-source database load-testing tool).

I also used the PerformanceTest tool from PassMark Software to get some standard benchmark results.

Goals

The goals of the tests are to:

  • get standard compute benchmark results for standard Small, Large and Extra Large EC2 Windows instances
  • get the maximum IOPS (for different I/O request sizes and types – sequential, random) of standard Small and Large EC2 Windows instances
  • evaluate the variability of the I/O performance results
  • get the maximum throughput (Transactions Per Minute – TPM and New Orders Per Minute – NOPM) of the OLTP load-testing tool Hammerora for standard Large and Extra Large EC2 Windows instances
  • compare these values to standard entry-level servers and RAID systems
  • try to find a correlation between the SQLIO results and the Hammerora results

Methodology

As I had limited AWS credits, I decided to play only with EC2 Small and Large Windows instances. From what I had read (http://www.brentozar.com/archive/2011/09/sql-server-ec/ and http://perfcap.blogspot.fr/2011/03/understanding-and-using-amazon-ebs.html), I knew I should get the best performance by striping multiple 1TB EBS volumes in RAID 0.
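
For reference, here is a minimal sketch of that provisioning with today’s AWS CLI (which did not exist in 2012 – at the time I used the web console); the volume and instance IDs are placeholders, and m1.large was the 2012 “Large” instance type:

    REM Launch a Large Windows instance in eu-west-1a from the base AMI used below
    aws ec2 run-instances --image-id ami-41d3d635 --instance-type m1.large --placement AvailabilityZone=eu-west-1a

    REM Create two 1TB standard (magnetic) EBS volumes and attach them for the RAID 0 stripe
    aws ec2 create-volume --size 1024 --volume-type standard --availability-zone eu-west-1a
    aws ec2 create-volume --size 1024 --volume-type standard --availability-zone eu-west-1a
    aws ec2 attach-volume --volume-id vol-11111111 --instance-id i-12345678 --device xvdf
    aws ec2 attach-volume --volume-id vol-22222222 --instance-id i-12345678 --device xvdg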

For the comparison with entry-level servers and RAID systems, I used hardware that I had been using for two to four years (the same hardware as in my WordPress test), plus a Dedibox server:

  • ade-esxi-01: Intel Core 2 Quad Q6600 @ 2.40GHz; no Hyper-Threading; 8 GB DDR2 800; RAID card 3ware 9650SE + BBU; 4x1TB RAID 10; SATA WD Caviar Black 7200rpm (WD1001FALS)
  • ade-esxi-02: Intel Core 2 Quad Q6600 @ 2.40GHz; no Hyper-Threading; 8 GB DDR2 800; RAID card HP P400 + BBU; 4x1TB RAID 10; SATA WD Caviar Black 7200rpm (WD1001FALS)
  • ade-esxi-03: Intel Core i7-860 @ 2.80GHz; Hyper-Threading active; 8 GB DDR3 1066; 4x1TB; SATA WD Caviar Black 7200rpm (WD1001FALS)
  • ade-esxi-04: Intel Xeon X3440 @ 2.53GHz; Hyper-Threading active; 24 GB DDR3; RAID card 3ware 9650SE + BBU; 4x1TB RAID 10 in performance mode; SATA WD RE3 7200rpm (WD1002FBYS)
  • dedibox-pro-01: Intel Xeon L3426 @ 1.86GHz; Hyper-Threading active; 16 GB DDR3; RAID card Dell H200; 2x2TB RAID 1; SATA Toshiba? 7200rpm

Then I decided to proceed as follows:

  • Run a series of SQLIO tests with 1x1TB disk, RAID 0 2x1TB, RAID 0 3x1TB and RAID 0 4x1TB disks:
    • on an EC2 Small Windows 2008 R2 Standard instance
  • Run a series of SQLIO tests with RAID 0 2x1TB disks:
    • on an EC2 Large Windows 2008 R2 Standard instance
    • on a 7GB, 4 vCPU (1 socket, 4 cores), software RAID 0 4x40GB VM on the i7-860 server (ade-esxi-03)
    • on a 6GB, 4 vCPU VM on the L3426 (dedibox-pro-01)
    • on a 6GB, 4 vCPU VM on the Q6600 (ade-esxi-01)
  • Run a series of Hammerora tests with RAID 0 2x1TB disks (the best performance/cost ratio):
    • on an EC2 Large SQL Server 2008 R2 Standard instance
    • on an EC2 High-CPU Medium SQL Server 2008 R2 Standard instance
    • on an EC2 Extra Large SQL Server 2008 R2 Standard instance
    • on a 7GB, 4 vCPU (1 socket, 4 cores), software RAID 0 4x40GB VM on the i7-860 server (ade-esxi-03)
    • on a 6GB, 4 vCPU VM on the L3426 (dedibox-pro-01)
    • on a 6GB, 4 vCPU VM on the Q6600 (ade-esxi-01)
    • on an 8GB, 4 vCPU VM on the X3440 (ade-esxi-04)
  • Run a series of PassMark tests (CPU, memory, disk):
    • on an EC2 Large SQL Server 2008 R2 Standard instance
    • on an EC2 High-CPU Medium SQL Server 2008 R2 Standard instance
    • on an EC2 Extra Large SQL Server 2008 R2 Standard instance
    • on a 7GB, 1 socket x 8 cores VM on the i7-860 server (ade-esxi-03)
    • on a 7GB, 2 sockets x 4 cores VM on the i7-860 server (ade-esxi-03)
    • on a 6GB, 4 vCPU VM on the L3426 (dedibox-pro-01)
    • on a 6GB, 4 vCPU VM on the Q6600 (ade-esxi-01)

Details:

  • I created all EC2 instances in Availability Zone eu-west-1a.
  • For the SQLIO and PassMark tests, I used AMI ami-41d3d635 (Windows_Server-2008-R2_SP1-English-64Bit-Base-2012.06.12)
  • For the Hammerora tests, I used AMI ami-313a0045 (Windows_Server-2008-R2_SP1-English-64Bit-SQL_2008_Standard-2012.05.10)
  • I formatted each disk as NTFS with a 64K allocation unit size.
  • I used the standard Windows 2008 RAID 0 feature (striped volume – software RAID); see the diskpart sketch after this list.
  • During each stress run, I recorded a Data Collector Set with the following counters (a logman sketch for creating such a set also follows the list):
    • SQLIO Test Counters
    \PhysicalDisk(_Total)\Avg. Disk Bytes/Transfer
    \PhysicalDisk(_Total)\Avg. Disk Queue Length
    \PhysicalDisk(_Total)\Avg. Disk sec/Transfer
    \PhysicalDisk(_Total)\Disk Bytes/sec
    \PhysicalDisk(_Total)\Disk Transfers/sec
    • Hammerora Test Counters
    \Memory\Available MBytes
    \Memory\Page Faults/sec
    \PhysicalDisk(_Total)\Avg. Disk Bytes/Read
    \PhysicalDisk(_Total)\Avg. Disk Bytes/Write
    \PhysicalDisk(_Total)\Avg. Disk Read Queue Length
    \PhysicalDisk(_Total)\Avg. Disk sec/Read
    \PhysicalDisk(_Total)\Avg. Disk sec/Write
    \PhysicalDisk(_Total)\Avg. Disk Write Queue Length
    \PhysicalDisk(_Total)\Disk Bytes/sec
    \PhysicalDisk(_Total)\Disk Read Bytes/sec
    \PhysicalDisk(_Total)\Disk Reads/sec
    \PhysicalDisk(_Total)\Disk Write Bytes/sec
    \PhysicalDisk(_Total)\Disk Writes/sec
    \Processor(_Total)\% Processor Time
    \SQLServer:Access Methods\Full Scans/sec
    \SQLServer:Access Methods\Index Searches/sec
    \SQLServer:Access Methods\Range Scans/sec
    \SQLServer:Buffer Manager\Checkpoint pages/sec
    \SQLServer:Buffer Manager\Lazy writes/sec
    \SQLServer:Buffer Manager\Page reads/sec
    \SQLServer:Buffer Manager\Page writes/sec
    \SQLServer:Buffer Manager\Readahead pages/sec
    \SQLServer:Databases(_Total)\Log Bytes Flushed/sec
    \SQLServer:Databases(_Total)\Log Flush Wait Time
    \SQLServer:Databases(_Total)\Log Flushes/sec
    \SQLServer:Databases(_Total)\Active Transactions
    \SQLServer:Databases(_Total)\Transactions/sec
    \SQLServer:General Statistics\Logins/sec
    \SQLServer:General Statistics\Transactions
    \SQLServer:General Statistics\User Connections
    \Process(sqlservr)\Thread Count
    \System\Context Switches/sec
    \System\Processor Queue Length
    \System\System Calls/sec
    \Process(sqlservr)\% Processor Time
    \Network Interface(RedHat PV NIC Driver)\Bytes Received/sec
    \Network Interface(RedHat PV NIC Driver)\Bytes Sent/sec
    \Network Interface(RedHat PV NIC Driver)\Bytes Total/sec
  • SQLIO tests
    • I used an 8GB test file created with the following command (-k sets read/write, -t the thread count, -s the duration in seconds, -o the outstanding requests per thread, -f the access pattern and -b the I/O size in KB):
    sqlio -kW -s10 -fsequential -t8 -o8 -b8 -LS -Fparam.txt
    timeout /T 10

    the param.txt file contains the target file path, the number of threads used to create the file, the CPU affinity mask and the file size in MB:

    e:\testfile.dat 2 0x0 8192
    • I stressed the storage system with the following commands in a .bat file:
    sqlio -kW -t8 -s120 -o8 -frandom -b8 -BH -LS E:\TestFile.dat > sqlio-8GB-8K-W-100-R-8x8.txt
    sqlio -kR -t8 -s120 -o8 -frandom -b8 -BH -LS E:\TestFile.dat > sqlio-8GB-8K-R-100-R-8x8.txt
    sqlio -kW -t8 -s120 -o8 -fsequential -b64 -BH -LS E:\TestFile.dat > sqlio-8GB-64K-W-100-S-8x8.txt
    sqlio -kR -t8 -s120 -o8 -fsequential -b64 -BH -LS E:\TestFile.dat > sqlio-8GB-64K-R-100-S-8x8.txt
  • Hammerora tests
    • I used an 8GB database created by populating 50 warehouses
    • I ran a test with 3 users to warm the cache, then a 10-minute autopilot test with 2, 4, 8, 16, 32, 64, 128 and 150 virtual users
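
As a reference, here is a minimal diskpart sketch of the striped-volume setup described in the details above (the disk numbers are placeholders; they depend on how the EBS volumes enumerate on the instance):

    REM diskpart script: stripe two data disks and format NTFS with a 64K allocation unit
    select disk 1
    convert dynamic
    select disk 2
    convert dynamic
    create volume stripe disk=1,2
    format fs=ntfs unit=64K quick label=data
    assign letter=E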

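And here is a sketch of how such a Data Collector Set can be created from the command line with logman (counter list shortened; the collector name, output path and sample interval are my own choices):

    REM Create a counter collector sampling every 5 seconds, then start/stop it around a run
    logman create counter SQLIO_Counters -si 5 -o C:\PerfLogs\sqlio_counters -c "\PhysicalDisk(_Total)\Disk Transfers/sec" "\PhysicalDisk(_Total)\Disk Bytes/sec" "\PhysicalDisk(_Total)\Avg. Disk sec/Transfer"
    logman start SQLIO_Counters
    REM ... run the stress test ...
    logman stop SQLIO_Counters
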
Results

Influence of the number of striped volumes (RAID 0 disks) on EBS performance

Graphs of results

Influence of number of RAID 0 disks on IOPS

Influence of number of RAID 0 disks on BW

Basically, the two graphs show the same behavior; one is expressed in IOPS, the other in MB/sec.
Series:

Series Name – Description
sqlio-8GB-8K-W-100-R-8×8 – 8GB file, 8K I/O requests, 100% random writes, 8 threads, 8 outstanding I/Os per thread
sqlio-8GB-8K-R-100-R-8×8 – 8GB file, 8K I/O requests, 100% random reads, 8 threads, 8 outstanding I/Os per thread
sqlio-8GB-64K-W-100-S-8×8 – 8GB file, 64K I/O requests, 100% sequential writes, 8 threads, 8 outstanding I/Os per thread
sqlio-8GB-64K-R-100-S-8×8 – 8GB file, 64K I/O requests, 100% sequential reads, 8 threads, 8 outstanding I/Os per thread
Each test appears twice in the graphs: the IOPS series reports I/Os per second and the BW series reports MB/sec.

Interpretation

At first sight, these results astonished me. I expected a classic growth model (linear or logarithmic, due to contention on the network) as the number of disks increases, but I got something quite different, which I tried to interpret.

Let’s try another representation:

Influence of number of RAID 0 disks – interpretation

  1. Growth is positive (between 50% and 100%) from 1 to 2 RAID 0 disks for all tests except the sequential write test, which stays flat. I imagine this behavior can be explained by the fact that we are not alone on the host, on the network and on the storage (this is the limit of this kind of test on AWS, but it is the reality for an ordinary customer). Still, three of the tests gain more than 50%.
  2. Sequential writes increase steadily from 2 to 4 disks, reaching up to 1,550 IOPS and 97 MB/sec. This result is not bad (the same test with 4 SATA 7200rpm disks in software RAID 0 under virtualized Windows gave 3,357 IOPS and 210 MB/sec), so AWS delivers about half the performance of an entry-level server. This test matters because it corresponds to the way SQL Server writes to the transaction log (write-ahead logging). If your workload consists only of inserts or bulk inserts, you should see mostly sequential 64K writes to the log file (except when the lazy writer updates the data files, or when a checkpoint occurs and also updates the data files). Refer to the links at the end of the post to learn more about I/O and SQL Server performance.
  3. Sequential read performance jumps (66%) between 1 and 2 disks, reaching up to 2,229 IOPS and 139 MB/sec (see the comment below). The decrease from 2 to 4 disks can be explained by the fact that we are not alone, and by the limit of the network bandwidth (normally 125 MB/sec; see the comment below). This sequential read performance is good and sits in the middle of my entry-level servers (which range from 1,000 to 6,000 IOPS). SQL Server uses sequential access (64K to 512K) for table or range scans.
  4. Random 8K reads increase sharply between 2 and 3 disks. Here, as additional tests suggested, I believe this comes from EBS using a RAM cache. Here is an overview of EBS:

    EBS is a distributed, replicated block data store that is optimized for consistency and low latency read and write access from EC2 instances. There are two main components of the EBS service: (i) a set of EBS clusters (each of which runs entirely inside of an Availability Zone) that store user data and serve requests to EC2 instances; and (ii) a set of control plane services that are used to coordinate user requests and propagate them to the EBS clusters running in each of the Availability Zones in the Region.

    I suspect that EBS takes advantage of a large RAM cache, much like a ZFS server accessed over iSCSI or NFS. So I guess a large part of the data was already in the RAM cache when I ran the third test (warmed by the first and second tests). This behavior explains the variability of read performance on EBS (don’t forget that you are not alone), but on the other hand it means you can get a significant read performance boost under heavy reads (it also depends on the size of your database). Even ignoring the EBS read cache, random read performance is still good and sits at the top of my entry-level servers (which range from 280 to 1,100 IOPS). SQL Server uses random access (8K to 64K) for index seeks and data page reads.

  5. Random writes increase steadily from 1 to 2 disks, reaching up to 3,802 IOPS and 30 MB/sec, then decrease from 2 to 3 disks and rise slightly from 3 to 4. Again, we are not alone, which may explain the dip from 2 to 3 disks. The 2-disk result is very good (4x) compared to my entry-level servers (250 to 1,000 IOPS). SQL Server writes randomly (up to 256K) when a checkpoint occurs or when the lazy writer runs.

Concerning throughput (MB/sec): I assumed the network link between the host and the storage system was 1 Gbit/s, which caps throughput at a theoretical 1000/8 = 125 MB/s. Yet I measured 139 MB/sec on the 2-disk sequential read test, so either the results were wrong or AWS uses faster links. My guess is that AWS may, in some cases, use channel bonding, combining two NICs on the host and the storage system for a theoretical 250 MB/sec. AWS could also use a 10 Gbit/s network for the High I/O instances, but I don’t believe that was the case for this particular test.

For the remaining tests, I settled on a RAID 0 configuration of 2 disks on AWS, as it seemed the best compromise between performance and cost.

AWS EBS vs Entry Level Servers

Graphs of results

IO System comparison (IOPS)

IO System comparison (BW)

Series:

Series Name – Description
AWS L 7.5G 2VCPU 2x1TB RAID0 – AWS Large Windows instance, 7.5GB RAM, 2 vCPU (2 virtual cores with 2 EC2 Compute Units each); 2x1TB EBS software RAID 0
ESXI 4.1 Q6600 4VCPU 6GB 4x1TB RAID10 – 6GB RAM, 4 vCPU VM on ade-esxi-01 (Q6600, no Hyper-Threading, 8 GB DDR2 800, 3ware 9650SE + BBU, 4x1TB SATA RAID 10, WD Caviar Black 7200rpm)
ESXI 4.1 L3426 6GB 4VCPU ISCSI 40GB – 6GB RAM, 4 vCPU VM on dedibox-pro-01 (L3426, Hyper-Threading, 16 GB DDR3, Dell H200, 2x2TB SATA RAID 1); storage served by a second Linux VM on the same host over iSCSI (low-speed access test)
ESXI 4.1 L3426 6GB 4VCPU 2x2TB RAID 1 – 6GB RAM, 4 vCPU VM on dedibox-pro-01, local 2x2TB SATA RAID 1
ESXI 5 i7-860 4VCPU 7GB 4x40GB RAID 0 SOFT – 7GB RAM, 4 vCPU VM on ade-esxi-03 (i7-860, Hyper-Threading, 8 GB DDR3 1066, WD Caviar Black 7200rpm); storage 4x40GB software RAID 0

Interpretation

  1. Random read performance is very good (6x the HW RAID 3ware 4x1TB SATA RAID 10 and 3x the software 4x40GB RAID 0)
  2. Random write performance is good (equal to the HW RAID 3ware 4x1TB SATA RAID 10)
  3. Sequential write performance is low (equal to the entry-level HW RAID 1 2x2TB SATA – Dedibox Pro)
  4. Sequential read performance is fair (60% of the HW RAID 3ware 4x1TB SATA RAID 10)

Database OLTP Benchmark – Hammerora

Graphs of results

Hammerora TPM – System Comparison

Hammerora NOPM – System Comparison

Axis :

  • Vertical axis, TPM: Transactions Per Minute
  • Vertical axis, NOPM: New Orders Per Minute
  • Horizontal axis: number of virtual users

Series:

Series Name – Description
AWS L 7.5G 2VCPU 2TBR0 -CLI1P – AWS Large Windows instance, 7.5GB RAM, 2 vCPU (2 virtual cores with 2 EC2 Compute Units each), 2x1TB EBS software RAID 0; load-generation client with 1 vCPU
AWS L 7.5G 2VCPU 2TBR0 -CLI2P – same AWS Large instance; client with 2 vCPU
AWS XL 16G 4VCPU 2TBR0 -CLI2P – AWS Extra Large Windows instance, 16GB RAM, 4 vCPU (4 virtual cores with 2 EC2 Compute Units each), 2x1TB EBS software RAID 0; client with 2 vCPU
ESXI ADE 4VCPU 8GB 4TB RAID10 – 8GB RAM, 4 vCPU VM on ade-esxi-04 (X3440, Hyper-Threading, 24 GB DDR3, 3ware 9650SE + BBU RAID 10 in performance mode, 4x1TB 7200rpm WD RE3 SATA2)
ESXI Q6600 4VCPU 6GB 4TB RAID10 – 6GB RAM, 4 vCPU VM on ade-esxi-01 (Q6600, no Hyper-Threading, 8 GB DDR2 800, 3ware 9650SE + BBU, 4x1TB RAID 10, WD Caviar Black 7200rpm)
ESXI L3426 6GB 4VCPU ISCSI 40GB – 6GB RAM, 4 vCPU VM on dedibox-pro-01 (L3426, Hyper-Threading, 16 GB DDR3, Dell H200, 2x2TB SATA RAID 1); storage served by a second Linux VM on the same host over iSCSI (low-speed access test)
ESXI L3426 6GB 4VCPU RAID 1 2x2TB – 6GB RAM, 4 vCPU VM on dedibox-pro-01, local 2x2TB SATA RAID 1
ESXI 5 i7-860 4VCPU 7GB RAID 0 SOFT 4x40GB – 7GB RAM, 4 vCPU VM on ade-esxi-03 (i7-860, Hyper-Threading, 8 GB DDR3 1066, WD Caviar Black 7200rpm); storage 4x40GB software RAID 0

Interpretation

  1. The best result (3,211,163 TPM) comes from ade-esxi-04. This is not surprising, as it is the best CPU/storage combination (X3440 with a 3ware 9650SE + BBU RAID 10 in performance mode on 4x1TB 7200rpm WD RE3 SATA2 disks) – better disks than ade-esxi-01.
  2. The second-best result (2,897,114 TPM) comes from ade-esxi-01. Its storage system is almost the same as ade-esxi-04’s (only the disks differ: consumer grade). Here we see that the old Q6600 CPU still holds up well. In this test, CPU usage peaked at 81.6%.
  3. The AWS Extra Large instance scales more linearly and performs well. Its best result (2,792,703 TPM) comes at 150 virtual users; it places third at 128 users.
  4. The AWS Large instance is fair (the same performance as the XL instance at 32 users – 1,462,758 TPM – but it stagnates afterwards). After this test, I compared SQLIO results between a Large and an Extra Large instance; they were roughly the same (10% difference), so I concluded it is the extra CPU of the XL instance that gives the better result. On the Large instance the CPU peaked at 91.2%; on the XL instance, at 77.6%.
  5. The Dedibox server gets the worst performance (not surprising: RAID 1 on SATA disks and an entry-level CPU), but the price is very low too (€50/month for the host).

Performance counters

After each test, I checked the performance counter graphs, like the one below from the test on ade-esxi-01 (the second-best result):

Data Collector for Hammerora test Q6600 – 3ware

  • I checked that the CPU was not pegged at 100% (I aimed for 85-90%, just below saturation)
  • I checked that there was enough RAM
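
For live spot checks during a run, the built-in typeperf tool is handy (the counter choice here is just an example):

    REM Sample CPU and available memory every second until Ctrl+C
    typeperf "\Processor(_Total)\% Processor Time" "\Memory\Available MBytes" -si 1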

PassMark results

Graphs of results

CPU Mark – System Comparison

Disk Mark – System Comparison

Mem Mark – System Comparison

Interpretation

If I compare the AWS Large and Extra Large instances to a 6GB, 4 vCPU VM on the Dedibox server, I get:

            AWS Large Instance        AWS Extra Large Instance
            vs 6GB, 4 vCPU Dedibox    vs 6GB, 4 vCPU Dedibox
CPU Mark    +54%                      +134%
Disk Mark   +141%                     +209%
Mem Mark    +104%                     +217%

Interesting links

IO

Getting the hang of IOPS
Detecting Disk Bottlenecks
Analyzing Characterizing and IO Size Considerations

SQL Server Performance

SQL Server 2000 I/O Basics
SQL Server I/O Basics, Chapter 2
Bufferpool Performance Counters
Physical Database Storage Design (Typical work load)
Buffer Management
SQL Server Storage Engine under the hood: How SQL Server performs I/O
Understanding Logging and Recovery in SQL Server
The transaction log
Decoding a Simple Update Statement Within the Transaction Log
Does CHECKPOINT write uncommitted data to disk?
How do checkpoints work and what gets logged
Scans vs. Seeks
Sequential Read Ahead

EBS

Overview of EBS System
Some AWS EBS Benchmarks and Best Practices by Greplin:tech

