Amazon's EC2 service is really neat, but its disk subsystem has some peculiarities that are not initially obvious. Up until very recently, root directories ('/') at EC2 were limited to 10GB, a limit defined by the maximum size of an Amazon Machine Image (AMI), essentially a template of an EC2 instance. In order to use more disk space, Amazon provides ephemeral disks that one can format and mount anywhere on the file system. However, in order to get persistent storage, one has to use network-attached EBS volumes, a sort of limitless-in-capacity but bound-in-I/O wonder of Amazon architecture. There are clear performance implications in choosing how to configure an EC2 instance's disk subsystem, so I recently benchmarked various ephemeral and EBS RAID configurations.

Ephemeral disks:
- Free (included in the cost of the EC2 instance)
- Stable, predictable performance on par with a standard physical hard disk
- Abundant storage (up to 1.7TB on a c1.xlarge)
- Average random seek performance (6-7ms seek times per spindle)
- Ephemeral - if the instance shuts down, all data is lost

EBS volumes:
- "Highly available" - AWS claims to provide redundancy and a lower failure rate than physical disks
- Portable - an EBS volume can be connected to any instance in a single Availability Zone
- Extremely variable performance - seek times can range from 5ms to 10ms+

For this testing, c1.xlarge instances were used due to their high CPU performance, memory capacity, "I/O Performance: High" (according to Amazon), and 4 available 450GB ephemeral disks. I created 5 c1.xlarge instances with 5 configurations: 4x ephemeral RAID0 (local disk), single EBS, 2x EBS RAID0, 4x EBS RAID0, and 8x EBS RAID0. All instances were created in the us-east-1b Availability Zone, and all EBS volumes attached were newly created specifically for this test.

Testing was done using bonnie++ in fast mode (the -f flag, which skips the per-char tests). mdraid was used to create RAID0 arrays with a chunk size of 256k, for example:

mdadm --create --verbose /dev/md0 --level=0 -c256 --raid-devices=2 /dev/sdi1 /dev/sdi2

blockdev is used to set the read-ahead buffer to 64k (65536) sectors:

blockdev --setra 65536 /dev/md0

XFS is used as the filesystem:

mkfs.xfs -f /dev/md0

Finally, the RAID array is mounted with noatime at /mnt/md0:

mkdir -p /mnt/md0 && mount -o noatime /dev/md0 /mnt/md0

bonnie++ was run 6 times on each instance, and I logged the results of Sequential Writes, Sequential Reads, and Random Seeks.

The ephemeral array does about 165 random seeks per second, which is comparable to a desktop hard disk. EBS random seek performance, however, is not easily predictable. The volumes that make up the 4x EBS RAID0 instance are clearly higher performing than those of the other instances.

Another interesting result I noticed (but didn't include in these graphs) is the deviation of performance from one run to another. The standard deviation between the runs was much smaller for the ephemeral drives than for the EBS volumes. Is EBS performance more of a property of the EBS volumes or the instance?

Swapping EBS volumes to identify the bottleneck

I attached the two EBS volumes from the poorly performing 2x EBS RAID0 instance to the fast 4x EBS RAID0 instance and re-ran the tests.
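The run-to-run deviation discussed above can be quantified directly from logged seeks-per-second figures with a one-liner. This is a minimal sketch, not the author's actual analysis; the six input values are made-up examples chosen around the ~165 seeks/sec the ephemeral array measured:

```shell
# Hypothetical helper: compute mean and (population) standard deviation of
# seeks/sec across benchmark runs. The six input values are illustrative,
# not actual logged results.
printf '165\n162\n168\n164\n166\n163\n' | awk '
  { sum += $1; sumsq += $1 * $1; n++ }
  END {
    mean = sum / n
    sd   = sqrt(sumsq / n - mean * mean)
    printf "mean=%.1f sd=%.1f\n", mean, sd
  }'
# prints: mean=164.7 sd=2.0
```

Feeding each instance's six bonnie++ results through a computation like this makes the ephemeral-vs-EBS variability comparison concrete instead of eyeballed.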