Recovered from the older tannerjc.net wiki snapshot dated January 23, 2016.

Quick tests

[root@dhcp65 ~]# uname -a
Linux dhcp65.eng.rpath.com 2.6.32-131.17.1.el6-0.27.smp.gcc4.1.x86_64 #1 SMP Wed Nov 2 16:19:08 EDT 2011 x86_64 GNU/Linux
[root@dhcp65 ~]# cat /proc/scsi/scsi  | fgrep Vendor | fgrep disk
  Vendor: VMware   Model: Virtual disk     Rev: 1.0
[root@dhcp65 ~]# hdparm -t /dev/sda
/dev/sda:
 Timing buffered disk reads:  248 MB in  3.00 seconds =  82.60 MB/sec
[root@dhcp192 ~]# uname -a
Linux dhcp192.eng.rpath.com 2.6.32-71.7.1.el6-0.11.smp.gcc4.1.x86_64 #1 SMP Fri Jan 7 14:43:49 EST 2011 x86_64 GNU/Linux
[root@dhcp192 ~]# cat /proc/scsi/scsi  | fgrep Vendor | fgrep disk
  Vendor: VMware   Model: Virtual disk     Rev: 1.0
[root@dhcp192 ~]# hdparm -t /dev/sda
/dev/sda:
 Timing buffered disk reads:  304 MB in  3.02 seconds = 100.74 MB/sec
[root@jtshell ~]# cat /proc/scsi/scsi | fgrep Vendor | fgrep disk
  Vendor: VMware   Model: Virtual disk     Rev: 1.0
[root@jtshell ~]# hdparm -t /dev/sda
/dev/sda:
 Timing buffered disk reads:  656 MB in  3.01 seconds = 218.21 MB/sec
[root@vdb1 ~]# cat /proc/scsi/scsi  | fgrep Vendor
  Vendor: VMware   Model: Virtual disk     Rev: 1.0
[root@vdb1 ~]# hdparm -t /dev/sda
/dev/sda:
 Timing buffered disk reads:  144 MB in  3.08 seconds =  46.74 MB/sec
[root@vcd1a ~]# cat /proc/scsi/scsi  | fgrep Vendor
  Vendor: VMware   Model: Virtual disk     Rev: 1.0
[root@vcd1a ~]# hdparm -t /dev/sda
/dev/sda:
 Timing buffered disk reads:  180 MB in  3.06 seconds =  58.90 MB/sec
[root@aaronjae ~]# cat /proc/scsi/scsi  | fgrep Vendor
  Vendor: ATA      Model: WDC WD2002FAEX-0 Rev: 05.0    (sda)
  Vendor: ATA      Model: ST32000542AS     Rev: CC34    (sdb)
  Vendor: ATA      Model: WDC WD1600JB-00R Rev: 20.0    (sdc)
  Vendor: ATA      Model: WDC WD2500PB-55F Rev: 15.0    (sdd)
  Vendor: FANTOM   Model: WD10EACS-00D6B0  Rev: 2.10

[root@aaronjae ~]# hdparm -t /dev/sda
/dev/sda:
 Timing buffered disk reads: 376 MB in  3.02 seconds = 124.67 MB/sec

[root@aaronjae ~]# hdparm -t /dev/sdb
/dev/sdb:
 Timing buffered disk reads: 326 MB in  3.02 seconds = 107.97 MB/sec

[root@aaronjae ~]# hdparm -t /dev/sdc
/dev/sdc:
 Timing buffered disk reads: 174 MB in  3.01 seconds =  57.85 MB/sec

[root@aaronjae ~]# hdparm -t /dev/sdd
/dev/sdd:
 Timing buffered disk reads: 168 MB in  3.02 seconds =  55.71 MB/sec
[root@trainwreck ~]# fdisk -l | fgrep "Disk /dev/"
Disk /dev/md1 doesn't contain a valid partition table
Disk /dev/md2 doesn't contain a valid partition table
Disk /dev/md127 doesn't contain a valid partition table
Disk /dev/sda: 251.0 GB, 251000193024 bytes
Disk /dev/sdb: 251.0 GB, 251000193024 bytes
Disk /dev/sdc: 2000.4 GB, 2000398934016 bytes
Disk /dev/sdd: 2000.4 GB, 2000398934016 bytes
Disk /dev/sde: 2000.4 GB, 2000398934016 bytes
Disk /dev/md0: 524 MB, 524275712 bytes
Disk /dev/md1: 2146 MB, 2146426880 bytes
Disk /dev/md2: 248.3 GB, 248325730304 bytes
Disk /dev/md127: 4000.8 GB, 4000789823488 bytes   (raid 5)

[root@trainwreck ~]# cat /proc/scsi/scsi  | fgrep Vendor | fgrep -v DVD
  Vendor: ATA      Model: WDC WD2500YS-01S Rev: 20.0
  Vendor: ATA      Model: WDC WD2500YS-01S Rev: 20.0
  Vendor: ATA      Model: SAMSUNG HD203WI  Rev: 1AN1
  Vendor: ATA      Model: SAMSUNG HD203WI  Rev: 1AN1
  Vendor: ATA      Model: SAMSUNG HD203WI  Rev: 1AN1

[root@trainwreck ~]# hdparm -t /dev/md127
/dev/md127:
 Timing buffered disk reads:  466 MB in  3.01 seconds = 154.84 MB/sec

# KVM guest using a virtio disk backed by /dev/md127
[root@digweed ~]# hdparm -t /dev/sdb
/dev/sdb:
 Timing buffered disk reads:  318 MB in  3.01 seconds = 105.70 MB/sec
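
The hdparm figures above can be cross-checked with dd, which times a read of a fixed amount of data. Against a raw device the honest form is something like `dd if=/dev/sda of=/dev/null bs=1M count=1024 iflag=direct` (needs root; device name is just an example). The scratch-file sketch below only demonstrates the mechanics — a freshly written file is still in the page cache, so it reports memory speed, not disk speed.

```shell
# Write a 64MB scratch file, then time reading it back with dd.
# NOTE: this is a sketch of the technique; the re-read is served from the
# page cache, so expect a much higher number than any of the disks above.
scratch=$(mktemp)
dd if=/dev/zero of="$scratch" bs=1M count=64 conv=fsync 2>/dev/null
result=$(dd if="$scratch" of=/dev/null bs=1M 2>&1 | tail -n 1)
echo "$result"
rm -f "$scratch"
```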

Industry Standards

http://www.enterprisestorageforum.com/hardware/features/article.php/3671466/Measuring-Storage-Performance.htm

<blockquote>The Storage Performance Council (SPC) has had a benchmark out for some time called the SPC-1. It has been used by dozens of vendors to highlight their products, as well as by users to compare performance of system against system.</blockquote>

http://www.storageperformance.org/results/benchmark_results_spc2

http://www.csamuel.org/articles/emerging-filesystems-200709/

http://en.wikipedia.org/wiki/Fibre_Channel#History

{| class="wikitable" style="margin: 1em auto 1em auto"
|+ Fibre Channel Variants
! NAME !! Line-Rate (GBaud) !! Throughput (MBps)* !! Availability
|-
| 1GFC || 1.0625 || 200 || 1997
|-
| 2GFC || 2.125 || 400 || 2001
|-
| 4GFC || 4.25 || 800 || 2005
|-
| 8GFC || 8.5 || 1600 || 2008
|-
| 16GFC || 14.025 || 3200 || 2011
|-
| 10GFC Serial || 10.52 || 2550 || 2004
|-
| 20GFC || 21.04 || 5100 || 2008
|-
| 10GFC Parallel || 12.75 || ||
|}
<nowiki>*</nowiki> - Throughput for duplex connections
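
As a rough sanity check on the table: the 8b/10b-encoded variants (1GFC through 8GFC) carry 8 payload bits per 10 line bits, so usable duplex throughput is about line-rate (GBaud) × 1000 × 8/10 bits, divided by 8 bits per byte, times 2 directions. (16GFC switched to 64b/66b encoding, so this formula does not apply to it.)

```shell
# 1GFC: 1.0625 GBaud with 8b/10b encoding, both directions counted.
mbps=$(awk 'BEGIN { printf "%.1f", 1.0625 * 1000 * 0.8 / 8 * 2 }')
echo "1GFC duplex: ${mbps} MB/s (the table rounds this to 200)"
```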

http://post-office.corp.redhat.com/archives/tech-list/2008-June/msg00398.html

We've had some experience of tuning ext3/lvm/SAN here in Sweden for some
large customers.

There are a _lot_ of factors that impact I/O performance. I would
suggest starting at the lowest possible level, i.e. make sure that the
underlying SAN can deliver the expected throughput before you start
tuning the file system, moving the ext3 journal around etc.

Better write performance than read performance is to be expected. A write
operation normally gets committed to a huge (hopefully) write-cache in
the SAN, whereas read operations have to come from the disks themselves.

We've measured almost 800MB/sec write-throughput using RHEL 4.6 and a
Hitachi SAN (512GB write-cache), basically hitting the maximum bandwidth
that two 4Gbit HBAs can deliver in a multibus setup. There was no file
system involved in this, though...
Read-performance in this setup was around 500MB/sec, IIRC.

In order to increase read-performance you need to make sure you are
spreading the I/O across enough disks. Basically, make sure that your
LUNs are set up correctly (perhaps MetaLUNs), or in some cases you could
use md-devices (depending on the SAN used).

Also, the type of I/O (random or not) and the file system used will of
course impact performance hugely (offset of the partition table, etc.). But
first make sure that the HBAs and the SAN itself can deliver the numbers
that you need.
  • IDE: up to 133 MB/s
  • SATA: up to 300 MB/s
  • SCSI: up to 320 MB/s
  • USB: up to 480 Mbps (60 MB/s)
  • iSCSI+GigE: up to 1 Gb/s (125 MB/s)
  • Fibre Channel: up to 4.25 Gb/s (531 MB/s)
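
The MB/s column in the list above is just the quoted bit rate divided by 8:

```shell
# Convert the quoted bus bit rates to bytes per second.
usb_mbs=$(( 480 / 8 ))                          # 480 Mbps  -> 60 MB/s
fc_mbs=$(awk 'BEGIN { printf "%d", 4250 / 8 }') # 4250 Mb/s -> 531 MB/s
echo "USB 2.0: ${usb_mbs} MB/s, 4G Fibre Channel: ${fc_mbs} MB/s"
```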

http://www.dba-oracle.com/disk_i_o_speed_comparison.htm

{| border="1"
! manufacturer !! model !! technology !! interface !! performance metrics and notes
|-
| Texas Memory Systems || RamSan-400 || RAM SSD || Fibre Channel, InfiniBand || 3,000MB/s random sustained external throughput, 400,000 random IOPS
|-
| Violin Memory || Violin 1010 || RAM SSD || PCIe || 1,400MB/s read, 1,000MB/s write with ×4 PCIe, 3 microseconds latency
|-
| Solid Access Technologies || USSD 200FC || RAM SSD || Fibre Channel, SAS, SCSI || 391MB/s random sustained read or write per port (full duplex is 719MB/s); with 8 × 4Gbps FC ports, aggregated throughput is approx 2,000MB/s, 320,000 IOPS
|-
| Curtis || HyperXCLR R1000 || RAM SSD || Fibre Channel || 197MB/s sustained R/W transfer rate, 35,000 IOPS
|}
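
The two headline numbers in each row are linked by the average transfer size (MB/s = IOPS × KB-per-request / 1024), so dividing one by the other gives the request size the vendor's figures imply. Taking the RamSan-400 row as an example:

```shell
# Implied average request size for the RamSan-400 row above.
req_kb=$(awk 'BEGIN { printf "%.2f", 3000 * 1024 / 400000 }')
echo "~${req_kb} KB per I/O at 3,000MB/s and 400,000 IOPS"
```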