2.1 RADOS performance testing with Ceph's built-in rados bench tool
The tool's syntax is: rados bench -p <pool_name> <seconds> <write|seq|rand> -b <block_size> -t <concurrent_ops> --no-cleanup
pool_name: the pool to run the test against
seconds: how long the test runs, in seconds
<write|seq|rand>: operation mode; write: write, seq: sequential read, rand: random read
-b: block size, i.e. the object/op size; defaults to 4 MB and is only meaningful for write tests
-t: number of concurrent reads/writes; defaults to 16
--no-cleanup: keep the benchmark data after the test finishes. Before running a read test, first run a write test once with this flag to generate the test data; after all tests are done, run rados -p <pool_name> cleanup to remove all of it (a script covering the full cycle follows the examples below).
Test examples:
Write:
rados bench -p rbd 10 write --no-cleanup
Sequential read:
rados bench -p rbd 10 seq
Random read:
rados bench -p rbd 10 rand
Delete the data created by rados bench:
rados -p rbd cleanup
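Putting the pieces together, a minimal sketch of a complete benchmark cycle, assuming a pool named rbd (the -b and -t values are just the defaults made explicit; -b only applies to the write phase):

#!/bin/bash
# Write test that keeps its objects, then sequential and random reads
# against that data, then cleanup of everything the benchmark created.
POOL=rbd
SECS=10
rados bench -p "$POOL" "$SECS" write -b 4194304 -t 16 --no-cleanup
rados bench -p "$POOL" "$SECS" seq -t 16
rados bench -p "$POOL" "$SECS" rand -t 16
rados -p "$POOL" cleanup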
Watch disk I/O during a run (the -P flag shows per-process rather than per-thread statistics):
iotop -P
Fields in the rados bench output:
cur is short for current
cur MB/s: current throughput
avg MB/s: average throughput
Bandwidth (MB/sec): overall throughput
Average IOPS: average IOPS
Stddev IOPS: standard deviation of the per-second IOPS samples
Average Latency(s): average latency, in seconds
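If you only care about the summary, a minimal filter over a run's output (the grep pattern simply matches the field labels above; adjust to taste):

rados bench -p rbd 10 write --no-cleanup | grep -E 'Bandwidth|IOPS|Latency'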
Disk performance metric: IOPS
IOPS is the number of I/O requests a system can service per unit of time, usually expressed as requests per second; an I/O request is typically a read or a write.
IOPS and throughput matter in different scenarios (a worked check follows):
Reading 10,000 files of 1 KB in 10 seconds: throughput = 1 MB/s, IOPS = 1000; this workload is IOPS-bound.
Reading one 10 MB file in 0.2 seconds: throughput = 50 MB/s, IOPS = 5; this workload is throughput-bound.
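As a quick check of the arithmetic above, a throwaway awk snippet (decimal units, 1 MB = 1000 KB, matching the figures in the text):

# case 1: 10,000 files x 1 KB in 10 s; case 2: 1 file x 10 MB in 0.2 s
awk 'BEGIN {
    printf "case 1: IOPS = %.0f, throughput = %.0f MB/s\n", 10000 / 10, (10000 * 1 / 1000) / 10
    printf "case 2: IOPS = %.0f, throughput = %.0f MB/s\n", 1 / 0.2, 10 / 0.2
}'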
Write test with 3 OSDs:
[root@node61 /home/deploy]# ceph osd tree
ID WEIGHT TYPE NAME UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 0.02939 root default
-2 0.01959 host node61
0 0.00980 osd.0 up 1.00000 1.00000
1 0.00980 osd.1 up 1.00000 1.00000
-3 0.00980 host node62
2 0.00980 osd.2 up 1.00000 1.00000
[root@node61 /home/deploy]# rados bench -p rbd 10 write --no-cleanup
Maintaining 16 concurrent writes of 4194304 bytes to objects of size 4194304 for up to 10 seconds or 0 objects
Object prefix: benchmark_data_node61_6776
sec Cur ops started finished avg MB/s cur MB/s last lat(s) avg lat(s)
0 16 16 0 0 0 - 0
1 16 16 0 0 0 - 0
2 16 16 0 0 0 - 0
3 16 17 1 1.01002 1.33333 2.75919 2.75919
4 16 18 2 1.61264 4 4.38175 3.57047
5 16 20 4 2.67874 8 5.63184 4.60117
6 16 21 5 2.77513 4 6.27716 4.93637
7 16 24 8 3.69826 12 7.53663 5.36375
8 16 27 11 4.54269 12 9.63521 6.43958
9 16 27 11 4.11617 0 - 6.43958
10 16 27 11 3.73256 0 - 6.43958
11 16 27 11 3.43962 0 - 6.43958
12 16 27 11 3.18896 0 - 6.43958
13 16 28 12 3.18362 0.8 6.5024 6.44482
14 16 28 12 2.97979 0 - 6.44482
15 16 28 12 2.80256 0 - 6.44482
16 15 28 13 2.81839 1.33333 10.0749 6.72405
17 15 28 13 2.61604 0 - 6.72405
18 15 28 13 2.49061 0 - 6.72405
19 15 28 13 2.36454 0 - 6.72405
2018-08-13 17:52:08.477094 min lat: 2.75919 max lat: 10.0749 avg lat: 6.72405
sec Cur ops started finished avg MB/s cur MB/s last lat(s) avg lat(s)
20 15 28 13 2.2382 0 - 6.72405
21 15 28 13 2.1317 0 - 6.72405
22 15 28 13 2.0433 0 - 6.72405
23 15 28 13 1.96298 0 - 6.72405
24 15 28 13 1.88156 0 - 6.72405
25 15 28 13 1.79303 0 - 6.72405
Total time run: 29.895788
Total writes made: 28
Write size: 4194304
Object size: 4194304
Bandwidth (MB/sec): 3.74635
Stddev Bandwidth: 3.60914
Max bandwidth (MB/sec): 12
Min bandwidth (MB/sec): 0
Average IOPS: 0
Stddev IOPS: 1
Max IOPS: 3
Min IOPS: 0
Average Latency(s): 16.644
Stddev Latency(s): 10.1066
Max latency(s): 29.8954
Min latency(s): 2.75919
Note that the nominal 10-second write test above ran for almost 30 seconds: after the time limit rados bench stops issuing new writes but waits for the in-flight operations to complete, and with 16 operations outstanding against slow disks those stragglers dominate the tail.
Read and write tests with 5 OSDs:
[root@node61 /home/deploy]# ceph osd tree
ID WEIGHT TYPE NAME UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 0.04898 root default
-2 0.01959 host node61
0 0.00980 osd.0 up 1.00000 1.00000
1 0.00980 osd.1 up 1.00000 1.00000
-3 0.00980 host node62
2 0.00980 osd.2 up 1.00000 1.00000
-4 0.01959 host node63
3 0.00980 osd.3 up 1.00000 1.00000
4 0.00980 osd.4 up 1.00000 1.00000
Write:
[root@node61 /home/deploy]# rados bench -p rbd 10 write --no-cleanup
Maintaining 16 concurrent writes of 4194304 bytes to objects of size 4194304 for up to 10 seconds or 0 objects
Object prefix: benchmark_data_node61_7275
sec Cur ops started finished avg MB/s cur MB/s last lat(s) avg lat(s)
0 16 16 0 0 0 - 0
1 16 16 0 0 0 - 0
2 16 16 0 0 0 - 0
3 16 18 2 2.2145 2.66667 2.95397 2.69119
4 16 19 3 2.51211 4 3.92393 3.10211
5 16 20 4 2.76959 4 4.98236 3.57217
6 16 22 6 3.52704 8 3.18835 3.87848
7 16 22 6 3.0701 0 - 3.87848
8 16 23 7 3.00501 2 8.62433 4.55646
9 16 24 8 2.92811 4 10.2561 5.26891
10 16 24 8 2.68245 0 - 5.26891
11 16 24 8 2.47489 0 - 5.26891
12 16 24 8 2.29712 0 - 5.26891
13 16 24 8 2.14324 0 - 5.26891
14 16 24 8 2.0086 0 - 5.26891
15 16 24 8 1.8899 0 - 5.26891
16 16 24 8 1.77778 0 - 5.26891
17 16 24 8 1.68369 0 - 5.26891
18 11 24 13 2.5991 2.22222 15.0841 9.46222
19 11 24 13 2.4753 0 - 9.46222
Total time run: 21.014547
Total writes made: 24
Write size: 4194304
Object size: 4194304
Bandwidth (MB/sec): 4.56826
Stddev Bandwidth: 2.23602
Max bandwidth (MB/sec): 8
Min bandwidth (MB/sec): 0
Average IOPS: 1
Stddev IOPS: 0
Max IOPS: 2
Min IOPS: 0
Average Latency(s): 13.5903
Stddev Latency(s): 6.88653
Max latency(s): 21.0134
Min latency(s): 2.42841
Going from 3 to 5 OSDs lifted write bandwidth from 3.75 MB/s to 4.57 MB/s on this cluster.
Sequential read:
[root@node61 /home/deploy]# rados bench -p rbd 10 seq
sec Cur ops started finished avg MB/s cur MB/s last lat(s) avg lat(s)
0 16 16 0 0 0 - 0
1 16 28 12 47.9131 48 0.158482 0.0650715
2 11 28 17 33.9516 20 1.70342 0.538642
Total time run: 2.556315
Total reads made: 28
Read size: 4194304
Object size: 4194304
Bandwidth (MB/sec): 43.8131
Average IOPS 10
Stddev IOPS: 5
Max IOPS: 12
Min IOPS: 5
Average Latency(s): 1.31447
Max latency(s): 2.55521
Min latency(s): 0.00834177
Random read:
[root@node61 /home/deploy]# rados bench -p rbd 10 rand
sec Cur ops started finished avg MB/s cur MB/s last lat(s) avg lat(s)
0 16 16 0 0 0 - 0
1 16 53 37 147.801 148 0.0163229 0.0764469
2 16 63 47 93.9226 40 0.0147333 0.170013
3 16 75 59 78.6154 48 0.0149613 0.231844
4 16 86 70 69.9585 44 0.00634351 0.349399
5 16 93 77 61.5661 28 0.00581248 0.437091
6 16 110 94 62.636 68 0.00568122 0.485861
7 16 118 102 58.2577 32 0.00595745 0.628554
8 16 125 109 54.4737 28 6.38156 0.759248
9 16 134 118 52.421 36 0.00767004 0.802165
10 16 146 130 51.9749 48 0.00564792 0.851382
11 15 146 131 47.6147 4 6.74079 0.896339
12 15 146 131 43.6481 0 - 0.896339
13 13 146 133 40.9062 4 7.43297 0.986619
14 13 146 133 37.9836 0 - 0.986619
15 9 146 137 36.5177 8 8.8351 1.15867
Total time run: 15.332471
Total reads made: 146
Read size: 4194304
Object size: 4194304
Bandwidth (MB/sec): 38.0891
Average IOPS: 9
Stddev IOPS: 9
Max IOPS: 37
Min IOPS: 0
Average Latency(s): 1.59156
Max latency(s): 10.9095
Min latency(s): 0.00537916
The remaining runs compare two pools, ssd_pool2 and hdd_pool2, which, judging by the names and the /home/crush_map working directory, are pinned to SSD and HDD devices through custom CRUSH rules.
Write, SSD pool:
[root@node61 /home/crush_map]# rados bench -p ssd_pool2 10 write --no-cleanup
Maintaining 16 concurrent writes of 4194304 bytes to objects of size 4194304 for up to 10 seconds or 0 objects
Object prefix: benchmark_data_node61_16843
sec Cur ops started finished avg MB/s cur MB/s last lat(s) avg lat(s)
0 16 16 0 0 0 - 0
1 16 18 2 7.98697 8 0.929823 0.71896
2 16 20 4 6.68635 8 1.58145 1.12577
3 16 25 9 10.6069 20 3.25213 2.10989
4 16 28 12 10.9237 12 4.33429 2.47111
5 16 32 16 11.4436 16 2.35905 2.72101
6 16 36 20 11.4628 16 1.28647 2.78175
7 16 40 24 11.4596 16 7.67592 3.31162
8 16 41 25 10.2458 4 1.53436 3.24053
9 15 44 29 10.5692 16 4.59812 3.49656
10 14 44 30 10.0204 4 8.4173 3.66058
11 12 44 32 9.86333 8 2.50584 3.66558
12 12 44 32 9.15691 0 - 3.66558
13 12 44 32 8.54536 0 - 3.66558
14 12 44 32 8.00719 0 - 3.66558
15 3 44 41 9.6548 9 6.51124 5.17173
Total time run: 17.271009
Total writes made: 44
Write size: 4194304
Object size: 4194304
Bandwidth (MB/sec): 10.1905
Stddev Bandwidth: 6.66405
Max bandwidth (MB/sec): 20
Min bandwidth (MB/sec): 0
Average IOPS: 2
Stddev IOPS: 1
Max IOPS: 5
Min IOPS: 0
Average Latency(s): 5.58109
Stddev Latency(s): 3.95531
Max latency(s): 14.853
Min latency(s): 0.508098
======================================================================================
Write, HDD pool:
[root@node61 /home/crush_map]# rados bench -p hdd_pool2 10 write --no-cleanup
Maintaining 16 concurrent writes of 4194304 bytes to objects of size 4194304 for up to 10 seconds or 0 objects
Object prefix: benchmark_data_node61_16926
sec Cur ops started finished avg MB/s cur MB/s last lat(s) avg lat(s)
0 16 16 0 0 0 - 0
1 16 16 0 0 0 - 0
2 16 16 0 0 0 - 0
3 16 16 0 0 0 - 0
4 16 17 1 0.973636 1 3.49244 3.49244
5 16 18 2 1.56592 4 4.3859 3.93917
6 16 18 2 1.30939 0 - 3.93917
7 16 18 2 1.12515 0 - 3.93917
8 16 22 6 2.94814 5.33333 7.80042 6.296
9 16 23 7 3.06251 4 8.14554 6.56022
10 16 23 7 2.7603 0 - 6.56022
11 16 23 7 2.51239 0 - 6.56022
12 16 23 7 2.30549 0 - 6.56022
13 16 23 7 2.11749 0 - 6.56022
14 16 23 7 1.96852 0 - 6.56022
15 16 24 8 2.08867 0.666667 14.6477 7.57116
16 16 24 8 1.96067 0 - 7.57116
17 14 24 10 2.30219 4 9.67359 8.72441
18 14 24 10 2.17686 0 - 8.72441
Total time run: 19.026622
Total writes made: 24
Write size: 4194304
Object size: 4194304
Bandwidth (MB/sec): 5.04556
Stddev Bandwidth: 1.84444
Max bandwidth (MB/sec): 5.33333
Min bandwidth (MB/sec): 0
Average IOPS: 1
Stddev IOPS: 0
Max IOPS: 1
Min IOPS: 0
Average Latency(s): 12.3805
Stddev Latency(s): 5.38723
Max latency(s): 19.0262
Min latency(s): 3.49244
Sequential read, SSD pool. This run finished in under a second, so the per-second IOPS statistics below are degenerate: Max IOPS and Min IOPS were apparently never updated from their initial values (0 and INT_MAX, i.e. 2147483647) because no complete one-second sample was taken.
[root@node61 /home/crush_map]# rados bench -p ssd_pool2 10 seq
sec Cur ops started finished avg MB/s cur MB/s last lat(s) avg lat(s)
0 16 16 0 0 0 - 0
Total time run: 0.903322
Total reads made: 44
Read size: 4194304
Object size: 4194304
Bandwidth (MB/sec): 194.836
Average IOPS 48
Stddev IOPS: 0
Max IOPS: 0
Min IOPS: 2147483647
Average Latency(s): 0.32224
Max latency(s): 0.622408
Min latency(s): 0.145255
===========================================================
Sequential read, HDD pool:
[root@node61 /home/crush_map]# rados bench -p hdd_pool2 10 seq
sec Cur ops started finished avg MB/s cur MB/s last lat(s) avg lat(s)
0 16 16 0 0 0 - 0
1 16 24 8 31.9511 32 0.00835768 0.1195
2 16 24 8 15.985 0 - 0.1195
3 16 24 8 10.6595 0 - 0.1195
4 14 24 10 9.99459 2.66667 3.04412 0.704327
5 9 24 15 11.9945 20 4.94445 2.1102
6 6 24 18 11.9952 12 5.37769 2.63175
Total time run: 6.093922
Total reads made: 24
Read size: 4194304
Object size: 4194304
Bandwidth (MB/sec): 15.7534
Average IOPS 3
Stddev IOPS: 3
Max IOPS: 8
Min IOPS: 0
Average Latency(s): 3.46288
Max latency(s): 6.09259
Min latency(s): 0.00663387
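Side by side, the two pools tell a clear story: ssd_pool2 sustained 10.19 MB/s on writes against 5.05 MB/s for hdd_pool2, and 194.8 MB/s on sequential reads against 15.8 MB/s.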