Originally posted by: tux2015
Hello,
The software manufacturer requires 1 GB/s read/write throughput for their product, which does 12 KB I/O.
We have a NetApp all-flash storage system.
With a Linux (x86_64) VM on the same storage, on the same LUNs and with the same 12 KB I/O, we get a little more than 1 GB/s read/write.
But on AIX the write speed is only 45 MB/s, while the read speed is 917 MB/s.
The AIX LPAR is connected to the storage through a VIO server.
The filesystem is built on 8 LUNs.
We tried stripe sizes of 4 MB and 128 KB.
The LV currently looks like this:
# lslv lvtest
LOGICAL VOLUME: lvtest VOLUME GROUP: datavg
LV IDENTIFIER: 00fa403600004c000000016aa16e016a.1 PERMISSION: read/write
VG STATE: active/complete LV STATE: opened/syncd
TYPE: jfs2 WRITE VERIFY: off
MAX LPs: 512 PP SIZE: 1024 megabyte(s)
COPIES: 1 SCHED POLICY: parallel
LPs: 8 PPs: 8
STALE PPs: 0 BB POLICY: relocatable
INTER-POLICY: maximum RELOCATABLE: yes
INTRA-POLICY: middle UPPER BOUND: 128
MOUNT POINT: /test LABEL: /test
DEVICE UID: 0 DEVICE GID: 0
DEVICE PERMISSIONS: 432
MIRROR WRITE CONSISTENCY: on/ACTIVE
EACH LP COPY ON A SEPARATE PV ?: yes
Serialize IO ?: NO
INFINITE RETRY: no PREFERRED READ: 0
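For anyone who wants to reproduce the setup: a striped LV and JFS2 filesystem across the 8 LUNs can be created roughly like this (VG, LV, and hdisk names are examples, not our exact ones; `-S` sets the strip size per disk):

```shell
# Create a striped LV across 8 hdisks with a 128 KB strip size
# (use -S 4M for the 4 MB variant); 8 LPs and the disk names are examples.
mklv -y lvtest -t jfs2 -S 128K -u 8 datavg 8 hdisk2 hdisk3 hdisk4 hdisk5 hdisk6 hdisk7 hdisk8 hdisk9

# Put a JFS2 filesystem on it and mount it
crfs -v jfs2 -d lvtest -m /test -A yes
mount /test
```

Note that the lslv output above shows INTER-POLICY: maximum, i.e. the current LV is spread across the PVs (PP-level placement) rather than striped with mklv -S.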
Here is the output of the software manufacturer's check tool on AIX:
executing bench on /test (300000ms, 2048000000B)
Results:
Physical memory: 17.179GB
Number of processors: 8
Sequential read: 266.571MB/s
Sequential write 174.854MB/s
Random 12k read: 74675 io/s (917.606MB/s) [unreliable: 0%]
Random 12k write: 3812 io/s (46.841MB/s) [unreliable: 0%]
Queue depth is 64 on both the LPAR and the VIO servers:
# lsattr -El hdisk5
PCM PCM/friend/vscsi Path Control Module False
algorithm fail_over Algorithm True
hcheck_cmd test_unit_rdy Health Check Command True+
hcheck_interval 30 Health Check Interval True+
hcheck_mode nonactive Health Check Mode True+
max_transfer 0x100000 Maximum TRANSFER Size True
pvid 00fa4036e7ca3dc90000000000000000 Physical volume identifier False
queue_depth 64 Queue DEPTH True+
reserve_policy no_reserve Reserve Policy True+
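For completeness, disk attributes like queue_depth and max_transfer can be checked and changed like this (the values shown are just examples; `-P` stages the change in the ODM for the next reboot, which is needed while the disk is in use):

```shell
# Change queue_depth / max_transfer on a vSCSI disk; values are examples.
# -P defers the change to the next reboot if the device is busy.
chdev -l hdisk5 -a queue_depth=64 -a max_transfer=0x100000 -P

# Verify the current (running) settings
lsattr -El hdisk5 -a queue_depth -a max_transfer
```

The same attributes must also be set on the backing devices on the VIO server side, since the vSCSI client queue depth is capped by what the VIOS presents.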