As Grover pointed out, JFS2 logging is synchronous for every metadata change and therefore depends on the write latency of the storage. Spinning disks will hurt that significantly.
Linux is significantly lazier about writing updates to physical disk. I have seen IO on Linux where everything basically ended up only in memory and was de-staged at a later point in time. You may want to look at your XFS test environment and check if/when Linux actually writes anything to disk.
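One way to see how much written data Linux is still holding in memory is to watch the dirty-page counters and writeback thresholds. A quick Linux-only sketch (assumes procfs is mounted; these paths do not exist on AIX):

```shell
# How much written data is still only in memory (kB): Dirty pages
# waiting to be flushed, Writeback pages currently being flushed.
grep -E '^(Dirty|Writeback):' /proc/meminfo

# The knobs that decide when the kernel flushes: maximum age of a
# dirty page (centiseconds) and the %-of-RAM thresholds at which
# background / blocking writeback kicks in.
cat /proc/sys/vm/dirty_expire_centisecs \
    /proc/sys/vm/dirty_background_ratio \
    /proc/sys/vm/dirty_ratio
```

If Dirty stays large during the tar extract and only drains afterwards, the "fast" XFS result is mostly a measurement of RAM, not disk.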
So the question here is, what happens in that scenario when "power goes out" ...
With AIX and its synchronous redo logging, metadata changes can be replayed very quickly and the file system brought back to a consistent state. With Linux / XFS that will take significantly more time, and potentially cause more issues, because everything that was only in memory is lost.
FYI - sync is not a synchronous command: it only tells the OS to flush the buffer cache "at some time" and returns before all IO has actually happened.
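If you want a command to return only once specific data is on stable storage, the application has to call fsync(). From the shell you can approximate that with GNU dd's conv=fsync; a Linux/coreutils sketch (the file path is just an example, and conv=fsync is not portable to AIX's dd):

```shell
# Write 4 KiB and return only after fsync() has pushed the data
# (and the metadata fsync flushes) down to the device.
dd if=/dev/zero of=/tmp/fsync-demo.bin bs=4k count=1 conv=fsync 2>/dev/null

# By contrast, plain sync only schedules the flush of all dirty
# buffers; POSIX allows it to return before the IO has completed.
sync
```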
To answer your tuning question: I'm not aware of an option that makes JFS2 skip synchronous redo log writes while JFS2 logging is enabled. I'd also question the purpose of "lazy logging" that only goes to memory.
This means, as Grover already pointed out, you need to optimize the IO characteristics of your JFS2 redo log to meet your requirements.
With spinning disks you could consider a dedicated redo log LV on a different hdisk (or set of hdisks) from the one(s) holding your data. That should reduce the number of seeks and the seek time significantly, since the redo writes then land much closer together on disk.
Preferably, of course, you'd want storage with no seek penalty for the JFS2 redo logs and low IO latency.
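Setting up such a dedicated log could look roughly like this (an AIX sketch; datavg, hdisk9, fslog01 and /oradata are placeholders — pick an otherwise idle disk and size the log LV for your workload):

```shell
# Create a JFS2 log LV on a disk that does not hold the data LV.
mklv -t jfs2log -y fslog01 datavg 1 hdisk9

# Format it as a JFS2 log device.
logform /dev/fslog01

# Point the file system at the new log (takes effect on remount).
chfs -a log=/dev/fslog01 /oradata
```

With chfs -a log=INLINE you can switch back to an inline log later and compare both layouts under your tar-extract test.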
I did a quick test in my lab with a file system spread over 5 disks on IBM FlashSystem:
# lslv -L datalv
LOGICAL VOLUME: datalv VOLUME GROUP: datavg
LV IDENTIFIER: 00c65dc700004c0000000191a444acb5.1 PERMISSION: read/write
VG STATE: active/complete LV STATE: opened/syncd
TYPE: jfs2 WRITE VERIFY: off
MAX LPs: 4785 PP SIZE: 32 megabyte(s)
COPIES: 1 SCHED POLICY: striped
LPs: 4785 PPs: 4785
STALE PPs: 0 BB POLICY: relocatable
INTER-POLICY: maximum RELOCATABLE: no
INTRA-POLICY: middle UPPER BOUND: 5
MOUNT POINT: /oradata LABEL: /oradata
DEVICE UID: 0 DEVICE GID: 0
DEVICE PERMISSIONS: 432
MIRROR WRITE CONSISTENCY: on/PASSIVE
EACH LP COPY ON A SEPARATE PV ?: yes (superstrict)
Serialize IO ?: NO
INFINITE RETRY: no PREFERRED READ: 0
STRIPE WIDTH: 5
STRIPE SIZE: 1m
DEVICESUBTYPE: DS_LVZ
COPY 1 MIRROR POOL: None
COPY 2 MIRROR POOL: None
COPY 3 MIRROR POOL: None
ENCRYPTION: no
File system mounted like this:
/dev/datalv /oradata jfs2 Aug 30 15:24 rw,noatime,log=INLINE
Downloaded the linux-6.11 tarball and did a test extract with normal JFS2 logging enabled:
# time tar -xf /stage/linux-6.11.tar
real 1m21.22s
user 0m0.31s
sys 0m3.88s
# ls
linux-6.11 pax_global_header
# find . | wc -l
91460
# du -g .|tail
0.00 ./linux-6.11/tools/writeback
0.08 ./linux-6.11/tools
0.00 ./linux-6.11/usr/dummy-include
0.00 ./linux-6.11/usr/include
0.00 ./linux-6.11/usr
0.00 ./linux-6.11/virt/kvm
0.00 ./linux-6.11/virt/lib
0.00 ./linux-6.11/virt
1.57 ./linux-6.11
1.57 .
------------------------------
Ralf Schmidt-Dannert
------------------------------
Original Message:
Sent: Tue September 17, 2024 10:08 AM
From: jack smith
Subject: JFS2 horrible slow
Considering how old this issue is, I highly doubt that IBM will suddenly start enhancing the log just because I asked nicely.
So to summarize: there are no settings which change the way the log works, nor any which affect the log in some other way. Is that correct?
------------------------------
jack smith
Original Message:
Sent: Thu November 07, 2013 10:06 AM
From: Archive User
Subject: JFS2 horrible slow
Originally posted by: XF07_Harald_Dunkel
Hi folks,
I've got 2 8231 running AIX 6.1. 32 GByte RAM, 2 SAS disks. No RAID. No virtual hosts. Problem: if logging is enabled, then JFS2 on a local disk is slower than an NFS connection to a remote Linux PC. My colleagues are complaining about the poor performance.
To give you some numbers:
Extracting the linux source tarball for 3.11.6 on a local JFS2 filesystem takes about 35 minutes. If I mount a remote filesystem via NFSv4 and use it for the same test, then it takes just 2 minutes. If I run the test locally on the NFS server (Linux, amd64), then it takes only a few seconds, including sync.
I understand that this is a special case, writing a ton of tiny files. In daily work the poor performance doesn't show as much, but it is enough that nobody likes to work on the AIX hosts.
Is there something misconfigured? AFAICS the documentation says that asynchronous IO is enabled by default. Using external or inline logging doesn't make a huge difference.
Every helpful comment would be highly appreciated.
Harri