Hi Frank
Thanks for your post. As you know, you asked me this question in private and I gave you a detailed answer, so you know where I stand.
As this is in public, let me also repeat here what I said in private: I am not a VIOS expert, but I cannot find any evidence of Rochester "not recommending VIOS".
If you want my opinion based on what you have shared, it would be that NVMe will work with VIOS, but on its own it does not seem like a good fit. A VIO Server does not benefit from high-performance storage for its Base Operating System (BOS) in the way that IBM i does, so giving the VIOS BOS a higher-performing I/O profile is not, in and of itself, all that beneficial.
However, passing high-performance storage through to client LPARs would generally be a good thing, so I can see why you would want that. I would be most interested to hear how you plan to implement redundancy and failover for the storage you give to the VIOS clients.
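To make that question concrete, here is a minimal sketch of the kind of mapping involved, assuming the NVMe devices show up on the VIOS as hdisks and using the standard VIOS ioscli commands; the device names (hdisk4, vhost0, vtscsi_lpar1) are purely illustrative and not from your configuration:

    $ lsdev -type disk        # identify the NVMe-backed hdisks owned by this VIOS
    $ lsmap -all              # review the existing virtual SCSI mappings
    $ mkvdev -vdev hdisk4 -vadapter vhost0 -dev vtscsi_lpar1
                              # present hdisk4 to the client LPAR behind vhost0
    $ lsmap -vadapter vhost0  # confirm the client now sees the backing device

The catch is that an internal NVMe device is typically owned by a single VIOS, so unlike SAN LUNs you cannot usually MPIO the same device through two VIO Servers; redundancy would normally mean a second VIOS with its own NVMe devices and the client (for example IBM i mirroring) protecting across the two.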
Like most things in life, VIOS has its pros and cons, and it is only when you have completed your system design that you can see whether or not it is appropriate to use VIOS, NVMe or a SAN.
As Larry pointed out in a previous reply, in many IBM i environments "i hosting i" is a simple and reliable solution that IBM i SysAdmins can understand and maintain much more easily. I personally run IBM i hosting IBM i on NVMe, and I have clients who do the same (some also hosting AIX); these all run well, and considerably faster, when hosted on NVMe.
You don't give us much detail of your desired system design; you mention a P10 server, VIOS and 2-3 LPARs. I would recommend you post a new thread in the IBM Community forum to get broader input on your question. I would suggest the PowerVM group, but wherever you put it you will get a better response if you give details of your desired system design, including:
- number of VIO Servers
- number and type of guest LPARs
- storage capacity requirements for each LPAR
- storage performance requirements for each LPAR
- storage RAS (Reliability, Availability & Serviceability) requirements for each LPAR
- any advanced VIOS functions that you would like to use, e.g. LPM
Then not just me but many systems engineers can offer the opinions and advice you seek, and many of those people will know far more about VIOS than I do. I'm sure that together, as a community, we can help you.
I look forward to seeing your post.
------------------------------
Steve Bradshaw Friendly Techie Bloke
------------------------------
Original Message:
Sent: Mon March 22, 2021 06:49 PM
From: Frank Johansen
Subject: What's all the fuss about NVMe's?
Steve, I was listening to your presentation, and I have sold several small systems with NVMe's so I know it quite well. But I have asked Rochester and have had no answer so far:
1. NVMe's are so fast, and on a P10 we can have quite a lot of data, so I would like to use VIOS to have e.g. 2-3 LPAR's.
2. Rochester does not recommend VIOS, but I think they could at least test how much the performance degradation will be, because the alternative is to still sell and install external storage, and many customers don't like that.
Any comment to this?
Mobile: +47 92292811 E-mail: frank.johansen@kvikt.no
Original Message:
Sent: 3/22/2021 7:27:00 AM
From: Steve Bradshaw
Subject: What's all the fuss about NVMe's?
Hi Guys
I thought I'd take a break from the IBM P9 unboxing videos to do a little research into NVMe's and the sort of impact they could have on a typical Scale Out POWER9 S914 server.
The link below is to a video of the presentation I gave at the i-UG.co.uk Hybrid event in my home town of Wolverhampton last week.
https://www.rowtonit.com/whats-all-the-fuss-about-nvme
I thought I would share it on here in case any of you had wondered what sort of effect this new type of storage might have on IBM i.
Spoiler alert: they really are faster and cheaper than HDDs ;-)
Cheers Brad
PS Don't worry, no servers or techies were harmed in the making or delivery of this presentation ;-)
------------------------------
Steve Bradshaw Friendly Techie Bloke
------------------------------