IBM i Global


Limitations in PowerVS for IBM i - iSCSI Backup to Falconstor VTL?

  • 1.  Limitations in PowerVS for IBM i - iSCSI Backup to Falconstor VTL?

    Posted Mon June 09, 2025 09:21 AM

    Hi,

    We are currently evaluating FalconStor VTL over iSCSI as a backup solution for IBM i in PowerVS.
    Since we normally use only Fibre Channel for IBM i VTL backups, we have some concerns regarding network performance, I/O path design, and the virtualization layers involved.

    Could anyone please clarify the following:

    1. Backup over iSCSI from IBM i

    • What are the known limitations when using FalconStor VTL over iSCSI from IBM i in PowerVS?

    • Are there any official guidelines or performance benchmarks available compared to Fibre Channel?

    2. Network Path – SEA vs SR-IOV/vNIC

    • Is all iSCSI traffic from IBM i routed through VIOS using Shared Ethernet Adapter (SEA)?

    • Is SR-IOV or vNIC supported for IBM i in PowerVS to allow bypassing VIOS for network traffic?

    • If SEA is used, is it single-threaded, and how does that impact sustained bandwidth during large backup/restore operations?

    3. Expected Bandwidth via VIOS

    • Can IBM provide any guidance or guarantees regarding achievable or expected bandwidth (MB/s or Gbps) when transferring backup data over TCP/IP? (A rough back-of-the-envelope illustration follows at the end of this question list.)

    • Does IBM ensure that the VIOS partition has sufficient CPU and memory resources in PowerVS to handle high-throughput TCP/IP traffic for iSCSI backups?

    4. Performance Impact

    • Are there best practices to mitigate performance bottlenecks caused by VIOS CPU contention or SEA throughput limitations?
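
    To make question 3 concrete, here is a rough back-of-the-envelope sketch in Python (purely illustrative). The data size and the throughput figures are assumptions we picked for the example, not IBM-published or measured PowerVS numbers; the only point is to show how a sustained MB/s rate translates into a backup window.

    ```python
    # Rough backup-window estimate: how long a full save takes at a given
    # sustained throughput. The rates below are placeholder assumptions for
    # illustration only, not measured or guaranteed PowerVS/FalconStor figures.

    def backup_hours(data_tb: float, sustained_mb_per_s: float) -> float:
        """Hours needed to move data_tb terabytes at a sustained MB/s rate."""
        total_mb = data_tb * 1_000_000          # 1 TB = 1,000,000 MB (decimal units)
        return total_mb / sustained_mb_per_s / 3600

    library_size_tb = 4.0                        # hypothetical amount of data to save

    for label, rate_mb_s in [
        ("constrained iSCSI path, ~200 MB/s", 200),
        ("well-tuned iSCSI path,  ~500 MB/s", 500),
        ("on-premises FC VTL,     ~800 MB/s", 800),
    ]:
        print(f"{label:36s} -> {backup_hours(library_size_tb, rate_mb_s):5.1f} h")
    ```

    At the assumed rates, the same 4 TB save ranges from roughly 1.4 to 5.6 hours, which is why the sustained bandwidth of the iSCSI path matters so much to us for the backup and restore windows.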



    ------------------------------
    Best regards,

    Rikard Thorin - iPOC
    Technical Consultant - IBM Power and Storage Systems
    ------------------------------


  • 2.  RE: Limitations in PowerVS for IBM i - iSCSI Backup to Falconstor VTL?

    Posted Thu June 12, 2025 11:00 AM

    Hi Rikard,

    While I don't have real-world experience with PowerVS (just theory from documentation and discussions with other users), I am sharing my comments.

    1. 

    • The limitation is certainly bandwidth. But would it really be a concern in PowerVS? The entire FalconStor VTL runs in a virtualized environment anyway, and the bottleneck can appear anywhere. As users, we have no ability to modify that configuration.
    • The iSCSI protocol is a must for offering virtual machines in any cloud solution. I am not aware of any documentation that provides performance numbers comparing the different protocols.
      My biggest concern about running workloads in the cloud (apart from the shared environment) is the restore process. An administrator does not have access to the console in PowerVS, so in a disaster situation the restore process is more complicated. It is necessary to run installios (installation/restore through NFS), which requires all of the NFS share configuration. So a restore in the cloud will certainly be longer and more complicated compared to a directly Fibre Channel-attached VTL running on premises. I think the speed degradation caused by iSCSI is the smallest issue.

    2. No idea how the iSCSI traffic is routed. I strongly believe it is designed as completely separate infrastructure. Again, I don't think this is something a regular administrator should worry about. I believe PowerVS designed the infrastructure in the best possible way; if they hadn't, PowerVS would have lost out to other cloud providers.

    • I don't think PowerVS has an option to choose which architecture is used for network virtualization. Why should you worry about it?
      I am not sure what SEA configuration is used. Again, why are you interested? It is the cloud provider's business to provide the most efficient network that makes sense from a cost and profit perspective.

    3. 

    • No idea if there are bandwidth guarantees in PowerVS. As far as I know from other cloud providers, the cloud environment is shared among customers, so it is expected that from time to time it will be more heavily utilized by customer A and on another day by customer B. If a customer needs the best performance, they can buy an entire machine in the cloud, which will guarantee it.
    • VIOS resources are completely out of the cloud user's control. Again, this is a shared environment, and there are many places where a bottleneck can show up.

    I don't know why you are so concerned about TCP bandwidth; an IBM i workload is not a heavy, big-data type of workload. IBM i is a transactional, I/O-oriented system where response time is a more important factor than bandwidth. It is more important to get a quick response when a user hits Enter than to transfer 20 GB of data in a short amount of time. We run dozens of LPARs on enterprise servers, and when I check in the performance data how much TCP data is transferred, the numbers are usually very low.
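
    Just to illustrate what "very low numbers" means in practice, here is a tiny Python sketch that converts bytes transferred per collection interval into an average Mbps figure. The byte counts and interval length are made up for the example and are not taken from any real Collection Services data or tool output.

    ```python
    # Convert per-interval byte counters into average throughput, to show how
    # a typical transactional IBM i workload translates into sustained Mbps.
    # The sample values below are invented for illustration only.

    def avg_mbps(bytes_transferred: int, interval_seconds: int) -> float:
        """Average throughput in megabits per second over one interval."""
        return bytes_transferred * 8 / interval_seconds / 1_000_000

    # Hypothetical 15-minute intervals (900 s) of TCP bytes sent by an LPAR.
    samples = [
        ("interactive daytime peak", 2_700_000_000),   # ~2.7 GB in 15 minutes
        ("evening batch window",     9_000_000_000),   # ~9 GB in 15 minutes
    ]

    for label, nbytes in samples:
        print(f"{label:26s}: {avg_mbps(nbytes, 900):6.1f} Mbps average")
    ```

    Even the busier of those two made-up intervals averages well under 100 Mbps, a small fraction of what a 10 Gbps link can carry.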

    When I think about the cloud and shared infrastructure, my concern is more about firmware and other elements. Just to give you an example: we were dealing with a sophisticated problem where only a specific workload on a few LPARs was affected by bad dispatching of CPU cycles by the Power Hypervisor. I won't go into details, but the fix was delivered for the service processor. In order to identify the issue it was necessary to provide several FSP dumps and make many changes to CPU utilization. This is the type of investigation that can never be done in a cloud environment.

    Also, when I am in talks with cloud providers, they are mostly oriented toward small IBM i LPARs. They will never tell you this, but when you ask what storage devices and server models sit behind the offering, you will understand that there is no way to host a heavy, I/O-intensive IBM i workload. So for these small customers I don't think it makes any difference whether traffic runs through vNIC/SR-IOV or SEA; they will not feel the difference.



    ------------------------------
    Bartlomiej Grabowski
    IBM Champion - Platinum Redbook Author and Principal System Specialist
    ------------------------------