AIX

Connect with fellow AIX users and experts to gain knowledge, share insights, and solve problems.

  • 1.  NIM Restore goes 554.

    Posted Mon October 29, 2007 06:18 AM

    Originally posted by: SystemAdmin


    Hi guys,
    I have the following problem. I'm trying to restore a virtualised server (root disks coming from VIOs), but after the mksysb gets restored and the LPAR starts booting, the HMC shows 0554 on the operator panel and it just stops there. The setup is as follows: the host is being restored to non-virtualised disks, the root disk is hdisk3 on the live system, and hdisk0 is the destination disk that the restore program picks to put the mksysb on. I found suggestions that this might mean a corrupted filesystem or log LV, but that's not the case, since I ran fsck on the filesystems and logform on the log device, and everything looked OK. I'm thinking the change from hdisk3 -> hdisk0 might be causing problems. The major/minor numbers of /dev/ipldevice match those of hdisk0, and lslv -m hd5 shows it's on hdisk0. The only thing is that bootlist -m normal -o gives "-", and when I tried to set it with bootlist -m normal hdisk0, it still gives "-". I have restored a virtualised server before and everything was OK; the only difference is that there was no change in the name of the root disk, and that worked just fine. Any suggestions will be helpful.
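    For reference, the checks described above can be run from the NIM maintenance shell once rootvg is accessed (this is a sketch of what was already tried; hdisk0 is the target disk in this setup, and device names will differ elsewhere):

    fsck -y /dev/hd4                    # check the root filesystem
    logform /dev/hd8                    # reformat the JFS log device
    lslv -m hd5                         # confirm the boot LV sits on hdisk0
    ls -l /dev/ipldevice /dev/hdisk0    # major/minor numbers should match
    bootlist -m normal -o               # show the normal-mode boot list
    bootlist -m normal hdisk0           # try to set it explicitly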


  • 2.  Re: NIM Restore goes 554.

    Posted Mon October 29, 2007 11:51 AM

    Originally posted by: SystemAdmin


    Kindly give us some more information:

    1.) Are you restoring a Virtual I/O Server or a client?
    2.) How many virtual I/O adapters/devices are defined?

    If it's a client, you can try the following on the Virtual I/O Server, then reset the client LPAR.

    A.) Find out all the virtual adapters defined for the LPAR in question (exclude the rootvg disk/adapter from the following command).

    The following command will unconfigure all the child devices as well as the adapter vhost2 (it will not remove the definition):
    $ rmdev -dev vhost2 -ucfg -recursive

    Reboot the virtual I/O client.

    Then, on the Virtual I/O Server, run the following to make the vhost adapter and its child devices available again:

    $ cfgdev -dev vhost2
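    For step A, the vhost adapters serving a given client LPAR can be listed from the VIO Server's padmin shell (vhost2 here is just an example name, not taken from the poster's setup):

    $ lsmap -all               # list every vhost adapter, its backing devices, and the client partition ID
    $ lsmap -vadapter vhost2   # show just the adapter serving the LPAR in question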



  • 3.  Re: NIM Restore goes 554.

    Posted Mon October 29, 2007 04:15 PM

    Originally posted by: SystemAdmin


    Hi,
    Sorry if I was unclear with my explanation. I'm trying to do a test restore of a VIO client on a non-VIO LPAR profile (the only virtual adapter is the Ethernet one). I have done a test restore of another VIO client LPAR on the same non-VIO LPAR profile without any problems. It's just that the LPAR that fails has its rootvg disk named hdisk3, while the one that restored successfully had rootvg on hdisk0.


  • 4.  Re: NIM Restore goes 554.

    Posted Tue October 30, 2007 09:06 AM

    Originally posted by: SystemAdmin


    Just to point out that I have already tried all the things in -> http://www-1.ibm.com/support/docview.wss?rs=111&context=SWG10&uid=isg3T1000132&origin=pSeriesTechnicalBulletin#2

    None of them helped.


  • 5.  Re: NIM Restore goes 554.

    Posted Wed October 31, 2007 04:13 AM

    Originally posted by: SystemAdmin


    UPDATE: After doing some more investigation I noticed that when I do a maint_boot from NIM and access rootvg before mounting the filesystems, bootlist -m normal -o returns hdisk0 blv=hd5, which looks OK. After I type exit, it mounts the filesystems in maintenance mode, but when I then issue bootlist -m normal -o the result is "-". This is not what happens after a successful restore of a different mksysb, i.e. there it still returns hdiskX blv=hd5.
    Hint to IBM: add an edit option so one can edit his own post to fix typos etc.


  • 6.  Re: Fixed

    Posted Wed October 31, 2007 08:28 AM

    Originally posted by: SystemAdmin


    Hi guys,
    After talking with a colleague, he suggested that this VIO client had never seen any real hardware, and thus didn't have the drivers to work with it. This turned out to be the case: after finding the drivers needed for the SCSI adapter and installing them, plus an update_all, I rebooted and the host was working perfectly well. The other VIO clients that restored without problems in the first place may have had real SCSI drives attached at some point in time, which is why they already had those drivers in the kernel. Hope this turns out useful for someone.
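    In other words, the fix amounts to installing the missing device-support filesets and rebuilding the boot image. A rough sketch of the sequence (the fileset name and install source below are illustrative, not from the original post; check lscfg and lslpp -l output for the real ones on your system):

    cfgmgr -v                                       # discover the physical SCSI adapter
    lscfg -v | grep -i scsi                         # identify the adapter type
    installp -agXd /dev/cd0 devices.pci.scsi.rte    # illustrative fileset name and source
    smitty update_all                               # the update_all step mentioned above
    bosboot -ad /dev/hdisk0                         # rebuild the boot image on the target disk
    bootlist -m normal hdisk0                       # set the normal-mode boot list
    shutdown -Fr                                    # reboot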