List of Contributions

Nezih Boyacioglu


My Content

1 to 20 of 50+ total
Posted By Nezih Boyacioglu Thu March 21, 2024 03:23 AM
Found In Egroup: Primary Storage
Hi Thomas, you can also check "sainfo traceroute -ip_or_name 10.0.0.7" (or dns01.local).
Posted By Nezih Boyacioglu Wed March 20, 2024 05:29 AM
Found In Egroup: Primary Storage
This is why I am asking :) We will update this Redbook for Spectrum Virtualize 8.7 and release it in July. It would be better to split this table into FlashSystem and SVC and publish it as two separate tables.
Posted By Nezih Boyacioglu Tue March 19, 2024 02:09 PM
Found In Egroup: Global Storage
Hi Apidesh, do any of the nodes display error code 550 or 578?
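For example, assuming you can SSH to a node's service IP (a sketch, not from the original thread):
sainfo lsservicenodes    # lists the nodes visible to the service assistant, with their status and any node error (550, 578, ...)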
Posted By Nezih Boyacioglu Tue March 19, 2024 01:58 PM
Found In Egroup: Primary Storage
Hi Sergio, inter-node communication is used for heartbeat and metadata exchange between all nodes of all I/O groups in the cluster. Are you asking about SVC or FlashSystem?
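If it helps, on an FC-attached cluster the inter-node logins can be inspected with (a sketch; the output layout may differ by code level):
lsfabric    # lists FC logins; the entries between the cluster's own node ports are the inter-node paths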
Posted By Nezih Boyacioglu Mon March 18, 2024 02:47 AM
Found In Egroup: Primary Storage
What is your RAID configuration, and what are your disk types?
Posted By Nezih Boyacioglu Sun March 17, 2024 04:15 AM
Found In Egroup: Primary Storage
Go to the service GUI and change it there.
Posted By Nezih Boyacioglu Fri March 15, 2024 02:39 AM
Found In Egroup: Primary Storage
Why don't you replace all the failed disks and carry on without worrying? (Replace the disks one by one and wait for the RAID rebuild to complete before moving on to the next.)
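A sketch of how to watch the rebuild from the CLI, assuming a Spectrum Virtualize based system as in the rest of this thread:
lsdrive                   # overall drive states (online / degraded / offline)
lsarraymemberprogress     # rebuild and copyback progress for each array member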
Posted By Nezih Boyacioglu Thu March 14, 2024 04:21 PM
Found In Egroup: Primary Storage
Hi Randy, a 3-disk failure is very tricky, and it's risky given your RAID configuration. A battery failure causes the cache to become disabled at the controller level. When the write cache is disabled, all writes from the hosts go directly to the disks, which puts extra load on them. If I was in the ...
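Assuming a Storwize/FlashSystem enclosure, the battery state can be checked with (a sketch):
lsenclosurebattery    # battery status, charging state and end-of-life warnings per enclosure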
Posted By Nezih Boyacioglu Thu March 14, 2024 03:57 AM
Found In Egroup: Global Storage
lol :) How did you connect the 2nd 5035? If it has its own management IP address, that means it's an independent 5035; this is not a cluster (HyperSwap) configuration. You must delete the configuration on the 2nd system. You should have one management address for both systems and four service IP addresses, one for each node. ...
Posted By Nezih Boyacioglu Thu March 14, 2024 03:34 AM
Found In Egroup: Primary Storage
Check the IBM docs for the replacement procedure: https://www.ibm.com/docs/en/v3700/7.8.1?topic=parts-replacing-battery-in-node-canister
Posted By Nezih Boyacioglu Thu March 14, 2024 03:28 AM
Found In Egroup: Primary Storage
Go to Settings > Network > Management IP Addresses and change it there.
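The CLI equivalent would be something like this (placeholder addresses, and assuming the management IP sits on port 1, the default):
chsystemip -clusterip 192.0.2.50 -gw 192.0.2.1 -mask 255.255.255.0 -port 1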
Posted By Nezih Boyacioglu Thu March 14, 2024 03:19 AM
Found In Egroup: Global Storage
Also, please send me the lstargetportfc command output from the FlashSystem.
Posted By Nezih Boyacioglu Thu March 14, 2024 02:57 AM
Found In Egroup: Global Storage
Hi, if you're using move data instead of move nodata, it's hard to determine the dependent volumes. I suggest increasing the mountwait parameter of the LTO4 device class and monitoring the activity log, or defining an alert trigger for mount requests so you receive an email when a new cartridge is required. - ...
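A sketch of those two steps on the Spectrum Protect admin command line (the device class name is a placeholder):
update devclass lto4class mountwait=120     # wait up to 120 minutes for a tape mount before failing the operation
query mount                                 # show in-use and pending tape mounts
query actlog begindate=today search=mount   # scan today's activity log for mount messages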
Posted By Nezih Boyacioglu Wed March 13, 2024 11:57 AM
Found In Egroup: Global Storage
Hi Nguyen, which definition are you using in your aliases: port index, WWNN, or WWPN? For storage connectivity, you must use the storage WWPNs instead of the host WWPNs on each port.
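For illustration, WWPN-based aliases and a zone on a Brocade switch would look roughly like this (all names and WWPNs are placeholders):
alicreate "FS5035_P1", "<storage WWPN of 5035 port 1>"
alicreate "DS_CTRL_P1", "<WWPN of the other storage's port 1>"
zonecreate "FS5035_to_DS", "FS5035_P1; DS_CTRL_P1"
cfgadd "PROD_CFG", "FS5035_to_DS"
cfgenable "PROD_CFG"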
Posted By Nezih Boyacioglu Wed March 13, 2024 08:30 AM
Found In Egroup: Global Storage
Can I have a look at your zones?
Posted By Nezih Boyacioglu Wed March 13, 2024 03:06 AM
Found In Egroup: Global Storage
Yes, it's the old GUI :) You can use "Enclosure Actions" to add your 2nd 5035 to your system.
Posted By Nezih Boyacioglu Wed March 13, 2024 03:03 AM
Found In Egroup: Global Storage
Hi Robert, yes, it's that simple. You can also increase the warning threshold from 80% (the default) to 85% or 90%.
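Assuming the thread is about the storage-pool capacity warning (an assumption on my part), the CLI equivalent would be (pool name is a placeholder):
chmdiskgrp -warning 85% Pool0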
Posted By Nezih Boyacioglu Tue March 12, 2024 02:04 PM
Found In Egroup: Global Storage
Hi Cuong. The blog post you are using as a guide is from 2018 and a bit outdated. First of all, you need to zone your storage ports. If your 5035 is using NPIV (it's enabled by default), each port on the 5035 presents two WWPNs: one for host connections and one for storage-to-storage communication. ...
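A way to tell the two WWPN types apart (a sketch; verify the column names on your code level):
lstargetportfc    # WWPNs flagged as virtualized / host_io_permitted are for host traffic; zone the physical (non-virtualized) WWPNs for storage-to-storage connectivity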
Posted By Nezih Boyacioglu Mon March 04, 2024 01:58 PM
Found In Egroup: Primary Storage
Post a new one with the new title and we will try to help as best we can.
Posted By Nezih Boyacioglu Mon March 04, 2024 11:29 AM
Found In Egroup: Primary Storage
Hi Thomas, have you enabled host unmap on the FS5200 (check the lssystem command output)? If it's enabled, and you're sure you've deleted some data on this volume, use this esxcli command to force a SCSI unmap and reclaim the space. If host unmap is not enabled, enable it via "chsystem -hostunmap on" before running ...
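The esxcli command referred to above is presumably the VMFS unmap call; a sketch with a placeholder datastore name:
lssystem                                        # check the unmap-related fields on the FS5200
chsystem -hostunmap on                          # from the post: enable host unmap if it is off
esxcli storage vmfs unmap -l DATASTORE_NAME     # ask ESXi to reclaim free blocks on that datastore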