Wednesday, August 22, 2012

UCS and Fabric Interconnect firmware update

Updating the firmware of the UCS IO modules, adapters, BIOS, CIMC controllers, and FIs is straightforward, and you can do it from UCS Manager.

The firmware version before the upgrade is 2.0(2q); I'm upgrading to 2.0(3a).

1) The image below shows the old firmware.

Click on Equipment--Firmware Management--Installed Firmware.

Scroll down to see the Fabric Interconnect firmware.

Click on the Packages tab in the same pane to see which firmware packages are available on the Interconnect.
Look at the Version field to see what versions are present. Here I can see only the existing firmware bundle, so I need to download the new version from the Cisco site and upload it to the FI.
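If you prefer the CLI, roughly the same information is available over SSH to the FI. The scope and command names below are from memory of the UCSM CLI, so verify them against your version:

```
UCS-A# scope firmware
UCS-A /firmware # show package     (lists the firmware bundles present on the FI)
UCS-A /firmware # show image       (lists the individual images within the bundles)
```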

Since I have only B-Series servers and FIs, I'm downloading the B-Series bundle and the Infrastructure bundle.

You can also see that the state of the bundle is Active.

Now go ahead and download the bundle from the Cisco site (an authorized login is needed) to your laptop, then upload it to the FI. Before uploading, check that enough space is available on the FI.

Go to Equipment--primary Fabric Interconnect--General tab.
Look at the Total Memory and Available Memory. Here we have enough memory for the upload.
If you look at the bootflash, only 16% is used.
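The free space can also be checked from the FI's local management CLI. This is a sketch from memory and the partition name may differ on your version:

```
UCS-A# connect local-mgmt
UCS-A(local-mgmt)# dir workspace:    (shows files and bytes free on the workspace partition)
```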

Click on the Download Tasks tab on the same page and click Download Firmware; a Download Firmware window will appear.

Choose Local File System, browse to the file on your local disk/laptop, and start the upload to the Fabric Interconnect.

Once the upload completes, you can see the status on the General tab of the bottom pane.
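As an alternative to uploading through the GUI, the CLI can pull the bundle straight from a file server. The server address and filename here are hypothetical, and the command names are from memory of the UCSM CLI:

```
UCS-A# scope firmware
UCS-A /firmware # download image scp://admin@192.168.15.50/ucs-k9-bundle-b-series.2.0.3a.B.bin
UCS-A /firmware # show download-task    (watch the transfer status)
```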

In the same window, click on the Update Firmware tab.

You will get the Update Firmware window.
In this window you can see the Running Version, Startup Version, and Backup Version for each component.
There is also a drop-down list of backup versions for each upgradable component.

Choose the latest version in the drop-down list for each component and then click the Apply button.
You can see the status of each component change from Ready to Scheduled and then Updating.

You can see that the new version is now the running version and the old version is the backup version.
Some cards need a reboot before they reflect the current version, so you may need a server reboot in that case.
You can also see the FI status as Activating.

Well, it's that simple: the firmware is updated.

Thanks for reading

Jibby George

Monday, August 20, 2012

DFM error: No buffer space available (maximum connection reached)

NetApp OnCommand/DFM/NMC error: No buffer space available (maximum connection reached)

*Update-- The solution below worked for me only temporarily. The permanent fix was to install the Windows hotfix:

Kernel sockets leak on a multiprocessor computer that is running Windows Server 2008 R2 or Windows 7

This is for my future reference: this is the error I came across with OnCommand running on a Windows 2008 R2 server.

The workaround is to increase the number of dynamic ports on the Windows 2008 server.

1) Open the Windows command line and check the current dynamic port range.

C:\>netsh int ipv4 show dynamicport tcp

Protocol tcp Dynamic Port Range
Start Port      : 49152
Number of Ports : 16384

2) Increase the number of ports with the command

C:\>netsh int ipv4 set dynamicportrange protocol=tcp startport=5000 numberofports=60536

3) Confirm the change:

C:\>netsh int ipv4 show dynamicport tcp

Protocol tcp Dynamic Port Range
Start Port      : 5000
Number of Ports : 60536
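To judge how close you actually are to exhausting the range before and after the change, a quick (admittedly rough) check is to count the TCP sockets the DFM server is holding, since many of them sit in TIME_WAIT when the leak hits:

```
C:\>netstat -ano -p tcp | find /c "TIME_WAIT"
```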

Thanks for reading.

Thursday, June 28, 2012

Configure OnCommand DFM to discover NetApp filer in different subnet

This is a requirement when you install the OnCommand DFM server in the management network and need to discover the NetApp filer through a different network, such as the NFS network.

If you have the management IP configured on the NetApp, DFM will discover the filer through the management network. But if you have a vFiler configured in a different network (VLAN), DFM won't discover the vFiler.

This is also a requirement for VSC when it is installed on the OnCommand server. VSC will show all the filers and vFilers in the Monitoring and Host Configuration sections through the management IP address, but the vFiler will fail to be discovered in the Backup and Recovery section; as a result, you won't be able to run any backups on datastores presented through the vFiler.

To overcome this, you need to configure the DFM server with two interfaces, one in the management network and the other in the NFS network (in my case). My DFM server is a VM.

By default, DFM discovers appliances in the same subnet. To discover appliances in a different subnet you need to do the following:

(192.168.15.x is my management network and 192.168.90.x is my storage network)

1) Add an SNMP community for the storage subnet:

# dfm snmp add -m 24 -v 1 -c public <network-address>

where <network-address> is the network of the NetApp filer's interface on the different VLAN (the 192.168.90.x network in my case).

2) Check whether the host is already added through the management IP address:

# dfm host list

3) If the host is not listed, add it:

# dfm host add <hostname>
# dfm host list

4) Now you can see the host added through the management interface. If you run a host diag, you can see that it won't work for the 90 network:

# dfm host diag <hostname>

5) Now change the appliance's primary path:

# dfm host set <hostname> hostPrimaryAddress=<storage-ip>
# dfm host diag <hostname>

With that, you can see that any vFilers on the NetApp controller are also detected.
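For reference, the whole fix condenses to this sequence, where the angle-bracketed values are placeholders for my lab's hostname and addresses:

```
# dfm snmp add -m 24 -v 1 -c public <storage-network>
# dfm host add <filer-hostname>
# dfm host set <filer-hostname> hostPrimaryAddress=<storage-ip>
# dfm host diag <filer-hostname>
```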

Thanks for reading...Jibby