Sometimes, when an NFS server is rebooted or offline for a while, its volumes will remain inactive or inaccessible and greyed out in vCenter/vSphere. To restore an inactive NFS volume in ESXi 5.x, first verify that the NFS server is in fact up.
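A quick sanity check is to ping the server from the host's vmkernel interface, since that is the path NFS traffic actually takes (substitute your own NFS server's address; 192.168.0.251 is the one used throughout the examples below):

~ # vmkping 192.168.0.251

Once the server is reachable, do the following from the command line: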
List the mounted volumes:
~ # esxcli storage nfs list
Volume Name  Host           Share  Accessible  Mounted  Read-Only  Hardware Acceleration
-----------  -------------  -----  ----------  -------  ---------  ---------------------
nfsvol1      192.168.0.251  /nfs1  false       true     false      Unknown
nfsvol2      192.168.0.251  /nfs2  false       true     false      Unknown
The volumes are still mounted but no longer accessible; that is the stale state. Remove them:
~ # esxcli storage nfs remove -v nfsvol1
~ # esxcli storage nfs remove -v nfsvol2
List again to ensure that the inactive or inaccessible volumes are gone:
~ # esxcli storage nfs list
Add or mount the storage:
~ # esxcli storage nfs add -H 192.168.0.251 -s /nfs1 -v nfsvol1
~ # esxcli storage nfs add -H 192.168.0.251 -s /nfs2 -v nfsvol2
And list again to verify that the volumes are mounted and accessible:
~ # esxcli storage nfs list
Volume Name  Host           Share  Accessible  Mounted  Read-Only  Hardware Acceleration
-----------  -------------  -----  ----------  -------  ---------  ---------------------
nfsvol1      192.168.0.251  /nfs1  true        true     false      Not Supported
nfsvol2      192.168.0.251  /nfs2  true        true     false      Not Supported
Note: You can achieve the same thing by remounting the volumes through the vCenter UI, but why bother when the command line is so much more fun? Besides, the UI sometimes throws an error that doesn't seem to happen from the command line.
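If several volumes have gone stale at once, the same esxcli remove/add commands can be wrapped in a small shell loop. This is just a sketch for the example environment above: it assumes both shares live on the same server and uses the volume-to-share mapping shown earlier, so adjust the pairs to match your setup:

~ # for pair in nfsvol1:/nfs1 nfsvol2:/nfs2; do   # volume:share pairs from this example
>   v=${pair%%:*}; s=${pair##*:}
>   esxcli storage nfs remove -v "$v"             # unmount the stale volume
>   esxcli storage nfs add -H 192.168.0.251 -s "$s" -v "$v"   # remount it
> done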