{"id":652,"date":"2012-01-05T15:29:54","date_gmt":"2012-01-05T22:29:54","guid":{"rendered":"http:\/\/jim-zimmerman.com\/?p=652"},"modified":"2012-01-05T15:29:54","modified_gmt":"2012-01-05T22:29:54","slug":"expand-vmware-esxi-guest-storage-centos-root-partition","status":"publish","type":"post","link":"https:\/\/jim-zimmerman.com\/?p=652","title":{"rendered":"Expand VMware ESXi guest storage &#8211; CentOS root partition."},"content":{"rendered":"<p>The first lesson I learned on this little adventure was that you need to remove any existing VMware snapshots for the guest.  I removed all the snapshots from the vSphere client using the Snapshot Manager option.<\/p>\n<p>Once you get that out of the way, you just need to go in Edit Settings for the guest, select the Hard Disk you want to expand, and enter in the new size.  I wanted to increase the root partition on a CentOS 6 guest by 10GB.<\/p>\n<p>Once that is completed, all the rest was done from the guest.  My objective was to try to do this without rebooting.  Unfortunately, I had reboot one time early in the process.<\/p>\n<p>First, I had to create a new partition on the expanded disk.  I printed out the current configuration using fdisk and then created the new partition.  As you can see below, I got a message about the device being busy.  
I tried using partprobe and kpartx as suggested in the output, but neither worked for me, so I ended up rebooting.<\/p>\n<p># <em>fdisk \/dev\/sda<\/em><\/p>\n<p>Command (m for help): <em>p<\/em><\/p>\n<p>Disk \/dev\/sda: 32.2 GB, 32212254720 bytes<br \/>\n64 heads, 32 sectors\/track, 30720 cylinders<br \/>\nUnits = cylinders of 2048 * 512 = 1048576 bytes<br \/>\nSector size (logical\/physical): 512 bytes \/ 512 bytes<br \/>\nI\/O size (minimum\/optimal): 512 bytes \/ 512 bytes<br \/>\nDisk identifier: 0x00033ab9<\/p>\n<p>   Device Boot      Start         End      Blocks   Id  System<br \/>\n\/dev\/sda1   *           2         501      512000   83  Linux<br \/>\nPartition 1 does not end on cylinder boundary.<br \/>\n\/dev\/sda2             502       20480    20458496   8e  Linux LVM<br \/>\nPartition 2 does not end on cylinder boundary.<\/p>\n<p>Command (m for help): <em>n<\/em><br \/>\nCommand action<br \/>\n   e   extended<br \/>\n   p   primary partition (1-4)<br \/>\n<em>p<\/em><br \/>\nPartition number (1-4): <em>3<\/em><br \/>\nFirst cylinder (1-30720, default 1): <em>20481<\/em><br \/>\nLast cylinder, +cylinders or +size{K,M,G} (20481-30720, default 30720):<br \/>\nUsing default value 30720<\/p>\n<p>Command (m for help): <em>w<\/em><br \/>\nThe partition table has been altered!<\/p>\n<p>Calling ioctl() to re-read partition table.<\/p>\n<p>WARNING: Re-reading the partition table failed with error 16: Device or resource busy.<br \/>\nThe kernel still uses the old table. The new table will be used at<br \/>\nthe next reboot or after you run partprobe(8) or kpartx(8)<br \/>\nSyncing disks.<\/p>\n<p># <em>shutdown -r now<\/em><\/p>\n<p>I created a file system on the partition.  
In hindsight, this step is probably unnecessary, since pvcreate initializes the partition for LVM regardless, but I did it anyway.<\/p>\n<p># <em>mkfs.ext4 \/dev\/sda3<\/em><br \/>\nmke2fs 1.41.12 (17-May-2010)<br \/>\nFilesystem label=<br \/>\nOS type: Linux<br \/>\nBlock size=4096 (log=2)<br \/>\nFragment size=4096 (log=2)<br \/>\nStride=0 blocks, Stripe width=0 blocks<br \/>\n655360 inodes, 2621440 blocks<br \/>\n131072 blocks (5.00%) reserved for the super user<br \/>\nFirst data block=0<br \/>\nMaximum filesystem blocks=2684354560<br \/>\n80 block groups<br \/>\n32768 blocks per group, 32768 fragments per group<br \/>\n8192 inodes per group<br \/>\nSuperblock backups stored on blocks:<br \/>\n\t32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632<\/p>\n<p>Writing inode tables: done<br \/>\nCreating journal (32768 blocks): done<br \/>\nWriting superblocks and filesystem accounting information: done<\/p>\n<p>This filesystem will be automatically checked every 38 mounts or<br \/>\n180 days, whichever comes first.  
Use tune2fs -c or -i to override.<\/p>\n<p>Next, initialize the new partition as an LVM physical volume:<\/p>\n<p># <em>pvcreate \/dev\/sda3<\/em><br \/>\n  Physical volume &#8220;\/dev\/sda3&#8221; successfully created<\/p>\n<p>Extend the volume group:<\/p>\n<p># <em>vgextend vg_vmdev01 \/dev\/sda3<\/em><br \/>\n  Volume group &#8220;vg_vmdev01&#8221; successfully extended<\/p>\n<p>I extended the logical volume by 9.9GB:<br \/>\n# <em>lvextend -L +9.9G \/dev\/mapper\/vg_vmdev01-lv_root<\/em><br \/>\n  Rounding up size to full physical extent 9.90 GiB<br \/>\n  Extending logical volume lv_root to 27.44 GiB<br \/>\n  Logical volume lv_root successfully resized<\/p>\n<p>Then, resize the file system:<br \/>\n# <em>resize2fs \/dev\/mapper\/vg_vmdev01-lv_root<\/em><br \/>\nresize2fs 1.41.12 (17-May-2010)<br \/>\nFilesystem at \/dev\/mapper\/vg_vmdev01-lv_root is mounted on \/; on-line resizing required<br \/>\nold desc_blocks = 2, new_desc_blocks = 2<br \/>\nPerforming an on-line resize of \/dev\/mapper\/vg_vmdev01-lv_root to 7193600 (4k) blocks.<br \/>\nThe filesystem on \/dev\/mapper\/vg_vmdev01-lv_root is now 7193600 blocks long.<\/p>\n<p>And that is it.  This expanded the root volume by 9.9GB.  Just to verify that all was well, I rebooted again.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>The first lesson I learned on this little adventure was that you need to remove any existing VMware snapshots for the guest. I removed all the snapshots from the vSphere client using the Snapshot Manager option. 
Once you get that out of the way, you just need to go into Edit Settings for the guest, [&hellip;]<\/p>\n","protected":false},"author":2,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[3],"tags":[40,153,235,236,203,33],"class_list":["post-652","post","type-post","status-publish","format-standard","hentry","category-documentation","tag-centos","tag-esxi","tag-expand","tag-extend","tag-lvm","tag-vmware"],"share_on_mastodon":{"url":"","error":""},"_links":{"self":[{"href":"https:\/\/jim-zimmerman.com\/index.php?rest_route=\/wp\/v2\/posts\/652","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/jim-zimmerman.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/jim-zimmerman.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/jim-zimmerman.com\/index.php?rest_route=\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/jim-zimmerman.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=652"}],"version-history":[{"count":1,"href":"https:\/\/jim-zimmerman.com\/index.php?rest_route=\/wp\/v2\/posts\/652\/revisions"}],"predecessor-version":[{"id":653,"href":"https:\/\/jim-zimmerman.com\/index.php?rest_route=\/wp\/v2\/posts\/652\/revisions\/653"}],"wp:attachment":[{"href":"https:\/\/jim-zimmerman.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=652"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/jim-zimmerman.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=652"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/jim-zimmerman.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=652"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}