Expand VMware ESXi guest storage – CentOS root partition.
- January 5th, 2012
- Posted in Documentation
The first lesson I learned on this little adventure was that you need to remove any existing VMware snapshots for the guest, since ESXi will not let you grow a virtual disk while snapshots exist. I removed all the snapshots from the vSphere client using the Snapshot Manager option.
Once you get that out of the way, you just need to go into Edit Settings for the guest, select the Hard Disk you want to expand, and enter the new size. I wanted to increase the root partition on a CentOS 6 guest by 10GB.
Once that was completed, the rest was done from within the guest. My objective was to do this without rebooting. Unfortunately, I had to reboot once early in the process.
First, I had to create a new partition on the expanded disk. I printed out the current configuration using fdisk and then created the new partition. As you can see below, I got a message about the device being busy. I tried using partprobe and kpartx as suggested in the output, but neither worked for me, so I ended up rebooting.
# fdisk /dev/sda
Command (m for help): p
Disk /dev/sda: 32.2 GB, 32212254720 bytes
64 heads, 32 sectors/track, 30720 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00033ab9
   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           2         501      512000   83  Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2             502       20480    20458496   8e  Linux LVM
Partition 2 does not end on cylinder boundary.
Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 3
First cylinder (1-30720, default 1): 20481
Last cylinder, +cylinders or +size{K,M,G} (20481-30720, default 30720):
Using default value 30720
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table. The new table will be used at
the next reboot or after you run partprobe(8) or kpartx(8)
Syncing disks.
# shutdown -r now
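As a sanity check, the size of the new partition can be computed from the cylinder numbers in the fdisk session above (the output states the units are cylinders of 2048 * 512 bytes, i.e. 1 MiB per cylinder):

```shell
# Compute the size of the new /dev/sda3 from its fdisk cylinder range.
first=20481                      # first cylinder of the new partition
last=30720                       # last cylinder (fdisk default)
bytes_per_cyl=$((2048 * 512))    # "Units = cylinders of 2048 * 512" above
cylinders=$((last - first + 1))
size_mib=$((cylinders * bytes_per_cyl / 1024 / 1024))
echo "/dev/sda3: ${cylinders} cylinders = ${size_mib} MiB"
# -> /dev/sda3: 10240 cylinders = 10240 MiB
```

10240 MiB is exactly the 10GB added in vSphere, so the whole new region went into the partition.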
I then created a file system on the new partition. In hindsight this step is unnecessary, since pvcreate works on the raw partition and the file system being grown lives on the logical volume, but I did it anyway.
# mkfs.ext4 /dev/sda3
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
655360 inodes, 2621440 blocks
131072 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=2684354560
80 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
This filesystem will be automatically checked every 38 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
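The block count in the mke2fs output is another way to confirm the partition size, since it reports 2621440 blocks of 4096 bytes:

```shell
# 2621440 blocks * 4096 bytes/block, from the mke2fs output above.
blocks=2621440
block_size=4096
bytes=$((blocks * block_size))
echo "${bytes} bytes = $((bytes / 1024 / 1024 / 1024)) GiB"
# -> 10737418240 bytes = 10 GiB
```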
Then, initialize the new partition as an LVM physical volume:
# pvcreate /dev/sda3
Physical volume "/dev/sda3" successfully created
Extend the volume group:
# vgextend vg_vmdev01 /dev/sda3
Volume group "vg_vmdev01" successfully extended
I extended the logical volume by 9.9GB, slightly less than the full 10GB since LVM metadata takes up a small amount of the new physical volume:
# lvextend -L +9.9G /dev/mapper/vg_vmdev01-lv_root
Rounding up size to full physical extent 9.90 GiB
Extending logical volume lv_root to 27.44 GiB
Logical volume lv_root successfully resized
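The "Rounding up size to full physical extent" message appears because LVM allocates space in whole physical extents. The arithmetic can be reproduced as below, assuming the default 4 MiB extent size (the actual value for your volume group is shown by vgdisplay):

```shell
# Reproduce lvextend's rounding of a 9.9G request to whole physical extents.
# Assumes the default 4 MiB extent size; verify with vgdisplay.
awk 'BEGIN {
    pe = 4                    # physical extent size in MiB (assumed default)
    request = 9.9 * 1024      # requested 9.9 GiB, expressed in MiB
    extents = int(request / pe)
    if (extents * pe < request) extents++   # round up to a whole extent
    printf "%d extents = %.2f GiB\n", extents, extents * pe / 1024
}'
# -> 2535 extents = 9.90 GiB
```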
Then, resize the file system:
# resize2fs /dev/mapper/vg_vmdev01-lv_root
resize2fs 1.41.12 (17-May-2010)
Filesystem at /dev/mapper/vg_vmdev01-lv_root is mounted on /; on-line resizing required
old desc_blocks = 2, new_desc_blocks = 2
Performing an on-line resize of /dev/mapper/vg_vmdev01-lv_root to 7193600 (4k) blocks.
The filesystem on /dev/mapper/vg_vmdev01-lv_root is now 7193600 blocks long.
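The final block count from resize2fs lines up with the logical volume size that lvextend reported:

```shell
# 7193600 blocks * 4096 bytes/block, converted to GiB.
awk 'BEGIN { printf "%.2f GiB\n", 7193600 * 4096 / (1024 * 1024 * 1024) }'
# -> 27.44 GiB
```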
And that is it. This expanded the root volume by 9.9GB. Just to verify that all was well, I rebooted again.
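Before (or instead of) that final reboot, the new size can also be confirmed from inside the guest with df; the root file system should show the extra ~10GB immediately after the online resize:

```shell
# Confirm the mounted root file system reflects the added space.
df -h /
```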