In this article, I want to show you how to recover a broken virtual machine, specifically how to grow its file system when we cannot connect to the instance over ssh. To do that, we will attach the disk of the broken instance to a “rescuer” instance and make the necessary changes from there. Since version 7.2, Proxmox VE supports safe and easy reassignment of a VM disk via the GUI, allowing us to make all the required changes without an ssh connection to the host.
Please note: Although I have tested all of these instructions, any changes you make are at your own risk. Make sure that you have a backup before intervening in a “broken” virtual machine.
If you don’t have a free instance to use as the rescuer, create one. After that, stop the broken VM and reassign its disk to the new owner. You can do that by going to the Hardware menu of the broken VM, choosing the needed disk, and selecting the Reassign Owner option:
In the popup menu, choose the required VM instance, set the necessary disk parameters (usually default ones work perfectly), and press Reassign Disk.
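If you do still have shell access to the Proxmox host, the same reassignment can also be scripted; in recent Proxmox VE versions, qm move-disk accepts a target VM. The VM IDs and disk name below are placeholders for this example, and option names may differ between versions, so check man qm on your host first:

qm stop 100 # stop the broken VM before touching its disk
qm move-disk 100 scsi0 --target-vmid 101 # reassign the disk from VM 100 to the rescue VM 101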
Now we can increase the disk size through the Proxmox UI. To do that, select the required disk in the Hardware menu of the rescue VM, and in the Disk Action menu, choose the Resize option:
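The same resize is also available from the host shell, assuming you have access to it; the VM ID and disk name below are placeholders for this example:

qm resize 101 scsi1 +10G # grow the attached disk by 10 GiB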
Inside the temporary instance, we need to find the attached disk and extend its filesystem. Resizing the virtual disk alone is not enough: the partition, the LVM layout (if there is one), and the filesystem itself all have to be extended before the new space becomes available to the OS. The simplest way to identify the new disk is to check the most recent messages in the dmesg output. You will see something like this:
[ 847.872511] sd 2:0:0:1: Power-on or device reset occurred
[ 847.872524] sd 2:0:0:1: Attached scsi generic sg2 type 0
[ 847.872800] sd 2:0:0:1: Warning! Received an indication that the LUN assignments on this target have changed. The Linux SCSI layer does not automatical
[ 847.873192] sd 2:0:0:1: [sdb] 20971520 512-byte logical blocks: (10.7 GB/10.0 GiB)
[ 847.873238] sd 2:0:0:1: [sdb] Write Protect is off
[ 847.873240] sd 2:0:0:1: [sdb] Mode Sense: 63 00 00 08
[ 847.873342] sd 2:0:0:1: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[ 847.875058] sdb: sdb1 sdb2
[ 847.884885] sd 2:0:0:1: [sdb] Attached SCSI disk
That means our new disk was added to the system as /dev/sdb.
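If the dmesg output has already scrolled away or is noisy, lsblk is another quick way to spot the freshly attached disk; in this example it shows up as sdb with its two partitions and no mount points:

lsblk -o NAME,SIZE,TYPE,MOUNTPOINT # list block devices with their sizes and mount points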
Next, let's look at the disk's partition layout; we will use the fdisk utility for that. Make sure that you are a superuser (all of the following commands are run as root):
fdisk -l /dev/sdb
Disk /dev/sdb: 10 GiB, 10737418240 bytes, 20971520 sectors
Disk model: QEMU HARDDISK
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x0007e67f
Device Boot Start End Sectors Size Id Type
/dev/sdb1 * 2048 2099199 2097152 1G 83 Linux
/dev/sdb2 2099200 20971519 18872320 9G 8e Linux LVM
Here we see that we need to extend the /dev/sdb2 partition. To do that, we will delete it and recreate it in place with a larger size:
fdisk /dev/sdb

Inside the interactive fdisk session, use the following commands:

- p prints the disk info, same as running fdisk -l /dev/sdb
- d deletes the last partition (in this case, /dev/sdb2)
- n creates a new partition in place of the deleted one; keep the same partition number and the default first sector so the existing data stays intact, and accept the default last sector to use all of the new space (if fdisk asks whether to remove an existing LVM signature, answer No)
- t changes the type of the partition; specify the same type that the partition had previously (8e, Linux LVM, in this case)
- w writes the changes to disk and exits fdisk
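As a non-interactive alternative to the fdisk session above, the growpart utility (from the cloud-utils or cloud-utils-growpart package, if it is available on the rescue VM) performs the same delete-and-recreate of the last partition in one command:

growpart /dev/sdb 2 # grow partition 2 of /dev/sdb to fill the available space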
After that, the disk will look like this:
Disk /dev/sdb: 21 GiB, 22548578304 bytes, 44040192 sectors
Disk model: QEMU HARDDISK
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x0007e67f
Device Boot Start End Sectors Size Id Type
/dev/sdb1 * 2048 2099199 2097152 1G 83 Linux
/dev/sdb2 2099200 44040191 41940992 20G 8e Linux LVM
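If fdisk warns that the kernel is still using the old partition table, you can ask it to re-read the table without rebooting:

partprobe /dev/sdb # force the kernel to re-read the partition table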
Now we need to extend the LVM physical volume (PV) on the resized partition. Without any options, pvresize extends the volume to all the available space:
pvresize /dev/sdb2
Running pvdisplay will show that the PV Size has been updated to 20 GiB:
--- Physical volume ---
PV Name /dev/sdb2
VG Name centos
PV Size <20.00 GiB / not usable 2.00 MiB
Allocatable yes
PE Size 4.00 MiB
Total PE 5119
Free PE 2816
Allocated PE 2303
PV UUID L1x3fi-tjdd-A0N0-6Iqg-NSNd-kLID-S6o5Hr
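A more compact way to confirm the same thing is vgs, which shows the free space in the volume group (named centos in this example) that we will hand to the logical volume in the next step:

vgs centos # the VFree column should show roughly 11 GiB of unallocated space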
After extending the physical volume, we need to extend the logical volume (LV). Let's check what we have at this time:
lvdisplay
--- Logical volume ---
LV Path /dev/centos/root
LV Name root
VG Name centos
LV UUID IpaS3d-3h27-oSWR-sOjr-QLqo-2zwv-zGJEqY
LV Write Access read/write
LV Creation host, time localhost, 2023-03-01 22:16:28 +0000
LV Status available
# open 1
LV Size <8.00 GiB
Current LE 2047
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:2
We can see that the LV size is still 8 GiB. Let's extend it to all the available space:
lvextend -l +100%FREE /dev/centos/root
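To confirm the extension worked, lvs gives a quick per-volume summary (again assuming the volume group is named centos):

lvs centos # LSize of the root LV should now be close to 19 GiB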
The last step is to extend the filesystem itself. In my case, it's XFS; if you're using a different filesystem, use the commands specific to it. XFS can only be grown while mounted, so let’s mount it temporarily and extend it via xfs_growfs:
mkdir /tmp/rescue/
mount /dev/centos/root /tmp/rescue/
xfs_growfs /dev/centos/root
df -h
/dev/mapper/centos-root 19G 1.3G 18G 7% /tmp/rescue
umount /tmp/rescue/
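For reference, if the root filesystem were ext4 rather than XFS, the temporary mount would not be needed at all and resize2fs could grow it directly (it may ask you to run e2fsck -f first on an unmounted volume):

resize2fs /dev/centos/root # ext4 only: grow the filesystem to the new LV size

Before handing the disk back, it is also safer to deactivate the volume group so that nothing on the rescue VM keeps the logical volumes open:

vgchange -an centos # deactivate all LVs in the centos volume group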
Detach the volume from the rescue VM. Only after that will we be able to reassign it back to its original owner:
In the rescued VM, if the disk was bootable, we need to check that the disk is enabled for boot and placed first in the boot order:
In this article, we walked through the process of recovering a broken virtual machine that ran out of disk space in Proxmox VE. The steps included reassigning the broken instance's disk to a rescuer instance, increasing the disk size through the Proxmox UI, and extending the partition, LVM volumes, and file system from inside the rescuer instance. Remember to exercise caution, back up your data, and adapt the commands to your specific filesystem.