Friday 3 February 2023

8 - Ubuntu with LUKS: Backup and Restore with Veeam Part 8 - Why restoring LUKS, LVM through Veeam recovery UI is a bad idea

Suppose I wanted to do as much as possible through the recovery UI, except for setting the LUKS container UUID, which simply cannot be done there. Why does this not work very well?

When I start out fresh, it looks like this.

              CURRENT SYSTEM           │              IN BACKUP
                                       │
     Device       Restore      Size    │  Device       Size    Usage
                                       │
     sda                       127.0G  │  mapper/dm... 123.9G
                                       │  sda          127.0G
                                       │   sda1        1.04G   /boot/efi...
                                       │   sda2        2.00G   /boot (ext4)
                                       │  ubuntu-vg    123.9G
                                       │   ubuntu-lv   61.96G  / (ext4)

I can then map sda to sda just as I did in the previous parts.

     Device       Restore      Size    │  Device       Size    Usage
                                       │
     sda                       127.0G  │  mapper/dm... 123.9G
      sda1        sda1 (/bo... 1.04G   │  sda          127.0G
      sda2        sda2 (/boot) 2.00G   │   sda1        1.04G   /boot/efi...
      free                     123.9G  │   sda2        2.00G   /boot (ext4)
                                       │  ubuntu-vg    123.9G
                                       │   ubuntu-lv   61.96G  / (ext4)

And in the free space on sda, I can create a new partition (here: sda3).

     Device       Restore      Size    │  Device       Size    Usage
                                       │
     sda                       127.0G  │  mapper/dm... 123.9G
      sda1        sda1 (/bo... 1.04G   │  sda          127.0G
      sda2        sda2 (/boot) 2.00G   │   sda1        1.04G   /boot/efi...
      sda3                     123.8G  │   sda2        2.00G   /boot (ext4)
      free                     49.40M  │  ubuntu-vg    123.9G
                                       │   ubuntu-lv   61.96G  / (ext4)


Then I can create a LUKS container in sda3.




It will end up looking like this.

     Device       Restore      Size    │  Device       Size    Usage
                                       │
     sda                       127.0G  │  mapper/dm... 123.9G
      sda1        sda1 (/bo... 1.04G   │  sda          127.0G
      sda2        sda2 (/boot) 2.00G   │   sda1        1.04G   /boot/efi...
      sda3                     123.8G  │   sda2        2.00G   /boot (ext4)
       dm_crypt-0              123.8G  │  ubuntu-vg    123.9G
      free                     49.40M  │   ubuntu-lv   61.96G  / (ext4)



Now I can try to map dm_crypt-0 to mapper/dm_crypt-0 from the backup. But I get an error.


This would have mapped the LVM into the new LUKS container. I can try to map ubuntu-vg into dm_crypt-0, but this does not work either, possibly because I chose to back up /dev/mapper/dm_crypt-0 as a device instead of /dev/mapper/ubuntu--vg-ubuntu--lv as an LVM volume.


But it doesn't matter. I can create a new LVM in the new dm_crypt-0 LUKS container.




It will look like this.

     Device       Restore      Size    │  Device       Size    Usage
                                       │
     sda                       127.0G  │  mapper/dm... 123.9G
      sda1        sda1 (/bo... 1.04G   │  sda          127.0G
      sda2        sda2 (/boot) 2.00G   │   sda1        1.04G   /boot/efi...
      sda3                     123.8G  │   sda2        2.00G   /boot (ext4)
       dm-cryp...              123.8G  │  ubuntu-vg    123.9G
      free                     49.40M  │   ubuntu-lv   61.96G  / (ext4)
     ubuntu-vg                 123.8G  │
      free                     123.8G  │



Into the free space in ubuntu-vg, I can map ubuntu-lv.


It will look like this.


Now I can start the restore, going from an empty disk to restoring everything in one go, all without leaving the recovery UI. This looks too good to be true.

                                RECOVERY SUMMARY
   1. Create GPT partition table on sda (scsi)
   2. Create partition sda1 on sda (scsi)
   3. Create partition sda2 on sda (scsi)
   4. Create partition sda3 on sda (scsi)
   5. Create CryptoLUKS: [dm_crypt-0] on device sda3 (scsi)
   6. Restore sda1 (scsi) to sda1 (scsi)
   7. Restore sda2 (scsi) to sda2 (scsi)
   8. Add mapper/dm_crypt-0 (dm) to ubuntu-vg group
   9. Create ubuntu-lv volume on ubuntu-vg group
   10. Restore ubuntu-vg/ubuntu-lv (dm) to ubuntu-vg/ubuntu-lv (dm)

So far, so good.

     Restore                      100%                     Status: Success


      Time             Action                                   Duration

      16:24:40         Job started at 2023-01-23 16:24:40 UTC
      16:24:42         Starting volume restore
      16:26:54         Waiting for backup infrastructure res... 00:00:02
      16:26:56         Applying changes to disks configuration  00:00:11
      16:27:07         sda1 restored 1 GB at 2.1 GB/s           00:00:01
      16:27:08         sda2 restored 2 GB at 1.2 GB/s           00:00:01
      16:27:09         ubuntu--vg-ubuntu--lv restored 62 GB ... 00:00:41
      16:27:51         Restoring efi                            00:00:00
      16:27:51         Restore EFI volume: /dev/sda1
      16:27:51         Restore EFI boot manager entry: ubuntu
      16:27:51         Processing finished at 2023-01-23 16:...



But, as before, the system will not boot unless I fix the LUKS container UUID.

veeamuser@veeam-recovery-iso:~$ OSdisk='/dev/sda'
veeamuser@veeam-recovery-iso:~$ sudo mount /dev/mapper/ubuntu--vg-ubuntu--lv /mnt
veeamuser@veeam-recovery-iso:~$ cat /mnt/etc/crypttab
dm_crypt-0 UUID=d8073181-5283-44b5-b4dc-6014b2e1a3c2 none luks
veeamuser@veeam-recovery-iso:~$ sudo umount /dev/mapper/ubuntu--vg-ubuntu--lv
veeamuser@veeam-recovery-iso:~$ sudo vgchange -an ubuntu-vg
  0 logical volume(s) in volume group "ubuntu-vg" now active
veeamuser@veeam-recovery-iso:~$ sudo cryptsetup luksUUID --uuid d8073181-5283-44b5-b4dc-6014b2e1a3c2 ${OSdisk}3

WARNING!
========
Do you really want to change UUID of device?

Are you sure? (Type uppercase yes): YES
veeamuser@veeam-recovery-iso:~$ sudo reboot

The restored system boots successfully.


Is everything as it was before the restore? No: the LUKS container sector size is now 512. It was 4096 before.

admin01@testlabubuntu01:~$ sudo cryptsetup luksDump ${OSdisk}3 | grep sector
        sector: 512 [bytes]
admin01@testlabubuntu01:~$

No problem, I can re-encrypt it with 4096 sector size. Or can I?

admin01@testlabubuntu01:~$ sudo cryptsetup reencrypt --sector-size=4096 ${OSdisk}3
Enter passphrase for key slot 0:
Auto-detected active dm device 'dm_crypt-0' for data device /dev/sda3.
Data device is not aligned to requested encryption sector size (4096 bytes).
Failed to initialize LUKS2 reencryption in metadata.
admin01@testlabubuntu01:~$

What happened here? It turns out the partition created by Veeam during the restore does not have its end aligned the way LUKS expects when a 4K sector size is used.

admin01@testlabubuntu01:~$ sudo gdisk -l $OSdisk
...
Number  Start (sector)    End (sector)  Size       Code  Name
   1            2048         2203647   1.0 GiB     EF00
   2         2203648         6397951   2.0 GiB     8300
   3         6397952       266235083   123.9 GiB   8300
admin01@testlabubuntu01:~$

parted reports that the partition is aligned, so where is the problem?

admin01@testlabubuntu01:~$ sudo parted $OSdisk align-check optimal 3
3 aligned
admin01@testlabubuntu01:~$


(end sector + 1) * 512 must be divisible by 1024 * 1024; in other words, the partition must end on a 1 MiB boundary.

Here, the last sector of /dev/sda3 is 266235083 (see above):
(266235083 + 1) * 512 / (1024 * 1024) = 129997.599609375
Not divisible, so the end is not aligned.
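
You can check this for any partition with a little shell arithmetic (a quick sketch; the end sector is taken from the gdisk output above, and 512 is the logical sector size):

end_sector=266235083    # last sector of /dev/sda3, from 'gdisk -l'
if [ $(( (end_sector + 1) * 512 % (1024 * 1024) )) -eq 0 ]; then
    echo "end is 1 MiB aligned"
else
    echo "end is NOT 1 MiB aligned"
fi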

How to fix this? As I described in the previous parts, after restoring the ESP and /boot partitions, exit the recovery UI and create the LUKS partition from the shell.

You can do this with parted, which is included in Veeam's recovery media. Use --align optimal (or -a optimal for short).

veeamuser@veeam-recovery-iso:~$ sudo parted $OSdisk  print free      
Model: Msft Virtual Disk (scsi)
Disk /dev/sda: 136GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags:

Number  Start   End     Size    File system  Name  Flags
        17.4kB  1049kB  1031kB  Free Space
 1      1049kB  1128MB  1127MB  fat32              boot, esp
 2      1128MB  3276MB  2147MB  ext4
        3276MB  136GB   133GB   Free Space

veeamuser@veeam-recovery-iso:~$ sudo parted -a optimal $OSdisk  mkpart 'LUKS' 3276MB 100%
Information: You may need to update /etc/fstab.

veeamuser@veeam-recovery-iso:~$

This creates a partition whose end is aligned.

veeamuser@veeam-recovery-iso:~$ sudo gdisk -l $OSdisk                
Number  Start (sector)    End (sector)  Size       Code  Name
...
   3         6397952       266336255   123.9 GiB   8300  LUKS
veeamuser@veeam-recovery-iso:~$ 

Here, the last sector of /dev/sda3 is 266336255:
(266336255 + 1) * 512 / (1024 * 1024) = 130047
Divisible, so the end is aligned.

gdisk / sgdisk will not align the partition end by default, but you can use them if you do the math yourself.

veeamuser@veeam-recovery-iso:~$ sudo gdisk -l $OSdisk
Number  Start (sector)    End (sector)  Size       Code  Name
   1            2048         2203647   1.0 GiB     EF00
   2         2203648         6397951   2.0 GiB     8300

# Find the first aligned sector and the end of the largest free block,
# then round the end down so that (end + 1) is a multiple of 2048
# 512-byte sectors (= 1 MiB).
start_position=$(sudo sgdisk --first-aligned-in-largest $OSdisk)
end_position=$(sudo sgdisk --end-of-largest $OSdisk)
aligned_end_position=$((end_position - (end_position + 1) % 2048))

veeamuser@veeam-recovery-iso:~$ sudo sgdisk --set-alignment=2048 --new=0:$start_position:$aligned_end_position $OSdisk
The operation has completed successfully.
veeamuser@veeam-recovery-iso:~$ sudo gdisk -l $OSdisk
Number  Start (sector)    End (sector)  Size       Code  Name
   1            2048         2203647   1.0 GiB     EF00
   2         2203648         6397951   2.0 GiB     8300
   3         6397952       266336255   123.9 GiB   8300
veeamuser@veeam-recovery-iso:~$

The partition created this way with sgdisk has the same end sector as the one created by parted with --align optimal. Now creating the container with --sector-size=4096 will work.

sudo cryptsetup luksFormat --sector-size=4096  ${OSdisk}3
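
Afterwards, you can confirm that the new container really has the 4096-byte sector size (the same luksDump check as earlier):

sudo cryptsetup luksDump ${OSdisk}3 | grep sector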

To sum up: You can't restore the system entirely through the recovery UI because...
  • The system will not boot because the LUKS container UUID is different after the restore.
  • The partition end will not be aligned, and the LUKS container will have the wrong sector size.

Tuesday 31 January 2023

Installing QNAP SnapAgent on Hyper-V Server 2019

In this post, I will show how to install QNAP SnapAgent (QNAP's hardware VSS provider driver). The problem is that the installer refuses to install on Hyper-V Server (the free product).

Fixing "The product can only be installed on Windows Server 2008 R2 or above."


On the Hyper-V server, open regedit and navigate to: "Computer\HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion"

Change the following value:

"InstallationType": change from "Server Core" to "Server".


Now install QNAP SnapAgent. It should install fine. Don't forget to change the registry value back to what it was.





Saturday 28 January 2023

7 - Ubuntu with LUKS: Backup and Restore with Veeam Part 7 - Bare-metal restore: fixing LUKS container UUID

If you followed the previous part, you should now have restored all data, all partitions, the LUKS container, the LVM, the bootloader, etc.

If you reboot now, the system will not boot successfully because the boot process (the initramfs, going by /etc/crypttab) looks for the LUKS container with the UUID it had before the restore. But you have created a new LUKS container and it now has a new UUID.

If you have not already done so, save the device name of your operating system disk in a variable. To find out the device name of the operating system disk, see part 1. In my case, the operating system disk is /dev/sda.

OSdisk='/dev/sda'
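
If you are unsure which device is the operating system disk, a quick lsblk overview usually makes it obvious (a sketch; see part 1 for the full procedure):

lsblk -o NAME,SIZE,TYPE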

If you wrote down the original UUID as I suggested in the previous parts, you can proceed straight to changing the UUID and skip the step where you look it up. If you did not write down the UUID, continue here.


Verifying partition alignment and sector/cluster/block sizes


You should now verify that everything has been restored exactly the way it was before. Confirm that all sector/cluster/block sizes match what they were before the restore. Incorrectly chosen sector/cluster/block sizes and misaligned partitions can negatively impact performance. If you did not follow my guide and restored everything from within the recovery UI, they most likely will not match. See the next part for more information on this.

Checking partition alignment


First, you can check if all three partitions are aligned properly.

veeamuser@veeam-recovery-iso:~$ sudo parted $OSdisk align-check optimal 1
1 aligned
veeamuser@veeam-recovery-iso:~$ sudo parted $OSdisk align-check optimal 2
2 aligned
veeamuser@veeam-recovery-iso:~$ sudo parted $OSdisk align-check optimal 3
3 aligned
veeamuser@veeam-recovery-iso:~$
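
If you need to check many partitions, a small loop saves typing (a sketch, assuming partitions 1 through 3 as in this example):

for p in 1 2 3; do sudo parted $OSdisk align-check optimal $p; done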

Checking sector/cluster/block sizes


The goal here is to verify that everything has been restored exactly the way it was before the restore. If you wrote it down or saved it as shown in part 4, this should be easy to confirm. If you didn't write it down, I will give some pointers, but if you want to be 100% sure, you can always install Ubuntu on the system, write down the setup (sector/block sizes, etc.) that the Ubuntu installer creates, and then wipe it and proceed with the restore.

If you don't know what the sector/cluster/block sizes were before the restore, consider this:
  • During my testing, the block sizes of the /boot and /boot/efi partitions were always restored by Veeam exactly as they were before the restore. You can assume that the block size of the /boot and /boot/efi partitions is also the block size that all the other partitions and volumes should have.
  • It should be the same for all partitions, volumes and file systems. In practice, this means 512 throughout or 4096 throughout, because the Ubuntu installer will choose either 512 or 4096.
  • Though I have not tested all configurations, it should be like this:
    • Disk is 512n (logical/physical sectors 512/512): block size should be 512 throughout.
    • Disk is 512e (logical/physical sectors 512/4096): block size should be 4096 throughout.
    • Disk is 4Kn (logical/physical sectors 4096/4096): block size should be 4096 throughout.
  • In other words: the Ubuntu installer appears to always set the block size to the physical sector size reported by the disk.
The steps will be the same as in Part 4 - Getting ready for bare-metal restore, except now you have a baseline to compare it to.
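
To see what the disk itself reports, lsblk can print the logical and physical sector sizes side by side (a sketch; LOG-SEC and PHY-SEC are standard lsblk columns):

lsblk -o NAME,LOG-SEC,PHY-SEC $OSdisk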

Check the LUKS container properties. 

admin01@testlabubuntu01:~$ sudo cryptsetup luksDump ${OSdisk}3 | grep -E 'sector|UUID'
UUID:           d8073181-5283-44b5-b4dc-6014b2e1a3c2
        sector: 4096 [bytes]
admin01@testlabubuntu01:~$

Of interest here is the sector size of the LUKS container and the container/partition UUID, but you might want to take note of some other properties here as well.

admin01@testlabubuntu01:~$ sudo cryptsetup luksDump ${OSdisk}3 | grep Cipher
        Cipher:     aes-xts-plain64
        Cipher key: 512 bits
admin01@testlabubuntu01:~$


Also note the partition layout. Of interest here are the logical and physical sector sizes.

admin01@testlabubuntu01:~$ sudo gdisk -l $OSdisk
GPT fdisk (gdisk) version 1.0.8
...
Sector size (logical/physical): 512/4096 bytes
...
First usable sector is 34, last usable sector is 266338270
Partitions will be aligned on 2048-sector boundaries
...
Number  Start (sector)    End (sector)  Size       Code  Name
   1            2048         2203647   1.0 GiB     EF00
   2         2203648         6397951   2.0 GiB     8300
   3         6397952       266336255   123.9 GiB   8300  LUKS
admin01@testlabubuntu01:~$


Though rather unimportant (unless you intend to restore to a disk with different logical/physical sector sizes), you can check the cluster size of the FAT32-formatted EFI System Partition (ESP, here: /dev/sda1). I did not find a way to do this in the Veeam recovery media, but if you created your own Ubuntu-live-based recovery media, as I have shown previously, you can install mtools.

sudo apt install mtools

And then use minfo. Look for sector size and cluster size. 

admin01@testlabubuntu01:~$ sudo minfo -i ${OSdisk}1 | grep -E 'sector size|cluster size'
Hidden (2048) does not match sectors (63)
sector size: 512 bytes
cluster size: 8 sectors
admin01@testlabubuntu01:~$

The cluster size in bytes is sector size * cluster size, here: 512 * 8 = 4096. You can ignore the warning 'Hidden (2048) does not match sectors (63)'. It just means that the partition is properly aligned.
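
If you want that number computed for you, the two minfo values can be multiplied in one go (a sketch using awk; field labels as in the minfo output above):

sudo minfo -i ${OSdisk}1 2>/dev/null | awk -F': ' '/sector size/{s=$2+0} /cluster size/{c=$2+0} END{print s*c, "bytes per cluster"}'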

Next is the /boot partition (ext4), here /dev/sda2. The block size of /boot is also unimportant, by the way, because it does not impact performance.

veeamuser@veeam-recovery-iso:~$ sudo tune2fs -l ${OSdisk}2 | grep "^Block size:"
Block size:               4096
veeamuser@veeam-recovery-iso:~$

As far as my testing goes, the sector/block sizes of the ESP and the /boot partition will always be the same after the restore as they were before, because Veeam restores them as they were. To a fault, actually: when I tried to restore a backup from a 512e disk onto a 4Kn disk, Veeam did not properly restore the ESP.

Next, check the sector/block size of the LVM mapper device. This should be the same as the LUKS container sector size.

veeamuser@veeam-recovery-iso:~$ sudo blockdev --getss /dev/mapper/ubuntu--vg-ubuntu--lv
4096
veeamuser@veeam-recovery-iso:~$

Or you can use tune2fs.

veeamuser@veeam-recovery-iso:~$ sudo tune2fs -l /dev/mapper/ubuntu--vg-ubuntu--lv | grep "^Block size:"
Block size:               4096
veeamuser@veeam-recovery-iso:~$

You can also check with stat.

admin01@testlabubuntu01:~$ sudo stat -f /
  File: "/"
    ID: f16df925830148c0 Namelen: 255     Type: ext2/ext3
Block size: 4096       Fundamental block size: 4096
Blocks: Total: 15909803   Free: 13416161   Available: 12599880
Inodes: Total: 4063232    Free: 3949218
admin01@testlabubuntu01:~$

If you have saved this info in a file before, like I showed in part 4, you can now compare it directly to the restored system.

Enter the RestoreInfo folder (here: /mnt/home/admin01/RestoreInfo).

cd /mnt/home/admin01/RestoreInfo

Save the current properties. 

sudo cryptsetup luksDump ${OSdisk}3 > luksDump-restore
sudo gdisk -l $OSdisk > gdisk-restore
sudo minfo -i ${OSdisk}1 > minfo-part-1-restore
sudo tune2fs -l ${OSdisk}2 > tune2fs-part-2-restore
sudo blockdev --getss /dev/mapper/ubuntu--vg-ubuntu--lv > blockdevubuntu--vg-ubuntu--lv-restore
sudo tune2fs -l /dev/mapper/ubuntu--vg-ubuntu--lv > tune2fs-ubuntu--vg-ubuntu--lv-restore
sudo stat -f / > stat-root-file-system-restore

Compare them like so. For the luksDump output, the only things that should differ are the UUID and a few key-derivation properties (salts, digests, iteration counts).

Examples:
diff luksDump luksDump-restore
diff -w gdisk gdisk-restore

veeamuser@veeam-recovery-iso:/mnt/home/admin01/RestoreInfo$ diff luksDump luksDump-restore
6c6
< UUID:                 d8073181-5283-44b5-b4dc-6014b2e1a3c2
---
> UUID:                 f802c718-8ba1-4487-94af-13c382cf6372
24c24
<       PBKDF:      argon2id
---
>       PBKDF:      argon2i
28,29c28,29
<       Salt:       93 0e 2b 73 e2 58 d2 89 68 61 09 ec 4b 76 a4 c9
<                   55 18 49 c7 85 4d d9 2c 5a 18 3f 49 5d 16 31 9d
---
>       Salt:       a9 70 0a 3f 65 b9 39 82 f1 64 f0 66 f2 66 f5 98
>                   a9 2b 9c a0 09 04 a9 49 57 6c 8f f0 0d 8e 25 7a
39,43c39,43
<       Iterations: 129902
<       Salt:       b1 b2 5b 55 0e 16 eb d3 33 57 62 f7 a8 45 97 96
<                   6d e1 3b c0 cb e1 d7 6f 9f f8 7b 82 c7 8e 90 ea
<       Digest:     de 73 b8 89 70 17 65 f5 b0 5c f0 21 14 e0 cb 21
<                   e8 25 74 5b 8f a1 14 dc bf 54 89 a2 b0 53 fd 2f
---
>       Iterations: 141852
>       Salt:       4a e5 4e aa e5 f7 7b d3 c8 88 6f 08 6e 45 dc b1
>                   d3 2f c5 7a 00 63 8e d4 4c e1 87 c9 2c d3 ea 70
>       Digest:     c4 fe 0f a0 53 34 8f eb 67 b8 a4 50 54 76 17 13
>                   a9 7a fd 64 34 be ef 88 f0 a3 cd df 59 fe d8 d2
veeamuser@veeam-recovery-iso:/mnt/home/admin01/RestoreInfo$

veeamuser@veeam-recovery-iso:/mnt/home/admin01/RestoreInfo$ diff gdisk -w gdisk-restore
1c1
< GPT fdisk (gdisk) version 1.0.8
---
> GPT fdisk (gdisk) version 1.0.3
23c23
<    3         6397952       266336255   123.9 GiB   8300
---
>    3         6397952       266336255   123.9 GiB   8300  LUKS
veeamuser@veeam-recovery-iso:/mnt/home/admin01/RestoreInfo$

You can do this for every file.
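
Or loop over all pairs at once (a sketch, relying on the '-restore' suffix convention used above):

for f in *-restore; do
    base="${f%-restore}"
    echo "== $base =="
    diff -w "$base" "$f"
done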

Finding out the original LUKS container UUID

If you exited the recovery UI after restoring the LVM, as I showed in the previous part, the LUKS container and the LVM will still be open. In that case, you can proceed straight to mounting the root file system. If you rebooted for some reason, you now need to open the LUKS container again.

You can check if the LUKS container is opened by looking for a mapper device.

veeamuser@veeam-recovery-iso:~$ ls /dev/mapper
 control                                          ubuntu--vg-ubuntu--lv
'luks-a6ef0f16-f6ef-46d8-ace7-071cbc3cec58\x0a'
veeamuser@veeam-recovery-iso:~$

In this case, you can see that both the LUKS container and the LVM are open. If they are not, open them now.

veeamuser@veeam-recovery-iso:~$ sudo cryptsetup luksOpen ${OSdisk}3 dm_crypt-0
Enter passphrase for /dev/sda3:
veeamuser@veeam-recovery-iso:~$

The same goes for the LVM: if you can't see it as /dev/mapper/ubuntu--vg-ubuntu--lv, activate it. If the LUKS container has been opened successfully, the volume group should be found.

veeamuser@veeam-recovery-iso:~$ sudo vgscan
  Found volume group "ubuntu-vg" using metadata type lvm2
veeamuser@veeam-recovery-iso:~$

Open it.

veeamuser@veeam-recovery-iso:~$ sudo vgchange -ay ubuntu-vg
  1 logical volume(s) in volume group "ubuntu-vg" now active
veeamuser@veeam-recovery-iso:~$

You should now be able to mount the root file system.

Mounting the root file system


sudo mount /dev/mapper/ubuntu--vg-ubuntu--lv /mnt

Now look at /mnt/etc/crypttab to find out what the UUID was before the restore. Each crypttab line lists the mapper name, the source device (here referenced by UUID), the key file ('none' means you will be prompted for the passphrase) and the options.

veeamuser@veeam-recovery-iso:~$ cat /mnt/etc/crypttab
dm_crypt-0 UUID=d8073181-5283-44b5-b4dc-6014b2e1a3c2 none luks
veeamuser@veeam-recovery-iso:~$

Setting the original LUKS container UUID


With that done, unmount the root file system.

sudo umount /dev/mapper/ubuntu--vg-ubuntu--lv

Now, close the LVM.

veeamuser@veeam-recovery-iso:~$ sudo vgchange -an ubuntu-vg
  0 logical volume(s) in volume group "ubuntu-vg" now active
veeamuser@veeam-recovery-iso:~$

Close the LUKS container. If you opened the container through the recovery UI, it will be named something like this (check the exact name in /dev/mapper/; the trailing \x0a suggests a stray newline in the name, so use tab completion to get the quoting right).

sudo cryptsetup luksClose /dev/mapper/luks-a6ef0f16-f6ef-46d8-ace7-071cbc3cec58\x0a

But in my case, it is called dm_crypt-0.

sudo cryptsetup luksClose /dev/mapper/dm_crypt-0

Get the current LUKS container UUID.

veeamuser@veeam-recovery-iso:~$ sudo cryptsetup luksUUID ${OSdisk}3
a6ef0f16-f6ef-46d8-ace7-071cbc3cec58
veeamuser@veeam-recovery-iso:~$

Compare this to the UUID in /mnt/etc/crypttab: d8073181-5283-44b5-b4dc-6014b2e1a3c2. Your UUIDs will differ from mine, but the point is that the two do not match. This is why the LUKS container would not be opened during boot if you rebooted now, and the root file system would not be found.

Now set the UUID of the LUKS container to what it was before the restore. Do not attempt to do this the other way around by updating /mnt/etc/crypttab with the new UUID; you would then also have to regenerate the initramfs (and possibly the GRUB configuration).

veeamuser@veeam-recovery-iso:~$ sudo cryptsetup luksUUID --uuid d8073181-5283-44b5-b4dc-6014b2e1a3c2 ${OSdisk}3
WARNING!
========
Do you really want to change UUID of device?
Are you sure? (Type uppercase yes): YES
veeamuser@veeam-recovery-iso:~$

Optionally, you can now confirm that the UUID has been updated and matches the UUID in /mnt/etc/crypttab.

veeamuser@veeam-recovery-iso:~$ sudo cryptsetup luksUUID ${OSdisk}3 
d8073181-5283-44b5-b4dc-6014b2e1a3c2
veeamuser@veeam-recovery-iso:~$

Remove the recovery media and reboot.

sudo reboot

If everything went well, you will be asked for the LUKS password during boot.


Followed by the login prompt.


This is it. For troubleshooting tips, the explanation of why you should not use the recovery UI to re-create LUKS and LVM, and a few other things, see the next parts.

6 - Ubuntu with LUKS: Backup and Restore with Veeam Part 6 - Bare-metal restore: LUKS, LVM, root file system

If you followed part 3, you should still be in the recovery media environment. You should have restored the GPT table, the EFI system partition and the /boot partition. 

If you have not already done so, save the device name of your operating system disk in a variable. To find out the device name of the operating system disk, see part 1. In my case, the operating system disk is /dev/sda.

OSdisk='/dev/sda'

Your partition table should look something like this.

ubuntu@ubuntu:~$ sudo gdisk -l $OSdisk
GPT fdisk (gdisk) version 1.0.9
...
Partition table scan:
  MBR: protective
  BSD: not present
  APM: not present
  GPT: present
...
Number  Start (sector)    End (sector)  Size       Code  Name
   1            2048         2203647   1.0 GiB     EF00
   2         2203648         6397951   2.0 GiB     8300
ubuntu@ubuntu:~$

There are two partitions (in my case /dev/sda1, /dev/sda2).

The issue with Veeam and LUKS containers


Veeam does not support backing up LUKS containers. Let's take a moment here to consider what this means in practice. Have a look at what is in the backup.


              CURRENT SYSTEM           │              IN BACKUP
                                       │
     Device       Restore      Size    │  Device       Size    Usage
                                       │
     sda                       127.0G  │  mapper/dm... 123.9G
                                       │  sda          127.0G
                                       │   sda1        1.04G   /boot/efi...
                                       │   sda2        2.00G   /boot (ext4)
                                       │  ubuntu-vg    123.9G
                                       │   ubuntu-lv   61.96G  / (ext4)

Compare this to what the partition table looked like when the backup was made (see part 1). This is the partition table that was created by the Ubuntu installer.

Device       Start       End   Sectors   Size Type
/dev/sda1     2048   2203647   2201600     1G EFI System
/dev/sda2  2203648   6397951   4194304     2G Linux filesystem
/dev/sda3  6397952 266338270 259940319 123.9G Linux filesystem

Now consider what should have been backed up according to the backup job settings (see part 1).

/dev/mapper/dm_crypt-0
/dev/sda

Note that /dev/sda3, the partition that holds the LUKS container, is missing from the backup. This is because Veeam did not back it up. The same would have happened if you had chosen to back up the 'Entire computer' in the backup job settings.

Re-creating LUKS partition


If you recall from the first part, the LUKS container was in the third partition on the disk. It is important that you do not re-create it in the Veeam recovery UI. I will explain why later.

List the free space on the disk.

veeamuser@veeam-recovery-iso:~$ sudo parted $OSdisk print free
Model: Msft Virtual Disk (scsi)
Disk /dev/sda: 136GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags:

Number  Start   End     Size    File system  Name  Flags
        17.4kB  1049kB  1031kB  Free Space
 1      1049kB  1128MB  1127MB  fat32              boot, esp
 2      1128MB  3276MB  2147MB  ext4
        3276MB  136GB   133GB   Free Space

veeamuser@veeam-recovery-iso:~$

The third partition will be created in the free space after the second partition (/boot). Here, it starts at 3276MB. Create the partition for the LUKS container.

veeamuser@veeam-recovery-iso:~$ sudo parted -a optimal $OSdisk  mkpart LUKS 3276MB 100%
Information: You may need to update /etc/fstab.

veeamuser@veeam-recovery-iso:~$

The result should look like this.

veeamuser@veeam-recovery-iso:~$ sudo parted $OSdisk print
Model: Msft Virtual Disk (scsi)
Disk /dev/sda: 136GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags:

Number  Start   End     Size    File system  Name  Flags
 1      1049kB  1128MB  1127MB  fat32              boot, esp
 2      1128MB  3276MB  2147MB  ext4
 3      3276MB  136GB   133GB                LUKS

veeamuser@veeam-recovery-iso:~$

Now it is time to re-create the LUKS container.

Re-creating LUKS container


Once again, do not do this in the recovery UI. 

Notes on LUKS container sector size


Note the --sector-size parameter below. Whether you need it, and what it should be, depends on your physical disk. In part 1, I showed how to determine the LUKS sector size of the container created by the Ubuntu installer. You should aim to re-create the container with the same sector size it had before the restore. If you don't know what the sector size was before the restore, it gets tricky.

By default, if no --sector-size parameter is supplied, cryptsetup will try to find the optimal sector size based on the properties of the physical disk. But this does not always lead to the same result as the LUKS container created by the Ubuntu installer. I will go into more detail about this in the next part. For now, if you don't know what the sector size of the LUKS container should be, consider your physical disk.

veeamuser@veeam-recovery-iso:~$ sudo gdisk -l $OSdisk | grep physical
Sector size (logical/physical): 512/4096 bytes
veeamuser@veeam-recovery-iso:~$

This is the old 512n, 512e, 4Kn issue. Basically, there are three possibilities.

  • 512n (logical/physical: 512/512 bytes)
    In this case it is best to set the sector size to 512, and cryptsetup should do this by default if no --sector-size parameter is supplied. I did not test what the Ubuntu installer uses in this case.
  • 512e (logical/physical: 512/4096 bytes)
    Here, the Ubuntu installer will use a sector/block size of 4096 for all partitions (ESP, here: /dev/sda1; /boot, here: /dev/sda2; LUKS, here: /dev/sda3). But cryptsetup will default to 512, which is why it is necessary to set --sector-size=4096 if you want to re-create the LUKS container exactly as it was before the restore.
  • 4Kn (logical/physical: 4096/4096 bytes)
    I have not tested this, but presumably the Ubuntu installer and cryptsetup will both default to 4096. You can also set --sector-size=4096 explicitly.
Once you know what sector size you need, proceed to create the LUKS container; a quick way to double-check what the disk reports is shown below.
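
blockdev prints both sector sizes directly (the same information as the gdisk grep above):

sudo blockdev --getss --getpbsz $OSdisk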


Creating the container


Note that ${OSdisk}3 in my case is /dev/sda3. If the partition start and end are not both aligned to the sector size, this will fail. More about that in the next part.

veeamuser@veeam-recovery-iso:~$ sudo cryptsetup luksFormat --sector-size=4096 ${OSdisk}3

WARNING!
========
This will overwrite data on /dev/sda3 irrevocably.

Are you sure? (Type uppercase yes): YES
Enter passphrase for /dev/sda3:
Verify passphrase:
veeamuser@veeam-recovery-iso:~$
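
Optionally, verify the new header before leaving the shell (the same luksDump check as in part 1):

sudo cryptsetup luksDump ${OSdisk}3 | grep -E 'sector|UUID'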

Once the LUKS container is created, return to the recovery UI.

Restoring LVM and root file system


Enter the recovery UI. On Ubuntu-live-based recovery media, enter

sudo veeamconfig recoveryui

On generic or custom Veeam recovery media, enter

sudo veeam

Choose 'Restore volumes'


Select your backup just as you did in part 3.


You will now be notified that a crypto device was found. This is the LUKS container you just created.

Found 1 crypto devices. Do you want to decrypt them all?
[Yes]   [No]


Choose yes and enter the password.


In the overview, it should now look like this. There should be the third partition (here: sda3) and the opened LUKS container (here: luks-a6...).

              CURRENT SYSTEM           │              IN BACKUP
                                       │
     Device       Restore      Size    │  Device       Size    Usage
                                       │
     sda                       127.0G  │  mapper/dm... 123.9G
      sda1                     1.04G   │  sda          127.0G
      sda2                     2.00G   │   sda1        1.04G   /boot/efi...
      sda3                     123.9G  │   sda2        2.00G   /boot (ext4)
       luks-a6...              123.9G  │  ubuntu-vg    123.9G
                                       │   ubuntu-lv   61.96G  / (ext4)


In 'CURRENT SYSTEM', select the LUKS container (here: luks-a6...). Map it to mapper/dm_crypt-0 by choosing 'Restore from' and selecting mapper/dm_crypt-0. If this works, skip the next step and move on to restoring the LVM and root file system.

Only if you get 'The device is too small'


If you get the error 'The device is too small', perhaps because you are restoring to different hardware, no problem. Proceed by manually creating the LVM.


'The device is too small': manually creating the LVM


In 'CURRENT SYSTEM', select the LUKS container (here: luks-a6...). Select 'Create a new volume group'.



Choose the name 'ubuntu-vg', as this is what the Ubuntu installer uses.


Map ubuntu-lv into 'ubuntu-vg'. Proceed as shown below.

Proceed with restoring LVM and root file system


Whether you were able to map the two crypto mapper devices, or you had to manually re-create the LVM, it should now look like this.


              CURRENT SYSTEM           │              IN BACKUP
                                       │
     Device       Restore      Size    │  Device       Size    Usage
                                       │
     sda                       127.0G  │  mapper/dm... 123.9G
      sda1                     1.04G   │  sda          127.0G
      sda2                     2.00G   │   sda1        1.04G   /boot/efi...
      sda3                     123.9G  │   sda2        2.00G   /boot (ext4)
       luks-a6...              123.9G  │  ubuntu-vg    123.9G
     ubuntu-vg                 123.9G  │   ubuntu-lv   61.96G  / (ext4)
      ubuntu-lv   ubuntu--v... 61.96G  │
      free                     61.96G  │


Proceed with the restore. The output will be slightly different if you had to manually re-create the LVM.

                                RECOVERY SUMMARY
   1. Add dm-0 (dm) to ubuntu-vg group
   2. Create ubuntu-lv volume on ubuntu-vg group
   3. Restore ubuntu-vg/ubuntu-lv (dm) to ubuntu-vg/ubuntu-lv (dm)



     Restore                      100%                     Status: Success


      Time             Action                                   Duration

      08:42:55         Job started at 2023-01-23 08:42:55 UTC
      08:42:57         Starting volume restore
      08:45:09         Waiting for backup infrastructure res... 00:00:02
      08:45:11         Applying changes to disks configuration  00:00:00
      08:45:11         ubuntu--vg-ubuntu--lv restored 62 GB ... 00:01:03
      08:46:14         Processing finished at 2023-01-23 08:...


Even though all data, partitions, volumes, etc. have been restored, do not reboot: the system will not boot yet. Exit to the shell and proceed with the next part.