pages tagged luks
Feeding the Cloud
https://feeding.cloud.geek.nz/tags/luks/
ikiwiki | Last updated: 2024-05-31T19:51:58Z

Making the mounting of an encrypted /home optional on a home server
https://feeding.cloud.geek.nz/posts/optional-encrypted-root-on-home-server/
Creative Commons Attribution-ShareAlike 4.0 International License
Updated: 2022-10-29T06:45:39Z | Published: 2022-10-29T06:45:00Z

I have a computer that serves as a home server as well as a desktop machine. It has an encrypted home directory to protect user files and, in the default configuration, that unfortunately interferes with unattended reboots since someone needs to be present to enter the encryption password. Here's how I added a timeout and made /home optional on that machine.

I started by adding a one-minute timeout on the password prompt by adding timeout=60 in my /etc/crypttab:

    crypt UUID=7e12c123-abcd-5555-8c40-900d1f8cc281 none luks,timeout=60

then I made /home optional by adding nofail to the appropriate mount point in /etc/fstab:

    /dev/mapper/crypt /home ext4 nodev,noatime,nosuid,nofail 0 2

Before that, the password prompt would time out but the system would be unable to boot since one of the required partitions had failed to mount.

Now, to ensure that I don't accidentally re-create home directories for users when the system comes up without /home mounted, I made the /home directory on the non-encrypted drive read-only:

    umount /home
    cd /home
    chmod a-w .

Finally, with all of this in place, I was now happy to configure the machine to automatically reboot after a kernel panic by putting the following in /etc/sysctl.d/local.conf:

    # Automatic reboot 10 seconds after a kernel panic
    kernel.panic = 10

since I know that the machine will come back up just fine and that all services will be running. I simply won't be able to log into that machine as any user other than root until I manually unlock and mount /home.

Erasing Persistent Storage Securely on Linux
https://feeding.cloud.geek.nz/posts/erasing-persistent-storage-securely/
Creative Commons Attribution-ShareAlike 4.0 International License
Updated: 2023-01-14T04:10:33Z | Published: 2019-01-08T16:55:00Z

Here are some notes on how to securely delete computer data in a way that makes it impractical for anybody to recover that data. This is an important thing to do before giving away (or throwing away) old disks.

Ideally though, it's better not to have to rely on secure erasure and to use full-disk encryption right from the start, for example using LUKS. That way, if the secure deletion fails for whatever reason or can't be performed (e.g. the drive is dead), it's not a big deal.

Rotating hard drives

With ATA or SCSI hard drives, DBAN seems to be the ideal solution. Burn it to a CD, boot with it, and follow the instructions. Note that you should disconnect any drives you don't want to erase before booting from that CD.

This is probably the most trustworthy method of wiping since it uses free and open source software to write to each sector of the drive several times. The methods that follow rely on proprietary software built into the firmware of the devices, so you have to trust that it is implemented properly and not backdoored.

ATA / SATA solid-state drives

Due to the nature of solid-state storage (i.e. the lifetime number of writes is limited), it's not a good idea to use DBAN for those. Instead, we must rely on the vendor's implementation of ATA Secure Erase.
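Before issuing the erase commands below, it can be worth confirming that the drive actually advertises Secure Erase support and is not in a "frozen" state. A minimal check, not part of the original notes, using the same /dev/sdX placeholder as the commands that follow:

    # Show the drive's security section, which lists (enhanced) Secure Erase
    # support and whether the drive is currently "frozen".
    hdparm -I /dev/sdX | grep -A10 'Security:'

    # A drive that reports "frozen" will reject the erase commands; the
    # suspend-and-resume trick mentioned below is one way to unfreeze it.
    hdparm -I /dev/sdX | grep -i frozen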
First, set a password on the drive:

    hdparm --user-master u --security-set-pass p /dev/sdX

and then issue a Secure Erase command:

    hdparm --user-master u --security-erase-enhanced p /dev/sdX

If you get errors like "bad/missing sense data", you may need to use one of the tricks described in this thread. For me, suspending the laptop and then waking it up did the trick.

NVMe solid-state drives

For SSDs using an NVMe connector, simply request a User Data Erase:

    nvme format -s1 /dev/nvme0n1

Recovering from an unbootable Ubuntu encrypted LVM root partition
https://feeding.cloud.geek.nz/posts/recovering-from-unbootable-ubuntu-encrypted-lvm-root-partition/
Creative Commons Attribution-ShareAlike 4.0 International License
Updated: 2021-06-11T20:43:57Z | Published: 2017-05-16T04:10:00Z

A laptop that was installed using the default Ubuntu 16.10 (yakkety) full-disk encryption option stopped booting after receiving a kernel update somewhere on the way to Ubuntu 17.04 (zesty). After showing the boot screen for about 30 seconds, a busybox shell pops up:

    BusyBox v1.21.1 (Ubuntu 1:1.21.1-1ubuntu1) built-in shell (ash)
    Enter 'help' for a list of built-in commands.

    (initramfs)

Typing exit will display more information about the failure before bringing us back to the same busybox shell:

    Gave up waiting for root device.  Common problems:
     - Boot args (cat /proc/cmdline)
       - Check rootdelay= (did the system wait long enough?)
       - Check root= (did the system wait for the right device?)
     - Missing modules (cat /proc/modules; ls /dev)
    ALERT!  /dev/mapper/ubuntu--vg-root does not exist.  Dropping to a shell!

    BusyBox v1.21.1 (Ubuntu 1:1.21.1-1ubuntu1) built-in shell (ash)
    Enter 'help' for a list of built-in commands.

    (initramfs)

which now complains that the /dev/mapper/ubuntu--vg-root root partition (which uses LUKS and LVM) cannot be found. There is some comprehensive advice out there, but it didn't quite work for me. This is how I ended up resolving the problem.

Boot using a USB installation disk

First, create a bootable USB disk using the latest Ubuntu installer: download a desktop image and copy the ISO directly onto the USB stick (overwriting it in the process):

    dd if=ubuntu.iso of=/dev/sdc1

then boot the system using that USB stick (hold the option key during boot on Apple hardware).

Mount the encrypted partition

Assuming a drive which is partitioned this way:

- /dev/sda1: EFI partition
- /dev/sda2: unencrypted boot partition
- /dev/sda3: encrypted LVM partition

open a terminal and mount the required partitions:

    cryptsetup luksOpen /dev/sda3 sda3_crypt
    vgchange -ay
    mount /dev/mapper/ubuntu--vg-root /mnt
    mount /dev/sda2 /mnt/boot
    mount -t proc proc /mnt/proc
    mount -o bind /dev /mnt/dev

Note: when running cryptsetup luksOpen, you must use the same name as the one in /etc/crypttab on the root partition (sda3_crypt in this example).

All of these partitions must be present (including /proc and /dev) for the initramfs scripts to do all of their work. If you see errors or warnings, you must resolve them.
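One optional sanity check at this point (not from the original post) is to confirm, before chrooting, that the mapping name chosen for luksOpen really matches the root partition's /etc/crypttab and that the LVM volumes came up:

    # The first field of the root partition's crypttab must match the name
    # passed to "cryptsetup luksOpen" (sda3_crypt in this example).
    cat /mnt/etc/crypttab
    ls /dev/mapper/

    # The logical volumes should be listed and active after "vgchange -ay".
    lvs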
Regenerate the initramfs on the boot partition

Then "enter" the root partition using:

    chroot /mnt

and make sure that you have the necessary packages installed:

    apt install lvm2 cryptsetup-initramfs

before regenerating the initramfs for all of the installed kernels:

    update-initramfs -c -k all

Manually expanding a RAID1 array on Ubuntu
https://feeding.cloud.geek.nz/posts/manually-expanding-raid1-array-ubuntu/
Creative Commons Attribution-ShareAlike 4.0 International License
Updated: 2021-06-11T20:43:57Z | Published: 2017-04-01T06:00:00Z

Here are the notes I took while manually expanding a non-LVM encrypted RAID1 array on an Ubuntu machine.

My original setup consisted of a 1 TB drive along with a 2 TB drive, which meant that the RAID1 array was 1 TB in size and the second drive had 1 TB of unused capacity. This is how I replaced the old 1 TB drive with a new 3 TB drive and expanded the RAID1 array to 2 TB (leaving 1 TB unused on the new 3 TB drive).

Partition the new drive

In order to partition the new 3 TB drive, I started by creating a temporary partition on the old 2 TB drive (/dev/sdc) to use up all of the capacity on that drive:

    $ parted /dev/sdc
    unit s
    print
    mkpart
    print

Then I initialized the partition table and created the EFI partition on the new drive (/dev/sdd):

    $ parted /dev/sdd
    unit s
    mktable gpt
    mkpart

Since I wanted the RAID1 array to be as large as the smaller of the two drives, I made sure that the second partition (/home) on the new 3 TB drive had:

- the same start position as the second partition on the old drive
- the end position of the third partition (the temporary one I just created) on the old drive

I created the partition and flagged it as a RAID one:

    mkpart
    toggle 2 raid

and then deleted the temporary partition on the old 2 TB drive:

    $ parted /dev/sdc
    print
    rm 3
    print

Create a temporary RAID1 array on the new drive

With the new drive properly partitioned, I created a new RAID array for it:

    mdadm /dev/md10 --create --level=1 --raid-devices=2 /dev/sdd1 missing

and added it to /etc/mdadm/mdadm.conf:

    mdadm --detail --scan >> /etc/mdadm/mdadm.conf

which required manual editing of that file to remove duplicate entries.

Create the encrypted partition

With the new RAID device in place, I created the encrypted LUKS partition:

    cryptsetup -h sha256 -c aes-xts-plain64 -s 512 luksFormat /dev/md10
    cryptsetup luksOpen /dev/md10 chome2

I took the UUID of the temporary RAID partition:

    blkid /dev/md10

and put it in /etc/crypttab as chome2.

Then I formatted the new LUKS partition and mounted it:

    mkfs.ext4 -m 0 /dev/mapper/chome2
    mkdir /home2
    mount /dev/mapper/chome2 /home2

Copy the data from the old drive

With the home partitions of both drives mounted, I copied the files over to the new drive:

    eatmydata nice ionice -c3 rsync -axHAX --progress /home/* /home2/

making use of wrappers that preserve system responsiveness during I/O-intensive operations.

Switch over to the new drive

After the copy, I switched over to the new drive in a step-by-step way:

1. Changed the UUID of chome in /etc/crypttab.
2. Changed the UUID and name of /dev/md1 in /etc/mdadm/mdadm.conf.
3. Rebooted with both drives.
4. Checked that the new drive was the one used in the encrypted /home mount using df -h (see the sketch below for a more detailed check).
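As a small aside that is not part of the original notes, a few commands that make that last verification more explicit (chome is the mapping name used in the steps above):

    # Walk the block-device chain from physical drive to the /home mount.
    lsblk

    # List the active arrays and the member partitions they contain.
    cat /proc/mdstat

    # Report which underlying device backs the "chome" LUKS mapping.
    cryptsetup status chome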
Add the old drive to the new RAID array

With all of this working, it was time to clear the mdadm superblock from the old drive:

    mdadm --zero-superblock /dev/sdc1

and then change the second partition of the old drive to make it the same size as the one on the new drive:

    $ parted /dev/sdc
    rm 2
    mkpart
    toggle 2 raid
    print

before adding it to the new array:

    mdadm /dev/md1 -a /dev/sdc1

Rename the new array

To change the name of the new RAID array back to what it was on the old drive, I first had to stop both the old and the new RAID arrays:

    umount /home
    cryptsetup luksClose chome
    mdadm --stop /dev/md10
    mdadm --stop /dev/md1

before running this command:

    mdadm --assemble /dev/md1 --name=mymachinename:1 --update=name /dev/sdd2

and updating the name in /etc/mdadm/mdadm.conf.

The last step was to regenerate the initramfs:

    update-initramfs -u

before rebooting into something that looks exactly like the original RAID1 array but with twice the size.

Poor man's RAID1 between an SSD and a hard drive
https://feeding.cloud.geek.nz/p