Rescue

This article describes the BitFolk Rescue Environment, a standard environment into which your VPS can always be booted for rescue and advanced installation purposes.

Overview

The rescue environment is provided so that customers are able to access their VPS's block devices from outside of their VPS. This can be useful for tasks such as:

  • rescuing a very broken installation;
  • resetting root passwords;
  • forensic work after a compromise

and so on.

Accessing the rescue environment

The rescue environment is accessed via the Xen Shell, using the rescue command. Your VPS must be shut down before using this command as the rescue environment needs exclusive access to your block devices.
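
For example, from the Xen Shell (a minimal sketch; the shutdown command shown here assumes your VPS is still running and responsive):

xen-shell> shutdown
xen-shell> rescue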

The basics

The rescue environment is a Debian-based "live" system. That is, it's booted from a standard read-only image and then a RAM disk is used to provide a writeable root filesystem.

Logging in

A default user is set up with the username "user" and a random password assigned each boot. As the rescue environment is separate from your VPS, no credentials from your VPS will be used. Once logged in you may use sudo to become root.

Warning: Take care with the generated password - networking is enabled, sshd is running and this user has full sudo access!
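
As networking is up and sshd is running, you can log in either on the Xen Shell console or over SSH to your VPS's usual IP address, then use sudo to get a root shell. A sketch (the IP address is illustrative):

$ ssh user@203.0.113.1
user@rescue:~$ sudo -i
root@rescue:~#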

64-bit architecture

Even though a minority of BitFolk customers still have a 32-bit (i686) architecture, the rescue environment is always a 64-bit (x86_64) architecture with a matching 64-bit kernel. This is so that customers with an x86_64 install can still use the rescue environment in a useful manner.

Since an x86_64 Linux kernel can still execute i686 binaries you will not generally encounter any issues chrooting into your VPS's filesystem. However, some utilities do try to guess which architecture you are on; for example, the yum tool will see that you are on a 64-bit architecture and attempt to install 64-bit packages. If necessary you can avoid this by using the setarch command:

$ sudo chroot /mnt /bin/bash
# arch
x86_64
# setarch i386 /bin/bash
# arch
i686
#

Installed utilities

Many standard utilities are already installed, and new packages can be installed in the normal Debian fashion, provided that there's enough disk space.
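
For example, to install the pv pipe viewer used later in this article (assuming the rescue image's package lists are reachable):

$ sudo apt update
$ sudo apt install pv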

Block devices

Your VPS's block devices are not mounted by default, but should be accessible under their usual device nodes. See /proc/partitions for a list of block devices and partitions.
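
For example, to list the partitions and then mount your VPS's root filesystem (the device name and mount point are illustrative; adjust them to your own layout):

$ cat /proc/partitions
$ sudo mount /dev/xvda1 /mnt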

Persistence

By default, the rescue environment uses half of your VPS's RAM as an overlay tmpfs so that you can write new data to it. This can be fairly limiting as the smallest VPSes have just 1½GiB RAM. It also means that none of your changes will persist past a reboot.

It is however possible to tell the rescue environment to use a different device other than tmpfs for its overlay. If you create an ext2, ext3 or ext4 filesystem which is labelled live-rw and then reboot, this filesystem will be discovered during boot and used as the overlay:

# mkfs.ext2 -L live-rw /dev/xvda1

Since this filesystem will be on one of your VPS's block devices it will persist between reboots and should offer much larger capacity.

Warning: Do not rely on the rescue environment for long-term storage of files or provision of services. Your persistent image will become invalid next time the rescue environment's base image is upgraded!

Using an image file as a persistent backing store

Creating a new partition may be difficult: if your existing VPS's partitions already use all of the disk space allocated to you, it may be inconvenient to purchase more disk space or to shrink existing partitions to make room.

If you'd rather not create an extra partition you can instead create an image file named live-rw in the root of an existing filesystem, and put a filesystem upon it:

# dd if=/dev/null of=/path/to/fs/live-rw bs=1G seek=1 # creates a sparse 1GiB image file
# /sbin/mkfs.ext2 -F /path/to/fs/live-rw
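
Note that the existing filesystem has to be mounted so that the image file can be created on it, and unmounted again before rebooting into the rescue environment, at which point the new overlay should be discovered automatically. A fuller sketch, assuming (illustratively) that the existing filesystem is on /dev/xvda1:

# mount /dev/xvda1 /path/to/fs
# dd if=/dev/null of=/path/to/fs/live-rw bs=1G seek=1
# /sbin/mkfs.ext2 -F /path/to/fs/live-rw
# umount /path/to/fs
# reboot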

/home automounting

If during boot a partition labelled home-rw (or an image file so named) is discovered, this filesystem will be mounted directly as /home in the rescue environment. This can be useful if you just need extra storage.
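
For example, to label a spare partition for this purpose (the device name is illustrative, and mkfs will destroy any existing data on it):

# mkfs.ext2 -L home-rw /dev/xvda3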

Example usage

Resetting a root password

Main article: Resetting your root password

Taking an image of a whole VM

Basic idea

This will result in a disk image being stored at a remote location; as it will be a partitioned image you might need to use some tricks to loop mount it, or use kpartx (see the sketch after the command below). If you don't like that you could just take an image of the desired filesystem itself, e.g. /dev/xvda1.

$ sudo dd if=/dev/xvda bs=4M | ssh you@elsewhere.com 'cat > bitfolk-vm.img'
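
Once the image has been transferred, one way to get at a partition inside it on the remote machine is to attach it to a loop device with partition scanning enabled (a sketch using util-linux's losetup; the loop device name will vary):

you@elsewhere:~$ sudo losetup -fP --show bitfolk-vm.img
/dev/loop0
you@elsewhere:~$ sudo mount /dev/loop0p1 /mnt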

With compression

$ sudo apt install lz4
$ sudo dd if=/dev/xvda bs=4M | lz4 -c | ssh you@elsewhere.com 'cat > bitfolk-vm.img.lz4'

With a progress bar

$ sudo apt install lz4
$ sudo dd if=/dev/xvda bs=4M status=progress | lz4 -c | ssh you@elsewhere.com 'cat > bitfolk-vm.img.lz4'

With a nicer progress bar

$ sudo apt install lz4 pv
$ sudo pv /dev/xvda | lz4 -c | ssh you@elsewhere.com 'cat > bitfolk-vm.img.lz4'

Persistent storage

A typical boot of the rescue environment only leaves you with about half your RAM as writeable space:

.
.
.
****************************************
Resetting user password to random value:
        New user password: imeiXuoy
****************************************
               
BitFolk Rescue Environment - https://tools.bitfolk.com/wiki/Rescue
               
This virtual machine is running read-only over NFS with a unionfs ramdisk to
allow changes. This means:
               
- anything you write to its filesystem will not survive a reboot
- you only have about half your RAM size as writable space
               
If you need to write more, or you need it to persist past a reboot, you'll need
to use your VPS's storage. Please see:
               
    https://tools.bitfolk.com/wiki/Rescue#Persistence
               
for more information.
               
Your user account is called 'user' and its password has been randomly-generated
(see above). Be careful what you do with it as networking is now active and
sshd is running. The 'user' account has full sudo access.
               
rescue login: user
Password:      
Linux rescue 2.6.32-5-686-bigmem #1 SMP Mon Jan 16 16:42:05 UTC 2012 i686
               
The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.
               
Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
user@rescue:~$ df -h /
Filesystem            Size  Used Avail Use% Mounted on
aufs                  235M   21M  215M   9% /

Let's say that we have an xvda of 10GiB, and it's currently completely empty. We'll create an xvda1 of 8GiB and xvda2 with the remaining 2GiB, then use xvda2 as our persistent storage so there will be 2GiB of space for use by the rescue environment.

After we're done and presumably have a working VPS installed on xvda1, we can delete xvda2 and grow xvda1 into the space it used, as sketched below.
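
A minimal sketch of that later clean-up, assuming an ext2/3/4 filesystem on xvda1 and that the growpart tool (packaged as cloud-guest-utils in current Debian) is available; the exact steps will depend on what you've installed:

$ sudo fdisk /dev/xvda          # delete partition 2 ('d', '2', then 'w')
$ sudo apt install cloud-guest-utils
$ sudo growpart /dev/xvda 1     # extend partition 1 to the end of the disk
$ sudo resize2fs /dev/xvda1     # grow the filesystem to fill the partition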

user@rescue:~$ cat /proc/partitions
major minor  #blocks  name
               
 202        0   10485760 xvda
 202       16     491520 xvdb
   7        0     175200 loop0
user@rescue:~$ sudo fdisk /dev/xvda
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel with disk identifier 0xd78846e1.
Changes will remain in memory only, until you decide to write them.
After that, of course, the previous content won't be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
         switch off the mode (command 'c') and change display units to
         sectors (command 'u').

Command (m for help): c
DOS Compatibility flag is not set

Command (m for help): u
Changing display/entry units to sectors

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p              
Partition number (1-4): 1
First sector (2048-20971519, default 2048):
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-20971519, default 20971519): +8G
               
Command (m for help): p
               
Disk /dev/xvda: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders, total 20971520 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xd78846e1
               
    Device Boot      Start         End      Blocks   Id  System
/dev/xvda1            2048    16779263     8388608   83  Linux
               
Command (m for help): n
Command action 
   e   extended
   p   primary partition (1-4)
p              
Partition number (1-4): 2
First sector (16779264-20971519, default 16779264):
Using default value 16779264
Last sector, +sectors or +size{K,M,G} (16779264-20971519, default 20971519):
Using default value 20971519
               
Command (m for help): p
               
Disk /dev/xvda: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders, total 20971520 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xd78846e1
               
    Device Boot      Start         End      Blocks   Id  System
/dev/xvda1            2048    16779263     8388608   83  Linux
/dev/xvda2        16779264    20971519     2096128   83  Linux
               
Command (m for help): w
The partition table has been altered!
               
Calling ioctl() to re-read partition table.
[96410.248484]  xvda: xvda1 xvda2
Syncing disks.

Now that we have the partitions defined, it's time to create a filesystem on xvda2 for use as a persistent backing store.

user@rescue:~$ sudo mkfs.ext2 -L live-rw /dev/xvda2
mke2fs 1.41.12 (17-May-2010)
Filesystem label=live-rw
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
131072 inodes, 524032 blocks
26201 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=536870912
16 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks: 
        32768, 98304, 163840, 229376, 294912

Writing inode tables: done                            
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 29 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
user@rescue:~$ sudo halt
INIT: Switching to runlevel: 0
INIT: Sending processes the TERM signal

Broadcast message from root@rescue (hvc0) (Fri Apr 20 15:17:39 2012):

The system is going down for system halt NOW!
Using makefile-style concurrent boot in runlevel 0.
Asking all remaining processes to terminate...done.
All processes ended within 1 seconds....done.
Stopping enhanced syslogd: rsyslogd.
Deconfiguring network interfaces...done.
Cleaning up ifupdown....
Deactivating swap...done.
live-boot is resyncing snapshots and caching reboot files...Will now halt.
[96598.577218] xenbus_dev_shutdown: device/console/0: Initialising != Connected, skipping
[96598.581456] System halted.
xen-shell> rescue
.
.
.
rescue login: user
Password:
Linux rescue 2.6.32-5-686-bigmem #1 SMP Mon Jan 16 16:42:05 UTC 2012 i686

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
user@rescue:~$ df -h /
Filesystem            Size  Used Avail Use% Mounted on
aufs                  2.0G  9.6M  1.9G   1% /

A full df shows where the increased space is coming from:

user@rescue:~$ df -h
Filesystem            Size  Used Avail Use% Mounted on
aufs                  2.0G  9.6M  1.9G   1% /
tmpfs                 235M     0  235M   0% /lib/init/rw
udev                  223M   64K  223M   1% /dev
tmpfs                 235M  4.0K  235M   1% /dev/shm
85.119.80.243:/srv/rescue
                       49G   33G   13G  73% /live/image
/dev/xvda2            2.0G  9.6M  1.9G   1% /live/cow
tmpfs                 235M     0  235M   0% /live
tmpfs                 235M     0  235M   0% /tmp

Destroying your own data

When you cancel your service BitFolk will delete your data in a manner that should not allow it to be recovered. If for any reason you do not want to wait for BitFolk to do that, or want to assure yourself as far as possible that it has been done, you can use the Rescue VM to do it yourself.

user@rescue:~$ sudo blkdiscard -v -p 512M /dev/xvda
/dev/xvda: Discarded 1610612736 bytes from the offset 0             
/dev/xvda: Discarded 1610612736 bytes from the offset 1610612736    
/dev/xvda: Discarded 1610612736 bytes from the offset 3221225472    
/dev/xvda: Discarded 1610612736 bytes from the offset 4831838208    
/dev/xvda: Discarded 1073741824 bytes from the offset 6442450944    
/dev/xvda: Discarded 1610612736 bytes from the offset 7516192768    
/dev/xvda: Discarded 1610612736 bytes from the offset 9126805504

You should do the same for xvdb and any other block devices you may have.
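
For example:

$ sudo blkdiscard -v -p 512M /dev/xvdb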

Warning: The security of this procedure relies on the drive vendor's implementation of discard and assumes that the storage devices will not be physically removed, disassembled and their memory cells examined. When BitFolk deletes data the same method is used. Permanently and irrevocably deleting data from flash storage is difficult; it's generally considered much easier to encrypt your data (e.g. LUKS) and then forget the key.