On AWS, there aren’t any images of Slackware. If you want to run Slackware, you have to create one from scratch. There is a howto on this topic, but it wasn’t too helpful for me. I still ran into several issues that required a deeper investigation. This post is a summary of the steps I had to take to create a Slackware machine on EC2. This is not a tutorial on Slackware, AWS, VMware or Linux. It is assumed you are proficient in all of these.
1. Create a Slackware virtual machine
- The first step is to create a virtual machine of Slackware. I am using VMware Workstation Pro (on Windows). Other platforms like VirtualBox will also work, but the instructions below are specific to VMware.
- Download the Slackware64-15.0 ISO. In VMware, create a virtual machine, specify the ISO file as the installer, and set the disk size (32 GB minimum recommended).
- The firmware type (legacy BIOS or UEFI) has to be specified in VMware under Options/Advanced. The default is BIOS. UEFI offers a faster boot, while BIOS firmware runs on a wider variety of hardware. The choice is up to you; once booted, it makes no difference to system performance.
- There seems to be significant keyboard lag when running a Slackware guest on a Windows host. This can be solved by adding the following four lines to the vmx file:
- keyboard.allowBothIRQs = "FALSE"
- keyboard.vusb.enable = "TRUE"
- mouse.vusb.enable = "TRUE"
- mouse.vusb.useBasicMouse = "FALSE"
- Boot the system and follow the usual Slackware installation process. Create the partitions as needed. I usually create three partitions: /boot (ext4), swap and / (ext4) for BIOS firmware, or /boot/efi (EFI), swap and / (ext4) for UEFI firmware.
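- For reference, the final layout on a 32 GB disk might look roughly like this (sizes are only an example; adjust to taste):
/dev/sda1   100M   EFI System (or ext4 /boot for BIOS)
/dev/sda2     2G   Linux swap
/dev/sda3   rest   Linux /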
- Complete the Slackware installation with at least the following disk sets: A, AP, D, L and N. The system should automatically detect the BIOS or UEFI firmware and produce the appropriate LILO or ELILO entries.
- Reboot to make sure everything boots up normally.
- Create a new user and grant sudo access.
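- A minimal sketch of that last step (the user name and groups here are just an example):
useradd -m -g users -G wheel -s /bin/bash myuser
passwd myuser
visudo   # uncomment the line:  %wheel ALL=(ALL:ALL) ALL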
2. Upgrade to Slackware64-current (optional)
- Edit /etc/slackpkg/mirrors and specify a slackware64-current mirror. Since this is a big jump from 15.0, some packages may break if we simply run upgrade-all. The upgrade sequence is important. The following sequence works as of Oct 2024:
slackpkg update
slackpkg upgrade slackpkg
slackpkg upgrade aaa_glibc-solibs gnupg
slackpkg upgrade-all
slackpkg clean-system
- Blacklist the series we don't need (e.g., E, F, KDE, T, TCL, X, XAP, XFCE, Y) in /etc/slackpkg/blacklist. Then run slackpkg install-new.
- Since slackware64-current no longer includes the huge kernel, we need to create an initrd before rebooting. Otherwise the system will stall with a kernel panic when it tries to mount the partitions.
- Run geninitrd
- If running the BIOS firmware, insert initrd = /boot/initrd.gz into the image section of /etc/lilo.conf and rerun lilo.
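- The relevant image section of lilo.conf would then look something like this (the kernel path and root device are examples; the root entry is switched to a UUID later on):
image = /boot/vmlinuz-generic
  initrd = /boot/initrd.gz
  root = /dev/sda3
  label = Linux
  read-only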
- If running the UEFI firmware, run eliloconfig
- Create a file named startup.nsh under /boot/efi/ directory, with a one-line command to invoke elilo.efi:
- FS0:\EFI\Slackware\elilo.efi
- Note the Windows-style backslashes in the path.
- Reboot and make sure everything boots up normally.
- If reboot fails, we can boot from the installation iso, mount the partitions manually and chroot into the system to make repairs. Below are the steps for that process (assuming UEFI):
mount /dev/nvme0n1p3 /mnt
mount /dev/nvme0n1p1 /mnt/boot/efi
for i in /dev /dev/pts /proc /sys /run; do mount -B $i /mnt$i; done
chroot /mnt
3. Change partition names to their UUIDs
- Once ported into EC2, drive names such as /dev/sda may change to /dev/xvda or /dev/nvme0n1 depending on the instance type, firmware and kernel options. The system files fstab, lilo.conf and elilo.conf explicitly refer to partitions by these names, so it is important to replace them with persistent UUIDs. Run blkid to get the UUIDs of all devices and partitions.
- Edit /etc/fstab and replace the partition names with their UUIDs. For example:
- UUID=41b4f058-1f1d-4944-af0a-ee33ecc499aa / ext4 defaults 1 1
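- For the UEFI layout, a complete fstab would then look something like this (the placeholder UUIDs must of course be replaced with your own):
UUID=<swap-uuid>                              swap        swap   defaults   0   0
UUID=41b4f058-1f1d-4944-af0a-ee33ecc499aa     /           ext4   defaults   1   1
UUID=<efi-uuid>                               /boot/efi   vfat   defaults   1   0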
- If running the BIOS firmware, edit /etc/lilo.conf and replace the root device name with its UUID, such as:
- root="UUID=41b4f058-1f1d-4944-af0a-ee33ecc499aa"
- Run lilo
- If running the UEFI firmware, edit /boot/efi/EFI/Slackware/elilo.conf and replace the root device name with its UUID:
- append = "root=UUID=41b4f058-1f1d-4944-af0a-ee33ecc499aa vga=normal ro"
- Do not run eliloconfig after this. Doing so will overwrite the UUID and revert to the original partition names.
- Reboot and make sure everything works
4. Check ssh connectivity
- Obviously, we want the machine to be reachable, so we need to make sure we can log in via ssh.
- From another terminal, check that you can log in via ssh. On Windows, I use MobaXterm: open a local terminal and type ssh IPNUMBER -l username.
5. Create key pairs and disable password login (optional)
- For better security we should disable text passwords and use key pair login only.
- Create private/public key pairs. Then login using the key to make sure it works:
ssh -i private_key IP_NUMBER -l username
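- As for creating the key pair itself, something along these lines on the client side will generate and install one (the key type and file name are just examples):
ssh-keygen -t ed25519 -f ~/.ssh/slack_ec2
ssh-copy-id -i ~/.ssh/slack_ec2.pub username@IP_NUMBER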
- Once key pair login has been verified to work, we can disable password prompts. In the /etc/ssh/sshd_config file, set PasswordAuthentication to no:
- PasswordAuthentication no
- For better security, we should also disable root login via ssh. In /etc/ssh/sshd_config, set PermitRootLogin no. (The OpenSSH default, prohibit-password, already blocks password-based root logins, but setting it to no also blocks key-based root logins and is more explicit.)
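- After editing sshd_config, the changes can be applied without a full reboot by restarting the daemon:
/etc/rc.d/rc.sshd restart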
- Reboot again and make sure everything still works, and you are able to ssh into the machine.
6. Decide which EC2 instance to use
- AWS has different instance types, such as t2, t3, etc. The underlying hardware is different for each type, so not all instance types will work with every firmware and kernel combination.
- With the legacy BIOS firmware, both t2 and t3 instances will work. But below are a few considerations:
- t3 will boot up and run just fine with the stock kernel. All hard drive partitions will be renamed to /dev/nvme0n1 etc., but since we are using their UUIDs anyway, this won't be an issue. Note, however, that t3 instances are not available in every region.
- The t2 instance requires some tweaks. The stock kernel will boot up fine, and the hard drive partitions will retain their original /dev/sda names, but the network device will not work, and an EC2 instance we cannot connect to is useless. To enable the network device, we need Xen guest support enabled in the kernel. This option is disabled in the stock kernel, so we have to enable it and recompile; it lives under Processor type and features -> Linux guest support -> Xen guest support. Run make bzImage, then cp arch/x86/boot/bzImage /boot/vmlinuz-xen, point lilo.conf at this kernel, and run lilo (a command sketch is shown below). Compilation can take some time, and it has to be repeated every time the kernel is upgraded. After porting to EC2, the partitions will be renamed to /dev/xvda1, /dev/xvda2, etc.
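- Roughly, the rebuild looks like this (assuming the kernel source lives under /usr/src/linux; adjust paths and options to your setup):
cd /usr/src/linux
make menuconfig                                    # enable Xen guest support (CONFIG_XEN)
make -j$(nproc) bzImage
make -j$(nproc) modules && make modules_install    # only needed if new options were built as modules (e.g. xen-netfront)
cp arch/x86/boot/bzImage /boot/vmlinuz-xen
# point the image= entry in /etc/lilo.conf at /boot/vmlinuz-xen, then:
lilo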
- With the UEFI firmware, t2 instances will not work; we need a t3 instance.
7. LILO vs ELILO vs GRUB
- For legacy BIOS firmware we can use lilo, and with UEFI we can use elilo. However, some people might prefer GRUB because it works under both BIOS and UEFI. GRUB is already included in the "A" disk set of Slackware, so it is very straightforward to switch if that's the path you choose. I am partial to the original Slackware flavor, so I prefer lilo or elilo.
- The following steps will install the grub boot loader in the virtual machine (if you are running this after porting to EC2, replace the /dev/sda with the appropriate device name):
grub-install /dev/sda
grub-mkconfig -o /boot/grub/grub.cfg
- Reboot the system in VMware and verify everything still works.
8. Export the VMDK file
- By default, VMware splits the virtual hard drive into multiple vmdk files. We need to create a single compressed VMDK file. This can be done by using the “Export to OVF” function in VMware Workstation Pro.
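- If you prefer the command line, VMware's ovftool can do the same export; something like this should work (file names are examples):
ovftool SlacktestBIOS.vmx exported/SlacktestBIOS.ovf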
9. Upload the vmdk file to S3
- The VMDK file has to be uploaded to a separate S3 bucket. Since the file is several GB in size, the upload will take some time. It can be done from the command line with the AWS CLI, or via the web interface.
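- With the AWS CLI, the upload is a one-liner (bucket name as in the example below):
aws s3 cp SlacktestBIOS-disk1.vmdk s3://slack64/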
10. Convert the vmdk file to an EC2 snapshot
- This is a non-intuitive step, and there is no web interface for it. It has to be done through the AWS CLI, either locally or in AWS CloudShell. I wrote a bash script to make this process easier. Download and execute this script. It should produce output like the sample shown below:
./s3vmdktosnapshot.bash slack64 SlacktestBIOS-disk1.vmdk Slackware64-current
Deleting old vmimport role
Creating vmimport role
Submitting S3 to EC2 conversion job
task ID = import-snap-0061efa3bf75811ee
Status = active; Message = pending
Status = active; Message = pending
.....
Status = active; Message = downloading/converting
Status = active; Message = converted
Status = active; Message = Preparing snapshot
Status = active; Message = Preparing snapshot
Status = active; Message = Preparing snapshot
Status = completed; Message = null
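- Under the hood, the conversion boils down to the vmimport IAM role plus two AWS CLI calls, roughly like this (reusing the bucket, file name and task ID from the sample output above):
aws ec2 import-snapshot --description "Slackware64-current" \
    --disk-container "Format=VMDK,UserBucket={S3Bucket=slack64,S3Key=SlacktestBIOS-disk1.vmdk}"
aws ec2 describe-import-snapshot-tasks --import-task-ids import-snap-0061efa3bf75811ee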
11. Create Amazon Machine Image (AMI) from the snapshot
- Select the snapshot file from the EC2 panel and choose create image from snapshot.
- There is an option to specify the root partition name, but this doesn’t really matter. The bootloader will select the appropriate UUID for the root partition.
- Set the Boot mode to Legacy BIOS or UEFI, as appropriate for the firmware.
- Go under Amazon Machine Images (AMI), and verify that the image has been created.
12. Launch an instance
- Select the image and choose Launch an instance from AMI.
- Under Security Group, select a group which allows inbound traffic on port 22 (SSH).
- Once the instance is running, we can view the console screen via Instance screenshot. The screenshot should show the boot process, and finally the login prompt.
- If everything looks ok, attach an Elastic IP address to this instance.
- Next, try to ssh into the instance from your local computer.
- If you can login, congratulations! The rest of the Slackware configuration is the same as any other Slackware installation.
- I have found that the boot loader can sometimes become corrupted during the transfer process, so we should run lilo or eliloconfig at least once in EC2.
13. Kernel updates
- If the kernel is upgraded, we will have to regenerate the initrd with geninitrd. In the case of a t2 instance with legacy BIOS, we will also have to recompile the kernel with the Xen option enabled, and run lilo. With UEFI, we have to copy the new kernel and initrd into /boot/efi/EFI/Slackware/. There is no need to re-run eliloconfig.
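- For the UEFI case, the copy step looks something like this (the file names inside the EFI directory depend on how eliloconfig originally set them up):
geninitrd
cp /boot/vmlinuz-generic /boot/efi/EFI/Slackware/vmlinuz
cp /boot/initrd.gz /boot/efi/EFI/Slackware/initrd.gz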
14. What to do if the instance refuses to boot
- Inevitably, there will be times when the boot process fails or the machine refuses to connect. This can be caused by a misconfigured boot parameter, a bad kernel, or an error in the ssh settings. It doesn't mean we need to terminate the instance and lose everything. We can mount the root volume on another working machine and make the necessary edits to recover. The steps are as follows:
- Stop (instead of terminating) the hung instance from the EC2 console.
- Find the AMI that was used to create the original instance. If you don’t have it, you will have to create one from the vmdk files.
- Launch a new temporary instance from this AMI.
- Attach an Elastic IP to this instance (or reuse the IP from the hung machine by first disassociating it and then associating it with the new instance).
- Go under EBS (Elastic Block Store) Volumes, and find the root volume of the hung machine.
- Detach this volume from its host instance (the instance has to be in a stopped state for the detachment to work).
- Attach this volume to the newly created instance.
- SSH into the newly created machine, and switch to root.
- Run dmesg (or lsblk) and find the device name of the newly attached volume. Depending on the instance type, this might be, for example, /dev/xvdf or /dev/nvme1n1.
- Mount the root partition of the attached volume under /mnt, along with the associated sub-partitions. You need to know how you originally laid out the efi, root, and swap partitions, and substitute the device name found in the previous step. Then chroot into the mounted drive. The steps are shown below (assuming UEFI, with the attached volume showing up as /dev/nvme1n1):
mount /dev/nvme1n1p3 /mnt
mount /dev/nvme1n1p1 /mnt/boot/efi
for i in /dev /dev/pts /proc /sys /run; do mount -B $i /mnt$i; done
chroot /mnt
- Now we should be able to regenerate the initrd, edit the ssh configuration, etc., to fix the problem.
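- When done, exit the chroot and unmount everything in reverse order before detaching the volume:
exit
for i in /run /sys /proc /dev/pts /dev; do umount /mnt$i; done
umount /mnt/boot/efi
umount /mnt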
- Stop the temporary instance, detach the volume that we had attached to /mnt, and then terminate the temporary instance.
- Attach the repaired volume to the original instance.
- Reboot the original instance, and check if it works this time.