AWS does not offer any ready-made Slackware images. If you want to run Slackware, you have to create an image from scratch. There is a howto on this topic; although it was very helpful, I still ran into several issues that required deeper investigation. This post is a summary of the steps I had to take to create a Slackware machine on EC2. It is not a tutorial on Slackware, AWS, VMware or Linux; it is assumed you are proficient in all of these.
1. Create a Slackware virtual machine
- The first step is to create a virtual machine of Slackware. I am using VMware Workstation Player (on Windows), but my understanding is that other platforms like VirtualBox can also be used.
- Download the Slackware64-15.0 iso media. On VMware Player, create a virtual machine, specify the iso file as the installer, specify the disk size (minimum recommended is 16GB), and then boot the system.
- Follow the usual Slackware installation process. Create the partitions as needed. I normally create three partitions: /boot, /, and swap, in that order.
- Complete the Slackware installation with at least the following disk sets: A, AP, D, K, L and N.
- After logging in as root via the console, create a new user and grant sudo access.
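- A minimal sketch of that last step (the username "slackuser" is a placeholder; sudo itself comes with the AP disk set):
useradd -m -g users -G wheel -s /bin/bash slackuser
passwd slackuser
visudo   # uncomment the "%wheel ALL=(ALL:ALL) ALL" line so members of wheel can use sudo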
2. Recompile the kernel
- In order to run on EC2, the kernel must have the XEN options enabled in a few places.
- Copy the default huge-kernel’s configuration file from the boot directory into /usr/src/linux
cd /usr/src/linux
cp /boot/config .config
- Configure the kernel compile options with make menuconfig
- Select XEN in the following places (some of them may already be selected by default)
- Processor type and features -> Linux guest support -> Xen guest support (without this option, the system will refuse to boot on EC2)
- Device drivers -> Network device support -> Xen network device frontend driver (NEW) (without this option, the system will not have any network interfaces)
- Device drivers -> Block devices -> Xen virtual block device support (NEW) (the system still boots without this option, but we won’t be able to mount a separate EBS volume without it)
- Run make bzImage to recompile the kernel
- Copy the new kernel to /boot
cp arch/x86/boot/bzImage /boot/vmlinuz-xen
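- As an alternative to stepping through menuconfig, the same options can be enabled non-interactively with the kernel’s scripts/config helper (a sketch; double-check the CONFIG_ option names against your kernel version):
cd /usr/src/linux
./scripts/config --enable CONFIG_HYPERVISOR_GUEST --enable CONFIG_XEN \
    --enable CONFIG_XEN_NETDEV_FRONTEND --enable CONFIG_XEN_BLKDEV_FRONTEND
make olddefconfig   # accept defaults for any options this newly exposes
make bzImage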
3. LILO vs GRUB
- We can install the LILO bootloader on the virtual machine, port the system over to EC2, and everything will boot up and run fine. But we will run into problems later when we have to re-run LILO after compiling a new kernel: the LILO boot process appears to be incompatible with the XEN block devices. The only way to keep LILO is to disable the XEN block devices in the kernel. However, as mentioned earlier, we then won’t be able to attach other EBS volumes to the machine, which is pretty limiting. If LILO is absolutely necessary, this is the only option. If the XEN block devices are enabled, the traditional block device names like /dev/sda, /dev/sdb will get renamed to /dev/xvdb, /dev/xvdc, etc. We will be able to attach other EBS volumes, but LILO will not work.
- Alternatively, we can install GRUB. It works fine even with the XEN block devices, so it is the best option. GRUB is already included in the “A” disk set of Slackware, so switching to it is very straightforward.
- As mentioned above, block devices will get renamed from /dev/sda to /dev/xvdb, etc. when the system is ported to EC2 with XEN enabled. In VMware, however, they will still be called /dev/sda. To ensure that the system works in both places, we need to refer to the partitions in /etc/fstab by their UUIDs instead of their device names.
- Run the command blkid and note the UUIDs of all partitions.
- Edit /etc/fstab and replace the device names with their UUIDs. For example (a fuller sketch appears at the end of this section):
- UUID=41b4f058-1f1d-4944-af0a-ee33ecc499aa / ext4 defaults
- The following steps will install the grub boot loader in the virtual machine (if you are running this after porting to EC2, replace the /dev/sda with the appropriate xen block device name):
grub-install /dev/sda
grub-mkconfig -o /boot/grub/grub.cfg
- Reboot the system in VMware and verify everything still works.
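- For reference, the /etc/fstab entries mentioned above might end up looking something like this (the UUIDs are placeholders for the values blkid reports; adjust the filesystem types and partition layout to match your setup):
UUID=<uuid-of-swap-partition>   swap    swap   defaults   0 0
UUID=<uuid-of-root-partition>   /       ext4   defaults   1 1
UUID=<uuid-of-boot-partition>   /boot   ext4   defaults   1 2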
4. Check ssh connectivity
- Since it is not possible to access the physical console on EC2, we need to make sure we can login to the system via ssh.
- From another terminal, check if the virtual machine is reachable. On Windows, I use MobaXterm. Just open a local terminal and type ssh IPNUMBER -l username. It should produce a password prompt.
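- If the connection is refused, make sure sshd is running on the guest. On Slackware, sshd is started at boot when its rc script is executable (it normally is by default):
chmod +x /etc/rc.d/rc.sshd
/etc/rc.d/rc.sshd start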
5. Create key pairs and disable password login (optional)
- For better security, we should disable password authentication over ssh and use key pair login only.
- Create a private/public key pair (see the sketch at the end of this section). Then log in using the key to make sure it works:
ssh -i private_key IP_NUMBER -l username
- Once key pair login has been verified to work, we can disable the password prompts. Edit the /etc/ssh/sshd_config file and set PasswordAuthentication to no:
- PasswordAuthentication no
- For better security, we should disable root login via ssh. In the /etc/ssh/sshd_config file, set PermitRootLogin no. But I think this is the default setting anyway, so this step may not be necessary.
- Reboot again and make sure everything still works, and you are able to ssh into the machine.
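- To generate the key pair mentioned above, something like the following can be run on the local machine before password logins are disabled (a sketch; the key file name and type are arbitrary):
ssh-keygen -t ed25519 -f ~/.ssh/slackware_ec2
ssh-copy-id -i ~/.ssh/slackware_ec2.pub username@IP_NUMBER
- Also remember to restart sshd after editing /etc/ssh/sshd_config so the new settings take effect:
/etc/rc.d/rc.sshd restart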
6. Export the VMDK file
- By default, VMware splits the virtual hard drive into multiple vmdk files, but we need a single compressed VMDK file. Some of the paid VMware products apparently have an export function built in, but VMware Player does not. However, we can use the ovftool utility, which can be downloaded freely from VMware.
- Open a command prompt terminal on Windows and change to the VMware OVF Tool installation directory.
- Note the directory where the virtual machine’s files (in particular the .vmx file) are located; it is needed in the command below.
- Run the following command (with appropriate replacement for your paths):
C:\Program Files\VMware\VMware OVF Tool>ovftool.exe "C:\Users\asara\Downloads\Virtual Machines\Slackware64-current\Slackware64-current.vmx" "C:\Users\asara\Downloads\Virtual Machines\SlackwareSnapshotForAWS"
- The response will be something like the following:
Opening VMX source: C:\Users\asara\Downloads\Virtual Machines\Slackware64-current\Slackware64-current.vmx
Opening OVF target: C:\Users\asara\Downloads\Virtual Machines\SlackwareSnapshotForAWS
Writing OVF package: C:\Users\asara\Downloads\Virtual Machines\SlackwareSnapshotForAWS\Slackware64-current\Slackware64-current.ovf
Transfer Completed
Completed successfully
- The result will be an iso and a vmdk file. We only need the vmdk file.
7. Upload the vmdk file to S3
- The VMDK file has to be uploaded to S3, into its own bucket. Since the vmdk file is several GB in size, the upload will take some time. The upload can be done from the command line with the AWS CLI, or via the web interface.
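- For example, with the AWS CLI (slackvmdk is the bucket name used in the rest of this post; the vmdk file name should match what ovftool produced):
aws s3 mb s3://slackvmdk
aws s3 cp Slackware64-current-disk1.vmdk s3://slackvmdk/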
8. Create a vmimport role
- This is not a very intuitive process, and it requires two steps. Even when you are logged into AWS as the root user, you still need to grant yourself privileges to convert the vmdk file into a machine snapshot.
- The next few steps have to be executed from the AWS CLI. This can be done using a local installation of the AWS CLI, or the CloudShell utility on the AWS website.
- Create a file named trust-policy.json with the following content:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "vmie.amazonaws.com" },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringEquals": {
          "sts:Externalid": "vmimport"
        }
      }
    }
  ]
}
- Then run the following command:
aws iam create-role --role-name vmimport --assume-role-policy-document "file://trust-policy.json"
- Next, create a file named role-policy.json with the following content (where slackvmdk is the S3 bucket name):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetBucketLocation",
        "s3:GetObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::slackvmdk",
        "arn:aws:s3:::slackvmdk/*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "ec2:ModifySnapshotAttribute",
        "ec2:CopySnapshot",
        "ec2:RegisterImage",
        "ec2:Describe*"
      ],
      "Resource": "*"
    }
  ]
}
- Then run the following command:
aws iam put-role-policy --role-name vmimport --policy-name vmimport --policy-document "file://role-policy.json"
9. Convert the vmdk file to a snapshot file
- Using the AWS CLI, create a file called containers.json with the following content (with appropriate replacements for your file name, bucket name and description):
{
  "Description": "Slackware64-15.0",
  "Format": "vmdk",
  "UserBucket": {
    "S3Bucket": "slackvmdk",
    "S3Key": "Slackware64-current-disk1.vmdk"
  }
}
- Then run the following command:
aws ec2 import-snapshot --description "My Slackware Server" --disk-container "file://containers.json"
- If there are no syntax errors, this will produce JSON output like the following:
{
  "Description": "My Slackware Server",
  "ImportTaskId": "import-snap-0df31ad7db80caa02",
  "SnapshotTaskDetail": {
    "Description": "My Slackware Server",
    "DiskImageSize": 0.0,
    "Progress": "0",
    "Status": "active",
    "StatusMessage": "pending",
    "UserBucket": {
      "S3Bucket": "SlackwareSnapshots",
      "S3Key": "Slackware64-current-disk1.vmdk"
    }
  },
  "Tags": []
}
- The conversion process will run for a while. You can check the progress of the conversion with the following command (replace the task id with the one from the previous output):
aws ec2 describe-import-snapshot-tasks --import-task-ids "import-snap-0df31ad7db80caa02"
- If the conversion stops due to errors, chances are it has something to do with your vmimport role setting.
10. Create Amazon Machine Image (AMI) from the snapshot
- Once the import has produced a snapshot, select that snapshot in the EC2 console and choose “Create image from snapshot”.
- Go under Amazon Machine Images (AMI), and verify that the image has been created.
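- Alternatively, the image can be registered from the AWS CLI (a sketch; the snapshot id is a placeholder for the one created by the import task, and the device names should match the root device you intend to specify when launching):
aws ec2 register-image --name "Slackware64-15.0" --architecture x86_64 \
    --virtualization-type hvm --root-device-name /dev/sda2 \
    --block-device-mappings "DeviceName=/dev/sda2,Ebs={SnapshotId=snap-0123456789abcdef0}"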
11. Launch an instance
- Select the image and choose Launch an instance from AMI
- Specify the root device, for example /dev/sda2. Even though the devices get renamed to /dev/xvdb2 inside the instance, they are still referred to as /dev/sda2 in some places.
- Ensure Allow SSH traffic from is selected.
- Once the instance is running, we can view the console screen via Instance screenshot. The screenshot should show the boot process, and finally the login prompt.
- If everything looks ok, attach an elastic IP address to this instance.
- Next, try to ssh into the instance from your local computer.
- If you can login, congratulations! The rest of the Slackware configuration is the same as any other remote Slackware installation.
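- For example, using the key pair from step 5 and the elastic IP attached above:
ssh -i private_key ELASTIC_IP -l username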
12. Kernel updates
- If the kernel is upgraded during slackpkg upgrade-all, recompile the kernel as described previously with the XEN options enabled. The default kernel will not boot in EC2.
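- After copying the new kernel into /boot, regenerate the GRUB menu as in step 3 so the new image is picked up on the next boot:
grub-mkconfig -o /boot/grub/grub.cfg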
13. What to do if the instance refuses to boot
- Inevitably, there will be times when the boot process fails, or the machine refuses to connect. This can be caused by misconfigured boot parameters, kernel options, or ssh settings. It does not mean we need to terminate the instance and lose everything on it. We can mount the root drive on another working machine and perform the edits needed to recover from the errors. The steps are as follows:
- Stop (instead of terminating) the hung instance from the EC2 console.
- Find the AMI that was used to create the original instance. If you don’t have it, you will have to create one from the vmdk files.
- Launch a new temporary instance from this AMI.
- Attach an elastic IP to this instance (or reuse the ip from the hung machine by first disassociating it and then associating it to this new instance).
- Go under EBS (Elastic Block Store) Volumes, and find the root volume of the hung machine.
- Detach this volume from its host instance (the instance has to be in a stopped state for the detachment to work).
- Attach this volume to the newly created instance.
- SSH into the newly created machine, and switch to root.
- Run dmesg and find the name of the newly attached volume. For example, this might be /dev/xvdf.
- Mount the root partition of the attached volume under /mnt. You need to know how you originally assigned the root, boot and swap partitions.
mount /dev/xvdf2 /mnt
mount /dev/xvdf1 /mnt/boot
for i in /dev /dev/pts /proc /sys /run; do sudo mount -B $i /mnt$i; done
- chroot into the mounted drive
chroot /mnt
- Now we should be able to recompile the kernel, edit the ssh configuration, re-run GRUB, etc., to fix the problem.
- Exit the chroot and undo the mounts (see the commands at the end of this section), stop the temporary instance, detach the volume that we had attached under /mnt, and then terminate the temporary instance.
- Attach the repaired volume to the original instance.
- Reboot the original instance, and check if it works this time.
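- For reference, undoing the mounts on the temporary instance (the cleanup referenced above) looks something like this; the umounts reverse the bind mounts set up earlier:
exit   # leave the chroot
for i in /run /sys /proc /dev/pts /dev; do umount /mnt$i; done
umount /mnt/boot
umount /mnt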