How to run Alpine Linux on an Azure Virtual Machine (VM)
Alpine Linux is a simple, lightweight and secure Linux distribution, very well-suited for virtualized systems.
Although Azure has built-in support for major distributions such as Debian, Ubuntu, and so on, Alpine is not supported natively, so we have to jump through a few hoops to get it working there.
Prerequisites
- Windows 10 Pro or Windows 11 Pro, with Hyper-V enabled.
- The latest x86_64 “virtual” variant of the Alpine Linux ISO, downloaded from their downloads page.
- The AzCopy command-line utility.
NOTE: There may be a way to do this with QEMU from a Linux machine, but in this guide I'm only documenting the path that has worked for me.
High-level plan
We're going to go through the following steps:
1. Create a Virtual Machine (VM) on Hyper-V locally.
2. Install Alpine Linux on that local VM.
3. Upload the VM's VHD disk to Azure.
4. Start a new VM on Azure from that uploaded disk.
The end result is that the cloud VM will work just like the local VM -- as we have simply uploaded the disk with the pre-installed and pre-configured Alpine Linux to Azure -- but it will be running reliably 24/7 on the Azure infrastructure, and accessible at a publicly-facing Internet endpoint.
Step 1 -- Creating a local Hyper-V VM
Note that Hyper-V on Windows requires the “Pro” edition, instead of the typical “Home” edition. It also needs to be enabled through the “Turn Windows features on or off” utility, if you haven't done so already.
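If you prefer the command line, Hyper-V can also be enabled from an elevated PowerShell prompt; a minimal sketch (a reboot is required afterwards):

```
# Run from an elevated (Administrator) PowerShell prompt; requires a reboot.
Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All
```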
1.A. Create a fixed-size 1GB VHD
VHDX is the newer virtual disk format, which supports dynamically resizing the file on the host system as needed. It is the default for new Hyper-V VMs, but Azure only accepts fixed-size VHDs for upload, so we must avoid it by explicitly creating a fixed-size VHD disk.
On the “Actions” side panel, click “New”, then “Hard Disk...”. For disk format, choose “VHD”. For disk type, choose “fixed size”.
Place it where you can remember it and where Admin privileges aren't needed for reading or writing (e.g. a newly-created “VMs” folder in your home directory).
Choose 1 GB for the disk size, which is more than enough for Alpine. As for the disk size of the final VM, don't worry: we will be resizing this disk later, directly on Azure, after it has been uploaded.
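If you'd rather script this step than click through the wizard, the Hyper-V PowerShell module can create the same fixed-size VHD; a sketch, where the path is an assumption matching the “VMs” folder suggested above:

```
# Create a 1 GB fixed-size VHD (not VHDX) in a "VMs" folder in your home directory.
New-VHD -Path "$HOME\VMs\AlpineGuide.vhd" -SizeBytes 1GB -Fixed
```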
1.B. Create the local Hyper-V VM
On the “Actions” side panel, click on “New”, then “Virtual Machine...”. Choose a VM name and location where you don't need Admin privileges for writing and reading.
For generation choose Generation 2. This is important, as it must match the generation of the Azure cloud-based VM we will be creating later.
NOTE: The main distinction between Generation 1 and Generation 2 in Hyper-V is that Generation 2 VMs boot through UEFI instead of legacy BIOS.
For startup memory, 512 MB is enough for Alpine. Leaving “Dynamic Memory” enabled is fine.
For connection, choose the “Default Switch”.
For the Virtual Hard Disk, choose to “attach a virtual hard disk later”, since the “use an existing virtual hard disk” option doesn't let us choose a VHD (only a VHDX).
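The same VM creation can be scripted as well; a minimal PowerShell sketch matching the wizard choices above (the VM name “AlpineGuide” is an assumption):

```
# Generation 2 VM, 512 MB startup memory, Default Switch, no disk attached yet.
New-VM -Name "AlpineGuide" -Generation 2 -MemoryStartupBytes 512MB `
    -SwitchName "Default Switch" -NoVHD
```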
1.C. Configure the local Hyper-V VM
Before even starting up the VM, right-click it and open “Settings...”.
On the “Security” tab, we're going to disable “secure boot”, as Alpine doesn't support it.
On the “SCSI Controller” tab, we're going to add a DVD drive, put it in an unused location (e.g. Location 1), check the “image file” option, and choose the Alpine installation ISO image.
Back on the “SCSI Controller” tab, we're going to add a hard drive, put it in an unused location (e.g. Location 2), and type out the path to the VHD file (the file browser won't show it, because it filters only for VHDX).
To save some disk space on your host machine, you can go to the “Checkpoints” tab and disable checkpoints completely. For the “Automatic Start Action” you can choose “Nothing”, and for the “Automatic Stop Action”, you can choose “Turn off”.
After applying those settings and clicking “OK”, re-open the “Settings...” and go to “Firmware” to choose the boot order. It should be: first the Hard Drive, then the DVD drive, then the Network Adapter. Apply and click “OK”.
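For reference, all of the settings above can also be applied from PowerShell; a sketch, assuming the VM and disk names used earlier (the ISO path is a placeholder):

```
# Disable Secure Boot (Alpine doesn't support it).
Set-VMFirmware -VMName "AlpineGuide" -EnableSecureBoot Off

# Attach the installation ISO and the fixed-size VHD.
Add-VMDvdDrive -VMName "AlpineGuide" -Path "$HOME\Downloads\alpine-virt-x86_64.iso"
Add-VMHardDiskDrive -VMName "AlpineGuide" -Path "$HOME\VMs\AlpineGuide.vhd"

# Boot from the hard drive first, then the DVD drive.
$hdd = Get-VMHardDiskDrive -VMName "AlpineGuide"
$dvd = Get-VMDvdDrive -VMName "AlpineGuide"
Set-VMFirmware -VMName "AlpineGuide" -BootOrder $hdd, $dvd

# Disable checkpoints and the automatic start/stop actions.
Set-VM -Name "AlpineGuide" -CheckpointType Disabled `
    -AutomaticStartAction Nothing -AutomaticStopAction TurnOff
```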
We're now ready to boot up the VM and install Alpine on it!
Step 2 -- Installing Alpine on the local VM
As mentioned in the previous step, start the VM. It should boot into the GRUB bootloader from the ISO and continue until it shows the “localhost login:” prompt. Type in “root” and press Enter.
2.A. Running the Alpine installer
Alpine has its own installation guide on its Wiki.
At a high level, we're:
- running `setup-alpine`;
- setting up a root password;
- leaving DHCP on;
- choosing 1 (`dl-cdn.alpinelinux.org`) as our APK package manager mirror;
- setting up a non-root user with sudo privileges (note that `doas` is used in Alpine instead of `sudo`, though);
- choosing `sda` as our target installation disk;
- choosing `sys` as its role, for a full system install.
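For repeatable installs, `setup-alpine` can also run unattended from an answer file (`setup-alpine -f <file>`), and `setup-alpine -c <file>` generates a template you can edit. A rough sketch of the relevant entries; the variable names follow the generated template but can differ between Alpine releases, so treat this as illustrative:

```
# answerfile -- generate a fresh template with: setup-alpine -c answerfile
KEYMAPOPTS="us us"
HOSTNAMEOPTS="-n alpine"
INTERFACESOPTS="auto lo
iface lo inet loopback

auto eth0
iface eth0 inet dhcp
"
TIMEZONEOPTS="-z UTC"
APKREPOSOPTS="-1"           # pick the first mirror (dl-cdn.alpinelinux.org)
SSHDOPTS="-c openssh"
DISKOPTS="-m sys /dev/sda"  # full "sys" install on sda
```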
If everything went fine with the installation, the following messages should be shown on the screen: “Installation finished. No error reported” and then, once the GRUB bootloader is installed, “Installation is complete. Please Reboot”.
We can now unmount the installation ISO by going to the “Media” menu, then “DVD Drive”, and clicking “Eject ...”. Reboot the VM by typing the `reboot` command and log into the newly-configured system.
Check that you can login both as the root user and as your non-root user by using the passwords you've set up.
2.B. Enabling the serial console
An absolutely critical step is to configure the VM to enable the serial console -- in addition to the traditional VGA console -- so that we can use it in Azure as a fallback login shell in case we lose access through SSH.
The steps are described here on the Alpine Wiki: Enable Serial Console.
The steps that matter are:
1. Adding the following to the end of the `/etc/default/grub` file:

```
GRUB_TERMINAL="serial console"
GRUB_SERIAL_COMMAND="serial --unit=0 --word=8 --parity=no --speed=115200 --stop=1"
```
2. Running `grub-mkconfig -o /boot/grub/grub.cfg`.
3. Un-commenting the following entry in the `/etc/inittab` file:

```
# Put a getty on the serial port
ttyS0::respawn:/sbin/getty -L ttyS0 115200 vt100
```
This enables a serial terminal on the `ttyS0` device, at a 115200 baud rate, using `vt100` escape codes.
Performing these steps ensures that the cloud-based VM we will spin up will also have the serial console enabled, so we can log in through it, should we ever need to.
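After the next reboot, you can sanity-check from inside the VM that a getty is actually listening on the serial port; a quick check:

```
# The bracket trick stops grep from matching its own process entry.
ps | grep '[t]tyS0'
```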
2.C. Configuring SSH remote access (Optional)
NOTE: I've marked this step as optional because it's really clunky to do it through Hyper-V, and this can be done afterwards through the Serial Console when the VM is running on Azure.
The steps here are pretty standard and can be found elsewhere. Basically, it boils down to adding the host's SSH public key to the local VM's `~/.ssh/authorized_keys` file.
Use this guide by Digital Ocean or this one by Red Hat.
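If you do want to set it up now, it boils down to something like the following, run as your non-root user inside the VM (the key shown is a hypothetical placeholder):

```
# Create the .ssh directory and append your host's public key (placeholder shown).
mkdir -p ~/.ssh
echo "ssh-ed25519 AAAA...yourpublickey... you@yourhost" >> ~/.ssh/authorized_keys
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys
```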
Step 3 -- Uploading the VHD disk image into Azure
Now make sure to power off the VM properly, through a `doas poweroff` command. We want the disk image to reflect a cleanly shut-down VM, so it can start up without issues once it's running on Azure.
3.A. Creating the Azure managed disk
You can create a new Resource Group to bundle together all of these resources that will be created soon. For me, it will be named “AlpineGuideRG”.
Open the Azure Cloud Shell and use the following command:
```
az disk create --name AlpineGuideDisk \
  --resource-group AlpineGuideRG \
  --hyper-v-generation V2 \
  --os-type Linux \
  --upload-type Upload \
  --upload-size-bytes 1073742336
```
The `--upload-size-bytes` option should be the exact size in bytes of the VHD file. You can either use “Properties” in Windows Explorer (use “Size”, not “Size on disk”), or run the `wc -c AlpineGuide.vhd` command, if you have GNU coreutils installed.
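On Windows, PowerShell can also print the exact byte count, as an alternative to the Explorer “Properties” dialog:

```
# Prints the exact size of the VHD file in bytes.
(Get-Item .\AlpineGuide.vhd).Length
```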
Make sure that the value of `--hyper-v-generation` matches the generation of the local VM we configured previously. If you followed the steps described above, it should be `V2`, but the `azure-cli` defaults it to `V1`, so it's important to pass it explicitly.
3.B. Uploading the virtual disk
Now that the cloud disk has been instantiated, allow writing to it remotely through a SAS token by running:
```
az disk grant-access --name AlpineGuideDisk \
  --resource-group AlpineGuideRG \
  --access-level Write \
  --duration-in-seconds 3600
```
Keep a note of the value of the `accessSas` entry. Keep in mind that we've set it to expire in one hour, but we could make it last longer if needed.
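If you're scripting this, the az CLI's `--query` flag can extract just the SAS URL into a shell variable; a sketch of the same `grant-access` call:

```
# Capture only the accessSas value from the JSON response.
SAS=$(az disk grant-access --name AlpineGuideDisk \
  --resource-group AlpineGuideRG \
  --access-level Write \
  --duration-in-seconds 3600 \
  --query accessSas -o tsv)
echo "$SAS"
```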
Now go back to your host machine where the VHD file resides and make sure that AzCopy is installed. Copy the local VHD disk image to the cloud by running:
```
azcopy copy .\AlpineGuide.vhd "<accessSas>"
```
Replace `<accessSas>` with the actual value you got from the `grant-access` step.
NOTE: Do keep the `"` quotes around the SAS to prevent the shell from misinterpreting the URL.
It should succeed with the following message:
```
Total Number of Bytes Transferred: 1073742336
Final Job Status: Completed
```
Finally, on the Azure CLI, revoke the access to finalize the disk (and make it available to be attached to an Azure VM in the next step):
```
az disk revoke-access --name AlpineGuideDisk \
  --resource-group AlpineGuideRG
```
Step 4 -- Starting a new VM on Azure from the existing disk image
On the Azure Portal, go to the managed disk resource.
On the “Size + performance” tab, you can now resize the disk to, for example, 30 GB, to match the default disk size of the lowest-end VM Azure offers.
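The resize can also be done from the Azure CLI instead of the Portal; a sketch:

```
# Grow the managed disk to 30 GB (the disk must not be attached to a running VM).
az disk update --name AlpineGuideDisk \
  --resource-group AlpineGuideRG \
  --size-gb 30
```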
Back in the “Overview” tab, you can click on “Create VM” and go through the steps to create a VM running on top of this disk image.
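If you prefer the CLI here too, a VM can be created directly on top of the existing managed disk; a minimal sketch, where the VM name and size are assumptions:

```
# Create a small VM that boots from the uploaded managed disk.
az vm create --name AlpineGuideVM \
  --resource-group AlpineGuideRG \
  --attach-os-disk AlpineGuideDisk \
  --os-type linux \
  --size Standard_B1s
```

Note that Azure's serial console requires boot diagnostics to be enabled on the VM; `az vm boot-diagnostics enable` can take care of that after creation.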
You should be able to access the login shell directly through the serial console by going to the VM Azure resource and scrolling all the way down on the left pane to select “Serial console”.
You can now follow your favorite “setting up remote access through SSH” guide to allow this remote machine to be accessed over SSH on port 22.
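On Alpine, this usually amounts to making sure `sshd` is running and enabled at boot; a sketch, assuming you selected `openssh` during `setup-alpine`:

```
# Check sshd and make sure it starts on boot (Alpine uses OpenRC).
rc-service sshd status
rc-update add sshd default
```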
NOTE 1: If you do create your `~/.ssh/authorized_keys` manually, make sure it has the permissions `-rw-------` (only the owner can read and write it), which can be enforced by running `chmod 0600 ~/.ssh/authorized_keys`.
NOTE 2: We have NOT installed the recommended Azure Linux Agent `waagent`, which is a personal preference. Having the agent installed would allow better integration with the Azure Portal for various status and management operations, and it comes pre-installed in the officially-supported, Azure-provided Linux images. Not having the Linux Agent installed is actually a plus for me, as it 1) runs in a privileged state and 2) takes in remote commands... so it's essentially a potential backdoor with complete access to the system. Anyway, the serial console we've enabled should have your back for any emergency system administration needs.