Dev Environments with LXC and Cloud-init
Recently, I have started using LXC containers, managed by LXD, as development environments. In this post I’ll explore the rationale for this type of setup and provide an example that can be copied and tweaked as needed. The following gist can also be referenced: https://gist.github.com/eternal-turtles/f3d3b2eccc466012a561ad67ddd95d96.
Rationale
In case you’re not familiar: LXC containers are system containers. Unlike OCI app containers (the kind you run with Docker or Podman), they are persistent by default and are not tied to a single process entrypoint. This means you can treat them much like virtual machines, except that they are far more lightweight because they share the host kernel rather than emulating hardware.
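As a rough illustration (the container name and image here are arbitrary examples), a system container boots a full init system and sticks around until you delete it:
# Launch a throwaway Fedora system container
lxc launch images:fedora/40 scratch
# systemd is PID 1 inside, not a single application process
lxc exec scratch -- systemctl status --no-pager
# The container and its state survive restarts
lxc restart scratch
lxc exec scratch -- bash
# Clean up
lxc delete --force scratch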
I have a sort of omnibus development container with a number of programming language runtimes installed, some through asdf and others through the system package manager. It is of course also possible to run a single service per container; the only differences would be the cloud-init configuration and perhaps the container networking.
Reasons why this setup might make sense for you:
- Your workstation runs Linux
- You would like to avoid cluttering your workstation with dependencies
- You would prefer to avoid the overhead associated with hardware emulation
- You would like to maintain multiple development environments, each with their own set of dependencies and database instances, for example one for work and one for side-projects
- You would like the ability to migrate your development environment(s) when upgrading to a new workstation
- You would prefer to use a traditional process supervisor and to keep persistent state on the container’s filesystem rather than in data volumes, as you would with app containers
Reasons why you might pass on this sort of setup:
- Your workstation runs an OS other than Linux
- You prefer to use virtual machines for their greater level of isolation
- You are satisfied with using app containers for development and don’t have a need to segregate your containers and images for distinct purposes
LXD installation
Install snap, enable the snapd service, then install and initialize LXD.
On Fedora, this looks like:
sudo dnf install -y snapd
sudo systemctl enable --now snapd.socket
sudo snap install lxd
sudo usermod -aG lxd jnewton
newgrp lxd
lxd init --minimal
Log out and log back in (or run newgrp lxd as above) so that your session is associated with the new ‘lxd’ group.
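To confirm the installation, the following should work without sudo once your group membership is active (output will vary):
lxc version
lxc list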
Container up
You’ll need to modify the following script slightly: replace jnewton with your actual username, and substitute enp87s0 with the name of your actual physical network interface.
Explanation of the following script:
- Create a container named dev, based on the Fedora 40 cloud image, but do not start it yet.
- Set the cloud-init user-data for the container: pass a valid YAML file (see the cloud-init documentation). On first boot the container will create a user, install the specified packages, run shell commands to install asdf and related tooling, and create a PostgreSQL user that matches the OS user, named user.
- Set security.privileged to true so you can write to the directory shared between the host and the container from within the container (optional).
- Set security.nesting to true in case you might run Docker, Podman, or LXC within the container (optional).
- Set CPU and memory limits (optional).
- To enable sharing of a directory between the host and the container, the UID and GID of the user must be the same on both sides. In the example below, both the host and container user have a UID and GID of 1000.
- Attach a physical network interface to the container; this is the simplest type of networking. The network interface on your system is likely something other than enp87s0: run lxc network ls and find a “physical” type network that you are currently using, then run ip addr to check that the interface is UP.
- Mount the directory by creating a disk-type device: this directory should contain all code to be mounted in the container, and it is writable from both the host and the container.
- Start the container.
- Open a shell into the running container.
lxc init images:fedora/40/cloud dev
lxc config set dev cloud-init.user-data - < cloud-init.yml
lxc config set dev security.nesting true
lxc config set dev security.privileged true
lxc config set dev limits.cpu 4
lxc config set dev limits.memory 16GB
lxc config set dev raw.idmap "both 1000 1000"
lxc network attach enp87s0 dev eth0
lxc config device add dev dev-code disk source=/home/jnewton/code/dev path=/home/user/code
lxc start dev
lxc exec dev -- /bin/bash
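Once the container has started, you can watch the first-boot provisioning from the host; the commands below assume the container is named dev, as above:
# Show container state and its IP address (assigned by your LAN's DHCP, since a physical NIC is attached)
lxc list dev
# Block until cloud-init finishes its first-boot run
lxc exec dev -- cloud-init status --wait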
cloud-init.yml
#cloud-config
package_upgrade: true
packages:
  - bash-completion
  - tree
  - openssh
  - postgresql-server
  - postgresql-contrib
  - redis
  - curl
  - git
  - awscli2
  - openssl
  - openssl-devel
  - libyaml-devel
  - ncurses
  - ncurses-devel
  - ncurses-compat-libs
  - perl
  - "@development-tools"
  - gmp-devel
  - autoconf
  - sbcl
  - ecl
  - podman
timezone: America/Los_Angeles
write_files:
  - path: /run/scripts/setup-asdf.sh
    content: |
      #!/bin/bash -e
      git clone https://github.com/asdf-vm/asdf.git ~/.asdf --branch v0.14.1
      echo '. "$HOME/.asdf/asdf.sh"' >> ~/.bashrc
      echo '. "$HOME/.asdf/completions/asdf.bash"' >> ~/.bashrc
      source ~/.bashrc
      asdf update
      asdf plugin add ruby
      asdf plugin add nodejs
      asdf plugin add erlang
      asdf plugin add elixir
      asdf install ruby latest
      asdf install nodejs latest
      asdf install erlang latest
      asdf install elixir latest
      asdf global ruby latest
      asdf global nodejs latest
      asdf global erlang latest
      asdf global elixir latest
      npm install -g npm
      npm install -g yarn
    owner: root:root
    permissions: '0744'
  - path: /run/scripts/setup-qlot.sh
    content: |
      #!/bin/bash -e
      curl --proto '=https' --tlsv1.2 -LsSf https://qlot.tech/installer > /tmp/install-qlot
      cat /tmp/install-qlot
      chmod +x /tmp/install-qlot
      /tmp/install-qlot
      echo -e '\nexport PATH="/home/user/.qlot/bin:$PATH"' >> /home/user/.bashrc
  - path: /run/scripts/setup-haskell.sh
    content: |
      #!/bin/bash -e
      curl --proto '=https' --tlsv1.2 -sSf https://get-ghcup.haskell.org > /tmp/get-ghcup
      cat /tmp/get-ghcup
      chmod +x /tmp/get-ghcup
      # Run the installer non-interactively so the first boot does not hang on prompts
      BOOTSTRAP_HASKELL_NONINTERACTIVE=1 /tmp/get-ghcup
    owner: root:root
    permissions: '0744'
  - path: /run/scripts/setup-postgresql.sh
    content: |
      #!/bin/bash -e
      postgresql-setup --initdb --unit postgresql
      systemctl enable postgresql
      systemctl start postgresql
      sudo -u postgres createuser user
      # On Fedora, pg_hba.conf lives in the data directory rather than /etc/postgresql
      echo "local all user trust" >> /var/lib/pgsql/data/pg_hba.conf
      systemctl reload postgresql
    owner: root:root
    permissions: '0744'
  - path: /run/scripts/setup-redis.sh
    content: |
      #!/bin/bash -e
      systemctl enable redis
      systemctl start redis
    owner: root:root
    permissions: '0744'
runcmd:
  - chown -R user:user /home/user
  - chmod +x /run/scripts/setup-asdf.sh
  - chmod +x /run/scripts/setup-qlot.sh
  - chmod +x /run/scripts/setup-haskell.sh
  - chown user:user /run/scripts/setup-asdf.sh
  - chown user:user /run/scripts/setup-qlot.sh
  - chown user:user /run/scripts/setup-haskell.sh
  - cd /home/user && sudo -H -u user /run/scripts/setup-asdf.sh
  - cd /home/user && sudo -H -u user /run/scripts/setup-qlot.sh
  - cd /home/user && sudo -H -u user /run/scripts/setup-haskell.sh
  - /run/scripts/setup-postgresql.sh
  - /run/scripts/setup-redis.sh
users:
  - name: user
    gecos: User
    primary_group: user
    sudo: ['ALL=(ALL) NOPASSWD:ALL']
    groups: sudo
    shell: /bin/bash
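Once cloud-init has finished, a quick smoke test from the host can confirm that the tooling and services defined above came up; these commands assume the container and user names used throughout this post:
# asdf-managed runtimes for the 'user' account
lxc exec dev -- sudo -iu user asdf current
# PostgreSQL and Redis should be active
lxc exec dev -- systemctl is-active postgresql redis
# The 'user' role should appear in the role list
lxc exec dev -- sudo -u postgres psql -c '\du'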
Debugging
- Validate the cloud-init config: cloud-init schema --system --annotate
- Tail the cloud-init output log: tail -f /var/log/cloud-init-output.log
- Check cloud-init status: cloud-init status or cloud-init status --wait
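These commands run inside the container; from the host you can wrap them with lxc exec, for example:
lxc exec dev -- cloud-init status --wait
lxc exec dev -- tail -f /var/log/cloud-init-output.log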
Bridged networking
Remove the eth0 interface and re-add it, this time attached to lxdbr0.
lxc stop dev
lxc config device remove dev eth0
lxc config device add dev eth0 nic nictype=bridged parent=lxdbr0 name=eth0
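Then start the container again; it should pick up an address on the lxdbr0 subnet, NAT-ed through the host. A quick check, assuming the container is still named dev:
lxc start dev
lxc list dev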
Container down
Stop the container: lxc stop dev
Delete the container: lxc delete dev
Other considerations
- LXD allows you to create storage pools using various backends; the example above implicitly uses the ‘default’ disk storage pool. You can also create a profile if you would like to share configuration between multiple containers; see the sketch after this list.
- If you would prefer to avoid installing LXD through snap, or if you would like the ability to manage both system and app containers, you might try Incus, a community-driven fork of LXD
- Check out Bravetools if you prefer a YAML-based, declarative configuration
- You may consider using Ansible from within the container, or another configuration management tool that can be run in a client-only mode, in order to manage dependencies and services in a more organized, repeatable fashion
- You may consider installing dependencies through Guix and/or Nix as an alternative to the system package manager, or even running Guix System or NixOS containers
- Run lxc image list images: | grep cloud to list all cloud-init compatible images hosted by the images remote; you can also browse these images here: https://images.lxd.canonical.com/
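As an example of the profile approach mentioned above, the per-container settings from earlier could be captured once and applied to any number of containers; a rough sketch, with an arbitrary profile name:
lxc profile create dev-profile
lxc profile set dev-profile security.nesting true
lxc profile set dev-profile security.privileged true
lxc profile set dev-profile limits.cpu 4
lxc profile set dev-profile limits.memory 16GB
# Apply it alongside the default profile when creating a container
lxc init images:fedora/40/cloud dev2 --profile default --profile dev-profile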