# Package manager

## Why custom package manager

Why use custom package management instead of the native APK package manager?

The native packaging toolchain abuild is designed for automated bulk package building. It doesn't support building packages from pre-existing directories without considerable customization and requires that a full build take place as part of the packaging process, including steps like binary stripping, symlink resolution and dependency tracing. It also needs to be run as a non-root user inside fakeroot, which is problematic when LXC containers are to be packaged. Most of these limitations can be worked around (run as root using -F, spoof the build process by bind-mounting the existing directory into the packaging directory, skip dependency tracing using options="!tracedeps" in the APKBUILD, and omit the majority of the build process by running only the build package prepare_metafiles create_apks index clean abuild actions), however there is no real benefit in (ab)using the native tools this way.

Furthermore, when the apk package manager installs a package, it first unpacks it, then runs the post-install script, and only once all packages are installed does it set the permissions and ownership of the files to the original values recorded in the package. This means it's not possible to run container setup as part of the post-install script, as most applications require the permissions to already be correct at that point. Every single file, including its ownership, permissions and hash, is recorded in /lib/apk/db/installed, which only needlessly bloats the database of locally installed packages (e.g. the basic python 3 layer contains ~6500 files).

With the custom package manager, the whole download, unpacking and installation process can be observed directly, keeping the VMMgr web GUI user informed about the currently ongoing step, as opposed to the mere download percentage offered by bare apk. Finally, APK packages are only gzipped, whereas the custom solution uses xz (LZMA2), allowing for up to 70% smaller packages.

## How does it work

The package manager is integrated into the VMMgr application and can be invoked only via the VMMgr web GUI. The entry point is the /setup-apps URL, and the repository settings (URL, username and password) can be configured on the same page. The URL should point to the directory where all content previously created by the repository maintainer using lxc-pack commands is uploaded (i.e. packages, packages.sig and all *.tar.xz files). Once the user opens this page, VMMgr tries to contact the repository using the configured values and attempts to download the packages file with the metadata of all packages. If it fails, it checks the reason for the failure (either a connection exception or an HTTP status code) and displays an appropriate error message to the user.
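
A minimal sketch of this check, assuming the requests library and hypothetical helper names (not the actual VMMgr code), could look like this:

```python
import requests

def fetch_packages_file(repo_url, username, password):
    """Download the packages metadata file and translate failures
    into user-facing error strings (hypothetical helper)."""
    try:
        resp = requests.get('{}/packages'.format(repo_url.rstrip('/')),
                            auth=(username, password), timeout=10)
    except requests.exceptions.RequestException:
        return None, 'Repository is unreachable'
    if resp.status_code == 401:
        return None, 'Invalid repository username or password'
    if resp.status_code != 200:
        return None, 'Unexpected repository response {}'.format(resp.status_code)
    return resp.content, None
```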

If the packages file is successfully downloaded, the package manager immediately downloads packages.sig as well, which contains the ECDSA-signed SHA512 hash of the packages file, and verifies the signature using the public key preloaded in /etc/vmmgr/packages.pub. If the signature matches, it parses the packages file contents in JSON format and displays the list of installable packages.
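
A sketch of the verification step, assuming the cryptography package and a standard ECDSA signature over the SHA-512 digest of the packages file; the exact signature format and helper names are assumptions:

```python
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.serialization import load_pem_public_key

def verify_and_parse(packages_data, signature):
    # Public key preloaded on the VM, as described above
    with open('/etc/vmmgr/packages.pub', 'rb') as f:
        public_key = load_pem_public_key(f.read())
    try:
        # ECDSA signature over the SHA-512 hash of the packages file
        public_key.verify(signature, packages_data, ec.ECDSA(hashes.SHA512()))
    except InvalidSignature:
        return None
    return json.loads(packages_data)
```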

The information about installed packages, including their metadata, is stored in the local metadata file /etc/vmmgr/config.json along with the VMMgr settings for the local virtual machine. The local metadata file is also in JSON format and the metadata is simply copied into it from the remote repository during installation.

All package manager actions (install / upgrade / uninstall) as well as stop / start actions are handled by the VMMgr queue manager. The queue manager processes the actions sequentially in the order in which they were enqueued (FIFO), so multiple package installations can never run simultaneously or interfere with stop / start actions. In the event of an unexpected failure or VM shutdown, it is safe to repeat the failed or unfinished actions, as the install / upgrade / uninstall methods are designed to ensure sanity of the environment.
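
Conceptually, the queue manager is a single worker consuming a FIFO queue; the class below is an illustrative sketch, not the actual VMMgr implementation:

```python
import queue
import threading

class ActionQueue:
    """Processes install / upgrade / uninstall and stop / start actions one by one."""
    def __init__(self):
        self._queue = queue.Queue()   # FIFO ordering by default
        self._worker = threading.Thread(target=self._run, daemon=True)
        self._worker.start()

    def enqueue(self, action, package):
        # Called from the web GUI; returns immediately
        self._queue.put((action, package))

    def _run(self):
        while True:
            action, package = self._queue.get()
            try:
                action(package)   # e.g. install(), upgrade(), uninstall(), stop(), start()
            finally:
                self._queue.task_done()
```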

The whole idea is generally the same as with any other packaging system - e.g. rpm or dpkg on Linux, Homebrew on Mac or Chocolatey on Windows - except this packaging system is highly customised for use with LXC containers and the VMMgr web GUI.

## Anatomy of a package

The files in the package are structured as follows:

```
*.tar.xz
 ├─ srv/
 │  └─ <package>/
 │     ├─ install/
 │     ├─ install.sh
 │     ├─ uninstall/
 │     ├─ uninstall.sh
 │     ├─ upgrade/
 │     └─ upgrade.sh
 └─ var/
    └─ lib/
       └─ lxc/
          └─ <lxcpath>/
```

This structure is extracted directly to the root directory of the virtual machine, as it would be with any other package manager. Every package may contain the subdirectories install, upgrade and uninstall and the files install.sh, upgrade.sh and uninstall.sh, which are invoked during the respective actions. Their presence, as well as the contents under /var/lib/lxc, depends on the type of the package. If the package contains only a shared LXC OverlayFS layer, it doesn't contain a config file with the LXC container definition and it likely won't contain any of the install / upgrade / uninstall scripts and directories, as they are not needed in this context.

## Installing a package

First, the installation method builds and flattens a dependency tree using the metadata from the repository and compares it with the list of currently installed packages taken from the local metadata file, resulting in a list of packages to be downloaded and installed, ordered by dependency requirements (i.e. packages with already satisfied dependencies are installed first, satisfying the dependencies of the subsequent ones).
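
The dependency flattening can be pictured as a depth-first walk over the repository metadata that skips packages which are already installed; a rough sketch with assumed data structures (a dict of package metadata with a deps list):

```python
def resolve_deps(name, repo_meta, installed, resolved=None):
    """Return packages to install, ordered so dependencies come first."""
    if resolved is None:
        resolved = []
    if name in installed or name in resolved:
        return resolved
    # Resolve dependencies first so they are installed before dependent packages
    for dep in repo_meta[name].get('deps', []):
        resolve_deps(dep, repo_meta, installed, resolved)
    resolved.append(name)
    return resolved
```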

All packages in this list are then downloaded as *.tar.xz archives from the repository and stored in the temporary directory /var/cache/vmmgr as *.tar.xz.partial. Once a package is downloaded, its SHA512 hash is calculated and verified against the value in the cryptographically signed packages metadata file. If the hashes don't match, the whole installation process is interrupted and an error message informing about the mismatch is displayed to the user. If the hashes match, the *.tar.xz.partial file is renamed to *.tar.xz. Therefore, in the event of an unexpected VM shutdown or connection interruption, all *.tar.xz archives in /var/cache/vmmgr can be considered verified and don't need to be downloaded again when the user decides to retry the installation.
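
A sketch of the download-and-verify step, assuming the requests library; paths follow the description above, helper and parameter names are illustrative:

```python
import hashlib
import os
import requests

def download_package(url, filename, sha512, auth):
    final = os.path.join('/var/cache/vmmgr', filename)
    partial = final + '.partial'
    if os.path.exists(final):
        return final   # already downloaded and verified in a previous attempt
    with requests.get(url, auth=auth, stream=True, timeout=10) as r:
        r.raise_for_status()
        with open(partial, 'wb') as f:
            for chunk in r.iter_content(chunk_size=65536):
                f.write(chunk)
    # Verify against the hash from the signed packages metadata
    digest = hashlib.sha512()
    with open(partial, 'rb') as f:
        for chunk in iter(lambda: f.read(65536), b''):
            digest.update(chunk)
    if digest.hexdigest() != sha512:
        raise ValueError('Checksum mismatch for {}'.format(filename))
    os.rename(partial, final)   # mark as verified
    return final
```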

Once all the packages are downloaded and their checksums are verified, the installation method unpacks them. Prior to unpacking, the method ensures filesystem sanity by purging the directories and files (if they exist) which are to be used by the packages being installed. This includes /var/lib/lxc/<lxcpath>, /srv/<package> and /var/log/lxc/<package>.log. Each *.tar.xz archive is deleted right after decompression.
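
The purge-and-unpack step could be sketched with the standard tarfile and shutil modules; the helper name and arguments are assumptions:

```python
import os
import shutil
import tarfile

def unpack_package(archive, name, lxcpath):
    # Purge leftovers from a previous failed or removed installation
    for path in ('/var/lib/lxc/{}'.format(lxcpath), '/srv/{}'.format(name)):
        shutil.rmtree(path, ignore_errors=True)
    try:
        os.unlink('/var/log/lxc/{}.log'.format(name))
    except FileNotFoundError:
        pass
    # Extract directly into the root of the virtual machine
    with tarfile.open(archive, 'r:xz') as tar:
        tar.extractall('/')
    os.unlink(archive)   # the archive is no longer needed
```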

After all the package archives are unpacked, the uninstall.sh script is run (if it is present) to ensure sanity of other components. This script attempts to remove objects and interfaces installed within components which are not part of the package currently being installed (databases and database users, Solr cores, MQ definitions...). This requires that the uninstall.sh script is written in a defensive manner (e.g. DROP DATABASE IF EXISTS...) and must not exit with a non-zero code even if no objects and interfaces for this package exist yet.

Next, the install.sh script is run, which sets up all the objects and interfaces that need to be created in other components (databases, database users) and performs all the post-installation steps for the package currently being installed, such as the creation of the persistent configuration and data directory under /srv/<package> of the VM. In the case of user-installable application packages, the very last command in the install.sh script is the vmmgr register-app VMMgr hook, which creates a definition for the VMMgr web GUI, including administrator credentials and the subdomain on which the application will be accessible.

Finally, the package itself with its metadata, stripped of the size and sha512 keys automatically added by lxc-pack during packaging, is added to the local repository metadata in /etc/vmmgr/config.json. After this, the package is considered fully installed and can be used by users or other applications.
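
Registering the package then amounts to copying its repository metadata, minus the packaging-only keys, into the local metadata file; a sketch assuming a packages section in /etc/vmmgr/config.json (the actual layout may differ):

```python
import json

def register_package(name, repo_meta):
    meta = dict(repo_meta[name])
    # size and sha512 are only relevant for download verification
    meta.pop('size', None)
    meta.pop('sha512', None)
    with open('/etc/vmmgr/config.json') as f:
        config = json.load(f)
    config.setdefault('packages', {})[name] = meta
    with open('/etc/vmmgr/config.json', 'w') as f:
        json.dump(config, f, sort_keys=True, indent=4)
```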

## Upgrading a package

The upgrade process is not yet implemented in the package manager. The idea is that VMMgr simply compares the version and release from the repository metadata with the local metadata and offers an upgrade if they don't match. The dependency list build, download and verification parts will be the same as during installation. The upgrade process will purge only the LXC data and LXC log, but will leave the configuration and data under /srv/<package> unchanged. It will then overwrite the install / upgrade / uninstall scripts and directories and run the upgrade.sh script. Finally, it will re-register the package metadata in the local repository metadata file /etc/vmmgr/config.json.
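
A sketch of the planned comparison, assuming version and release keys in both metadata sets:

```python
def upgradable(local_meta, repo_meta):
    """Return installed packages whose version or release differs in the repository."""
    result = []
    for name, local in local_meta.items():
        remote = repo_meta.get(name)
        if not remote:
            continue
        if (local['version'], local['release']) != (remote['version'], remote['release']):
            result.append(name)
    return result
```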

## Uninstalling a package

The uninstallation process first compiles a dependency list in a similar fashion to the first step of installation, except this time it checks which packages are recorded as dependencies and will become unused (and therefore unnecessary) once the current package is uninstalled.
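
Finding the packages that become unused can be sketched as a fixpoint over the local metadata; helper and key names are assumptions:

```python
def unused_after_removal(name, installed_meta):
    """Return the package plus those dependencies nothing else will still need."""
    removable = {name}
    changed = True
    while changed:
        changed = False
        for pkg in list(removable):
            for dep in installed_meta.get(pkg, {}).get('deps', []):
                if dep in removable or dep not in installed_meta:
                    continue
                # Keep the dependency if any package outside the removable set needs it
                still_needed = any(dep in meta.get('deps', [])
                                   for other, meta in installed_meta.items()
                                   if other not in removable)
                if not still_needed:
                    removable.add(dep)
                    changed = True
    return removable
```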

For every package in this list, the uninstall.sh script is run, removing objects and interfaces installed within components which are not part of the package being uninstalled (databases and database users, Solr cores, MQ definitions...).

After uninstall.sh finishes, all files related to the package being uninstalled are deleted. This includes /var/lib/lxc/<lxcpath>, /srv/<package> and /var/log/lxc/<package>.log.

As the final step, the package metadata is unregistered (removed) from the local repository metadata file /etc/vmmgr/config.json.