VMMgr internals
===============

Configuration
-------------

While VMMgr mainly utilizes the metadata and configuration of SPOC and other tools, it has its own configuration file with metadata which is either supplied to other tools or is relevant only in the context of VMMgr itself, i.e. its web interface and the nginx reverse proxy. The JSON configuration file is located in ``/etc/vmmgr/config.json`` and its structure looks as follows:

.. code-block:: json

   {
       "apps": {
           "sahana": {
               "host": "sahana",
               "login": "admin@example.com",
               "password": "KlLwlo3DxW3sK7gW",
               "visible": true
           }
       },
       "common": {
           "email": "admin@example.com",
           "gmaps-api-key": ""
       },
       "host": {
           "adminpwd": "$2b$12$1QAv6NEuHCGWbP8IqjhZ/ehxMbW1jwcUBptYgzg1CVmro9FBrQfPO",
           "domain": "spotter.vm",
           "port": "8443"
       }
   }

host
^^^^

The ``domain`` and ``port`` fields of the ``host`` part hold easily parsable information about the main HTTP host on which VMMgr is accessible. This configuration is supplied to other services and tools, such as nginx and the `Issue / MotD`_ banners. The HTTP host component is also used in conjunction with the application subdomain to form the FQDNs of the individual applications accepted by the nginx reverse proxy.

The ``adminpwd`` field is a *bcrypt* hash of the VMMgr administrator password, which is designed to be the same as the LUKS disk encryption password (see `Virtual Machine internals <virtual-machine-internals>`_). There is, however, no direct link between the two; VMMgr always attempts to change both passwords at once and uses the return code of ``cryptsetup luksChangeKey`` to determine whether the user supplied the correct old password.

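A minimal sketch of this dual password change, assuming a hypothetical LUKS device path, a bcrypt hashing backend and passphrases passed to ``cryptsetup`` on stdin; the actual VMMgr implementation may differ:

.. code-block:: python

   import json
   import subprocess

   import bcrypt  # assumption: the hashing backend actually used by VMMgr may differ


   def change_admin_password(old_pwd, new_pwd, device='/dev/sda1'):
       """Change the LUKS passphrase and, only on success, the VMMgr admin password."""
       # cryptsetup returns non-zero when the old passphrase is wrong;
       # the way the passphrases are supplied here is illustrative only
       result = subprocess.run(['cryptsetup', 'luksChangeKey', device],
                               input='{}\n{}\n'.format(old_pwd, new_pwd).encode())
       if result.returncode != 0:
           return False
       with open('/etc/vmmgr/config.json') as f:
           config = json.load(f)
       config['host']['adminpwd'] = bcrypt.hashpw(new_pwd.encode(), bcrypt.gensalt()).decode()
       with open('/etc/vmmgr/config.json', 'w') as f:
           json.dump(config, f, indent=4, sort_keys=True)
       return True
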
common
^^^^^^

The ``common`` part currently contains settings which are important in the context of some applications, such as the SMTP sender email or Google API keys. These settings are propagated to the applications via the ``update-conf.sh`` script located in the respective SPOC application directories. However, this system is not extensible and needs to be reworked into per-application settings as described in `VMMgr issue #4 <https://git.spotter.cz/Spotter-Cluster/vmmgr/-/issues/4>`_.

apps
^^^^

The ``apps`` part contains metadata for the installed applications. In the example above, there is a record for the ``sahana`` SPOC application with the nginx proxy host defined as ``sahana``, resulting in the full HTTP host ``sahana.spotter.vm:8443``.

The ``login`` and ``password`` fields are the plaintext username and password for the application, generated automatically during the application setup. They exist only so that the generated password can be displayed to the VMMgr administrator. The username and password aren't connected to the actual application in any way, so when the user changes the username or password directly in the application, the change is not reflected in the VMMgr configuration.

The ``visible`` field determines whether the application should be displayed on VMMgr's application portal. In order for the application to be visible, it needs to be both started and have this field set to ``true``. However, setting this field to ``false`` does not prevent HTTP requests from being routed by the nginx reverse proxy and processed by the application; the setting is purely cosmetic.

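For illustration, a hypothetical helper reading this metadata for the portal could look as follows; the check whether the application is actually started is handled elsewhere and omitted here:

.. code-block:: python

   import json


   def portal_apps(config_path='/etc/vmmgr/config.json'):
       """Return the names of applications flagged for display on the portal.

       Note: the portal additionally requires the application to be started,
       which is not checked in this sketch.
       """
       with open(config_path) as f:
           apps = json.load(f)['apps']
       return sorted(name for name, meta in apps.items() if meta.get('visible'))
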
Nginx configuration
-------------------

VMMgr runs as a standalone WSGI application. All HTTP requests are passed through the *nginx* HTTP server, which serves as a reverse proxy. The web server processes all HTTP and HTTPS connections for VMMgr and the SPOC applications and containers. VMMgr is the component that sets up the proxy rules based on a common template.

The web server is configured to redirect HTTP to HTTPS. On HTTPS, both TLSv1.2 and TLSv1.3 are supported, because some applications make callbacks which pass through nginx and are too old to handle TLS 1.3 handshakes. The rest of the TLS settings generally follows the `Mozilla Guidelines <https://ssl-config.mozilla.org/#server=nginx&config=modern&hsts=false&ocsp=false>`_ for modern browsers where possible.

The default nginx configuration relevant to VMMgr looks as follows:

.. code-block:: nginx

   server {
       listen [::]:80 default_server ipv6only=off;
       location / {
           return 301 https://$host:443$request_uri;
       }
       location /.well-known/acme-challenge/ {
           root /etc/acme.sh.d;
       }
       location = /vm-ping {
           add_header Content-Type text/plain;
           return 200 "vm-pong";
       }
   }

   server {
       listen [::]:443 ssl http2 default_server ipv6only=off;
       location / {
           proxy_pass http://127.0.0.1:8080;
       }
       location /static {
           root /usr/share/vmmgr;
       }
       error_page 502 /502.html;
       location = /502.html {
           root /usr/share/vmmgr/templates;
       }
       location = /vm-ping {
           add_header Content-Type text/plain;
           return 200 "vm-pong";
       }
   }

   server {
       listen [::]:443 ssl http2;
       server_name *.spotter.vm;
       location / {
           proxy_pass http://172.17.0.2:8080;
       }
   }

The template for individual applications looks as follows:

.. code-block:: nginx

   server {
       listen [::]:443 ssl http2;
       server_name sahana.spotter.vm;
       access_log /var/log/nginx/sahana.access.log;
       error_log /var/log/nginx/sahana.error.log;
       location / {
           proxy_pass http://172.17.0.2:8080;
       }
       include vmmgr_common;
   }

The application name is taken from the metadata and the upstream IP from the SPOC global hosts file (see the `Networking chapter in SPOC Architecture <spoc-architecture#networking>`_).

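A minimal lookup sketch, assuming the hosts file uses the standard ``hosts(5)`` format; the path below is only a placeholder:

.. code-block:: python

   def upstream_ip(app, hosts_file='/path/to/spoc/hosts'):
       """Return the IP registered for the given application container.

       Both the file path and the assumption that the container is listed
       under the application name are illustrative.
       """
       with open(hosts_file) as f:
           for line in f:
               fields = line.split()
               if len(fields) >= 2 and app in fields[1:]:
                   return fields[0]
       raise KeyError('No hosts entry for {}'.format(app))
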
``vmmgr_common`` is a file with several static rules:

.. code-block:: nginx

   error_page 502 /502.html;
   location = /502.html {
       root /usr/share/vmmgr/templates;
   }
   error_page 503 /503.html;
   location = /503.html {
       root /usr/share/vmmgr/templates;
   }
   location = /vm-ping {
       add_header Content-Type text/plain;
       return 200 "vm-pong";
   }

The 502 and 503 error pages briefly describe the state of the application; they are usually displayed when the application is not yet fully started. The ``/vm-ping`` endpoint is used to check whether the applications are reachable from the internet on their respective subdomains.

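A sketch of such a reachability check, assuming certificate validation is skipped because the VM may still run with a self-signed certificate:

.. code-block:: python

   import ssl
   import urllib.request


   def is_reachable(fqdn):
       """Return True if the application's subdomain answers the /vm-ping probe."""
       ctx = ssl.create_default_context()
       ctx.check_hostname = False  # the VM may use a self-signed certificate
       ctx.verify_mode = ssl.CERT_NONE
       try:
           with urllib.request.urlopen('https://{}/vm-ping'.format(fqdn),
                                       timeout=5, context=ctx) as resp:
               return resp.read().decode().strip() == 'vm-pong'
       except OSError:
           return False


   # e.g. is_reachable('sahana.spotter.vm:8443')
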
Application hooks
-----------------

VMMgr needs to be explicitly told whenever a SPOC application is installed or uninstalled, so that it can register or unregister the application's metadata and nginx reverse proxy configuration. This is achieved via the ``vmmgr register-app`` and ``vmmgr unregister-app`` calls respectively.

.. code-block:: text

   usage: vmmgr register-app app host [login] [password]

   positional arguments:
     app         Application name
     host        Application subdomain
     login       Admin login
     password    Admin password

.. code-block:: text

   usage: vmmgr unregister-app app

   positional arguments:
     app         Application name

The registration adds the passed parameters as metadata to the `configuration`_. At the same time, it also creates the nginx reverse proxy configuration file described above using a template.

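Putting both steps together, a simplified sketch of what ``register-app`` might do; the nginx configuration path and the upstream IP are assumptions taken from the examples above, and the real implementation may differ:

.. code-block:: python

   import json

   NGINX_TEMPLATE = '''server {{
       listen [::]:443 ssl http2;
       server_name {host}.{domain};
       access_log /var/log/nginx/{host}.access.log;
       error_log /var/log/nginx/{host}.error.log;
       location / {{
           proxy_pass http://{ip}:8080;
       }}
       include vmmgr_common;
   }}
   '''


   def register_app(app, host, login=None, password=None, ip='172.17.0.2'):
       """Store the application metadata and write its nginx server block (sketch)."""
       with open('/etc/vmmgr/config.json') as f:
           config = json.load(f)
       config['apps'][app] = {'host': host, 'login': login,
                              'password': password, 'visible': True}
       with open('/etc/vmmgr/config.json', 'w') as f:
           json.dump(config, f, indent=4, sort_keys=True)
       # /etc/nginx/conf.d/ is an assumed location for the generated proxy config
       with open('/etc/nginx/conf.d/{}.conf'.format(app), 'w') as f:
           f.write(NGINX_TEMPLATE.format(host=host, domain=config['host']['domain'], ip=ip))
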
Removal of orphaned images
--------------------------

SPOC normally doesn't remove unused and orphaned images / layers when an application is uninstalled. VMMgr, on the other hand, is intended for less experienced users, so it *does* remove all unused and orphaned images / layers whenever an application is uninstalled. This may lead to confusing scenarios when both SPOC commands and VMMgr are used on the same machine: uninstallation or update of one application via SPOC doesn't remove its orphaned layers, but a subsequent uninstallation / update of another application via VMMgr removes the layers of both, possibly leading the user to believe that there is some connection between the two applications when there is none.

Issue / MotD
------------

VMMgr handles the generation and refreshing of ``/etc/issue`` and ``/etc/motd``. The former is displayed on a PTY, e.g. whenever a user starts the VM; the latter when logging in via SSH. Both files contain branding, a legal notice and the currently configured URLs for accessing the VMMgr web interface. The files are regenerated every time the host settings are changed via VMMgr or whenever the user presses a key in a PTY displaying ``vmtty``.

``/sbin/vmtty``, which is the default *login program* used by ``/sbin/getty`` as defined in ``/etc/inittab`` and formally not a part of VMMgr, relies on this VMMgr functionality and calls ``/usr/bin/vmmgr rebuild-issue`` to trigger the regeneration. The command is invoked as-is, with no parameters or environment variables, and loads the settings from the VMMgr `configuration`_.

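A minimal sketch of such a regeneration, with placeholder branding text; the real wording and layout of the generated files differ:

.. code-block:: python

   import json

   ISSUE_TEMPLATE = (
       'Spotter VM\n'                                    # placeholder branding text
       'Web administration: https://{domain}:{port}/\n'  # URL built from the configuration
   )


   def rebuild_issue(config_path='/etc/vmmgr/config.json'):
       """Regenerate /etc/issue from the currently configured HTTP host (sketch)."""
       with open(config_path) as f:
           host = json.load(f)['host']
       with open('/etc/issue', 'w') as f:
           f.write(ISSUE_TEMPLATE.format(domain=host['domain'], port=host['port']))
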
SSH
---

VMMgr allows configuring *authorized_keys* for SSH access. The file which it displays and modifies is ``/root/.ssh/authorized_keys``. The SSH daemon is disabled by default, but once the user enters an SSH key, VMMgr automatically enables and starts the daemon as well, making SSH available on all network interfaces (including `WireGuard`_). Conversely, if the content of the file is emptied via VMMgr's web interface, VMMgr stops and disables the SSH daemon.

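The enable/disable logic could be sketched as follows, assuming the standard Alpine OpenRC service name ``sshd``; the actual VMMgr implementation may differ:

.. code-block:: python

   import subprocess

   AUTHORIZED_KEYS = '/root/.ssh/authorized_keys'


   def apply_ssh_keys(keys):
       """Write the submitted keys and toggle the SSH daemon accordingly (sketch)."""
       with open(AUTHORIZED_KEYS, 'w') as f:
           f.write(keys)
       if keys.strip():
           subprocess.run(['rc-update', 'add', 'sshd', 'default'], check=True)
           subprocess.run(['rc-service', 'sshd', 'start'], check=True)
       else:
           subprocess.run(['rc-service', 'sshd', 'stop'], check=True)
           subprocess.run(['rc-update', 'del', 'sshd', 'default'], check=True)
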
WireGuard
---------

Another service which VMMgr allows to configure is the WireGuard VPN. WireGuard is both an application and a protocol for point-to-point VPNs, built on modern cryptographic algorithms. WireGuard utilizes Curve25519 for key exchange, ChaCha20 for encryption, Poly1305 for data authentication, SipHash for hashtable keys, and BLAKE2s for hashing. It is a default component of the Linux kernel as of version 5.6. Alpine 3.11 uses kernel 5.4, therefore the WireGuard module and tooling need to be installed separately, which is why VMMgr declares an explicit dependency on it.

Since WireGuard is easy to configure yet still versatile, VMMgr leaves most of the configuration up to the user. It creates the ``wg0`` interface and automatically generates the cryptographic key pair. The public key is displayed in the VMMgr GUI and needs to be configured on the other machines. The key pair can also be regenerated via the GUI. VMMgr restricts WireGuard operations to the 172.17.255.0/24 network. The default listening port is 51820/udp. The list of peers can be configured using the following stanzas:

.. code-block:: ini

   [Peer]
   PublicKey = pi1I6pUcjN//s5OEoaGn6bJQyv8RO5w5HjndV97mHWM=
   AllowedIPs = 172.17.255.12/32
   Endpoint = 12.34.56.78:51820

``PublicKey`` is the public key of the peer / partner, ``AllowedIPs`` is the internal IP range which will be routed to that peer (typically only the peer's VPN IP) and ``Endpoint`` is the publicly reachable IP address and port of the peer's WireGuard interface. For the rest of the settings, refer to the `official WireGuard documentation <https://www.wireguard.com/>`_. The ``[Interface]`` section is hidden in VMMgr.

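The key pair generation mentioned above can be illustrated with the standard ``wg(8)`` tooling; this is a sketch, not necessarily how VMMgr invokes it:

.. code-block:: python

   import subprocess


   def generate_wg_keypair():
       """Generate a WireGuard key pair and return (private_key, public_key)."""
       private_key = subprocess.run(['wg', 'genkey'], capture_output=True,
                                    text=True, check=True).stdout.strip()
       public_key = subprocess.run(['wg', 'pubkey'], input=private_key,
                                   capture_output=True, text=True,
                                   check=True).stdout.strip()
       return private_key, public_key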