diff --git a/source/images/tproxy-diagram.drawio.png b/source/images/tproxy-diagram.drawio.png new file mode 100644 index 0000000000..c7ee09c42a Binary files /dev/null and b/source/images/tproxy-diagram.drawio.png differ diff --git a/source/images_drawio/tproxy-diagram.drawio b/source/images_drawio/tproxy-diagram.drawio new file mode 100644 index 0000000000..b35a7692bd --- /dev/null +++ b/source/images_drawio/tproxy-diagram.drawio @@ -0,0 +1,164 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff --git a/source/installation_and_configuration/opennebula_services/onegate.rst b/source/installation_and_configuration/opennebula_services/onegate.rst index c7c97f0b20..d7b7773f42 100644 --- a/source/installation_and_configuration/opennebula_services/onegate.rst +++ b/source/installation_and_configuration/opennebula_services/onegate.rst @@ -123,229 +123,37 @@ Other logs are also available in Journald. Use the following command to show: .. |onegate_net| image:: /images/onegate_net.png -.. - Advanced Setup - ============== - - - Example: Use OneGate/Proxy to Improve Security - ---------------------------------------------- - - In addition to the OneGate itself, OpenNebula provides transparent TCP-proxy for the OneGate's network traffic. - It's been designed to drop the requirement for guest VMs to be directly connecting to the service. Up to this point, - in cloud environments like :ref:`OneProvision/AWS `, the OneGate service had to be exposed - on a public IP address. Please take a look at the example diagram below: - - .. graphviz:: - - digraph { - graph [splines=true rankdir=LR ranksep=0.7 bgcolor=transparent]; - edge [dir=both color=blue arrowsize=0.6]; - node [shape=plaintext fontsize="11em"]; - - { rank=same; - F1 [label=< - - -
- -
ONE / 1 (follower)
eth1: 192.168.150.1
- >]; - F2 [label=< - - -
- -
- -
ONE / 2 (leader)
opennebula-gate
192.168.150.86:5030
eth1:
192.168.150.2
192.168.150.86 (VIP)
- >]; - F3 [label=< - - -
- -
ONE / 3 (follower)
eth1: 192.168.150.3
- >]; - } - - { rank=same; - H1 [label=< - - -
- -
- -
- -
- -
ONE-Host / 1
opennebula-gate-proxy
169.254.16.9:5030
lo:
127.0.0.1
169.254.16.9
⇅ (forwarding)
br0: 192.168.150.4
- >]; - H2 [label=< - - -
- -
- -
- -
- -
ONE-Host / 2
opennebula-gate-proxy
169.254.16.9:5030
lo:
127.0.0.1
169.254.16.9
⇅ (forwarding)
br0: 192.168.150.5
- >]; - } - - { rank=same; - G1 [label=< - - -
- -
- -
- -
VM-Guest / 1
ONEGATE_ENDPOINT=
http://169.254.16.9:5030
static route:
169.254.16.9/32 dev eth0
eth0: 192.168.150.100
- >]; - G2 [label=< - - -
- -
- -
- -
VM-Guest / 2
ONEGATE_ENDPOINT=
http://169.254.16.9:5030
static route:
169.254.16.9/32 dev eth0
eth0: 192.168.150.101
- >]; - } - - F1:s -> F2:n [style=dotted arrowhead=none]; - F2:s -> F3:n [style=dotted arrowhead=none]; - - F2:eth1:e -> H1:br0:w; - F2:eth1:e -> H2:br0:w; - - H1:br0:e -> G1:eth0:w; - H2:br0:e -> G2:eth0:w; - } - - | - - In this altered OneGate architecture, each hypervisor Node runs a process, which listens for connections on a dedicated - `IPv4 Link-Local Address `_. - After a guest VM connects to the proxy, the proxy connects back to OneGate and transparently forwards all the protocol traffic - both ways. Because a guest VM no longer needs to be connecting directly, it's now easy to setup a VPN/TLS tunnel between - hypervisor Nodes and the OpenNebula Front-end machines. It should allow for OneGate communication to be conveyed through securely, - and without the need for exposing OneGate on a public IP address. - - Each of the OpenNebula DEB/RPM node packages: ``opennebula-node-kvm`` and ``opennebula-node-lxc`` contains the ``opennebula-gate-proxy`` systemd service. To enable and start it on your Hosts, execute as **root**: - - .. prompt:: bash # auto - - # systemctl enable opennebula-gate-proxy.service --now - - You should be able to verify, that the proxy is running with the default config: - - .. prompt:: bash # auto +Advanced Setup +============== - # ss -tlnp | grep :5030 - LISTEN 0 4096 169.254.16.9:5030 0.0.0.0:* users:(("ruby",pid=9422,fd=8)) +Example: Use Transparent OneGate Proxy to Improve Security +---------------------------------------------------------- - .. important:: +Add the following config snippet to the ``~oneadmin/remotes/etc/vnm/OpenNebulaNetwork.conf`` file on Front-end machines: - The ``:onegate_addr`` attribute is configured automatically in the ``/var/tmp/one/etc/onegate-proxy.conf`` file during - the ``onehost sync -f`` operation. That allows for an easy reconfiguration in the case of a larger (many Hosts) - OpenNebula environment. - - To change the value of the ``:onegate_addr`` attribute, edit the ``/var/lib/one/remotes/etc/onegate-proxy.conf`` - file and then execute the ``onehost sync -f`` operation as **oneadmin**: - - .. prompt:: bash $ auto - - $ gawk -i inplace -f- /var/lib/one/remotes/etc/onegate-proxy.conf <<'EOF' - BEGIN { update = ":onegate_addr: '192.168.150.86'" } - /^#*:onegate_addr:/ { $0 = update; found=1 } - { print } - END { if (!found) print update >>FILENAME } - EOF - $ onehost sync -f - ... - All hosts updated successfully. - - .. note:: - - As a consequence of the ``onehost sync -f`` operation, the proxy service will be automatically restarted - and reconfigured on every hypervisor Node. - - To change the value of the ``ONEGATE_ENDPOINT`` context attribute for each guest VM, edit the ``/etc/one/oned.conf`` file - on your Front-end machines. For the purpose of using the proxy, just specify an IP address from the ``169.254.0.0/16`` - subnet (by default it's ``169.254.16.9``) and then restart the ``opennebula`` service: - - .. prompt:: bash # auto - - # gawk -i inplace -f- /etc/one/oned.conf <<'EOF' - BEGIN { update = "ONEGATE_ENDPOINT = \"http://169.254.16.9:5030\"" } - /^#*ONEGATE_ENDPOINT[^=]*=/ { $0 = update; found=1 } - { print } - END { if (!found) print update >>FILENAME } - EOF - # systemctl restart opennebula.service - - And, last but not least, it's required from guest VMs to setup this static route: +.. code:: - .. prompt:: bash # auto + :tproxy: + # OneGate service. 
+ - :service_port: 5030 + :remote_addr: 10.11.12.13 # OpenNebula Front-end VIP + :remote_port: 5030 - # ip route replace 169.254.16.9/32 dev eth0 +Propagate config to Hypervisor hosts, execute as ``oneadmin`` on the leader Front-end machine: - Perhaps one of the easiest ways to achieve it, is to alter a VM template by adding a :ref:`start script `: +.. code:: - .. prompt:: bash # auto + $ onehost sync -f - # (export EDITOR="gawk -i inplace '$(cat)'" && onetemplate update alpine) <<'EOF' - BEGIN { update = "START_SCRIPT=\"ip route replace 169.254.16.9/32 dev eth0\"" } - /^CONTEXT[^=]*=/ { $0 = "CONTEXT=[" update "," } - { print } - EOF - # onetemplate instantiate alpine - VM ID: 0 +Deploy a guest Virtual Machine and test OneGate connectivity from within: - Finally, by examining the newly created guest VM, you can confirm if OneGate is reachable: +.. code:: - .. prompt:: bash # auto + $ onegate vm show - # grep -e ONEGATE_ENDPOINT -e START_SCRIPT /var/run/one-context/one_env - export ONEGATE_ENDPOINT="http://169.254.16.9:5030" - export START_SCRIPT="ip route replace 169.254.16.9/32 dev eth0" - # ip route show to 169.254.16.9 - 169.254.16.9 dev eth0 scope link - # onegate vm show --json - { - "VM": { - "NAME": "alpine-0", - "ID": "0", - "STATE": "3", - "LCM_STATE": "3", - "USER_TEMPLATE": { - "ARCH": "x86_64" - }, - "TEMPLATE": { - "NIC": [ - { - "IP": "192.168.150.100", - "MAC": "02:00:c0:a8:96:64", - "NAME": "NIC0", - "NETWORK": "public" - } - ], - "NIC_ALIAS": [] - } - } - } +Read more in :ref:`Transparent Proxies `. +.. Example: Deployment Behind TLS Proxy ------------------------------------ diff --git a/source/management_and_operations/network_management/index.rst b/source/management_and_operations/network_management/index.rst index 3f5657b26b..f495579d3d 100644 --- a/source/management_and_operations/network_management/index.rst +++ b/source/management_and_operations/network_management/index.rst @@ -13,3 +13,4 @@ Virtual Network Management Security Groups Self Provision Virtual Routers + Transparent Proxies diff --git a/source/management_and_operations/network_management/tproxy.rst b/source/management_and_operations/network_management/tproxy.rst new file mode 100644 index 0000000000..7c29402d1d --- /dev/null +++ b/source/management_and_operations/network_management/tproxy.rst @@ -0,0 +1,234 @@ +.. _tproxy: + +================================================================================ +Transparent Proxies +================================================================================ + +Transparent Proxies make it possible to connect to management services, such as OneGate, by implicitly using the existing data center backbone networking. The OneGate service usually runs on the leader Front-end machine, which makes it difficult for Virtual Machines running in isolated virtual networks to contact it. This situation forces OpenNebula users to design virtual networking in advance, to ensure that VMs can securely reach OneGate. Transparent Proxies have been designed to remove that requirement. + +About the Design +================================================================================ + +|tproxy_diagram| + +Virtual networking in OpenNebula is bridge-based. Each Hypervisor that runs Virtual Machines in a specific Virtual Network pre-creates such a bridge before deploying the VMs. Transparent Proxies extend that design by introducing a pair of VETH devices, where one of two "ends" is inserted into the bridge and the other is boxed inside the dedicated network namespace. 
This makes it possible to deploy proxy processes that can be reached by Virtual Machine guests via TCP/IP securely, i.e. without compromising the internal networking of Hypervisor hosts. The proxy processes themselves form a "mesh" of daemons interconnected with UNIX sockets, which allows for complete isolation of the two involved TCP/IP stacks; we call this environment the "String-Phone Proxy." The final part of the solution requires that Virtual Machine guests contact services over the proxy via the ``169.254.16.9`` link-local address on specific ports, instead of their real endpoints.

Hypervisor Configuration
================================================================================

Transparent Proxies read their config from the ``~oneadmin/remotes/etc/vnm/OpenNebulaNetwork.conf`` file on the Front-end machines. The file uses the following syntax:

.. code::

    :tproxy_debug_level: 2 # 0 = ERROR, 1 = WARNING, 2 = INFO, 3 = DEBUG
    :tproxy:
      # OneGate service.
      - :service_port: 5030
        :remote_addr: 10.11.12.13 # OpenNebula Front-end VIP
        :remote_port: 5030
      # Custom service.
      - :service_port: 1234
        :remote_addr: 10.11.12.34
        :remote_port: 1234
        :networks: [vnet_name_or_id]

.. note::

    The YAML snippet above defines two distinct proxies: the first is the usual OneGate proxy, the second a completely custom service.

.. important::

    If the ``:networks:`` YAML key is missing or empty, the proxy is applied to *all* available Virtual Networks. Defining multiple entries with identical ``:service_port:`` values has no effect; the networking drivers ignore the subsequent duplicates.

**To apply the configuration, you need to perform two steps:**

1. On the leader Front-end machine, as the ``oneadmin`` system user, sync the ``OpenNebulaNetwork.conf`` file with the Hypervisor hosts by running ``onehost sync -f``.
2. Power-cycle any running guests (for example by running ``onevm poweroff`` followed by ``onevm resume``); otherwise the configuration changes may not take effect.

Guest Configuration
================================================================================

The most common use case for Transparent Proxies is communication with OneGate. Below is an example Virtual Machine template:

.. code::

    NAME = "example0"
    CONTEXT = [
      NETWORK = "YES",
      SSH_PUBLIC_KEY = "$USER[SSH_PUBLIC_KEY]",
      TOKEN = "YES" ]
    CPU = "1"
    DISK = [
      IMAGE = "img0" ]
    GRAPHICS = [
      LISTEN = "0.0.0.0",
      TYPE = "VNC" ]
    MEMORY = "256"
    NIC = [
      NETWORK = "vnet0",
      NETWORK_UNAME = "oneadmin",
      SECURITY_GROUPS = "100" ]
    NIC_DEFAULT = [
      MODEL = "virtio" ]
    OS = [
      ARCH = "x86_64" ]

In the simplest (but still instructive) case, a Virtual Machine needs only the following settings to connect to OneGate using Transparent Proxies:

.. code::

    $ grep ONEGATE_ENDPOINT /run/one-context/one_env
    export ONEGATE_ENDPOINT="http://169.254.16.9:5030"

    $ ip route show to 169.254.16.9
    169.254.16.9 dev eth0 scope link

With these in place, the OneGate CLI works from inside the guest as usual:

.. code::

    $ onegate vm show -j | jq -r '.VM.NAME'
    example0-0

Debugging
================================================================================

You can find driver logs for each guest on the Front-end machines, in ``/var/log/one/*.log``.

Proxy logs are found on Hypervisor hosts, in ``/var/log/``. For example:

.. code::

    $ ls -1 /var/log/one_tproxy*.log
    /var/log/one_tproxy.log
    /var/log/one_tproxy_br0.log

The internal implementation of Transparent Proxies combines several networking primitives:

* ``nft`` (``nftables``) to store the service mapping and manage ARP resolutions
* the ``ip netns`` / ``nsenter`` family of commands to manage and use network namespaces
* the ``ip link`` / ``ip address`` / ``ip route`` commands
* ``/var/tmp/one/vnm/tproxy``, the actual implementation of the "String-Phone" daemon mesh

Below are several example command invocations to help you gain familiarity with the environment.

**Listing service mappings in nftables:**

.. code::

    $ nft list ruleset
    ...
    table ip one_tproxy {
        map ep_br0 {
            type inet_service : ipv4_addr . inet_service
            elements = { 1234 : 10.11.12.34 . 1234, 5030 : 10.11.12.13 . 5030 }
        }
    }

.. note::

    The ``nftables`` config is not persisted across Hypervisor host reboots; this is the default behavior in OpenNebula in general.

**Listing all custom network namespaces:**

.. code::

    $ ip netns list
    one_tproxy_br0 (id: 0)

.. note::

    Each active Virtual Network requires one such namespace to run the proxy inside.

**Checking if the "internal" end of the VETH device pair has been put inside the dedicated namespace:**

.. code::

    $ ip netns exec one_tproxy_br0 ip address
    1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group default qlen 1000
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    7: br0a@if8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
        link/ether 12:00:83:53:f4:3d brd ff:ff:ff:ff:ff:ff link-netnsid 0
        inet 169.254.16.9/32 scope global br0a
           valid_lft forever preferred_lft forever
        inet6 fe80::1000:83ff:fe53:f43d/64 scope link
           valid_lft forever preferred_lft forever

.. note::

    When multiple Hypervisor hosts participate in a Virtual Network's traffic, the ``169.254.16.9`` address stays the same on each of them; the closest Hypervisor host answers the guest requests.

**Checking if the default route for sending packets back into the bridge has been configured:**

.. code::

    $ ip netns exec one_tproxy_br0 ip route
    default dev br0a scope link

**Listing PIDs of running proxy processes:**

.. code::

    $ /var/tmp/one/vnm/tproxy status
    one_tproxy: 16803
    one_tproxy_br0: 16809

.. note::

    There is only a single ``one_tproxy`` process; it runs in the default network namespace and connects to the real remote services.

.. note::

    There are multiple ``one_tproxy_*`` processes; they are boxed inside their corresponding dedicated network namespaces and connect to the ``one_tproxy`` process using UNIX sockets.

.. note::

    There is no PID file management implemented. For simplicity, all proxy processes can be found by looking at the ``/proc/PID/cmdline`` process attributes.

**Restarting/reloading config of proxy daemons:**

.. code::

    $ /var/tmp/one/vnm/tproxy restart
    $ /var/tmp/one/vnm/tproxy reload

.. important::

    While you can manually run the ``start``, ``stop``, ``restart`` and ``reload`` commands as part of a debugging process, under normal circumstances the proxy daemons are completely managed by the networking drivers. The command-line interface here is deliberately minimal and does not require any extra parameters, as all the relevant config is stored in ``nftables``.
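
**Checking which ports the proxy listens on inside a namespace** (an additional quick check; it assumes the ``ss`` utility from the ``iproute2`` package is available on the Hypervisor host):

.. code::

    $ ip netns exec one_tproxy_br0 ss -tln

You should typically see TCP listeners bound to ``169.254.16.9`` for every ``:service_port:`` configured for that Virtual Network (``5030`` and ``1234`` in the example configuration above).
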

Security Groups
================================================================================

Transparent Proxies can be used together with OpenNebula Security Groups. Note that guests must be allowed *outbound* TCP traffic to every configured ``:service_port:`` (``1234`` and ``5030`` in the example configuration above). Below is an example of a security group template:

.. code::

    NAME = "example0"

    RULE = [
      PROTOCOL = "ICMP",
      RULE_TYPE = "inbound" ]
    RULE = [
      PROTOCOL = "ICMP",
      RULE_TYPE = "outbound" ]

    RULE = [
      PROTOCOL = "TCP",
      RANGE = "22",
      RULE_TYPE = "inbound" ]
    RULE = [
      PROTOCOL = "TCP",
      RANGE = "80,443",
      RULE_TYPE = "outbound" ]

    # Required for Transparent Proxies
    RULE = [
      PROTOCOL = "TCP",
      RANGE = "1234,5030",
      RULE_TYPE = "outbound" ]

    # DNS
    RULE = [
      PROTOCOL = "UDP",
      RANGE = "53",
      RULE_TYPE = "outbound" ]

.. |tproxy_diagram| image:: /images/tproxy-diagram.drawio.png
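
For completeness, here is a minimal sketch of registering such a template with the standard CLI; it assumes the template above has been saved as ``secgroup.tpl``, and the ID returned in your environment will likely differ:

.. code::

    $ onesecgroup create secgroup.tpl
    ID: 100

The resulting ID can then be referenced from the ``SECURITY_GROUPS`` attribute in the ``NIC`` section of the VM template, as shown in the Guest Configuration example above.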