This is an HTTP bridge that makes local varlink services available over HTTP. The main use case is systemd, so only the subset of varlink that systemd needs is supported right now.
It takes a directory containing varlink sockets (or symlinks to varlink sockets), such as /run/varlink/registry, as its argument and serves whatever it finds there. Sockets can be added to or removed from the directory dynamically as needed.
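As a sketch, a registry directory can be populated with symlinks; the systemd socket path below is an example, adjust it to whatever your services expose:

```shell
# create a registry directory and symlink a varlink socket into it
# (/run/systemd/io.systemd.Hostname is an example path)
dir=$(mktemp -d)
ln -s /run/systemd/io.systemd.Hostname "$dir/io.systemd.Hostname"
ls "$dir"
# then point the bridge at it:
# varlink-httpd "$dir"
```

Because only the directory is scanned, adding or removing a symlink is enough to publish or retire a service.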
POST /call/{method} → invoke method (cf. varlink call, supports ?socket=)
GET /sockets → list available sockets (cf. varlinkctl list-registry)
GET /sockets/{socket} → socket info (cf. varlinkctl info)
GET /sockets/{socket}/{interface} → interface details, including method names (cf. varlinkctl list-methods)
GET /health → health check
For /call, the socket is derived from the method name by stripping the last .Component (e.g. io.systemd.Hostname.Describe connects to the socket io.systemd.Hostname). The ?socket= query parameter overrides this for cross-interface calls, e.g. to call io.systemd.service.SetLogLevel on the io.systemd.Hostname socket.
For /call the parameters are POSTed as regular JSON.
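The derivation rule is simply "strip the last dot-separated component"; in shell it can be sketched as:

```shell
# sketch: derive the socket name from a fully qualified method name
method="io.systemd.Hostname.Describe"
socket="${method%.*}"   # strip the last .Component
echo "$socket"          # prints: io.systemd.Hostname
```

The same rule means a ?socket= override is only needed when the interface prefix does not match the socket name.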
GET /ws/sockets/{socket} → transparent varlink-over-websocket proxy
The websocket endpoint is a transparent proxy that forwards raw bytes between the websocket and the varlink unix socket in both directions. Clients are expected to speak raw varlink wire protocol.
This makes the bridge compatible with libvarlink's varlink --bridge
via websocat --binary, enabling full varlink features (including
--more) over the network.
The default port is 1031 (NCC-1031, USS Discovery) - because every bridge needs a ship, and this one discovers your varlink services.
Using curl for direct calls is usually more convenient and ergonomic than using the websocket endpoint.
$ systemd-run --user ./target/debug/varlink-httpd
$ curl -s http://localhost:1031/sockets | jq
{
"sockets": [
"io.systemd.Login",
"io.systemd.Hostname",
"io.systemd.sysext",
"io.systemd.BootControl",
"io.systemd.Import",
"io.systemd.Repart",
"io.systemd.MuteConsole",
"io.systemd.FactoryReset",
"io.systemd.Credentials",
"io.systemd.AskPassword",
"io.systemd.Manager",
"io.systemd.ManagedOOM"
]
}
$ curl -s http://localhost:1031/sockets/io.systemd.Hostname | jq
{
"interfaces": [
"io.systemd",
"io.systemd.Hostname",
"io.systemd.service",
"org.varlink.service"
],
"product": "systemd (systemd-hostnamed)",
"url": "https://systemd.io/",
"vendor": "The systemd Project",
"version": "259 (259-1)"
}
$ curl -s http://localhost:1031/sockets/io.systemd.Hostname/io.systemd.Hostname | jq
{
"method_names": [
"Describe"
]
}
$ curl -s -X POST http://localhost:1031/call/io.systemd.Hostname.Describe -d '{}' -H "Content-Type: application/json" | jq .StaticHostname
"top"
$ curl -s -X POST "http://localhost:1031/call/org.varlink.service.GetInfo?socket=io.systemd.Hostname" -d '{}' -H "Content-Type: application/json" | jq
{
"interfaces": [
"io.systemd",
"io.systemd.Hostname",
"io.systemd.service",
"org.varlink.service"
],
"product": "systemd (systemd-hostnamed)",
"url": "https://systemd.io/",
"vendor": "The systemd Project",
"version": "259 (259-1)"
}
# streaming methods use Accept: application/json-seq (RFC 7464)
$ curl -s -H "Accept: application/json-seq" -H "Content-Type: application/json" \
http://localhost:1031/call/io.systemd.UserDatabase.GetUserRecord \
-d '{"service":"io.systemd.Multiplexer"}' | jq --seq
systemd v260+ supports pluggable protocols for varlink; with that, the bridge becomes even nicer.
# copy varlinkctl-http into /usr/lib/systemd/varlink-bridges/http
# (or use SYSTEMD_VARLINK_BRIDGES_DIR)
$ varlinkctl introspect http://localhost:1031/ws/sockets/io.systemd.Hostname
interface io.systemd
...
$ varlinkctl call http://localhost:1031/ws/sockets/io.systemd.Hostname io.systemd.Hostname.Describe {}
{
"Hostname" : "top",
...
The examples use websocat because curl's WebSocket support is relatively new and still a bit cumbersome to use.
$ cargo install websocat
...
# call via websocat: note that this is the raw protocol, so the result is wrapped in "parameters"
# note that the reply also contains the raw \0 terminators, so we filter them out
$ printf '{"method":"io.systemd.Hostname.Describe","parameters":{}}\0' | websocat ws://localhost:1031/ws/sockets/io.systemd.Hostname | tr -d '\0' | jq
{
"parameters": {
"Hostname": "top",
...
# io.systemd.Unit.List streams the output
$ printf '{"method":"io.systemd.Unit.List","parameters":{}, "more": true}\0' | websocat --no-close ws://localhost:1031/ws/sockets/io.systemd.Manager | tr -d '\0' | jq
{
"parameters": {
"context": {
"Type": "device",
...
# and user records come via "continues": true
$ printf '{"method":"io.systemd.UserDatabase.GetUserRecord", "parameters": {"service":"io.systemd.Multiplexer"}, "more": true}\0' | websocat --no-close ws://localhost:1031/ws/sockets/io.systemd.Multiplexer | tr '\0' '\n' | jq
{
"parameters": {
"record": {
"userName": "root",
"uid": 0,
"gid": 0,
...
# varlinkctl is supported via our varlinkctl-http
$ VARLINK_BRIDGE_URL=http://localhost:1031/ws/sockets/io.systemd.Multiplexer \
varlinkctl call --more /usr/libexec/varlinkctl-http \
io.systemd.UserDatabase.GetUserRecord '{"service":"io.systemd.Multiplexer"}'
# libvarlink bridge mode gives full varlink CLI support over the network
$ varlink --bridge "websocat --binary ws://localhost:1031/ws/sockets/io.systemd.Hostname" info
Vendor: The systemd Project
Product: systemd (systemd-hostnamed)
...
$ varlink --bridge "websocat --binary ws://localhost:1031/ws/sockets/io.systemd.Hostname" \
call io.systemd.Hostname.Describe
{
"Hostname": "top",
"StaticHostname": "top",
...
}
TLS flag names follow the systemd convention.
--cert=PATH path to TLS certificate PEM file
--key=PATH path to TLS private key PEM file
--trust=PATH path to CA certificate PEM for client verification (mTLS)
Providing --trust= implicitly enables mTLS: the server will
require clients to present a certificate signed by that CA.
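As a sketch for local testing (the file names are examples), a throwaway CA and server certificate can be generated with openssl and passed to the flags above:

```shell
set -e
# throwaway CA, valid for one day
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca-key.pem -out ca.pem \
  -subj "/CN=test-ca" -days 1
# server key and certificate signing request
openssl req -newkey rsa:2048 -nodes -keyout server-key.pem -out server.csr \
  -subj "/CN=localhost"
# sign the server certificate with the CA
openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem \
  -CAcreateserial -out server.pem -days 1
# then start the bridge; adding --trust= turns on mTLS:
# varlink-httpd --cert=server.pem --key=server-key.pem --trust=ca.pem
```

For plain TLS, omit --trust= and only pass --cert= and --key=.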
When running as a systemd service, the bridge discovers TLS material
from $CREDENTIALS_DIRECTORY (see systemd.exec(5)). The credential
file names match the CLI flag names: cert, key, trust.
The shipped unit file (varlink-httpd.service) uses ImportCredential=
to import well-known credential names from the credstore and rename
them to the short names the service expects. To provision TLS:
# cp server.pem /etc/credstore/varlink-httpd.tls.certificate
# systemd-creds encrypt server-key.pem /etc/credstore.encrypted/varlink-httpd.tls.key
# cp ca.pem /etc/credstore/varlink-httpd.tls.trust
Explicit CLI flags take priority over credentials.
The varlinkctl-http binary acts as a bridge between varlinkctl
and varlink-httpd, supporting TLS and mTLS. It looks for
client TLS material in the first existing directory:
1. $XDG_CONFIG_HOME/varlinkctl-http/
2. ~/.config/varlinkctl-http/
3. /etc/varlinkctl-http/
| File | Purpose |
|---|---|
| client-cert-file | Client certificate PEM (for mTLS) |
| client-key-file | Client private key PEM (for mTLS) |
| server-ca-file | CA certificate PEM (for private/self-signed server CAs) |
The system CAs are used automatically. For mTLS, drop the client cert and key into the config directory:
$ mkdir -p ~/.config/varlinkctl-http
$ cp client-cert.pem ~/.config/varlinkctl-http/client-cert-file
$ cp client-key.pem ~/.config/varlinkctl-http/client-key-file
$ cp ca.pem ~/.config/varlinkctl-http/server-ca-file
$ VARLINK_BRIDGE_URL=https://myhost:1031/ws/sockets/io.systemd.Hostname \
varlinkctl call exec:/usr/libexec/varlinkctl-http \
io.systemd.Hostname.Describe '{}'
The bridge can authenticate requests using SSH public keys. If an SSH agent is running, clients authenticate automatically with zero extra configuration. Note that RSA keys are not supported, only Ed25519 and ECDSA keys.
The bridge discovers authorized keys automatically from these locations (first match wins):
1. --authorized-keys=PATH (explicit CLI flag)
2. /etc/varlink-httpd/authorized_keys (config file)
3. $CREDENTIALS_DIRECTORY/ssh.authorized_keys.root (systemd credential, see systemd.exec(5))
The simplest setup is to pass the path explicitly:
$ varlink-httpd --authorized-keys=~/.ssh/authorized_keys
To fetch keys from GitHub (or any HTTPS URL) and save them locally,
use the import-ssh subcommand:
$ run0 varlink-httpd import-ssh gh:myuser
Wrote 3 key line(s) to /etc/varlink-httpd/authorized_keys, run with:
varlink-httpd --authorized-keys /etc/varlink-httpd/authorized_keys
The source can be gh:<user> (shorthand for
https://github.com/<user>.keys) or any https:// URL. The output
path is auto-detected but can be overridden with a second positional
argument. Once written to /etc/varlink-httpd/authorized_keys,
the bridge picks up the file automatically (discovery path 2) so the
--authorized-keys flag is no longer needed.
When running as a systemd service, the bridge discovers keys from credentials automatically (discovery paths 3 and 4):
[Service]
LoadCredential=ssh.authorized_keys.root:/root/.ssh/authorized_keys
varlinkctl-http uses two methods for signing, checked in order:
- VARLINK_SSH_KEY: if it points at a private key, the key file is read directly; if it points at a public key, the corresponding private key is looked up in the SSH agent.
  $ export VARLINK_SSH_KEY=~/.ssh/id_ed25519
- SSH_AUTH_SOCK: fall back to the SSH agent, using the first Ed25519 or ECDSA key it finds. No setup required when an agent is running.
Using VARLINK_SSH_KEY is useful in environments without an SSH agent
(e.g. systemd services, containers, CI):
[Service]
Environment=VARLINK_SSH_KEY=/my/private/bridge_key
SSH key auth and TLS/mTLS are independent and can be combined. For example, use regular TLS (not mTLS) for transport encryption and SSH keys for user authentication:
$ varlink-httpd \
--cert=server.pem \
--key=server-key.pem \
--authorized-keys=~/.ssh/authorized_keys
This combination is recommended because, for websocket requests, only the initial upgrade request is signed with the SSH key; after the upgrade it is a plain WebSocket connection that relies on the underlying TLS for security.