[BUG] - Proxy protocol not usable and not configurable #518

@lodatol

Description

When PROXY protocol ports are enabled, the chart fails to render or produces incorrect runtime behavior, because validation and configuration are split between the ingress section and the frontend section. The templates currently validate PROXY settings against .Values.ingress.*, while the frontend pod is the component that must actually parse the PROXY protocol header. This makes it impossible to deploy a correct configuration when using PROXY (or forces insecure workarounds), causing lost client IPs, open-relay risk in custom configurations, and failure of IP-based botnet/advanced spam filtering.
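
For illustration, the following values layout cannot be expressed correctly today. This is a hedged sketch: the externalService and proxyProtocolPorts keys are assumptions for illustration, not the chart's exact schema:

```yaml
# Sketch of the mismatch; key names below are illustrative assumptions.
ingress:
  enabled: false        # no ingress controller in this deployment
  realIpFrom: ""        # yet validation requires this ingress-scoped value
  # realIpHeader must also stay unset, or a second, contradictory check fires

frontend:
  # The frontend pod is the component that actually terminates PROXY protocol,
  # so the trusted source networks belong here...
  realIpFrom: "10.0.0.0/8"
  externalService:           # hypothetical key for the frontend Service
    type: LoadBalancer
    proxyProtocolPorts:      # hypothetical key for PROXY-enabled ports
      - 25
      - 465
```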

Actual behavior

Enabling PROXY protocol ports triggers template failures if .Values.ingress.realIpFrom is empty, even when frontend-level real IPs are configured.
Enabling PROXY protocol ports also triggers failures if .Values.ingress.realIpHeader is set, creating contradictory conditions that block valid deployments.
In deployments where ingress is not used, or where the frontend and ingress are configured independently (e.g., a frontend LoadBalancer/NodePort plus a separate ingress), the remote client IP is not restored correctly and the mail components see connections with incorrect or missing client IPs.
Resulting runtime issues include mis-applied spam/botnet rules, potential open-relay exposure in custom configurations, and the inability to rely on real client addresses for policy enforcement.

Expected behavior

Validation and configuration for PROXY protocol should live in the frontend scope, because the frontend pod is responsible for recognizing and handling PROXY headers.
The chart should allow PROXY protocol to be enabled when frontend.realIpFrom and/or frontend.realIpHeader are correctly configured, independently of the ingress configuration or whether an ingress controller is deployed at all (see the sketch after this list).
There should be no contradictory fail conditions that block valid frontend-level PROXY configurations.
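
Concretely, a frontend-scoped configuration along these lines should render and deploy regardless of the ingress section. Again a sketch: only frontend.realIpFrom and frontend.realIpHeader are named in this report; the surrounding keys are illustrative:

```yaml
frontend:
  realIpFrom: "192.168.0.0/16"       # trusted networks allowed to send PROXY headers
  # realIpHeader: "X-Forwarded-For"  # alternative for header-based real-IP setups
  externalService:                   # hypothetical surrounding keys
    type: LoadBalancer
    proxyProtocolPorts: [25, 465, 587]

ingress:
  enabled: true   # enabled or not, this must no longer gate PROXY validation
```
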
Reproduction steps

In values.yaml, enable one or more PROXY protocol ports (e.g., under the current ingress-based keys) while leaving .Values.ingress.realIpFrom empty or setting .Values.ingress.realIpHeader.
Run helm template or helm install/upgrade against a cluster where the frontend and ingress may be deployed independently.
Observe template failure messages like:
"PROXY protocol is enabled for some ports, but ingress.realIpFrom is not set"
"PROXY protocol is enabled for some ports, but ingress.realIpHeader is set"
Alternatively, deploy with ingress disabled and the frontend configured to expect PROXY protocol; observe that the client IP is not restored, or that chart validation blocks the deployment.

Root cause

PROXY protocol checks and values are defined and validated under the ingress section, while the PROXY handling happens in the frontend pod. Because the frontend load balancer and the ingress are each optionally deployed and may be shared across services, splitting the configuration between them creates contradictory validation and mismatched runtime behavior (see the sketch below).
Historically, older charts used separate services for web vs. TCP/SMTP traffic; modern ingress controllers support TCP/UDP and PROXY protocol at the ingress/front layer, so the configuration belongs in the frontend scope.
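
A minimal sketch of the contradictory ingress-scoped checks and a frontend-scoped replacement; the guard variable and value paths are assumptions, while the fail messages are the ones quoted in the reproduction steps:

```yaml
{{- /* Guard: some PROXY protocol port is enabled (path is a placeholder) */}}
{{- $proxyEnabled := .Values.frontend.externalService.proxyProtocolPorts }}

{{- /* Current (problematic) ingress-scoped checks, reconstructed */}}
{{- if $proxyEnabled }}
  {{- if not .Values.ingress.realIpFrom }}
    {{- fail "PROXY protocol is enabled for some ports, but ingress.realIpFrom is not set" }}
  {{- end }}
  {{- if .Values.ingress.realIpHeader }}
    {{- fail "PROXY protocol is enabled for some ports, but ingress.realIpHeader is set" }}
  {{- end }}
{{- end }}

{{- /* Proposed: validate the frontend, the component that parses PROXY */}}
{{- if $proxyEnabled }}
  {{- if and (not .Values.frontend.realIpFrom) (not .Values.frontend.realIpHeader) }}
    {{- fail "PROXY protocol is enabled for some ports, but neither frontend.realIpFrom nor frontend.realIpHeader is set" }}
  {{- end }}
{{- end }}
```

With the checks anchored on .Values.frontend.*, an ingress-free deployment or a shared ingress controller no longer trips validation for a component the ingress section does not configure.
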
Impact

Blocks valid deployments that use PROXY protocol.
Causes incorrect client IP handling, breaking IP-based spam/botnet rules and exposing open-relay risk in custom configurations.
Forces maintainers/users to apply unsafe workarounds (restricting allowed nets to localhost or disabling features) or to fork/patch the chart.
