
MSK Agent Integration broken on KRaft Clusters #17743

Open
jcarvalho opened this issue Jun 5, 2024 · 3 comments
@jcarvalho

Amazon MSK has recently launched support for KRaft clusters, which adds the controller nodes to the output of the ListNodes API call.

These controller entries do not include a brokerNodeInfo field, which causes the Agent integration to crash with the following error:

2024-06-05 12:41:15 UTC | CORE | ERROR | (pkg/collector/worker/check_logger.go:71 in Error) | check:amazon_msk | Error running check: [{"message": "'BrokerNodeInfo'", "traceback": "Traceback (most recent call last):
  File \"/opt/datadog-agent/embedded/lib/python3.11/site-packages/datadog_checks/base/checks/base.py\", line 1224, in run
    self.check(instance)
  File \"/opt/datadog-agent/embedded/lib/python3.11/site-packages/datadog_checks/amazon_msk/amazon_msk.py\", line 115, in check
    broker_info = node_info['BrokerNodeInfo']
                  ~~~~~~~~~^^^^^^^^^^^^^^^^^^
KeyError: 'BrokerNodeInfo'
"}]

Ideally, the integration should also scrape the Controller Nodes (which may also expose Prometheus metrics), but it would at least be great to still support scraping the Broker Nodes when KRaft is in use.
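
To illustrate what that could look like, here is a minimal sketch of a node loop that handles both node types. The loop shape and variable names are assumptions, not the actual code in amazon_msk.py; only the BrokerNodeInfo / ControllerNodeInfo / Endpoints keys come from the ListNodes response as mapped by boto3:

    for node_info in node_info_list:
        broker_info = node_info.get('BrokerNodeInfo')
        if broker_info is not None:
            # Existing behaviour: scrape the broker's Prometheus endpoints.
            endpoints = broker_info['Endpoints']
        elif 'ControllerNodeInfo' in node_info:
            # Possible extension: also scrape controller endpoints, if they
            # expose the same Prometheus exporters.
            endpoints = node_info['ControllerNodeInfo']['Endpoints']
        else:
            # Unknown node type: skip instead of raising a KeyError.
            continue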

Output of the info page

(Posting only relevant version + check information, happy to share more details over DM)

===============
Agent (v7.54.0)
===============
  Status date: 2024-06-05 12:38:51.71 UTC (1717591131710)
  Agent start: 2024-06-05 12:38:46.925 UTC (1717591126925)
  Pid: 1
  Go Version: go1.21.9
  Python Version: 3.11.8
  Build arch: amd64
  Agent flavor: agent
  Log Level: INFO


  Running Checks
  ==============

    amazon_msk (4.7.0)
    ------------------
      Instance ID: amazon_msk:c28c17180d3df175 [ERROR]
      Configuration Source: kube_services:kube_service://datadog-cluster-checks/[REDACTED]
      Total Runs: 36
      Metric Samples: Last Run: 0, Total: 0
      Events: Last Run: 0, Total: 0
      Service Checks: Last Run: 1, Total: 36
      Average Execution Time : 293ms
      Last Execution Date : 2024-06-05 12:48:45 UTC (1717591725000)
      Last Successful Execution Date : Never
      Error: 'BrokerNodeInfo'
      Traceback (most recent call last):
        File "/opt/datadog-agent/embedded/lib/python3.11/site-packages/datadog_checks/base/checks/base.py", line 1224, in run
          self.check(instance)
        File "/opt/datadog-agent/embedded/lib/python3.11/site-packages/datadog_checks/amazon_msk/amazon_msk.py", line 115, in check
          broker_info = node_info['BrokerNodeInfo']
                        ~~~~~~~~~^^^^^^^^^^^^^^^^^^
      KeyError: 'BrokerNodeInfo'

Additional environment details (Operating System, Cloud provider, etc):

Steps to reproduce the issue:

  1. Set up a KRaft-enabled MSK cluster
  2. Set up the MSK Datadog Agent integration
  3. Verify that the check runner fails with an exception

Describe the results you received:

The check fails with the exception above and no metrics are published to Datadog.

Describe the results you expected:

Ideally: The metrics for both the Controllers and the Brokers are published to Datadog.
At a minimum: The metrics for the Brokers are published to Datadog.

Additional information you deem important (e.g. issue happens only occasionally):

Returned data for the ListNodes call in our KRaft-enabled cluster (redacted URLs and Account/Subnet IDs):

{
  "nodeInfoList": [
    {
      "nodeType": "CONTROLLER",
      "controllerNodeInfo": {
        "endpoints": [
          "c-10002.[redacted].kafka.us-east-1.amazonaws.com"
        ]
      }
    },
    {
      "nodeType": "CONTROLLER",
      "controllerNodeInfo": {
        "endpoints": [
          "c-10003.[redacted].kafka.us-east-1.amazonaws.com"
        ]
      }
    },
    {
      "nodeType": "CONTROLLER",
      "controllerNodeInfo": {
        "endpoints": [
          "c-10001.[redacted].kafka.us-east-1.amazonaws.com"
        ]
      }
    },
    {
      "nodeType": "BROKER",
      "nodeARN": "[redacted]",
      "instanceType": "m7g.large",
      "addedToClusterTime": "2024-06-04T14:03:11.193Z",
      "brokerNodeInfo": {
        "brokerId": 3,
        "clientVpcIpAddress": "[redacted]",
        "endpoints": [
          "b-3.[redacted].kafka.us-east-1.amazonaws.com"
        ],
        "clientSubnet": "[redacted]",
        "currentBrokerSoftwareInfo": {
          "kafkaVersion": "3.7.x.kraft"
        },
        "attachedENIId": "[redacted]"
      }
    },
    {
      "nodeType": "BROKER",
      "nodeARN": "[redacted]",
      "instanceType": "m7g.large",
      "addedToClusterTime": "2024-06-04T14:03:11.166Z",
      "brokerNodeInfo": {
        "brokerId": 2,
        "clientVpcIpAddress": "[redacted]",
        "endpoints": [
          "b-2.[redacted].kafka.us-east-1.amazonaws.com"
        ],
        "clientSubnet": "[redacted]",
        "currentBrokerSoftwareInfo": {
          "kafkaVersion": "3.7.x.kraft"
        },
        "attachedENIId": "[redacted]"
      }
    },
    {
      "nodeType": "BROKER",
      "nodeARN": "[redacted]",
      "instanceType": "m7g.large",
      "addedToClusterTime": "2024-06-04T14:03:11.139Z",
      "brokerNodeInfo": {
        "brokerId": 1,
        "clientVpcIpAddress": "[redacted]",
        "endpoints": [
          "b-1.[redacted].kafka.us-east-1.amazonaws.com"
        ],
        "clientSubnet": "[redacted]",
        "currentBrokerSoftwareInfo": {
          "kafkaVersion": "3.7.x.kraft"
        },
        "attachedENIId": "[redacted]"
      }
    }
  ]
}
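
For anyone trying to reproduce this, a minimal boto3 sketch along these lines should return the same data (the cluster ARN is a placeholder; note that boto3 maps the camelCase fields above to PascalCase keys such as BrokerNodeInfo, which is what the check reads):

    import boto3

    client = boto3.client('kafka', region_name='us-east-1')
    paginator = client.get_paginator('list_nodes')

    # Placeholder ARN; substitute the KRaft-enabled cluster's ARN.
    cluster_arn = 'arn:aws:kafka:us-east-1:111122223333:cluster/example/...'

    for page in paginator.paginate(ClusterArn=cluster_arn):
        for node in page['NodeInfoList']:
            # CONTROLLER entries carry ControllerNodeInfo but no BrokerNodeInfo,
            # which is what the check trips over.
            print(node['NodeType'], 'BrokerNodeInfo' in node)
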
@jcarvalho
Author

I can confirm that patching the amazon_msk.py file to add the following check at this line makes the integration work correctly again:

            if 'BrokerNodeInfo' not in node_info:
                continue
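
For context, roughly where that guard sits (the loop shape here is an assumption based on the traceback, not the actual source):

    for node_info in node_info_list:
        # KRaft controller entries have no BrokerNodeInfo, so skip them
        # instead of raising a KeyError.
        if 'BrokerNodeInfo' not in node_info:
            continue
        broker_info = node_info['BrokerNodeInfo']
        # ... existing scraping logic continues unchanged ...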

It would be great to get this upstreamed so the local patch can be removed 🙂

@notwedtm

notwedtm commented Aug 8, 2024

Would love to see this get some progress!

@Noojuno

Noojuno commented Aug 9, 2024

+1 on this issue, would really love to see this addressed!
