
jq: error (at <stdin>:173): Cannot iterate over null (null) #67

Open
domvo opened this issue Feb 21, 2024 · 2 comments
domvo commented Feb 21, 2024

When running the checker, I get the following jq error:

-------------------------------------------------------------
Prerequisites for check-ecs-exec.sh v0.7
-------------------------------------------------------------
  jq      | OK (/opt/homebrew/bin/jq)
  AWS CLI | OK (/opt/homebrew/bin/aws)

-------------------------------------------------------------
Prerequisites for the AWS CLI to use ECS Exec
-------------------------------------------------------------
  AWS CLI Version        | OK (aws-cli/2.15.17 Python/3.11.7 Darwin/23.0.0 source/arm64 prompt/off)
  Session Manager Plugin | OK (1.2.553.0)

-------------------------------------------------------------
Checks on ECS task and other resources
-------------------------------------------------------------
Region : eu-central-1
Cluster: REDACTED
Task   : REDACTED
-------------------------------------------------------------
  Cluster Configuration  | Audit Logging Not Configured
  Can I ExecuteCommand?  | arn:aws:iam::xxxxxxxxxxxxx:user/[email protected]
     ecs:ExecuteCommand: allowed
     ssm:StartSession denied?: allowed
  Task Status            | RUNNING
  Launch Type            | Fargate
  Platform Version       | 1.4.0
  Exec Enabled for Task  | OK
  Container-Level Checks |
    ----------
      Managed Agent Status
    ----------
jq: error (at <stdin>:173): Cannot iterate over null (null)

I found out that not all containers have a managedAgents property. I was able to fix it by changing line 422 to

agentsStatus=$(echo "${describedTaskJson}" | jq -r ".tasks[0].containers[] | (.managedAgents // [])[].lastStatus // \"FallbackValue\"")

This is of course only a quick fix. The underlying issue is that we have AWS GuardDuty enabled. GuardDuty injects a container into each task, but those GuardDuty containers do not have a managedAgents property.

This is how the container comes back after describing it:

{
  "containerArn": "arn:aws:ecs:eu-central-1:xxxxxxxxx:container/xxx-cluster-xxx/xxxxx/9efcbebb-1204-4212-84fa-1471bcadbf8c",
  "taskArn": "arn:aws:ecs:eu-central-1:xxxxxxxxx:task/xxxx-cluster-xxx/xxx",
  "name": "aws-guardduty-agent-GAhgQ",
  "imageDigest": "sha256:9f8cd438fb66f62d09bfc641286439f7ed5177988a314a6021ef4ff880642e68",
  "runtimeId": "c9103216b805432497d68c0190237d44-4043820195",
  "lastStatus": "RUNNING",
  "networkBindings": [],
  "networkInterfaces": [
    {
      "attachmentId": "c88ab07b-c263-419c-ba64-adea5c51eb07",
      "privateIpv4Address": "10.10.4.210"
    }
  ],
  "healthStatus": "UNKNOWN"
},
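To make the failure and the fix concrete, here is a self-contained sketch with a trimmed, hypothetical describe-tasks payload (container and agent names invented; in the real script `${describedTaskJson}` holds the full `aws ecs describe-tasks` output):

```shell
# Hypothetical trimmed payload: the GuardDuty sidecar has no
# managedAgents key, while the app container does.
describedTaskJson='{"tasks":[{"containers":[
  {"name":"aws-guardduty-agent-GAhgQ","lastStatus":"RUNNING"},
  {"name":"app","managedAgents":[{"name":"ExecuteCommandAgent","lastStatus":"RUNNING"}]}
]}]}'

# The original expression hits the sidecar first: .managedAgents is null,
# and iterating null aborts jq with "Cannot iterate over null (null)".
echo "${describedTaskJson}" \
  | jq -r '.tasks[0].containers[].managedAgents[].lastStatus' 2>/dev/null \
  || echo "jq failed as described"

# With the // [] fallback, a missing key becomes an empty array, so the
# sidecar contributes nothing and iteration never touches null.
echo "${describedTaskJson}" \
  | jq -r '.tasks[0].containers[] | (.managedAgents // [])[].lastStatus'
# prints: RUNNING
```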

domvo commented Feb 21, 2024

I found that not all containers have a managedAgents property. I was able to fix it by changing line 422 to

  agentsStatus=$(echo "${describedTaskJson}" | jq -r ".tasks[0].containers[] | select(.managedAgents != null) | .managedAgents[].lastStatus")
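Both fixes avoid ever iterating a null `managedAgents`; this variant drops such containers with `select()` before iteration instead of substituting an empty array. A quick check against a trimmed, hypothetical payload (names invented):

```shell
# Hypothetical trimmed payload: GuardDuty sidecar without a
# managedAgents key, app container with one managed agent.
describedTaskJson='{"tasks":[{"containers":[
  {"name":"aws-guardduty-agent-GAhgQ","lastStatus":"RUNNING"},
  {"name":"app","managedAgents":[{"name":"ExecuteCommandAgent","lastStatus":"RUNNING"}]}
]}]}'

# select(.managedAgents != null) filters the sidecar out entirely, so
# .managedAgents[] only runs on containers that actually have the array.
echo "${describedTaskJson}" \
  | jq -r '.tasks[0].containers[] | select(.managedAgents != null) | .managedAgents[].lastStatus'
# prints: RUNNING
```

On data like this the two variants are interchangeable; `select()` filters out the offending containers, while `// []` replaces the missing array with an empty one.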

@gfcameron

I was seeing the same issue, thanks!
