
Add Kafka Credentials to ProviderConfig Secret #19 (Merged)

3 changes: 2 additions & 1 deletion .gitignore
@@ -10,4 +10,5 @@ cover.out

 # ignore IDE folders
 .vscode/
-.idea/
+.idea/
+kubeconfig
19 changes: 17 additions & 2 deletions Makefile
@@ -183,13 +183,28 @@ CROSSPLANE_NAMESPACE = upbound-system
 # - UPTEST_DATASOURCE_PATH (optional), see https://github.com/upbound/uptest#injecting-dynamic-values-and-datasource
 uptest: $(UPTEST) $(KUBECTL) $(KUTTL)
 	@$(INFO) running automated tests
-	@KUBECTL=$(KUBECTL) KUTTL=$(KUTTL) $(UPTEST) e2e "${UPTEST_EXAMPLE_LIST}" --data-source="${UPTEST_DATASOURCE_PATH}" --setup-script=cluster/test/setup.sh --default-conditions="Test" || $(FAIL)
+	@if [[ -n "$${UPTEST_CONFLUENT_KAFKA_CLUSTER_ID:-}" && -n "$${UPTEST_CONFLUENT_PRINCIPAL:-}" ]]; then \
+		{ \
+			echo "confluent_kafka_cluster_id: $${UPTEST_CONFLUENT_KAFKA_CLUSTER_ID}"; \
+			echo "confluent_principal: $${UPTEST_CONFLUENT_PRINCIPAL}"; \
+		} > "$(OUTPUT_DIR)/datasource.yaml"; \
+		if [[ -n "$${UPTEST_DATASOURCE_PATH:-}" ]]; then \
+			echo "" >> "$(OUTPUT_DIR)/datasource.yaml"; \
+			cat "$${UPTEST_DATASOURCE_PATH}" >> "$(OUTPUT_DIR)/datasource.yaml"; \
+		fi; \
+		export UPTEST_DATASOURCE_PATH="$(OUTPUT_DIR)/datasource.yaml"; \
+	fi; \
+	KUBECTL=$(KUBECTL) KUTTL=$(KUTTL) CROSSPLANE_NAMESPACE=$(CROSSPLANE_NAMESPACE) \
+	$(UPTEST) e2e "$${UPTEST_EXAMPLE_LIST}" \
+		--data-source="$${UPTEST_DATASOURCE_PATH}" \
+		--setup-script=cluster/test/setup.sh \
+		--default-conditions="Test" || $(FAIL)
 	@$(OK) running automated tests

 local-deploy: build controlplane.up local.xpkg.deploy.provider.$(PROJECT_NAME)
 	@$(INFO) running locally built provider
 	@$(KUBECTL) wait provider.pkg $(PROJECT_NAME) --for condition=Healthy --timeout 5m
-	@$(KUBECTL) -n upbound-system wait --for=condition=Available deployment --all --timeout=5m
+	@$(KUBECTL) -n $(CROSSPLANE_NAMESPACE) wait --for=condition=Available deployment --all --timeout=5m
 	@$(OK) running locally built provider

 e2e: local-deploy uptest
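With this change, the uptest target generates datasource.yaml itself whenever both Confluent variables are set, and appends any user-supplied UPTEST_DATASOURCE_PATH file to it, so both sources of values stay available to uptest. A minimal invocation sketch, with hypothetical values for the cluster ID and principal:

# Hypothetical values; substitute a real Confluent Cloud Kafka cluster ID
# and service-account principal from your own account.
export UPTEST_CONFLUENT_KAFKA_CLUSTER_ID="lkc-abc123"
export UPTEST_CONFLUENT_PRINCIPAL="User:sa-xyz987"
make e2e    # runs local-deploy, then uptest with the generated datasource.yaml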
3 changes: 2 additions & 1 deletion cluster/test/setup.sh
@@ -13,7 +13,7 @@ ${KUBECTL} -n upbound-system wait --for=condition=Available deployment --all --t

 echo "Creating a default provider config..."
 cat <<EOF | ${KUBECTL} apply -f -
-apiVersion: confluent.upbound.io/v1beta1
+apiVersion: confluent.crossplane.io/v1beta1
 kind: ProviderConfig
 metadata:
   name: default
@@ -24,3 +24,4 @@ spec:
       name: provider-secret
       namespace: upbound-system
       key: credentials
+EOF
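The ProviderConfig above points at a provider-secret in upbound-system. A sketch for creating that secret by hand with the new credential keys (placeholder values, mirroring the template shown later in this PR):

# Placeholder credentials; the key names match examples/providerconfig/secret.yaml.tmpl.
cat > /tmp/credentials.json <<'JSON'
{
  "cloud_api_key": "admin",
  "cloud_api_secret": "t0ps3cr3t11",
  "kafka_api_key": "kafka_admin",
  "kafka_api_secret": "P@55w0rd",
  "kafka_rest_endpoint": "https://abc-12345z.region.provider.confluent.cloud:443"
}
JSON
kubectl -n upbound-system create secret generic provider-secret \
  --from-file=credentials=/tmp/credentials.json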
18 changes: 18 additions & 0 deletions examples/kafkaacl/kafkaacl.yaml
@@ -0,0 +1,18 @@
+apiVersion: confluent.crossplane.io/v1alpha1
+kind: KafkaACL
+metadata:
+  name: kafka-acl-confluent-cloud
+spec:
+  deletionPolicy: Delete
+  forProvider:
+    host: '*'
+    kafkaCluster:
+      - id: ${data.confluent_kafka_cluster_id}
+    operation: READ
+    patternType: PREFIXED
+    permission: ALLOW
+    principal: ${data.confluent_principal}
+    resourceName: test-
+    resourceType: TOPIC
+  providerConfigRef:
+    name: default
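uptest resolves the ${data.*} placeholders in this example from the datasource file assembled by the uptest target. A quick way to inspect what will be injected, assuming OUTPUT_DIR resolves to the same build output directory the Makefile uses:

# Inspect the generated datasource before a run; the two keys are the ones
# the uptest target writes (values are the hypothetical ones sketched above).
cat "${OUTPUT_DIR}/datasource.yaml"
# confluent_kafka_cluster_id: lkc-abc123
# confluent_principal: User:sa-xyz987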
7 changes: 5 additions & 2 deletions examples/providerconfig/secret.yaml.tmpl
@@ -7,6 +7,9 @@ type: Opaque
 stringData:
   credentials: |
     {
-      "username": "admin",
-      "password": "t0ps3cr3t11"
+      "cloud_api_key": "admin",
+      "cloud_api_secret": "t0ps3cr3t11",
+      "kafka_api_key": "kafka_admin",
+      "kafka_api_secret": "P@55w0rd",
+      "kafka_rest_endpoint": "https://abc-12345z.region.provider.confluent.cloud:443"
     }

Review discussion on this file:

@jaylevin (Collaborator) commented on Aug 7, 2024:

I noticed the Confluent Terraform docs also have a kafka_id field in the provider auth block for the single-cluster management option. Do you know anything about that and whether it is required?

ref: https://registry.terraform.io/providers/confluentinc/confluent/latest/docs#static-credentials

The PR author (Contributor) replied:

That's a good question. I noticed kafka_id in the Terraform docs as well, but the specific error I was seeing with KafkaACLs (and what @drneo-mehdi was seeing with KafkaTopics) only indicated that the key and secret were missing:

cannot run refresh: refresh failed: error reading Kafka Topic: one of (provider.kafka_api_key, provider.kafka_api_secret), (KAFKA_API_KEY, KAFKA_API_SECRET environment variables) or (resource.credentials.key, resource.credentials.secret) must be set

That said, I could try adding a kafka_id/kafkaID field to the ProviderConfig secret in case I'm missing something. Let me know if you think that's warranted here.

@jaylevin (Collaborator) replied:

I personally haven't run into any scenario where I needed the Kafka ID, but my team only uses the multi-cluster authentication method, which is why we never prioritized the other authentication method using kafka_rest_endpoint.

I'd say as long as it works we can get this merged and look into what role kafka_id plays at a later point 👍
14 changes: 10 additions & 4 deletions internal/clients/confluent.go
@@ -18,8 +18,11 @@ import (

 const (
 	// ProviderConfig secret keys
-	cloudAPIKey    = "cloud_api_key"
-	cloudAPISecret = "cloud_api_secret"
+	cloudAPIKey       = "cloud_api_key"
+	cloudAPISecret    = "cloud_api_secret"
+	kafkaAPIKey       = "kafka_api_key"
+	kafkaAPISecret    = "kafka_api_secret"
+	kafkaRESTEndpoint = "kafka_rest_endpoint"

 	// error messages
 	errNoProviderConfig = "no providerConfigRef provided"
@@ -69,8 +72,11 @@ func TerraformSetupBuilder(version, providerSource, providerVersion string, sche

 	// Set credentials in Terraform provider configuration.
 	ps.Configuration = map[string]any{
-		cloudAPIKey:    creds[cloudAPIKey],
-		cloudAPISecret: creds[cloudAPISecret],
+		cloudAPIKey:       creds[cloudAPIKey],
+		cloudAPISecret:    creds[cloudAPISecret],
+		kafkaAPIKey:       creds[kafkaAPIKey],
+		kafkaAPISecret:    creds[kafkaAPISecret],
+		kafkaRESTEndpoint: creds[kafkaRESTEndpoint],
 	}
 	return ps, nil
 }
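Since the provider now reads the kafka_* keys from the ProviderConfig secret, a quick sanity check that the applied secret actually carries them (assuming the provider-secret created in the earlier sketch):

# Decode the credentials payload of the provider secret and confirm the
# kafka_* fields are present before running the e2e tests.
kubectl -n upbound-system get secret provider-secret \
  -o jsonpath='{.data.credentials}' | base64 -d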