# Middleware 31525 docs: add kafka solution articles ported from internal Confluence #760

Open · yuhaosdl wants to merge 4 commits into `main` from `MIDDLEWARE-31525`
`.../en/solutions/ecosystem/kafka/How_to_Access_Kafka_with_Username_and_Password.md` (218 additions, 0 deletions)
---
products:
- Alauda Application Services
kind:
- Solution
ProductsVersion:
- 3.x
---

# Access a Kafka Cluster with Username and Password

:::info Applicable Versions
ACP 3.x Kafka instances created from the management view.
:::

## Introduction

This guide shows how to create a Kafka cluster with SCRAM-SHA-512 authentication, create a topic and user, retrieve the generated password, and test producer and consumer access with the Kafka command-line tools.
## 1. Create a Kafka Cluster

Enable SCRAM-SHA-512 authentication on the listener used by clients and enable simple authorization:

```yaml
apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: demo
  namespace: demo-dba
spec:
  kafka:
    version: 2.5.0
    replicas: 3
    listeners:
      plain:
        authentication:
          type: scram-sha-512
      external:
        type: nodeport
        tls: true
        authentication:
          type: scram-sha-512
      tls:
        authentication:
          type: tls
    authorization:
      type: simple
    config:
      log.message.format.version: "2.5"
      offsets.topic.replication.factor: 3
      transaction.state.log.min.isr: 2
      transaction.state.log.replication.factor: 3
    storage:
      type: persistent-claim
      size: 10Gi
      class: topolvm
  zookeeper:
    replicas: 3
    storage:
      type: persistent-claim
      size: 10Gi
      class: topolvm
```
## 2. Create a Topic

```yaml
apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaTopic
metadata:
  name: demo-topic
  namespace: demo-dba
  labels:
    strimzi.io/cluster: demo
spec:
  topicName: demo-topic
  partitions: 10
  replicas: 3
  config:
    retention.ms: 604800000
    segment.bytes: 1073741824
```
## 3. Create a User

```yaml
apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaUser
metadata:
  name: demo-user
  namespace: demo-dba
  labels:
    strimzi.io/cluster: demo
spec:
  authentication:
    type: scram-sha-512
  authorization:
    type: simple
    acls:
      - host: "*"
        operation: Read
        resource:
          type: topic
          name: demo-topic
          patternType: literal
      - host: "*"
        operation: Write
        resource:
          type: topic
          name: demo-topic
          patternType: literal
      - host: "*"
        operation: Describe
        resource:
          type: topic
          name: demo-topic
          patternType: literal
      - host: "*"
        operation: Create
        resource:
          type: topic
          name: demo-topic
          patternType: literal
      - host: "*"
        operation: Read
        resource:
          type: group
          name: demo-group
          patternType: literal
```
## 4. Get the Bootstrap Service

Internal access:

```bash
kubectl -n demo-dba get svc demo-kafka-bootstrap
```

External access:

```bash
kubectl -n demo-dba get svc demo-kafka-external-bootstrap
```

## 5. Retrieve the Generated Password

```bash
kubectl -n demo-dba get secret demo-user -o jsonpath='{.data.password}' | base64 -d
```
## 6. Create the Client Properties File

For the internal plain listener with SCRAM-SHA-512:

```properties
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="demo-user" password="<password>";
security.protocol=SASL_PLAINTEXT
sasl.mechanism=SCRAM-SHA-512
```

Save it as `client.properties`.
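The file can also be rendered in one step. A minimal sketch, assuming the password from step 5 has been exported as `KAFKA_PASSWORD` (the `changeit` default below is an illustration placeholder, not a real credential):

```shell
# Hedged sketch: render client.properties from an environment variable.
# Populate KAFKA_PASSWORD from the step-5 kubectl command in real use.
KAFKA_PASSWORD="${KAFKA_PASSWORD:-changeit}"

cat > client.properties <<EOF
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="demo-user" password="${KAFKA_PASSWORD}";
security.protocol=SASL_PLAINTEXT
sasl.mechanism=SCRAM-SHA-512
EOF
```

This avoids pasting the password by hand and keeps the three required properties in one place.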
## 7. Run Test Pods

Use the same Kafka image as the broker when possible:

```bash
kubectl -n demo-dba get pod demo-kafka-0 -o yaml | grep 'image:'

kubectl -n demo-dba run kafka-test0 -it \
  --image=<kafka-image> \
  --rm=true \
  --restart=Never \
  -- bash

kubectl -n demo-dba run kafka-test1 -it \
  --image=<kafka-image> \
  --rm=true \
  --restart=Never \
  -- bash
```

Copy the properties file into both pods:

```bash
kubectl -n demo-dba cp ./client.properties kafka-test0:/home/kafka/client.properties
kubectl -n demo-dba cp ./client.properties kafka-test1:/home/kafka/client.properties
```
## 8. Produce and Consume Messages

Producer:

```bash
/opt/kafka/bin/kafka-console-producer.sh \
  --bootstrap-server demo-kafka-bootstrap:9092 \
  --topic demo-topic \
  --producer.config /home/kafka/client.properties
```

Consumer:

```bash
/opt/kafka/bin/kafka-console-consumer.sh \
  --bootstrap-server demo-kafka-bootstrap:9092 \
  --topic demo-topic \
  --consumer.config /home/kafka/client.properties \
  --from-beginning \
  --group demo-group
```

## Important Considerations

- The user secret is generated by the Strimzi user operator after the `KafkaUser` becomes ready.
- Use `SASL_PLAINTEXT` only for non-TLS listeners. For TLS listeners, configure truststore settings and use `SASL_SSL`.
- Grant both topic ACLs and group ACLs for consumers.
- External access requires the broker endpoints returned in Kafka metadata to be reachable from the client network.
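For the TLS-listener case noted above, the client properties might look like the following sketch. This is an illustrative assumption, not part of the original guide: the truststore path and password are placeholders, and the truststore would typically be built from the cluster CA certificate published in the `demo-cluster-ca-cert` secret.

```shell
# Hedged sketch: SASL_SSL client properties for a TLS listener.
# /home/kafka/truststore.jks and <truststore-password> are placeholders;
# build the truststore from the cluster CA certificate before use.
cat > client-ssl.properties <<'EOF'
security.protocol=SASL_SSL
sasl.mechanism=SCRAM-SHA-512
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="demo-user" password="<password>";
ssl.truststore.location=/home/kafka/truststore.jks
ssl.truststore.password=<truststore-password>
EOF
```

Only `security.protocol` and the truststore settings differ from the plain-listener file; the SCRAM credentials stay the same.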
`...solutions/ecosystem/kafka/How_to_Import_Kafka_Resources_From_Management_View.md` (136 additions, 0 deletions)
---
products:
- Alauda Application Services
kind:
- Solution
ProductsVersion:
- 3.x
---

# Import Kafka Resources Created From the Management View

:::info Applicable Versions
ACP 3.12.x.
:::

## Introduction

Older Kafka instances may have been created directly from the Strimzi management view. In ACP 3.12, the business view expects RDS-layer custom resources. Use the `rdskafka-sync` tool to generate RDS resources from existing Strimzi resources and import Kafka clusters, topics, and users into the business view.

The import updates the managed Kafka resources and can restart the Kafka instance. Run the check phase first and review the generated YAML before accepting the sync.

## Prerequisites

- Cluster administrator access to the target Kubernetes cluster.
- Existing Kafka resources created from the management view.
- Access to the `rdskafka-sync` image.
- A backup or rollback plan for the Kafka instance.

## Quick Upgrade Workflow

### 1. Check Import Readiness

For Docker-based environments:

```bash
docker run -it --rm \
  -v ~/.kube/config:/root/.kube/config \
  build-harbor.alauda.cn/middleware/rdskafka-sync:1.0 \
  ./bin/check.sh
```
For containerd-based environments (note that `ctr run` requires a container ID between the image and the command):

```bash
ctr run --rm \
  --mount type=bind,src=/root/.kube,dst=/root/.kube,options=rbind:rw \
  --net-host \
  build-harbor.alauda.cn/middleware/rdskafka-sync:1.0 \
  rdskafka-sync-check \
  sh ./bin/check.sh
```
> **Review comment (Contributor, on lines +45 to +50; also applies to lines 68-73):** Both containerd commands will fail without a container ID/name between the image and the command. Suggested fix: insert `rdskafka-sync-check` before `sh ./bin/check.sh` and `rdskafka-sync-run` before `sh ./bin/sync.sh`.
`Ready` means the resources can be imported. `Not Ready` means at least one resource failed validation; review the output and fix the reported cause before continuing.

### 2. Run the Import

For Docker:

```bash
docker run -it --rm \
  -v ~/.kube/config:/root/.kube/config \
  build-harbor.alauda.cn/middleware/rdskafka-sync:1.0 \
  ./bin/sync.sh
```
For containerd (again supplying the required container ID):

```bash
ctr run --rm \
  --mount type=bind,src=/root/.kube,dst=/root/.kube,options=rbind:rw \
  --net-host \
  build-harbor.alauda.cn/middleware/rdskafka-sync:1.0 \
  rdskafka-sync-run \
  sh ./bin/sync.sh
```
If the command completes without errors, the imported resource names are printed. Contact operations if any resource fails to import.

## Using the CLI Directly

### Check Resources

```bash
./rdskafka-sync check cluster
./rdskafka-sync check cluster -n <namespace>
./rdskafka-sync check topic -n <namespace>
./rdskafka-sync check user -n <namespace>
```

The check output includes these fields:

| Field | Meaning |
| --- | --- |
| `NAMESPACE` | Namespace of the resource. |
| `RDSNAME` | RDS resource name. Empty means only the management-view resource exists and needs import. |
| `CLUSTERNAME` | Management-view Kafka resource name. |
| `VALIDATE` | Whether the resource passed import validation. Only `true` can be imported. |
| `REASON` | Validation failure reason. Empty when validation succeeds. |

### Sync Resources

```bash
./rdskafka-sync sync cluster <name> -n <namespace>
./rdskafka-sync sync topic <name> -n <namespace>
./rdskafka-sync sync user <name> -n <namespace>
./rdskafka-sync sync cluster -n <namespace>
./rdskafka-sync sync topic -n <namespace>
./rdskafka-sync sync user -n <namespace>
```

Force sync skips confirmation and must be used carefully:

```bash
./rdskafka-sync sync cluster <name> -n <namespace> -f
```
## Validation Rules

The tool validates whether resources can be safely imported. Common validation failures include:

- The Kafka instance does not use PVC-based storage.
- The resource is being deleted.
- The resource is not in a ready state.
- Kafka topic or cluster config values use non-string values such as booleans or integers. The RDS operator expects string config values.

Imported topics always include the required RDS config keys. If the management-view topic did not define them, default values are added:

```properties
retention.ms=604800000
max.message.bytes=1048576
```
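Per the string-value rule above, topic config values should be quoted before import. A minimal sketch, reusing the config keys from this guide (the `KafkaTopic` fragment below is illustrative and not applied to any cluster):

```shell
# Hedged sketch: write a KafkaTopic config fragment with values quoted
# as strings, which the RDS operator expects.
cat > demo-topic-config.yaml <<'EOF'
spec:
  config:
    retention.ms: "604800000"       # was the integer 604800000
    max.message.bytes: "1048576"    # was the integer 1048576
EOF
```

Quoting in the source YAML is what matters; the numeric value itself is unchanged.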
## Important Considerations

- Importing a Kafka cluster can restart the Kafka instance.
- Review the generated RDS YAML and the resulting Strimzi YAML before confirming the operation.
- Convert config values to strings before import.
- Run the check command immediately before sync so the validation output matches the current cluster state.
> **Review comment:** The hardcoded Kafka `version: 2.5.0` is outside the current support matrix for ACP 3.x, which requires Kafka 3.x or 4.x. Consider updating the first example to a currently supported version (e.g. `3.9.0`) and removing `log.message.format.version: "2.5"`, which is no longer recommended for supported Kafka versions in modern Strimzi deployments.
version: 3.9.0or4.2.0) and remove the deprecated config key.🤖 Prompt for AI Agents