
Kafka development & monitoring with Kpow

This stack provides a complete Apache Kafka development and monitoring environment, built using Confluent Platform components and enhanced with Kpow for enterprise-grade observability and management. It includes a 3-node, ZooKeeper-less Kafka cluster running in KRaft mode, Schema Registry, Kafka Connect, and Kpow itself.

📌 Description

This architecture is designed for developers and operations teams who need a robust, local Kafka ecosystem for building, testing, and managing Kafka-based applications. By leveraging Kafka's KRaft mode, it offers a simplified, more efficient, and modern setup without the dependency on ZooKeeper. It features high availability (3 brokers), schema management for data governance, a data integration framework (Kafka Connect), and a powerful UI (Kpow) for monitoring cluster health, inspecting data, managing topics, and troubleshooting.

It's ideal for scenarios involving event-driven architectures, microservices communication, data integration pipelines, and situations requiring deep visibility into Kafka internals.


🔑 Key Components

🚀 Kpow (Kafka Management & Monitoring Toolkit)

  • Container: kpow (image: factorhouse/kpow:latest)
  • An engineering toolkit providing a rich web UI for comprehensive Kafka monitoring, management, and data inspection. Kpow gathers Kafka resource information, stores it locally in internal topics, and delivers custom telemetry and insights. Features include:
    • Comprehensive Real-time Kafka Resource Monitoring: Instant visibility of brokers, topics, consumer groups, partitions, offsets, and more. Kpow gathers data every minute, offers a "Live mode" for real-time updates, and requires no JMX access.
    • Advanced Consumer and Streams Monitoring: Visualize message throughput and lag for consumers with multi-dimensional insights.
    • Deep Data Inspection with kJQ: A powerful JQ-like query language to search messages at high speed, supporting JSON, Avro, Protobuf, and more.
    • Schema Registry & Kafka Connect Integration: Full support for controlling and monitoring Schema Registries and Kafka Connect clusters.
    • Enterprise-Grade Security & Governance: Supports authentication (LDAP, SAML, OpenID), role-based access control (RBAC), data masking policies, and a full audit log.
    • Key Integrations: Includes Slack integration, Prometheus endpoints, and multi-cluster management capabilities.
  • Exposes UI at http://localhost:3000
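
Once the stack is up, a quick way to confirm Kpow is reachable is a plain HTTP check against that UI port. The sketch below uses Python with the requests library; the five-second timeout is an arbitrary choice.

```python
import requests

# Liveness check: confirm the Kpow UI answers on its default port.
# The URL is this stack's documented endpoint; the timeout is arbitrary.
resp = requests.get("http://localhost:3000", timeout=5)
print(f"Kpow UI responded with HTTP {resp.status_code}")
```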

🧠 KRaft-based Kafka Cluster (3 Brokers)

  • Kafka Brokers (confluentinc/cp-kafka:8.1.1 x3): Form the core message bus, operating in KRaft (Kafka Raft) mode.
    • ZooKeeper-less Architecture: This setup removes the need for a separate ZooKeeper ensemble. The brokers themselves handle metadata management and controller election through an internal Raft-based consensus mechanism.
    • Configured with distinct internal (1909x) and external (909x, 2909x) listeners for flexible Docker and Minikube networking.
    • Provides high availability and fault tolerance with a 3-node quorum.
    • Accessible externally via ports 9092, 9093, 9094.
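
A minimal end-to-end connectivity sketch against those external listeners, using the confluent-kafka Python client. The topic and group names are illustrative, and it assumes the topic either already exists or can be auto-created:

```python
from confluent_kafka import Consumer, Producer

# Any single broker is enough to bootstrap, but listing all three
# external listeners tolerates one broker being down.
BOOTSTRAP = "localhost:9092,localhost:9093,localhost:9094"

# Produce one test record ("smoke-test" is an example topic name).
producer = Producer({"bootstrap.servers": BOOTSTRAP})
producer.produce("smoke-test", key="k1", value="hello from the host")
producer.flush(10)

# Read it back with a throwaway consumer group.
consumer = Consumer({
    "bootstrap.servers": BOOTSTRAP,
    "group.id": "smoke-test-group",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["smoke-test"])
msg = consumer.poll(10.0)
if msg is not None and msg.error() is None:
    print("Read back:", msg.value().decode())
consumer.close()
```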

📜 Schema Registry (confluentinc/cp-schema-registry:8.1.1)

  • Manages schemas (Avro, Protobuf, JSON Schema) for Kafka topics, ensuring data consistency and enabling schema evolution.
  • Accessible at http://localhost:8081.
  • Configured with Basic Authentication.
  • Stores its schemas within the Kafka cluster itself (in an internal Kafka topic, _schemas by default).
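
Because the registry sits behind Basic Auth, REST calls need credentials. The sketch below registers and lists a schema over the standard Schema Registry REST API; the admin/admin credentials and the users-value subject are placeholders, since the real values depend on this stack's configuration.

```python
import json
import requests

SR_URL = "http://localhost:8081"
AUTH = ("admin", "admin")  # placeholder: use the stack's configured credentials

# Register a minimal Avro schema under an example subject.
schema = {"type": "record", "name": "User",
          "fields": [{"name": "id", "type": "long"}]}
resp = requests.post(
    f"{SR_URL}/subjects/users-value/versions",
    auth=AUTH,
    headers={"Content-Type": "application/vnd.schemaregistry.v1+json"},
    json={"schema": json.dumps(schema)},
)
print("Registered schema id:", resp.json().get("id"))

# List all subjects currently known to the registry.
print("Subjects:", requests.get(f"{SR_URL}/subjects", auth=AUTH).json())
```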

🔌 Kafka Connect (confluentinc/cp-kafka-connect:8.1.1)

  • Framework for reliably streaming data between Apache Kafka and other systems.
  • Accessible via REST API at http://localhost:8083.
  • Configured to use JSON converters by default.
  • Custom Connector Support: Volume mounts allow adding third-party connectors (e.g., JDBC, S3, Iceberg).
  • OpenLineage Integration: Configured to send lineage metadata to a Marquez API, enabling data lineage tracking.
  • Manages its configuration, offsets, and status in dedicated Kafka topics.
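
Connectors are created and inspected through that REST API. A minimal sketch, assuming the connector class is available on the worker's plugin path; the file-source connector, name, topic, and file path here are only illustrations, so swap in whatever connector you have mounted:

```python
import requests

CONNECT_URL = "http://localhost:8083"

# Name, topic, and file path are all illustrative values.
connector = {
    "name": "demo-file-source",
    "config": {
        "connector.class": "org.apache.kafka.connect.file.FileStreamSourceConnector",
        "topic": "demo-lines",
        "file": "/tmp/demo.txt",
        "tasks.max": "1",
    },
}
resp = requests.post(f"{CONNECT_URL}/connectors", json=connector)
print("Create:", resp.status_code)

# Poll connector and task state through the same API.
status = requests.get(f"{CONNECT_URL}/connectors/demo-file-source/status").json()
print("State:", status["connector"]["state"])
```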

🧰 Use Cases

Local Kafka Development & Testing

  • Build and test Kafka producers, consumers, and Kafka Streams applications against a realistic, multi-broker KRaft cluster.
  • Validate application behavior during broker failures (by stopping/starting broker containers).
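
One way to run such a failure drill is to keep a producer publishing with delivery callbacks while you stop a broker container, then watch whether deliveries keep succeeding. A sketch with the confluent-kafka client; the topic name is illustrative and the test assumes the topic was created with replication factor 3:

```python
import time
from confluent_kafka import Producer

producer = Producer({
    "bootstrap.servers": "localhost:9092,localhost:9093,localhost:9094",
    "acks": "all",  # require acknowledgement from the in-sync replicas
})

def on_delivery(err, msg):
    # Each callback reports whether one record survived the broker outage.
    print("FAILED:" if err else "delivered to partition", err or msg.partition())

# Publish one record per second; stop/start a broker container mid-run.
for i in range(60):
    producer.produce("failover-test", value=f"msg-{i}", on_delivery=on_delivery)
    producer.poll(0)  # serve pending delivery callbacks
    time.sleep(1)
producer.flush(30)
```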

Data Integration Pipelines

  • Utilize Kafka Connect to ingest data into Kafka from various sources or stream data out to data lakes, warehouses, and other systems.
  • Test and manage connector configurations via the Connect REST API or Kpow UI.

Schema Management & Evolution

  • Define, register, and evolve schemas using Schema Registry to enforce data contracts and prevent breaking changes in data pipelines.
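
The registry can vet an evolved schema before anything is deployed. The sketch below asks whether adding an optional field with a default remains compatible with the latest registered version; the credentials and subject are the same placeholders as in the earlier example.

```python
import json
import requests

SR_URL = "http://localhost:8081"
AUTH = ("admin", "admin")  # placeholder credentials, as above

# Evolved schema: one new optional field with a default value.
evolved = {
    "type": "record", "name": "User",
    "fields": [
        {"name": "id", "type": "long"},
        {"name": "email", "type": ["null", "string"], "default": None},
    ],
}
resp = requests.post(
    f"{SR_URL}/compatibility/subjects/users-value/versions/latest",
    auth=AUTH,
    headers={"Content-Type": "application/vnd.schemaregistry.v1+json"},
    json={"schema": json.dumps(evolved)},
)
print("Compatible:", resp.json().get("is_compatible"))
```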

Real-Time Monitoring & Operations Simulation

  • Use Kpow to monitor cluster health, track topic/partition metrics, identify consumer lag, and inspect messages in real time.
  • Understand Kafka performance characteristics and troubleshoot issues within a controlled environment.
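
Kpow surfaces consumer lag in its UI, but the same number can be derived directly: lag is the gap between a group's committed offset and the partition's log-end offset. A sketch with the confluent-kafka client; the group and topic names match the earlier connectivity example, and the three-partition count is an assumption to adjust:

```python
from confluent_kafka import Consumer, TopicPartition

consumer = Consumer({
    "bootstrap.servers": "localhost:9092,localhost:9093,localhost:9094",
    "group.id": "smoke-test-group",  # the group whose lag we inspect
})

# Assumes the topic has three partitions; adjust to the real partition count.
partitions = [TopicPartition("smoke-test", p) for p in range(3)]
for tp, committed in zip(partitions, consumer.committed(partitions, timeout=10)):
    _, log_end = consumer.get_watermark_offsets(tp, timeout=10)
    offset = committed.offset if committed.offset >= 0 else 0  # no commit yet
    print(f"partition {tp.partition}: lag = {log_end - offset}")
consumer.close()
```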

Learning & Exploring Modern Kafka

  • Provides a self-contained environment to learn modern Kafka concepts, experiment with a ZooKeeper-less configuration, and explore the capabilities of the Confluent Platform and Kpow.