Features
Data inspect
Kpow's Data inspect function allows you to quickly and easily search and filter records from multiple topics in a single query.
- Just enter multiple topics in the Topic selector.
- The interface will fetch and decode messages from all specified topics.
- You can combine this with kJQ filters to apply conditions across multiple streams.
All topics must be configured with compatible serdes (e.g., AVRO as the value deserializer) for consistent filtering.
Records can then be easily updated and re-produced back to Kafka topics via Kpow's Data produce function.
Usage
Input Type
Data inspect has three different input types that identify which topics to search:
- Topic: search records in one or more topics identified by topic name
- Topic Regex: search records in topics whose names match a regular expression
- Group: search records in topics currently being consumed by a consumer group
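For example, a Topic Regex input of payments-.* (an illustrative naming scheme) would match every topic whose name starts with payments-.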
Mode
Data inspect has four different modes when searching topics:
- Slice: search records from one or more topics starting from a point in time
- Bounded window: search records from one or more topics with a query start and end time
- Partition: search records within a single topic partition and from an (optional) offset
- Key: search records from one or more topics, starting from a point in time and matching an exact key
Slice, Partition, or Key queries
When using Slice, Partition, or Key mode, Kpow will search records from a starting point in time, continuing until all topic partitions are exhausted or 100 results have been returned.
You may choose to manually 'continue' a query, reading further records beyond the first 100 results, including records newly produced to the topic.
The 'From' field defines when the query starts:
- Recent: query the most recent 100 records, evenly distributed among all topic partitions
- Last minute: query records produced in the last minute
- Last 15 minutes: query records produced in the last 15 minutes
- Last hour: query records produced in the last hour
- Last 24 hours: query records produced in the last 24 hours
- From earliest: query from the earliest record in each partition
- From timestamp: query records produced since a specific timestamp
- From datetime: query records produced since a specific local datetime
Bounded window queries
When using Bounded window mode, Kpow will search records between a start and end time, continuing until all topic partitions are exhausted or 100 results have been returned.
You may choose to manually 'continue' a query, reading further records beyond the first 100 results, but once a query has reached the end of the window it is not possible to continue further.
The 'Window start' field defines when the bounded window starts:
- Earliest: query from the earliest record in each partition
- From timestamp: query records produced since a specific timestamp
- From datetime: query records produced since a specific local datetime
- From duration ago: query records produced within an ISO 8601 duration before now (e.g., P1DT2H30M means the most recent 1 day, 2 hours, and 30 minutes)
The 'Window end' field defines where the bounded window ends:
- Now: query until the current time
- To timestamp: query records produced until a specific timestamp
- To datetime: query records produced until a specific local datetime
- To duration from window start: query records produced until an ISO 8601 duration after the window start (e.g., PT15M means a 15-minute window)
Bounded Window Durations
Kpow accepts the ISO 8601 duration format for all duration inputs, for example:
- PT15M is fifteen minutes
- PT3H15M is three hours and fifteen minutes
- P2DT3H15M is two days, three hours, and fifteen minutes
- P2W is two weeks
- P1M is one month
- P3Y6M4DT12H30M5S is three years, six months, four days, twelve hours, thirty minutes, and five seconds
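If you build these duration strings programmatically, the grammar is straightforward to assemble. The following is a minimal Python sketch illustrating the format only (not Kpow code):

# Illustrative sketch: assemble ISO 8601 duration strings like those
# accepted by Kpow's duration inputs. Not Kpow's own implementation.
def iso8601_duration(days=0, hours=0, minutes=0, seconds=0):
    date_part = f"{days}D" if days else ""
    time_part = "".join(
        f"{n}{unit}" for n, unit in
        [(hours, "H"), (minutes, "M"), (seconds, "S")] if n)
    if not (date_part or time_part):
        return "PT0S"  # zero-length duration
    return "P" + date_part + ("T" + time_part if time_part else "")

assert iso8601_duration(minutes=15) == "PT15M"
assert iso8601_duration(hours=3, minutes=15) == "PT3H15M"
assert iso8601_duration(days=2, hours=3, minutes=15) == "P2DT3H15M"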
Serdes
By default, the TOPIC_INSPECT access policy is disabled. To view the contents of messages in the Data inspect UI, see the Configuration section of this document.
See the Serdes section for more information about using Data inspect serdes.
Filtering
Kpow offers very fast JQ-like filters for searching data in topics. These filters are compiled and executed on the server, allowing you to search tens of thousands of messages a second.
See the kJQ filters section for documentation on the query language.
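For example, a value filter along the lines of .customer.country == "AU" (an illustrative field name, not a Kpow built-in) would return only records whose decoded value satisfies the predicate.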
Headers
Select a Headers Deserializer in the Data inspect form to include Message Headers in your results.
Query Results
Results Toolbar
Query progress
Data inspect queries have a start and end cursor position. The start is defined by the window of the query, and the end position is the time at which the query was first executed. Once a query has been executed, the query metadata tracks its progress: how many records you have scanned, and how many remain. The green progress bar above the toolbar represents the total progress of the query. You can always click "Continue consuming" to keep progressing your cursor.
Data policies
If any Data policies apply to the query that was executed, the Show context section will show you which policies matched your query and the redactions applied.
Result Metadata Table
Clicking the "Show context" button in the results toolbar will expand the Result Metadata Table, which is a table of your queries cursors across all partitions.
Result Metadata Table Explanation
- Topic: the Kafka topic from which the data was queried
- Partition: the partition the row relates to
- Query start: the offset that Data inspect started scanning from for this partition, calculated from the query window
- Query end: the offset that Data inspect will scan up to, calculated from the query window
- Scanned Records: the number of records in this partition that have been read and tested against the query's key or value filters
- Remaining Offsets: the number of offsets in this partition that the query has not yet scanned
- Realized Records: the number of records returned after filters were applied
- Deserialization Errors: records that failed to deserialize due to schema or format issues; excluded from Realized Records
- Consumed: the percentage of overall records consumed for this partition
Display Options
Kpow's Display panel allows you to tailor how Kafka records are presented during inspection. This panel is accessible from the Results Toolbar by clicking the "Display" button.
Note: All Display Options are automatically persisted to localStorage.
This means your preferences (ordering, formatting, visibility, etc.) will remain consistent across browser sessions.
Ordering
You can choose how records are sorted:
- By Timestamp (ascending or descending)
- By Offset (ascending or descending)
Collapse Data Threshold
Use the Collapse data over input to set a maximum byte threshold for displaying record content. If either the key or value of a record exceeds this threshold, the content is collapsed and shown with a toggle.
Users can manually expand or contract the content using the toggle provided.
Record Formatting
Each record attribute can be customized in terms of its display style:
- Key: Pretty printed or Raw
- Value: Pretty printed or Raw
- Timestamp: UNIX timestamp, UTC datetime, or Local datetime
- Size: Pretty printed or Int (raw byte count)
Visible Fields
Toggle visibility for metadata fields like:
- Topic
- Partition
- Offset
- Headers
- Timestamp
- Age
- Key size
- Value size
- Record size
Click Reset to default to restore the original display preferences.
Downloading Records
You can download records using the Download button in the top-right of the Inspect UI.
Available formats include:
- CSV
- CSV (Flat)
- EDN
- JSON
- JSON (escaped)
Note: The current Display Options (e.g. raw vs pretty printed, visible fields) directly impact how records are presented in the download output.
For example:
- If Raw is selected for keys or values, the downloaded data will reflect the unformatted structure.
- If certain metadata fields (like headers or sizes) are toggled off, they may be excluded from the downloaded dataset.
To ensure the export format matches your expectations, review and adjust your Display settings prior to downloading.
CSV (Flat) Format
The CSV (Flat) format flattens the structure of Kafka records to include all nested fields inline with dot notation.
For example, given a Kafka message like:
{
"id": "abc-001",
"invoice": {
"date": "2024-10-04T00:00:00Z",
"amount": 123.45
}
}
The CSV (Flat) export will produce headers like:
value.id,value.invoice.date,value.invoice.amount
And corresponding rows:
abc-001,2024-10-04T00:00:00Z,123.45
This is useful for:
- Flattening complex record structures into tabular form
- Easily opening records in spreadsheet tools like Excel or Google Sheets
- Ensuring compatibility with downstream tools expecting flat CSV data
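To make the dot-notation behaviour concrete, here is a minimal Python sketch of the flattening idea (a conceptual illustration assuming JSON-decoded values, not Kpow's actual exporter):

# Conceptual sketch of dot-notation flattening, as in the CSV (Flat)
# example above. Illustrative only, not Kpow's exporter.
def flatten(record, prefix=""):
    flat = {}
    for key, value in record.items():
        path = f"{prefix}.{key}" if prefix else key
        if isinstance(value, dict):
            flat.update(flatten(value, path))  # recurse into nested maps
        else:
            flat[path] = value
    return flat

value = {"id": "abc-001",
         "invoice": {"date": "2024-10-04T00:00:00Z", "amount": 123.45}}
print(flatten(value, prefix="value"))
# {'value.id': 'abc-001', 'value.invoice.date': '2024-10-04T00:00:00Z',
#  'value.invoice.amount': 123.45}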
Tip: Use Display Options to toggle visible fields and formatting preferences before exporting. These impact the structure and content of the exported dataset.
Configuration
Engine
SAMPLER_CONSUMER_THREADS
- Kpow creates a connection pool of consumers when querying with Data inspect. This environment variable specifies the number of consumer threads globally available in the pool. Default: 6.
SAMPLER_TIMEOUT_MS
- a query will finish once 100 positively matched records have been found or after a timeout (default: 7s). You can always progress the query and continue scanning by clicking "Continue Consuming".
Increase the sampler timeout to run longer queries and the consumer threads to query more partitions in parallel.
The default configuration should be suitable for most installations.
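For example, setting SAMPLER_TIMEOUT_MS=30000 and SAMPLER_CONSUMER_THREADS=12 (illustrative values, not recommendations) would allow queries to run for up to 30 seconds and scan up to twelve partitions in parallel.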
Serdes
Custom serdes + serdes configuration
See the Serdes section for details on how to configure custom serdes, integrate a schema registry, and more for Data inspect.
TOPIC_INSPECT authorization
To enable inspection of the key/value/header contents of records, set the ALLOW_TOPIC_INSPECT environment variable to true. If you are using role-based access control, view our guide here.
Data policies/redaction
To configure Data policies (configurable redaction of Data inspect results), see the Data policies documentation.