docs: update documentation regarding flow protobuf definitions

This commit is contained in:
Vincent Bernat
2023-01-25 08:13:39 +01:00
parent 6099691d05
commit 3684d06f90
3 changed files with 17 additions and 21 deletions


@@ -85,12 +85,11 @@ Once you are ready, you can run everything in the background with
## Serialized flow schemas
-Flows sent to Kafka are encoded with a versioned schema, described in
-the `flow-*.proto` files. For each version of the schema, a different
-Kafka topic is used. For example, the `flows-v2` topic receive
-serialized flows using the first version of the schema. The inlet
-service exports the schemas as well as the current version with its
-HTTP service, via the `/api/v0/inlet/schemas.json` endpoint.
+Flows sent to Kafka are encoded with a versioned schema. When the schema
+changes, a different Kafka topic is used. For example, the
+`flows-ZUYGDTE3EBIXX352XPM3YEEFV4` topic receives serialized flows using a
+specific version of the schema. The inlet service exports the schema with its
+HTTP service, via the `/api/v0/inlet/flow.proto` endpoint.
## ClickHouse database schemas


@@ -143,9 +143,8 @@ filtering. It may also work with a LocRIB.
### Kafka
-Received flows are exported to a Kafka topic using the [protocol
-buffers format][]. The definition file is `flow/flow-*.proto`. Each
-flow is written in the [length-delimited format][].
+Received flows are exported to a Kafka topic using the [protocol buffers
+format][]. Each flow is written in the [length-delimited format][].
[protocol buffers format]: https://developers.google.com/protocol-buffers
[length-delimited format]: https://cwiki.apache.org/confluence/display/GEODE/Delimiting+Protobuf+Messages
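The length-delimited format referenced above frames each protobuf message with a base-128 varint length prefix followed by the message bytes. A minimal sketch of splitting such a byte stream (an illustration only, not code from this project):

```python
def read_varint(buf: bytes, pos: int) -> tuple[int, int]:
    # Decode a protobuf base-128 varint starting at pos;
    # return (value, position after the varint).
    result = shift = 0
    while True:
        b = buf[pos]
        pos += 1
        result |= (b & 0x7F) << shift
        if not (b & 0x80):
            return result, pos
        shift += 7

def split_messages(payload: bytes) -> list[bytes]:
    # Split a stream of length-delimited protobuf messages
    # into the individual (still serialized) message bodies.
    msgs, pos = [], 0
    while pos < len(payload):
        size, pos = read_varint(payload, pos)
        msgs.append(payload[pos:pos + size])
        pos += size
    return msgs
```

Each extracted body can then be parsed with the message class generated from the exported `.proto` schema.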
@@ -167,9 +166,7 @@ The following keys are accepted:
messages to Kafka. Increasing this value will improve performance,
at the cost of losing messages in case of problems.
-The topic name is suffixed by the version of the schema. For example,
-if the configured topic is `flows` and the current schema version is
-1, the topic used to send received flows will be `flows-v2`.
+The topic name is suffixed by a hash of the schema.
### Core
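The suffix in topic names like `flows-ZUYGDTE3EBIXX352XPM3YEEFV4` is described above as a hash of the schema. The exact derivation is internal to the inlet service; purely as a hypothetical illustration, a schema hash could be rendered as an unpadded base32 suffix like this:

```python
import base64
import hashlib

def topic_for_schema(base: str, proto_text: str) -> str:
    # Hypothetical sketch: derive a topic suffix from a hash of the
    # schema text. The real suffix is computed by the inlet service
    # itself and may use a different hash and input.
    digest = hashlib.sha256(proto_text.encode()).digest()[:16]
    suffix = base64.b32encode(digest).decode().rstrip("=")
    return f"{base}-{suffix}"
```

The point is only that the suffix changes whenever the schema changes, so producers and consumers on the same topic always agree on the message format.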


@@ -281,17 +281,17 @@ topic. However, the metadata can be read using
alive with:
```console
-$ kcat -b kafka:9092 -C -t flows-v2 -L
-Metadata for flows-v2 (from broker -1: kafka:9092/bootstrap):
+$ kcat -b kafka:9092 -C -t flows-ZUYGDTE3EBIXX352XPM3YEEFV4 -L
+Metadata for flows-ZUYGDTE3EBIXX352XPM3YEEFV4 (from broker -1: kafka:9092/bootstrap):
1 brokers:
broker 1001 at eb6c7781b875:9092 (controller)
1 topics:
-  topic "flows-v2" with 4 partitions:
+  topic "flows-ZUYGDTE3EBIXX352XPM3YEEFV4" with 4 partitions:
partition 0, leader 1001, replicas: 1001, isrs: 1001
partition 1, leader 1001, replicas: 1001, isrs: 1001
partition 2, leader 1001, replicas: 1001, isrs: 1001
partition 3, leader 1001, replicas: 1001, isrs: 1001
-$ kcat -b kafka:9092 -C -t flows-v2 -f 'Topic %t [%p] at offset %o: key %k: %T\n' -o -1
+$ kcat -b kafka:9092 -C -t flows-ZUYGDTE3EBIXX352XPM3YEEFV4 -f 'Topic %t [%p] at offset %o: key %k: %T\n' -o -1
```
Alternatively, when using `docker-compose`, there is a Kafka UI
@@ -299,7 +299,7 @@ running at `http://127.0.0.1:8080/kafka-ui/`. You can do the following
checks:
- are the brokers alive?
-- is the `flows-v2` topic present and receiving messages?
+- is the `flows-ZUYGDTE3EBIXX352XPM3YEEFV4` topic present and receiving messages?
- is ClickHouse registered as a consumer?
## ClickHouse
@@ -334,10 +334,10 @@ from Kafka's point of view:
$ kafka-consumer-groups.sh --bootstrap-server kafka:9092 --describe --group clickhouse
GROUP TOPIC PARTITION CURRENT-OFFSET LOG-END-OFFSET LAG CONSUMER-ID HOST CLIENT-ID
-clickhouse flows-v2 0 5650351527 5650374314 22787 ClickHouse-ee97b7e7e5e0-default-flows_3_raw-0-77740d0a-79b7-4bef-a501-25a819c3cee4 /240.0.4.8 ClickHouse-ee97b7e7e5e0-default-flows_3_raw-0
-clickhouse flows-v2 3 3035602619 3035628290 25671 ClickHouse-ee97b7e7e5e0-default-flows_3_raw-3-1e4629b0-69a3-48dd-899a-20f4b16be0a2 /240.0.4.8 ClickHouse-ee97b7e7e5e0-default-flows_3_raw-3
-clickhouse flows-v2 2 1645914467 1645930257 15790 ClickHouse-ee97b7e7e5e0-default-flows_3_raw-2-79c9bafe-fd36-42fe-921f-a802d46db684 /240.0.4.8 ClickHouse-ee97b7e7e5e0-default-flows_3_raw-2
-clickhouse flows-v2 1 889117276 889129896 12620 ClickHouse-ee97b7e7e5e0-default-flows_3_raw-1-f0421bbe-ba13-49df-998f-83e49045be00 /240.0.4.8 ClickHouse-ee97b7e7e5e0-default-flows_3_raw-1
+clickhouse flows-ZUYG… 0 5650351527 5650374314 22787 ClickHouse-ee97b7e7e5e0-default-flows_3_raw-0-77740d0a-79b7-4bef-a501-25a819c3cee4 /240.0.4.8 ClickHouse-ee97b7e7e5e0-default-flows_3_raw-0
+clickhouse flows-ZUYG… 3 3035602619 3035628290 25671 ClickHouse-ee97b7e7e5e0-default-flows_3_raw-3-1e4629b0-69a3-48dd-899a-20f4b16be0a2 /240.0.4.8 ClickHouse-ee97b7e7e5e0-default-flows_3_raw-3
+clickhouse flows-ZUYG… 2 1645914467 1645930257 15790 ClickHouse-ee97b7e7e5e0-default-flows_3_raw-2-79c9bafe-fd36-42fe-921f-a802d46db684 /240.0.4.8 ClickHouse-ee97b7e7e5e0-default-flows_3_raw-2
+clickhouse flows-ZUYG… 1 889117276 889129896 12620 ClickHouse-ee97b7e7e5e0-default-flows_3_raw-1-f0421bbe-ba13-49df-998f-83e49045be00 /240.0.4.8 ClickHouse-ee97b7e7e5e0-default-flows_3_raw-1
```
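The `LAG` column in the output above is simply the difference between the partition's log end offset and the consumer group's committed offset; for instance, partition 0 shows 5650374314 - 5650351527 = 22787. A trivial helper capturing that relationship:

```python
def consumer_lag(current_offset: int, log_end_offset: int) -> int:
    # Lag = messages produced to the partition but not yet
    # consumed (committed) by the consumer group.
    return log_end_offset - current_offset
```

A steadily growing lag across partitions suggests ClickHouse is not keeping up with the flow rate.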
Errors related to Kafka ingestion are kept in the `flows_3_raw_errors`