This change splits the inlet component into a simpler inlet and a new
outlet component. The new inlet component receives flows and puts them
in Kafka, unparsed. The outlet component takes them from Kafka, resumes
the processing from there (flow parsing, enrichment), and puts them in
ClickHouse.
The main goal is to ensure the inlet does minimal work, so it does not
fall behind when processing packets (and restarts faster). It also
brings some simplification, as the number of knobs to tune everything is
reduced: for the inlet, we only need to tune the UDP queue size, the
number of workers, and a few Kafka parameters; for the outlet, we need
to tune a few Kafka parameters, the number of workers, and a few
ClickHouse parameters.
The outlet component features a simple Kafka input component. The core
component becomes just a callback function. There is also a new
ClickHouse component to push data to ClickHouse using the low-level
ch-go library with batch inserts.
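As an illustration of the last point, here is a minimal sketch of a batch insert with ch-go; the table and column names are assumptions, not the actual schema:

```go
package main

import (
	"context"
	"time"

	"github.com/ClickHouse/ch-go"
	"github.com/ClickHouse/ch-go/proto"
)

func main() {
	ctx := context.Background()
	client, err := ch.Dial(ctx, ch.Options{Address: "127.0.0.1:9000"})
	if err != nil {
		panic(err)
	}
	defer client.Close()

	// Column buffers accumulating the rows of the next batch request.
	var (
		timeReceived proto.ColDateTime
		byteCount    proto.ColUInt64
	)
	timeReceived.Append(time.Now())
	byteCount.Append(1500)

	// "flows" is a hypothetical table name; Into() generates the
	// INSERT statement from the column list.
	input := proto.Input{
		{Name: "TimeReceived", Data: &timeReceived},
		{Name: "Bytes", Data: &byteCount},
	}
	if err := client.Do(ctx, ch.Query{
		Body:  input.Into("flows"),
		Input: input,
	}); err != nil {
		panic(err)
	}
}
```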
This processing has an impact on the internal representation of a
FlowMessage. Previously, it was tailored to dynamically build the
protobuf message to be put in Kafka. Now, it builds the batch request to
be sent to ClickHouse. The FlowMessage structure therefore hides the
content of the next batch request and should be reused. This also
changes the way we decode flows: they don't output a FlowMessage
anymore, they reuse one that is provided to each worker.
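A rough sketch of this reuse pattern (field and method names are assumptions, not the actual structure):

```go
import (
	"time"

	"github.com/ClickHouse/ch-go/proto"
)

// FlowMessage exposes the fields of the flow being decoded and hides
// the column buffers of the next ClickHouse batch request.
type FlowMessage struct {
	TimeReceived uint32
	Bytes        uint64

	// Hidden column buffers, grown on each finalized flow.
	colTimeReceived proto.ColDateTime
	colBytes        proto.ColUInt64
}

// Finalize stages the current flow into the batch columns, then clears
// the per-flow fields so a worker can reuse the same structure for the
// next flow.
func (fm *FlowMessage) Finalize() {
	fm.colTimeReceived.Append(time.Unix(int64(fm.TimeReceived), 0))
	fm.colBytes.Append(fm.Bytes)
	fm.TimeReceived = 0
	fm.Bytes = 0
}
```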
The ClickHouse tables are slightly updated: instead of the Kafka engine,
the Null engine is used.
Fix #1122
Today, the timestamp can only come from the kernel timestamp put on the
UDP packet. I propose to add two alternative methods of getting the
timestamp for NetFlow/IPFIX packets:
- TimestampSourceNetflowPacket: use the timestamp field in the NetFlow packet itself
- TimestampSourceNetflowFirstSwitched: use the FirstSwitched field from each flow
(the field is actually in uptime, so we need to shift it according to sysUptime; see the sketch below)
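For the second method, the shift looks roughly like this (a sketch with assumed names, ignoring counter wraparound; FirstSwitched and sysUptime are both expressed in milliseconds of router uptime):

```go
import "time"

// flowTimestamp is a hypothetical helper: the flow started
// (sysUptime - FirstSwitched) milliseconds before the packet was
// exported, so shift the export time back by that amount.
func flowTimestamp(exportTime time.Time, sysUptimeMs, firstSwitchedMs uint32) time.Time {
	delta := time.Duration(sysUptimeMs-firstSwitchedMs) * time.Millisecond
	return exportTime.Add(-delta)
}
```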
Using those fields requires the router to have accurate time (probably
via NTP), but it allows for architectures where a UDP packet is not
immediately received by the collector, e.g. if there is a Kafka
in-between. That in turn allows doing maintenance on the collector
without messing up the statistics.
* Add functionality for overwriting the exporter address with the flow source IP
* Remove "agent-id-src-addr-overwrite" from default config
* Improve use-src-addr-for-exporter-addr documentation
* Rename to UseSrcAddrForExporterAddr
* Fix use-src-addr-for-exporter-addr key in example config
* Add UseSrcAddrForExporterAddr to configuration test
Co-authored-by: Marvin Gaube <marvin.gaube@exaring.de>
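The effect of the UseSrcAddrForExporterAddr option is roughly the following (a sketch with assumed names, not the actual code):

```go
import "net/netip"

// exporterAddress is a hypothetical helper: when the option is enabled,
// the UDP source address of the packet takes precedence over the
// exporter address embedded in the flow records.
func exporterAddress(useSrcAddr bool, udpSrcAddr, embeddedAddr netip.Addr) netip.Addr {
	if useSrcAddr {
		return udpSrcAddr
	}
	return embeddedAddr
}
```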
This is useful if we cannot tune the sampling rate of the source
equipment and it is too high for us. The sampling rate of the flows that
are let through is adapted so the statistics stay correct. This is
difficult to test, so hopefully, this is correct!
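The idea behind the adaptation, sketched with assumed names: if rate limiting only lets a fraction of the flows through, the sampling rate attached to the surviving flows is scaled up so byte and packet estimates stay unbiased.

```go
// adaptedSamplingRate is a hypothetical helper: if only `kept` out of
// `received` flows survived rate limiting during an interval, each
// surviving flow now stands for received/kept times more traffic.
func adaptedSamplingRate(rate, received, kept uint64) uint64 {
	if kept == 0 || received <= kept {
		return rate
	}
	return rate * received / kept
}
```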
For each case, we test from a native map and from YAML. This should
capture all the cases we are interested in.
Also, simplify the pretty diff by using a stringer for everything. I
don't remember why this wasn't the case before. Maybe IP addresses? It's
possible to opt out by overriding formatters.
mapstructure does not zero out existing values, to allow incremental
parsing of the configuration. This is fine for most structures, but when
we get a list, we don't want to merge the list provided by the user with
the default value. In this case, we zero out the list.
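One way to express this, sketched as a mapstructure decode hook (the hook name is an assumption, not the actual implementation): when the user supplies a value for a slice field, the destination slice holding the default is emptied before decoding, so the two lists are not merged element by element.

```go
import "reflect"

// zeroSliceHook is a hypothetical decode hook: if both the source and
// the destination are slices, drop the destination's current (default)
// content before mapstructure decodes the user-provided list into it.
// It would be registered through mapstructure.DecoderConfig.DecodeHook.
func zeroSliceHook(from, to reflect.Value) (interface{}, error) {
	if from.Kind() == reflect.Slice && to.Kind() == reflect.Slice && !to.IsNil() {
		to.Set(reflect.MakeSlice(to.Type(), 0, 0))
	}
	return from.Interface(), nil
}
```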
The documentation said we were listening on port 2055. That was not
true. Listen on a random port, since the input can be used for something
other than NetFlow.