This change splits the inlet component into a simpler inlet and a new
outlet component. The new inlet component receives flows and puts them
in Kafka, unparsed. The outlet component takes them from Kafka, resumes
the processing from there (flow parsing, enrichment), and puts them in
ClickHouse.
The main goal is to ensure the inlet does minimal work, so it does not
fall behind when processing packets (and restarts faster). It also
brings some simplification, as the number of knobs to tune is reduced:
for the inlet, we only need to tune the UDP queue size, the number of
workers, and a few Kafka parameters; for the outlet, we need to tune a
few Kafka parameters, the number of workers, and a few ClickHouse
parameters.
The outlet component features a simple Kafka input component. The core
component becomes just a callback function. There is also a new
ClickHouse component to push data to ClickHouse using the low-level
ch-go library with batch inserts.
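For illustration, here is a minimal sketch of a batch insert with
ch-go; the `flows` table and its two columns are hypothetical, and the
real outlet batches many more rows and columns before each insert:
```
package main

import (
	"context"
	"time"

	"github.com/ClickHouse/ch-go"
	"github.com/ClickHouse/ch-go/proto"
)

func main() {
	ctx := context.Background()
	client, err := ch.Dial(ctx, ch.Options{Address: "127.0.0.1:9000"})
	if err != nil {
		panic(err)
	}
	defer client.Close()

	// Columnar buffers accumulating the rows of the next batch.
	var (
		timeReceived proto.ColDateTime
		bytes        proto.ColUInt64
	)
	input := proto.Input{
		{Name: "TimeReceived", Data: &timeReceived},
		{Name: "Bytes", Data: &bytes},
	}

	// Append one row to the batch.
	timeReceived.Append(time.Now())
	bytes.Append(1500)

	// Send the whole batch as a single INSERT.
	if err := client.Do(ctx, ch.Query{
		Body:  input.Into("flows"), // "INSERT INTO flows (...) VALUES"
		Input: input,
	}); err != nil {
		panic(err)
	}
}
```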
This change has an impact on the internal representation of a
FlowMessage. Previously, it was tailored to dynamically build the
protobuf message to be put in Kafka. Now, it builds the batch request to
be sent to ClickHouse. The FlowMessage structure therefore hides the
content of the next batch request and should be reused. This also
changes the way we decode flows: decoders do not output a FlowMessage
anymore, they reuse the one provided to each worker.
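A minimal sketch of the reuse pattern; `FlowMessage`, `Reset()`, and
`decode()` are illustrative names, not the actual API:
```
package main

import "fmt"

// FlowMessage is a stand-in for the internal structure: it
// accumulates the columns of the next batch request.
type FlowMessage struct {
	Bytes   uint64
	Packets uint64
}

// Reset prepares the structure for the next flow without releasing
// its memory.
func (fm *FlowMessage) Reset() {
	*fm = FlowMessage{}
}

// decode fills the provided FlowMessage in place instead of
// returning a new one.
func decode(raw []byte, fm *FlowMessage) {
	fm.Bytes = uint64(len(raw))
	fm.Packets = 1
}

func main() {
	// Each worker owns a single FlowMessage reused across flows,
	// avoiding one allocation per decoded flow.
	fm := &FlowMessage{}
	for _, raw := range [][]byte{{0x01, 0x02}, {0x03}} {
		fm.Reset()
		decode(raw, fm)
		fmt.Printf("bytes=%d packets=%d\n", fm.Bytes, fm.Packets)
	}
}
```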
The ClickHouse tables are slightly updated: instead of the Kafka
engine, the Null engine is now used.
Fix #1122
Done with:
```
git grep -l 'for.*:= 0.*++' \
| xargs sed -i -E 's/for (.*) := 0; \1 < (.*); \1\+\+/for \1 := range \2/'
```
And a few manual fixes due to unused variables. There is something
fishy in the BMP RIB test; a comment was added about that. The
conversion is not equivalent there: with range, the random bound is
evaluated once, while in the original loop, it was evaluated at each
iteration. I believe the intent was to behave like range.
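A contrived example of the difference (not the actual test code): with
Go 1.22 range-over-int, the bound is evaluated once before the loop
starts:
```
package main

import "fmt"

func main() {
	// Classic loop: the condition "i < n" re-reads n at every
	// iteration, so mutating n inside the loop changes the count.
	n := 3
	count := 0
	for i := 0; i < n; i++ {
		n = 5
		count++
	}
	fmt.Println(count) // 5

	// Range-over-int: n is evaluated once, so the same mutation
	// has no effect on the number of iterations.
	n = 3
	count = 0
	for range n {
		n = 5
		count++
	}
	fmt.Println(count) // 3
}
```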
SNMP is the first (and default) provider. Further commits should add:
- [ ] SNMP coalescing (or at the metadata level?)
- [ ] Configuration conversion
- [ ] At least one other provider (static one?)
This is a first step to make it accept configuration. Most of the
changes are quite trivial, but I also ran into some difficulties with
query columns and filters: they need the schema for parsing, but parsing
happens before dependencies are instantiated (and even if it were not
the case, parsing is stateless). Therefore, I have added a `Validate()`
method that must be called after instantiation. Various bits `panic()`
if not validated, to ensure we catch all cases.
The alternative, making the component manage a global state, would have
been simpler, but it would break once we add the ability to add or
disable columns.
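A minimal sketch of the pattern, with hypothetical names (the real
types and methods differ):
```
package main

import "fmt"

// Schema is a stand-in for the schema component.
type Schema struct {
	Columns map[string]bool
}

// QueryColumn is parsed from the configuration before the schema
// component exists, so it cannot be checked at parse time.
type QueryColumn struct {
	name      string
	validated bool
}

// UnmarshalText parses the column name without validating it:
// parsing is stateless and happens before dependency injection.
func (qc *QueryColumn) UnmarshalText(text []byte) error {
	qc.name = string(text)
	return nil
}

// Validate checks the column against the schema. It must be called
// once dependencies are instantiated.
func (qc *QueryColumn) Validate(schema *Schema) error {
	if !schema.Columns[qc.name] {
		return fmt.Errorf("unknown column %q", qc.name)
	}
	qc.validated = true
	return nil
}

// String panics when used before validation, to catch misuse early.
func (qc *QueryColumn) String() string {
	if !qc.validated {
		panic("QueryColumn used before Validate()")
	}
	return qc.name
}

func main() {
	schema := &Schema{Columns: map[string]bool{"SrcAddr": true}}
	var qc QueryColumn
	_ = qc.UnmarshalText([]byte("SrcAddr")) // parse time: no schema yet
	if err := qc.Validate(schema); err != nil { // after instantiation
		panic(err)
	}
	fmt.Println(qc.String())
}
```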
At first, there was an attempt to use the BMP collector implementation
from bio-rd. However, the current implementation uses GoBGP
instead:
- BMP is very simple from a protocol point of view. The hard work is
  mostly around decoding. Both bio-rd and GoBGP can decode, but for
  testing, GoBGP is able to generate messages as well (this is its
  primary purpose; I suppose parsing was added for testing purposes).
  Using only one library is always better. An alternative would be
  GoBMP, but it also only does parsing. See the decoding sketch after
  this list.
- Logging and metrics can be customized easily (but the work was done
for bio-rd, so not a real argument).
- bio-rd is an application, and there is no API stability (and I did
  that too).
- GoBGP supports FlowSpec, which may be useful in the future for the
  DDoS part. Again, one library for everything is better (but honestly,
  the library interface is not GoBGP's best part; maybe
  github.com/jwhited/corebgp would be a better fit, while keeping GoBGP
  for decoding/encoding).
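For reference, a minimal sketch of decoding BMP messages from a TCP
connection with GoBGP; it assumes the `SplitBMP` and `ParseBMPMessage`
helpers from `github.com/osrg/gobgp/v3/pkg/packet/bmp`, whose exact
location and signatures may vary across GoBGP versions:
```
package main

import (
	"bufio"
	"log"
	"net"

	"github.com/osrg/gobgp/v3/pkg/packet/bmp"
)

func main() {
	ln, err := net.Listen("tcp", ":10179")
	if err != nil {
		log.Fatal(err)
	}
	for {
		conn, err := ln.Accept()
		if err != nil {
			log.Fatal(err)
		}
		go func(conn net.Conn) {
			defer conn.Close()
			// SplitBMP tokenizes the stream into whole BMP messages
			// using the length field of the common header.
			scanner := bufio.NewScanner(conn)
			scanner.Split(bmp.SplitBMP)
			for scanner.Scan() {
				msg, err := bmp.ParseBMPMessage(scanner.Bytes())
				if err != nil {
					log.Printf("cannot parse BMP message: %v", err)
					return
				}
				log.Printf("received BMP message of type %d", msg.Header.Type)
			}
		}(conn)
	}
}
```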
There was a huge effort around having a RIB which is memory-efficient
(data is interned to save memory) and performant during reads, while
being decent during insertions. We rely on a patched version of
Kentik's Patricia trees to be able to apply mutations to the tree.
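A minimal sketch of the interning idea (illustrative only, not the
actual RIB implementation): identical attribute values are stored once
and shared through a small integer reference, with a reference count so
they can be released:
```
package main

import "fmt"

// intern deduplicates values: equal values share a single slot,
// referenced by a small integer.
type intern[T comparable] struct {
	values []T
	refs   []int
	index  map[T]int
}

func newIntern[T comparable]() *intern[T] {
	return &intern[T]{index: map[T]int{}}
}

// Put returns the reference for v, reusing an existing slot when an
// equal value is already interned.
func (i *intern[T]) Put(v T) int {
	if ref, ok := i.index[v]; ok {
		i.refs[ref]++
		return ref
	}
	i.values = append(i.values, v)
	i.refs = append(i.refs, 1)
	ref := len(i.values) - 1
	i.index[v] = ref
	return ref
}

// Get resolves a reference back to its value.
func (i *intern[T]) Get(ref int) T { return i.values[ref] }

// Release decrements the reference count and drops the value when it
// reaches zero. (Recycling freed slots is elided here.)
func (i *intern[T]) Release(ref int) {
	i.refs[ref]--
	if i.refs[ref] == 0 {
		delete(i.index, i.values[ref])
	}
}

func main() {
	attrs := newIntern[string]()
	a := attrs.Put("AS65001 AS65002")
	b := attrs.Put("AS65001 AS65002") // same value, same reference
	fmt.Println(a == b, attrs.Get(a)) // true AS65001 AS65002
}
```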
There were several attempts to implement some kind of graceful
restart, but ultimately, the design is kept simple: when a BMP
connection goes down, routes are removed after a configurable time. If
the connection comes back up, it is just considered new. It would have
been ideal to rely on EoR markers, but the RFC is unclear about them,
and they are likely to be per peer, making it difficult to know what to
do if one peer is back, but not the other.
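One possible sketch of this design (hypothetical names, not the actual
code): a timer schedules route removal when the connection goes down,
and a reconnection does not cancel it, since the new session is
considered unrelated to the old one:
```
package main

import (
	"fmt"
	"time"
)

// exporterState tracks the flush timer for one BMP exporter.
type exporterState struct {
	flush *time.Timer
}

// connectionDown schedules route removal after keep; the routes stay
// queryable in the meantime.
func (s *exporterState) connectionDown(keep time.Duration, removeRoutes func()) {
	s.flush = time.AfterFunc(keep, removeRoutes)
}

// connectionUp treats a reconnection as a brand new session: the
// pending flush is not canceled, so routes from the previous session
// are still removed once the timer fires.
func (s *exporterState) connectionUp() {
	fmt.Println("new session: previous routes will expire as scheduled")
}

func main() {
	var state exporterState
	state.connectionDown(100*time.Millisecond, func() {
		fmt.Println("removing routes from the previous session")
	})
	state.connectionUp()
	time.Sleep(200 * time.Millisecond)
}
```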
Remaining tasks:
- [ ] Confirm support for LocRIB
- [ ] Import data in ClickHouse
- [ ] Make data available in the frontend
Fix #52