It should be a bit more secure: it does not run install scripts by
default and it allows updating dependencies with a delay. It is also
faster. The downside is that, unlike npm, it is not shipped with Node,
but we can download it through corepack (which is shipped with Node).
It also has more built-in features, including patching packages (but we
don't need that anymore).
Some of the files were quite big:
- asns.csv ~ 3 MB
- index.js ~ 1.5 MB
- *.svg ~ 2 MB
Putting them all in a single ZIP archive and embedding it reduces the
binary size from 89 MB to 82 MB. 🤯
This also brings some code modernization (use of http.ServeFileFS).
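For illustration, a minimal sketch of the approach (data.zip and the
served path are hypothetical; only asns.csv is wired up here):

```go
package main

import (
	"archive/zip"
	"bytes"
	_ "embed"
	"log"
	"net/http"
)

// data.zip is a hypothetical archive bundling asns.csv, index.js and
// the SVG files.
//
//go:embed data.zip
var dataZip []byte

func main() {
	// A *zip.Reader implements fs.FS, so the archive can be served
	// directly without extracting it.
	zr, err := zip.NewReader(bytes.NewReader(dataZip), int64(len(dataZip)))
	if err != nil {
		log.Fatal(err)
	}
	http.HandleFunc("/asns.csv", func(w http.ResponseWriter, r *http.Request) {
		// http.ServeFileFS (Go 1.22+) sets Content-Type from the
		// file extension.
		http.ServeFileFS(w, r, zr, "asns.csv")
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```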
While I was reluctant to let Go download the right toolchain when we
don't have it, this makes everything simpler. The Go version is now
fully controlled by `go.mod`. It is also nice for people wanting to
build on older distributions.
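As a sketch, the pin lives in `go.mod` (version numbers below are
placeholders):

```
// With the default GOTOOLCHAIN=auto, the go command downloads and runs
// the newer toolchain if the locally installed one is older.
go 1.23.0
toolchain go1.23.4
```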
For Nix, GOTOOLCHAIN is set to local, so we rely on `go_latest` being
recent enough. Nixpkgs is usually quite fast to update, so it should be
OK.
gocov and gocovxml are unmaintained. There is
https://github.com/boumenot/gocover-cobertura, which is linked from
GitLab, but it misses some lines during the conversion (code defined in
callbacks assigned in var declarations, see hellogopher as an example),
so it is not reliable.
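If I understand the failure mode correctly, the pattern losing coverage
lines looks roughly like this (illustrative sketch):

```go
package main

import (
	"fmt"
	"io"
	"os"
)

// The body of a function literal assigned in a var declaration is the
// kind of code gocover-cobertura drops during the conversion.
var render = func(w io.Writer, name string) error {
	_, err := fmt.Fprintf(w, "hello %s\n", name)
	return err
}

func main() {
	if err := render(os.Stdout, "world"); err != nil {
		panic(err)
	}
}
```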
This change splits the inlet component into a simpler inlet and a new
outlet component. The new inlet component receives flows and puts them
in Kafka, unparsed. The outlet component takes them from Kafka, resumes
the processing from there (flow parsing, enrichment), and puts them in
ClickHouse.
The main goal is to ensure the inlet does minimal work, so it does not
fall behind when processing packets (and restarts faster). It also
brings some simplification, as the number of knobs to tune is reduced:
for the inlet, we only need to tune the UDP queue size, the number of
workers, and a few Kafka parameters; for the outlet, a few Kafka
parameters, the number of workers, and a few ClickHouse parameters.
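As an illustrative sketch (not the actual code; the port and the
producer callback are made up), the inlet fast path boils down to:

```go
package main

import (
	"log"
	"net"
)

// inletWorker does the bare minimum: read a datagram and hand the raw
// bytes to a producer callback (e.g. a Kafka producer). Parsing and
// enrichment happen later, in the outlet.
func inletWorker(conn *net.UDPConn, produce func(payload []byte)) {
	buf := make([]byte, 9000) // large enough for a jumbo frame
	for {
		n, _, err := conn.ReadFromUDP(buf)
		if err != nil {
			return
		}
		payload := make([]byte, n)
		copy(payload, buf[:n]) // copy: the producer may keep the slice
		produce(payload)
	}
}

func main() {
	conn, err := net.ListenUDP("udp", &net.UDPAddr{Port: 2055})
	if err != nil {
		log.Fatal(err)
	}
	inletWorker(conn, func(payload []byte) {
		log.Printf("would send %d bytes to Kafka", len(payload))
	})
}
```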
The outlet component features a simple Kafka input component. The core
component becomes just a callback function. There is also a new
ClickHouse component to push data to ClickHouse using the low-level
ch-go library with batch inserts.
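A minimal sketch of a batch insert with ch-go (the table and column
names are made up; error handling is trimmed):

```go
package main

import (
	"context"
	"time"

	"github.com/ClickHouse/ch-go"
	"github.com/ClickHouse/ch-go/proto"
)

func main() {
	ctx := context.Background()
	client, err := ch.Dial(ctx, ch.Options{Address: "127.0.0.1:9000"})
	if err != nil {
		panic(err)
	}
	defer client.Close()

	// Columns are preallocated and can be reused between batches.
	var (
		colTime  proto.ColDateTime
		colBytes proto.ColUInt64
	)
	input := proto.Input{
		{Name: "TimeReceived", Data: &colTime},
		{Name: "Bytes", Data: &colBytes},
	}

	// Append a row to the current batch.
	colTime.Append(time.Now())
	colBytes.Append(1500)

	// One INSERT for the whole batch, over the native protocol.
	if err := client.Do(ctx, ch.Query{
		Body:  input.Into("flows"), // "INSERT INTO flows (...) VALUES"
		Input: input,
	}); err != nil {
		panic(err)
	}
}
```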
This change has an impact on the internal representation of a
FlowMessage. Previously, it was tailored to dynamically build the
protobuf message to be put in Kafka. Now, it builds the batch request
to be sent to ClickHouse. As a result, the FlowMessage structure hides
the content of the next batch request and should therefore be reused.
This also changes the way we decode flows: decoders don't output a
FlowMessage anymore, they reuse the one provided to each worker.
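The reuse pattern looks roughly like this (types and names are
illustrative, not the actual API):

```go
package main

// FlowMessage stands in for the real structure; in this design it
// accumulates the content of the next ClickHouse batch request.
type FlowMessage struct {
	SrcAddr []byte
	Bytes   uint64
}

// Reset clears the previous content while keeping allocated buffers.
func (fm *FlowMessage) Reset() {
	fm.SrcAddr = fm.SrcAddr[:0]
	fm.Bytes = 0
}

// Decoder fills a caller-provided FlowMessage instead of returning a
// new one, avoiding an allocation per decoded flow.
type Decoder interface {
	Decode(payload []byte, fm *FlowMessage) error
}

func worker(d Decoder, packets <-chan []byte) {
	fm := &FlowMessage{} // one instance per worker, reused for each flow
	for p := range packets {
		fm.Reset()
		if err := d.Decode(p, fm); err != nil {
			continue
		}
		// here: append fm's fields to the pending batch columns
	}
}

func main() {}
```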
The ClickHouse tables are slightly updated: instead of the Kafka
engine, the Null engine is used.
Fix #1122
It was working, but it is a bit of work to keep it working. Now that we
use docker compose with GitHub CI, GitLab CI starts to diverge. As
GitLab setups often run dind as a runner, I don't know if it is as easy
to run docker compose in GitLab, and I don't have time to test it.
If someone is interested in this support, there are two possibilities:
use act to run the GitHub CI from GitLab, or adapt the docker compose
workflow to GitLab CI (it should work with docker-based runners).
The unit tests require an SR Linux container. Unfortunately, there is
no good open-source implementation of a gNMI target. It is unknown
whether anything other than SR Linux would work.
Fix: #759
This is a bit of a nightmare tool. Different versions accept different
flags (nvi vs vim), and when something goes wrong, you are stuck in
some interactive/not-really-interactive tool.