Some of the files were quite big:
- asns.csv ~ 3 MB
- index.js ~ 1.5 MB
- *.svg ~ 2 MB
Put them all into a single ZIP archive and embed that instead. This reduces
the binary size from 89 MB to 82 MB. 🤯
This also brings some code modernization (use of `http.ServeFileFS`).
Add the ability to enable the demo flows with a Compose profile instead of
modifying `.env`. Add more instructions on how to use Docker Compose and
how to hack on the console.
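As a sketch, a Compose service can be gated behind a profile so that it only starts when that profile is requested; the service name and image below are made up for illustration:

```
services:
  demo-flows:                # hypothetical service generating the demo flows
    image: example/demo-flows
    profiles: [ demo ]       # only started when the "demo" profile is enabled
```

It can then be enabled with `docker compose --profile demo up -d` or by exporting `COMPOSE_PROFILES=demo`, without touching `.env`.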
While I was reluctant to let Go download the right toolchain if we
didn't have one, this makes everything simpler. The Go version is now
fully controlled by `go.mod`. It is also nice for people wanting to build
on older distributions.
For Nix, `GOTOOLCHAIN` is set to `local`, so we rely on `go_latest` being
up-to-date enough. But it is usually updated quite fast, so it should be OK.
Alloy does not allow turning the parsed metadata into actual metadata
without enumerating each field. Also, Vector is far more versatile. And
you can write unit tests!
Also, parse more logs. Everything should be there, except ClickHouse.
Fix #1907
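For illustration, a minimal sketch of a Vector fragment with a unit test; the source name, field names, and values are assumptions, not the actual configuration:

```
sources:
  docker_logs:
    type: docker_logs                  # collect container logs

transforms:
  parse_app:
    type: remap
    inputs: [ docker_logs ]
    source: |
      parsed = parse_json!(string!(.message))
      .level = parsed.level

tests:
  - name: parse a JSON log line
    inputs:
      - insert_at: parse_app
        type: log
        log_fields:
          message: '{"level":"info","msg":"hello"}'
    outputs:
      - extract_from: parse_app
        conditions:
          - type: vrl
            source: '.level == "info"'
```

`vector test` runs the `tests` section without starting the pipeline.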
This is a bit like Traefik: we set `metrics.port` on each container we
want to scrape metrics from (and optionally `metrics.path`).
Semi-related: we also rely on the exposed port for Traefik, and we now
override it for all containers to be sure we select the right one. This
is less error prone, as we need at least one exposed port and some
containers may or may not have one. Just always set an exposed port if we
have metrics or Traefik rules.
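A sketch of what this could look like on a Compose service, assuming `metrics.port` and `metrics.path` are container labels; the service name, port, and path values are examples only:

```
services:
  some-exporter:                   # hypothetical service
    expose:
      - 8080                       # always expose at least one port
    labels:
      metrics.port: "8080"         # scrape metrics on this port
      metrics.path: "/api/metrics" # optional, when metrics are not on the default path
```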
The idea is that Alloy can also be used for more. For example, we could
introduce Loki (with a `docker-compose-loki.yml`) and it would use Alloy
too. The Alloy configuration needs to be split into several parts: both
`docker-compose-prometheus.yml` and `docker-compose-loki.yml` would
define the service, each with an additional volume for their specific
part of the configuration (using the `extends` mechanism).
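A hypothetical `docker-compose-loki.yml` fragment illustrating the idea; the file and path names are assumptions:

```
services:
  alloy:
    extends:
      file: docker-compose-alloy.yml             # common Alloy service definition
      service: alloy
    volumes:
      - ./config/alloy/loki:/etc/alloy/loki:ro   # Loki-specific part of the configuration
```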
However, we don't use the bundled Node Exporter, nor the bundled
cAdvisor. It is better to have individual components to reduce the
amount of code with elevated privileges (both Node Exporter and cAdvisor
need specific privileges). Also, we keep Prometheus instead of switching
to the full Grafana stack with Mimir, as it is a more common setup and
providing something universally scalable is not a goal.
Also, Prometheus is now behind the private endpoint, as it is possible to
send metrics to it.
Docker can easily break the firewall rules in such a way that masquerading
happens for internal traffic:
```
ip saddr 247.16.12.0/24 oifname != "br-65eaa81ed142" counter packets 812 bytes 132030 masquerade
ip saddr 247.16.12.0/24 oifname != "br-fa3db0ecc1de" counter packets 0 bytes 0 masquerade
ip saddr 247.16.12.0/24 oifname != "br-c7a7788478c5" counter packets 0 bytes 0 masquerade
```
When the "current" bridge is the second one, inter-container
communication gets masqueraded. I didn't find an associated issue.
Also fix the node-exporter configuration to really monitor the host. And
fix the Prometheus configuration, which had been broken since we tried to
monitor Traefik (in 8f73f70050).
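For reference, the usual way to make node-exporter see the host rather than its own container is along these lines (this is the standard upstream setup, not necessarily the exact one used here):

```
services:
  node-exporter:
    image: quay.io/prometheus/node-exporter
    pid: host                       # see the host's processes
    volumes:
      - /:/host:ro,rslave           # give read-only access to the host filesystem
    command:
      - --path.rootfs=/host         # tell node-exporter where the host root is mounted
```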
The Apache Kafka image declares a volume at /var/lib/kafka/data, which
Docker creates as an anonymous volume unless Docker Compose mounts
something at exactly that path.
This is unfortunately a breaking change.
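A sketch of the resulting mount, with an illustrative volume name:

```
services:
  kafka:
    image: apache/kafka                      # declares a volume at /var/lib/kafka/data
    volumes:
      - kafka-data:/var/lib/kafka/data       # mount exactly at the declared path
volumes:
  kafka-data:
```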
And also add documentation on how to use IPv6. The proposed setup relies
on NAT66, which is not ideal, but it works on any host with IPv6
connectivity. The documentation explains how to configure routed IPv6.
By using an IPv4 subnet in class E, we ensure it is very unlikely that
users will have an overlap between their Docker setup and their production
network. This way, there is no need to change the Docker daemon
configuration.
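A sketch of such a network definition; the IPv6 prefix is an example ULA, and the IPv4 range matches the class E subnet shown in the ruleset above:

```
networks:
  default:
    enable_ipv6: true
    ipam:
      config:
        - subnet: 247.16.12.0/24        # class E, very unlikely to clash with anything
        - subnet: fd00:cafe:42::/64     # ULA prefix, NAT66 towards the outside
```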
And cache it with Docker. This way, it only needs to be built once. Also
remove the `all_js` target; CONTRIBUTING.md documents another method for
working on the JS code.