mirror of
https://github.com/akvorado/akvorado.git
synced 2025-12-12 06:24:10 +01:00
doc: various updates
This commit is contained in:
@@ -12,9 +12,7 @@ a web interface to browse the result.
## Quick start

A `docker-compose.yml` file is provided to quickly get started. Once
running, the *Akvorado* web interface should be available on port 8080.

```console
# docker-compose up
```
@@ -25,6 +23,20 @@ disabled by removing the `akvorado-exporter*` services from
`docker-compose.yml` (or you can just stop them with `docker-compose
stop akvorado-exporter{1,2,3,4}`).

If you want to send your own flows, the inlet accepts both NetFlow
(port 2055) and sFlow (port 6343). You should also customize some
settings in `akvorado.yaml`. They are described in detail in the
[“configuration” section](02-configuration.md) of the
documentation.

- `clickhouse` → `asns` to give names to your internal AS numbers
- `clickhouse` → `networks` to attach attributes to your networks
- `inlet` → `core` → `exporter-classifiers` to define rules to attach
  attributes to your exporters
- `inlet` → `core` → `interface-classifiers` to define rules to attach
  attributes to your interfaces (including the "boundary" attribute
  which is used by default by the web interface)
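
For illustration, a fragment of `akvorado.yaml` touching these settings might look like the sketch below. All key names, prefixes, and classifier expressions here are assumptions for illustration; check the configuration documentation for the exact syntax.

```
# Hypothetical excerpt: adapt AS numbers, prefixes and rules to your network.
clickhouse:
  asns:
    64501: my-internal-backbone
  networks:
    192.0.2.0/24:
      name: pop1-customers
inlet:
  core:
    exporter-classifiers:
      - ClassifyRegion("europe")
    interface-classifiers:
      - ClassifyExternal()
```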

Take a look at the `docker-compose.yml` file if you want to set up the
GeoIP databases. It requires two environment variables to fetch them
from [MaxMind](https://dev.maxmind.com/geoip/geolite2-free-geolocation-data).
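
For example, if the compose file relies on MaxMind's `geoipupdate` container, the two variables could live in an `.env` file like the following. The variable names are an assumption based on that tool's convention; verify them against the actual `docker-compose.yml`.

```
GEOIPUPDATE_ACCOUNT_ID=123456
GEOIPUPDATE_LICENSE_KEY=your-license-key
```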
@@ -38,15 +50,16 @@ from [MaxMind](https://dev.maxmind.com/geoip/geolite2-free-geolocation-data).

- The **inlet service** receives flows from exporters. It polls each
  exporter using SNMP to get the *system name*, the *interface names*,
  *descriptions* and *speeds*. It queries GeoIP databases to get the
  *country* and the *AS number*. It applies rules to add attributes to
  exporters. Interface rules attach to each interface a *boundary*
  (external or internal), a *network provider* and a *connectivity
  type* (PNI, IX, transit). The flow is exported to *Kafka*,
  serialized using *Protobuf*.

- The **orchestrator service** configures the internal and external
  components. It creates the *Kafka topic* and configures *ClickHouse*
  to receive the flows from Kafka. It exposes configuration settings
  for the other services to use.

- The **console service** exposes a web interface to look at and
  manipulate the flows stored inside the ClickHouse database.

@@ -48,18 +48,18 @@ The resulting executable is `bin/akvorado`.

The following `make` targets are available:

- `make help` to get help
- `make` to build the binary (in `bin/`)
- `make test` to run tests
- `make test-verbose` to run tests in verbose mode
- `make test-race` for race tests
- `make test-xml` for tests with xUnit-compatible output
- `make test-coverage` for test coverage (will output `index.html`,
  `coverage.xml` and `profile.out` in `test/coverage.*/`)
- `make test PKG=helloworld/hello` to restrict test to a package
- `make clean`
- `make lint` to lint source code
- `make fmt` to format source code

## Docker image

@@ -1,7 +1,7 @@

# Configuration

The orchestrator service is configured through a YAML file and
includes the configuration of the other services. Other services are
expected to query the orchestrator through HTTP on start to retrieve
their configuration.

@@ -156,16 +156,16 @@ listed as dimensions can usually be used. Accepted operators are `=`,
`!=`, `<`, `<=`, `>`, `>=`, `IN`, `NOTIN`, `LIKE`, `UNLIKE`, `ILIKE`,
`IUNLIKE`, when they make sense. Here are a few examples:

- `InIfBoundary = external` only selects flows whose incoming
  interface was classified as external. The value should not be
  quoted.
- `InIfConnectivity = "ix"` selects flows whose incoming interface is
  connected to an IX.
- `SrcAS = AS12322`, `SrcAS = 12322`, `SrcAS IN (12322, 29447)`
  limits the source AS number of selected flows.
- `SrcAddr = 203.0.113.4` only selects flows with the specified
  address. Note that filtering on IP addresses is usually slower.
- `ExporterName LIKE th2-%` selects flows coming from routers
  starting with `th2-`.

Field names are case-insensitive. Comments can also be added by using
@@ -43,7 +43,7 @@ flow monitor-map monitor2

Optionally, the AS path can be pushed to the forwarding database so
that the source and destination AS are present in NetFlow packets:

```cisco
router bgp <asn>
 address-family ipv4 unicast
  bgp attribute-download
```
@@ -264,7 +264,7 @@ sflow run

Then, configure SNMP:

```eos
snmp-server community <community> ro
snmp-server vrf VRF-MANAGEMENT
```
@@ -322,3 +322,14 @@ FORMAT Vertical

[Altinity's knowledge
base](https://kb.altinity.com/altinity-kb-useful-queries/query_log/)
contains some other useful queries.

### Errors

You can get the latest errors with:

```sql
SELECT last_error_time, last_error_message
FROM system.errors
ORDER BY last_error_time DESC
LIMIT 10
FORMAT Vertical
```
@@ -214,6 +214,7 @@ $ kcat -b kafka:9092 -C -t flows-v2 -f 'Topic %t [%p] at offset %o: key %k: %T\n

Alternatively, when using `docker-compose`, there is a Kafka UI
running at `http://127.0.0.1:8080/kafka-ui/`. You can do the following
checks:

- are the brokers alive?
- is the `flows-v2` topic present and receiving messages?
- is ClickHouse registered as a consumer?

@@ -202,21 +202,21 @@ spawned by the other components and wait for signals to terminate. If

## Other interesting dependencies

- [gopkg.in/tomb.v2](https://gopkg.in/tomb.v2) handles clean goroutine
  tracking and termination. Like contexts, it allows signaling the
  termination of a bunch of goroutines. Unlike contexts, it also
  enables us to catch errors in goroutines and react to them (most of
  the time by dying).
- [github.com/benbjohnson/clock](https://github.com/benbjohnson/clock) is
  used in place of the `time` module when we want to be able to mock
  the clock. This is used for example to test the cache of the SNMP
  poller.
- [github.com/cenkalti/backoff/v4](https://github.com/cenkalti/backoff)
  provides an exponential backoff algorithm for retries.
- [github.com/eapache/go-resiliency](https://github.com/eapache/go-resiliency)
  implements several resiliency patterns, including the breaker
  pattern.
- [github.com/go-playground/validator](https://github.com/go-playground/validator)
  implements struct validation using tags. We use it to add better
  validation on configuration structures.
@@ -3,6 +3,7 @@

For each version, changes are listed in order of importance. Minor
changes are not listed here. Each change is mapped to a category
identified with a specific icon:

- 💥: breaking change
- ✨: new feature
- 🗑️: removed feature
