Configuration
Akvorado can be configured through a YAML file. Each aspect is configured through a different section:
- reporting: Log and metric reporting
- http: Builtin HTTP server
- web: Web interface
- flow: Flow ingestion
- snmp: SNMP poller
- geoip: GeoIP database
- kafka: Kafka broker
- core: Core component
You can get the default configuration with ./akvorado --dump --check.
Durations can be written in seconds or using strings like 10h20m.
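For example, both forms are valid for a duration key (the snmp keys below are used purely for illustration; the values are not recommendations):

snmp:
  cacheduration: 1800   # plain seconds
  cacherefresh: 10h20m  # string form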
Reporting
Reporting encompasses logging and metrics. Currently, as Akvorado is
expected to be run inside Docker, logging is done on the standard
output and is not configurable. As for metrics, they are reported by
the HTTP component on the /metrics endpoint and there is nothing to
configure either.
HTTP
The builtin HTTP server serves various pages. Its configuration
supports only the listen key to specify the address and port to
listen on. For example:
http:
  listen: 0.0.0.0:8000
Web
The web interface presents the landing page of Akvorado. It also embeds the documentation. It accepts only the following key:
- grafanaurl to specify the URL to Grafana, which is then exposed as /grafana
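For example, assuming Grafana is reachable at http://grafana:3000 (an illustrative URL):

web:
  grafanaurl: http://grafana:3000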
Flow
The flow component handles flow ingestion. It supports the following configuration keys:
- listen to specify the IP and UDP port to listen on for new flows
- workers to specify the number of workers to spawn to handle incoming flows
- bufferlength to specify the number of flows to buffer when pushing them to the core component
For example:
flow:
  listen: 0.0.0.0:2055
  workers: 2
SNMP
Flows only include interface indexes. To associate them with an interface name and description, SNMP is used to poll the sampler sending each flow. A cache is maintained to avoid continuously polling the samplers. The following keys are accepted:
- cacheduration tells how much time to keep data in the cache before polling again
- cacherefresh tells how much time before expiration to refresh existing data
- cacherefreshinterval tells how often to check if cached data is about to expire
- cachepersistfile tells where to store cached data on shutdown and read it back on startup
- defaultcommunity tells which community to use when polling samplers
- communities is a map from a sampler IP address to the community to use for that sampler, overriding the default value set above
- workers tells how many workers to spawn to handle SNMP polling
As flows missing interface information are discarded, persisting the cache makes it possible to handle incoming flows quickly after a restart. By default, no persistent cache is configured.
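Here is a sketch of an SNMP section; all values, the IP address, and the file path are illustrative:

snmp:
  cacheduration: 30m
  cacherefresh: 10m
  cacherefreshinterval: 1m
  cachepersistfile: /var/lib/akvorado/snmp.cache
  defaultcommunity: public
  communities:
    192.0.2.1: mycommunity
  workers: 10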
GeoIP
The GeoIP component adds source and destination country, as well as the AS number of the source and destination IP if they are not present in the received flows. It needs two databases using the MaxMind DB file format, one for AS numbers, one for countries. If no database is provided, the component is inactive. It accepts the following keys:
- asndatabase tells the path to the ASN database
- countrydatabase tells the path to the country database
If the files are updated while Akvorado is running, they are automatically refreshed.
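For example, assuming the MaxMind GeoLite2 databases are installed at a common location (the paths are illustrative):

geoip:
  asndatabase: /usr/share/GeoIP/GeoLite2-ASN.mmdb
  countrydatabase: /usr/share/GeoIP/GeoLite2-Country.mmdb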
Kafka
Received flows are exported to a Kafka topic using the protocol
buffers format. The definition file is flow/flow.proto. It is
also available through the /flow.proto HTTP endpoint.
Each flow is written in the length-delimited format.
The following keys are accepted:
- topic tells which topic to use to write messages
- autocreatetopic tells if the topic can be created automatically when it does not exist
- brokers specifies the list of brokers to use to bootstrap the connection to the Kafka cluster
- version tells which minimal version of Kafka to expect
- usetls tells if TLS should be used for the connection (authentication is not supported)
- flushinterval defines the maximum flush interval to send received flows to Kafka
- flushbytes defines the maximum number of bytes to store before flushing flows to Kafka
- maxmessagebytes defines the maximum size of a message (it should be equal to or smaller than the same setting in the broker configuration)
- compressioncodec defines the compression codec to use to compress messages (none, gzip, snappy, lz4, and zstd)
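As a sketch, a Kafka section could look like this; the broker names and values are illustrative:

kafka:
  topic: flows
  brokers:
    - kafka-1:9092
    - kafka-2:9092
  compressioncodec: zstd
  flushinterval: 10s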
Core
The core orchestrates the remaining components. It receives the flows from the flow component, adds information using the GeoIP databases and the SNMP poller, and pushes the resulting flows to Kafka.
It only accepts the workers key to define how many workers should be
spawned.
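For example (the value is illustrative):

core:
  workers: 4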