flow: rename buffer-length option to buffer-size

It is a better match. I was worried people would understand the size
as a number of bytes, but we also speak of a channel size.
Vincent Bernat
2022-03-22 22:43:36 +01:00
parent e57b107bfa
commit cf1fe53511
5 changed files with 13 additions and 8 deletions
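The distinction the commit message draws is the one Go's channels make themselves: a channel's capacity counts queued elements, never bytes. A minimal standalone sketch (not taken from the repository):

```go
package main

import "fmt"

func main() {
	// A buffered channel sized 3 holds three values, whatever
	// their in-memory footprint; the "size" is an element count.
	ch := make(chan string, 3)
	ch <- "flow-1"
	ch <- "flow-2"
	fmt.Println(len(ch), cap(ch)) // 2 3
}
```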


@@ -137,7 +137,7 @@ func (c *Component) runWorker(workerID int) error {
 				continue
 			}
-			// Forward to Kafka
+			// Forward to Kafka (this could block)
 			c.d.Kafka.Send(sampler, buf.Bytes())
 			c.metrics.flowsForwarded.WithLabelValues(sampler).Inc()


@@ -75,7 +75,7 @@ configuration keys:
 - `listen` to specify the IP and UDP port to listen for new flows
 - `workers` to specify the number of workers to spawn to handle
   incoming flows
-- `bufferlength` to specify the number of flows to buffer when pushing
+- `buffer-size` to specify the number of flows to buffer when pushing
   them to the core component
 For example:
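Put together, a configuration fragment using the renamed key might look like this; the YAML layout and the `flow` section name are assumptions for illustration, not taken from the commit:

```yaml
flow:
  listen: localhost:2055
  workers: 2
  buffer-size: 1000  # flows buffered when pushing to the core component
```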


@@ -216,6 +216,11 @@ In the future, we may:
 - Add more information to the landing page, including some basic statistics.
 - Automatically build dashboards for Grafana.[^grafana]
 - Builds dashboards with [D3.js][].[^d3js]
+- Buffer message to disks instead of blocking (when sending to Kafka)
+  or dropping (when querying the SNMP poller). At least, we should
+  have a solution to clearly signal when something is not scaling as
+  intended, otherwise, we just drop buffers at the beginning of the
+  pipeline.
 - Collect routes by integrating GoBGP. This is low priority if we
   consider information from Maxmind good enough for our use.


@@ -6,15 +6,15 @@ type Configuration struct {
 	Listen string
 	// Workers define the number of workers to use for decoding.
 	Workers int
-	// BufferLength defines the length of the channel used to
+	// BufferSize defines the size of the channel used to
 	// communicate incoming flows. 0 can be used to disable
 	// buffering.
-	BufferLength uint
+	BufferSize uint
 }
 
 // DefaultConfiguration represents the default configuration for the flow component
 var DefaultConfiguration = Configuration{
 	Listen:  "localhost:2055",
 	Workers: 1,
-	BufferLength: 1000,
+	BufferSize:   1000,
 }
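The renamed field feeds straight into the channel allocation done by the flow component. A self-contained sketch of that relationship, with the struct shape reproduced from the diff and `FlowMessage` stubbed out for illustration:

```go
package main

import "fmt"

// Configuration mirrors the struct from the diff above.
type Configuration struct {
	Listen     string
	Workers    int
	BufferSize uint // number of flows the channel can hold; 0 disables buffering
}

// FlowMessage is a stub; the real type carries decoded flow data.
type FlowMessage struct{}

func main() {
	config := Configuration{Listen: "localhost:2055", Workers: 1, BufferSize: 1000}
	// The channel capacity is the configured number of flows, not bytes.
	incomingFlows := make(chan *FlowMessage, config.BufferSize)
	fmt.Println(cap(incomingFlows)) // 1000
}
```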


@@ -55,7 +55,7 @@ func New(r *reporter.Reporter, configuration Configuration, dependencies Depende
 		r:      r,
 		d:      &dependencies,
 		config: configuration,
-		incomingFlows: make(chan *FlowMessage, configuration.BufferLength),
+		incomingFlows: make(chan *FlowMessage, configuration.BufferSize),
 	}
 	c.d.Daemon.Track(&c.t, "flow")
 	c.initHTTP()
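As the comment added in the first hunk hints, a send on this channel can block: with a buffer size of 0 the channel is unbuffered, so every send waits for a worker to receive. A small standalone illustration of that semantics (not repository code):

```go
package main

import "fmt"

func main() {
	// A buffer size of 0 yields an unbuffered channel: the send
	// below only completes once the goroutine is receiving.
	incoming := make(chan int) // same as make(chan int, 0)
	done := make(chan struct{})
	go func() {
		fmt.Println("received", <-incoming)
		close(done)
	}()
	incoming <- 42 // blocks until the goroutine above receives
	<-done
}
```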