mirror of
https://github.com/akvorado/akvorado.git
synced 2025-12-12 06:24:10 +01:00
inlet/flow: add sflow support (#23)
@@ -1,6 +1,6 @@
 # Akvorado: flow collector, hydrater and visualizer.
 
-This program receives flows (currently Netflow/IPFIX), hydrates them
+This program receives flows (currently Netflow/IPFIX and sFlow), hydrates them
 with interface names (using SNMP), geo information (using MaxMind),
 and exports them to Kafka, then ClickHouse. It also exposes a web
 interface to browse the collected data.
@@ -50,6 +50,11 @@ inlet:
       listen: 0.0.0.0:2055
       workers: 6
       receive-buffer: 10485760
+    - type: udp
+      decoder: sflow
+      listen: 0.0.0.0:6343
+      workers: 6
+      receive-buffer: 10485760
   core:
     workers: 6
     exporter-classifiers:
@@ -1,6 +1,6 @@
 # Introduction
 
-*Akvorado*[^name] receives flows (currently Netflow/IPFIX), hydrates
+*Akvorado*[^name] receives flows (currently Netflow/IPFIX and sFlow), hydrates
 them with interface names (using SNMP), geo information (using
 MaxMind), and exports them to Kafka, then ClickHouse. It also exposes
 a web interface to browse the result.
@@ -13,8 +13,8 @@ a web interface to browse the result.
 
 A `docker-compose.yml` file is provided to quickly get started. Once
 running, *Akvorado* web interface should be running on port 80 and an
-inlet accepting NetFlow available on port 2055. You need to configure
-SNMP on your exporters to accept requests from Akvorado.
+inlet accepting both NetFlow (port 2055) and sFlow (port 6343).
+You need to configure SNMP on your exporters to accept requests from Akvorado.
 
 ```console
 # docker-compose up
@@ -55,9 +55,9 @@ of the inlet services are `flow`, `kafka`, and `core`.
 The flow component handles incoming flows. It only accepts the
 `inputs` key to define the list of inputs to receive incoming flows.
 
-Each input has a `type` and a `decoder`. For `decoder`, only `netflow`
-is currently supported. As for the `type`, both `udp` and `file` are
-supported.
+Each input has a `type` and a `decoder`. For `decoder`, both
+`netflow` and `sflow` are supported. As for the `type`, both `udp`
+and `file` are supported.
 
 For the UDP input, the supported keys are `listen` to set the
 listening endpoint, `workers` to set the number of workers to listen
@@ -72,6 +72,10 @@ flow:
       decoder: netflow
       listen: 0.0.0.0:2055
       workers: 3
+    - type: udp
+      decoder: sflow
+      listen: 0.0.0.0:6343
+      workers: 3
   workers: 2
 ```
@@ -87,11 +91,16 @@ flow:
       paths:
       - /tmp/flow1.raw
       - /tmp/flow2.raw
+    - type: file
+      decoder: sflow
+      paths:
+      - /tmp/flow1.raw
+      - /tmp/flow2.raw
   workers: 2
 ```
 
 Without configuration, *Akvorado* will listen for incoming
-Netflow/IPFIX flows on a random port (check the logs to know which
+Netflow/IPFIX and sFlow flows on a random port (check the logs to know which
 one).
 
 ### Kafka
@@ -214,8 +214,7 @@ routing-options {
 
 #### sFlow
 
-Currently, *Akvorado* does not support sFlow. Once it does, for QFX
-devices, you can use sFlow.
+For QFX devices, you can use sFlow.
 
 ```junos
 protocols {
@@ -247,6 +246,29 @@ snmp {
 }
 ```
 
+### Arista
+
+#### sFlow
+
+For Arista devices, you can use sFlow.
+
+```eos
+sflow sample 1024
+sflow vrf VRF-MANAGEMENT destination 192.0.2.1
+sflow vrf VRF-MANAGEMENT source-interface Management1
+sflow interface egress enable default
+sflow run
+```
+
+#### SNMP
+
+Then, configure SNMP:
+
+```
+snmp-server community <community> ro
+snmp-server vrf VRF-MANAGEMENT
+```
+
 ## ClickHouse
 
 While ClickHouse works pretty well out-of-the-box, it is still
@@ -86,8 +86,7 @@ often abstracted, this is not the case for metrics. Moreover, the
 design to scale is a bit different as *Akvorado* will create a socket
 for each worker instead of distributing incoming flows using a channel.
 
-Only Netflow v9 and IPFIX are currently supported. However, as *GoFlow2*
-also decodes sFlow, support can be added later.
+Netflow v9, IPFIX, and sFlow are currently supported.
 
 The design of this component is modular. It is possible to "plug"
 new decoders and new inputs easily. It is expected that most buffering
@@ -237,10 +236,10 @@ In the future, we may:
   and BGP next hop (or an indirection to keep memory usage down) to
   the next AS and the AS path (in this case, again, an indirection to
   keep memory down). We need a configuration knob to determine what
-  source to use for origin AS: BGP, Netflow (likely the same
+  source to use for origin AS: BGP, Netflow/sFlow (likely the same
   information), or GeoIP. This could be dependent on whether we have
   a private AS or not.
-- DDoS service to detect and mitigate DDoS (with Flowspec).
+- DDoS service to detect and mitigate DDoS (with Flow-spec).
 - Support VRFs.
 - Add dynamic configuration with something like [go-archaius][] or
   [Harvester][].
@@ -16,6 +16,7 @@ This release introduces a new protobuf schema. When using
 `docker-compose`, a restart of ClickHouse is needed after upgrading
 the orchestrator to load this new schema.
 
+- ✨ *inlet*: add sFlow support [PR #23][]
 - ✨ *inlet*: classify exporters to group, role, site, region, and tenant [PR #14][]
 - ✨ *orchestrator*: add role, site, region, and tenant attributes to networks [PR #15][]
 - ✨ *docker-compose*: clean conntrack entries when inlet container starts
@@ -30,6 +31,7 @@ the orchestrator to load this new schema.
 [PR #11]: https://github.com/vincentbernat/akvorado/pull/11
 [PR #14]: https://github.com/vincentbernat/akvorado/pull/14
 [PR #15]: https://github.com/vincentbernat/akvorado/pull/15
+[PR #23]: https://github.com/vincentbernat/akvorado/pull/23
 [UI for Apache Kafka]: https://github.com/provectus/kafka-ui
 
 ## 1.4.2 - 2022-07-16
@@ -91,6 +91,7 @@ services:
     <<: *akvorado-image
     ports:
       - 2055:2055/udp
+      - 6343:6343/udp
     restart: unless-stopped
     command: inlet http://akvorado-orchestrator:8080
     volumes:
@@ -29,6 +29,9 @@ func DefaultConfiguration() Configuration {
 		Inputs: []InputConfiguration{{
 			Decoder: "netflow",
 			Config:  udp.DefaultConfiguration(),
+		}, {
+			Decoder: "sflow",
+			Config:  udp.DefaultConfiguration(),
 		}},
 	}
 }
@@ -32,33 +32,10 @@ func TestDecodeConfiguration(t *testing.T) {
 				"decoder": "netflow",
 				"listen":  "192.0.2.1:2055",
 				"workers": 3,
-			},
-		},
-	},
-	Expected: Configuration{
-		Inputs: []InputConfiguration{{
-			Decoder: "netflow",
-			Config: &udp.Configuration{
-				Workers:   3,
-				QueueSize: 100000,
-				Listen:    "192.0.2.1:2055",
-			},
-		}},
-	},
-}, {
-	Name: "from existing configuration",
-	From: Configuration{
-		Inputs: []InputConfiguration{{
-			Decoder: "netflow",
-			Config:  udp.DefaultConfiguration(),
-		}},
-	},
-	Source: map[string]interface{}{
-		"inputs": []map[string]interface{}{
-			map[string]interface{}{
 			}, {
 				"type":    "udp",
-				"decoder": "netflow",
-				"listen":  "192.0.2.1:2055",
+				"decoder": "sflow",
+				"listen":  "192.0.2.1:6343",
 				"workers": 3,
 			},
 		},
@@ -71,6 +48,56 @@ func TestDecodeConfiguration(t *testing.T) {
 				QueueSize: 100000,
 				Listen:    "192.0.2.1:2055",
 			},
+		}, {
+			Decoder: "sflow",
+			Config: &udp.Configuration{
+				Workers:   3,
+				QueueSize: 100000,
+				Listen:    "192.0.2.1:6343",
+			},
+		}},
+	},
+}, {
+	Name: "from existing configuration",
+	From: Configuration{
+		Inputs: []InputConfiguration{{
+			Decoder: "netflow",
+			Config:  udp.DefaultConfiguration(),
+		}, {
+			Decoder: "sflow",
+			Config:  udp.DefaultConfiguration(),
+		}},
+	},
+	Source: map[string]interface{}{
+		"inputs": []map[string]interface{}{
+			map[string]interface{}{
+				"type":    "udp",
+				"decoder": "netflow",
+				"listen":  "192.0.2.1:2055",
+				"workers": 3,
+			}, map[string]interface{}{
+				"type":    "udp",
+				"decoder": "sflow",
+				"listen":  "192.0.2.1:6343",
+				"workers": 3,
+			},
+		},
+	},
+	Expected: Configuration{
+		Inputs: []InputConfiguration{{
+			Decoder: "netflow",
+			Config: &udp.Configuration{
+				Workers:   3,
+				QueueSize: 100000,
+				Listen:    "192.0.2.1:2055",
+			},
+		}, {
+			Decoder: "sflow",
+			Config: &udp.Configuration{
+				Workers:   3,
+				QueueSize: 100000,
+				Listen:    "192.0.2.1:6343",
+			},
+		}},
+	},
 }, {
@@ -79,6 +106,9 @@ func TestDecodeConfiguration(t *testing.T) {
 		Inputs: []InputConfiguration{{
 			Decoder: "netflow",
 			Config:  udp.DefaultConfiguration(),
+		}, {
+			Decoder: "sflow",
+			Config:  udp.DefaultConfiguration(),
 		}},
 	},
 	Source: map[string]interface{}{
@@ -86,6 +116,9 @@ func TestDecodeConfiguration(t *testing.T) {
 			map[string]interface{}{
 				"type":  "file",
 				"paths": []string{"file1", "file2"},
+			}, map[string]interface{}{
+				"type":  "file",
+				"paths": []string{"file1", "file2"},
 			},
 		},
 	},
@@ -95,6 +128,11 @@ func TestDecodeConfiguration(t *testing.T) {
 			Config: &file.Configuration{
 				Paths: []string{"file1", "file2"},
 			},
+		}, {
+			Decoder: "sflow",
+			Config: &file.Configuration{
+				Paths: []string{"file1", "file2"},
+			},
 		}},
 	},
 }, {
@@ -169,6 +207,13 @@ func TestMarshalYAML(t *testing.T) {
 				QueueSize: 1000,
 				Workers:   3,
 			},
+		}, {
+			Decoder: "sflow",
+			Config: &udp.Configuration{
+				Listen:    "192.0.2.11:6343",
+				QueueSize: 1000,
+				Workers:   3,
+			},
 		},
 	},
 }
@@ -183,6 +228,12 @@ func TestMarshalYAML(t *testing.T) {
     receivebuffer: 0
     type: udp
     workers: 3
+  - decoder: sflow
+    listen: 192.0.2.11:6343
+    queuesize: 1000
+    receivebuffer: 0
+    type: udp
+    workers: 3
 `
 	if diff := helpers.Diff(strings.Split(string(got), "\n"), strings.Split(expected, "\n")); diff != "" {
 		t.Fatalf("Marshal() (-got, +want):\n%s", diff)
@@ -8,6 +8,7 @@ import (
 
 	"akvorado/inlet/flow/decoder"
 	"akvorado/inlet/flow/decoder/netflow"
+	"akvorado/inlet/flow/decoder/sflow"
 )
 
 // Message describes a decoded flow message.
@@ -51,4 +52,5 @@ func (c *Component) wrapDecoder(d decoder.Decoder) decoder.Decoder {
 
 var decoders = map[string]decoder.NewDecoderFunc{
 	"netflow": netflow.New,
+	"sflow":   sflow.New,
 }
138
inlet/flow/decoder/sflow/root.go
Normal file
@@ -0,0 +1,138 @@
// SPDX-FileCopyrightText: 2022 Tchadel Icard
// SPDX-License-Identifier: AGPL-3.0-only

// Package sflow handles sFlow v5 decoding.
package sflow

import (
	"bytes"
	"net"

	"github.com/netsampler/goflow2/decoders/sflow"
	"github.com/netsampler/goflow2/producer"

	"akvorado/common/reporter"
	"akvorado/inlet/flow/decoder"
)

// Decoder contains the state for the sFlow v5 decoder.
type Decoder struct {
	r *reporter.Reporter

	metrics struct {
		errors                *reporter.CounterVec
		stats                 *reporter.CounterVec
		sampleRecordsStatsSum *reporter.CounterVec
		sampleStatsSum        *reporter.CounterVec
	}
}

// New instantiates a new sFlow decoder.
func New(r *reporter.Reporter) decoder.Decoder {
	nd := &Decoder{
		r: r,
	}

	nd.metrics.errors = nd.r.CounterVec(
		reporter.CounterOpts{
			Name: "errors_count",
			Help: "sFlows processed errors.",
		},
		[]string{"exporter", "error"},
	)
	nd.metrics.stats = nd.r.CounterVec(
		reporter.CounterOpts{
			Name: "count",
			Help: "sFlows processed.",
		},
		[]string{"exporter", "agent", "version"},
	)
	nd.metrics.sampleRecordsStatsSum = nd.r.CounterVec(
		reporter.CounterOpts{
			Name: "sample_records_sum",
			Help: "sFlows samples sum of records.",
		},
		[]string{"exporter", "agent", "version", "type"},
	)
	nd.metrics.sampleStatsSum = nd.r.CounterVec(
		reporter.CounterOpts{
			Name: "sample_sum",
			Help: "sFlows samples sum.",
		},
		[]string{"exporter", "agent", "version", "type"},
	)

	return nd
}

// Decode decodes an sFlow payload.
func (nd *Decoder) Decode(in decoder.RawFlow) []*decoder.FlowMessage {
	buf := bytes.NewBuffer(in.Payload)
	key := in.Source.String()

	ts := uint64(in.TimeReceived.UTC().Unix())
	msgDec, err := sflow.DecodeMessage(buf)

	if err != nil {
		switch err.(type) {
		case *sflow.ErrorVersion:
			nd.metrics.errors.WithLabelValues(key, "error version").Inc()
		case *sflow.ErrorIPVersion:
			nd.metrics.errors.WithLabelValues(key, "error ip version").Inc()
		case *sflow.ErrorDataFormat:
			nd.metrics.errors.WithLabelValues(key, "error data format").Inc()
		default:
			nd.metrics.errors.WithLabelValues(key, "error decoding").Inc()
		}
		return nil
	}

	// Update some stats
	msgDecConv, ok := msgDec.(sflow.Packet)
	if !ok {
		nd.metrics.stats.WithLabelValues(key, "unknown", "unknown").Inc()
		return nil
	}
	agent := net.IP(msgDecConv.AgentIP).String()
	version := "5"
	samples := msgDecConv.Samples
	nd.metrics.stats.WithLabelValues(key, agent, version).Inc()
	for _, s := range samples {
		switch sConv := s.(type) {
		case sflow.FlowSample:
			nd.metrics.sampleStatsSum.WithLabelValues(key, agent, version, "FlowSample").
				Inc()
			nd.metrics.sampleRecordsStatsSum.WithLabelValues(key, agent, version, "FlowSample").
				Add(float64(len(sConv.Records)))
		case sflow.CounterSample:
			nd.metrics.sampleStatsSum.WithLabelValues(key, agent, version, "CounterSample").
				Inc()
			nd.metrics.sampleRecordsStatsSum.WithLabelValues(key, agent, version, "CounterSample").
				Add(float64(len(sConv.Records)))
		case sflow.ExpandedFlowSample:
			nd.metrics.sampleStatsSum.WithLabelValues(key, agent, version, "ExpandedFlowSample").
				Inc()
			nd.metrics.sampleRecordsStatsSum.WithLabelValues(key, agent, version, "ExpandedFlowSample").
				Add(float64(len(sConv.Records)))
		}
	}

	flowMessageSet, err := producer.ProcessMessageSFlow(msgDec)
	for _, fmsg := range flowMessageSet {
		fmsg.TimeReceived = ts
		fmsg.TimeFlowStart = ts
		fmsg.TimeFlowEnd = ts
	}

	results := make([]*decoder.FlowMessage, len(flowMessageSet))
	for idx, fmsg := range flowMessageSet {
		results[idx] = decoder.ConvertGoflowToFlowMessage(fmsg)
	}

	return results
}

// Name returns the name of the decoder.
func (nd *Decoder) Name() string {
	return "sflow"
}
160
inlet/flow/decoder/sflow/root_test.go
Normal file
@@ -0,0 +1,160 @@
// SPDX-FileCopyrightText: 2022 Tchadel Icard
// SPDX-License-Identifier: AGPL-3.0-only

package sflow

import (
	"io/ioutil"
	"net"
	"path/filepath"
	"testing"

	"akvorado/common/helpers"
	"akvorado/common/reporter"
	"akvorado/inlet/flow/decoder"
)

func TestDecode(t *testing.T) {
	r := reporter.NewMock(t)
	sdecoder := New(r)

	// Send data
	data, err := ioutil.ReadFile(filepath.Join("testdata", "data-1140.data"))
	if err != nil {
		panic(err)
	}
	got := sdecoder.Decode(decoder.RawFlow{Payload: data, Source: net.ParseIP("127.0.0.1")})
	if got == nil {
		t.Fatalf("Decode() error on data")
	}
	expectedFlows := []*decoder.FlowMessage{
		{
			SequenceNum:     812646826,
			SamplingRate:    1024,
			TimeFlowStart:   18446744011573954816,
			TimeFlowEnd:     18446744011573954816,
			Bytes:           1518,
			Packets:         1,
			Etype:           0x86DD,
			Proto:           6,
			SrcPort:         46026,
			DstPort:         22,
			InIf:            27,
			OutIf:           28,
			IPTos:           8,
			IPTTL:           64,
			TCPFlags:        16,
			IPv6FlowLabel:   426132,
			SrcAddr:         net.ParseIP("2a0c:8880:2:0:185:21:130:38").To16(),
			DstAddr:         net.ParseIP("2a0c:8880:2:0:185:21:130:39").To16(),
			ExporterAddress: net.ParseIP("172.16.0.3").To16(),
		}, {
			SequenceNum:     812646826,
			SamplingRate:    1024,
			TimeFlowStart:   18446744011573954816,
			TimeFlowEnd:     18446744011573954816,
			Bytes:           439,
			Packets:         1,
			Etype:           0x800,
			Proto:           6,
			SrcPort:         443,
			DstPort:         56876,
			InIf:            49001,
			OutIf:           25,
			IPTTL:           59,
			TCPFlags:        24,
			FragmentId:      42354,
			FragmentOffset:  16384,
			SrcAS:           13335,
			DstAS:           39421,
			SrcNet:          20,
			DstNet:          27,
			SrcAddr:         net.ParseIP("104.26.8.24").To16(),
			DstAddr:         net.ParseIP("45.90.161.46").To16(),
			ExporterAddress: net.ParseIP("172.16.0.3").To16(),
		}, {
			SequenceNum:     812646826,
			SamplingRate:    1024,
			TimeFlowStart:   18446744011573954816,
			TimeFlowEnd:     18446744011573954816,
			Bytes:           1518,
			Packets:         1,
			Etype:           0x86DD,
			Proto:           6,
			SrcPort:         46026,
			DstPort:         22,
			InIf:            27,
			OutIf:           28,
			IPTos:           8,
			IPTTL:           64,
			TCPFlags:        16,
			IPv6FlowLabel:   426132,
			SrcAddr:         net.ParseIP("2a0c:8880:2:0:185:21:130:38").To16(),
			DstAddr:         net.ParseIP("2a0c:8880:2:0:185:21:130:39").To16(),
			ExporterAddress: net.ParseIP("172.16.0.3").To16(),
		}, {
			SequenceNum:     812646826,
			SamplingRate:    1024,
			TimeFlowStart:   18446744011573954816,
			TimeFlowEnd:     18446744011573954816,
			Bytes:           64,
			Packets:         1,
			Etype:           0x800,
			Proto:           6,
			SrcPort:         55658,
			DstPort:         5555,
			InIf:            28,
			OutIf:           49001,
			IPTTL:           255,
			TCPFlags:        2,
			FragmentId:      54321,
			SrcAS:           39421,
			DstAS:           26615,
			SrcNet:          27,
			DstNet:          17,
			SrcAddr:         net.ParseIP("45.90.161.148").To16(),
			DstAddr:         net.ParseIP("191.87.91.27").To16(),
			ExporterAddress: net.ParseIP("172.16.0.3").To16(),
		}, {
			SequenceNum:     812646826,
			SamplingRate:    1024,
			TimeFlowStart:   18446744011573954816,
			TimeFlowEnd:     18446744011573954816,
			Bytes:           1518,
			Packets:         1,
			Etype:           0x86DD,
			Proto:           6,
			SrcPort:         46026,
			DstPort:         22,
			InIf:            27,
			OutIf:           28,
			IPTos:           8,
			IPTTL:           64,
			TCPFlags:        16,
			IPv6FlowLabel:   426132,
			SrcAddr:         net.ParseIP("2a0c:8880:2:0:185:21:130:38").To16(),
			DstAddr:         net.ParseIP("2a0c:8880:2:0:185:21:130:39").To16(),
			ExporterAddress: net.ParseIP("172.16.0.3").To16(),
		},
	}
	for _, f := range got {
		f.TimeReceived = 0
	}

	if diff := helpers.Diff(got, expectedFlows); diff != "" {
		t.Fatalf("Decode() (-got, +want):\n%s", diff)
	}
	gotMetrics := r.GetMetrics(
		"akvorado_inlet_flow_decoder_sflow_",
		"count",
		"sample_",
	)
	expectedMetrics := map[string]string{
		`count{agent="172.16.0.3",exporter="127.0.0.1",version="5"}`:                                "1",
		`sample_records_sum{agent="172.16.0.3",exporter="127.0.0.1",type="FlowSample",version="5"}`: "14",
		`sample_sum{agent="172.16.0.3",exporter="127.0.0.1",type="FlowSample",version="5"}`:         "5",
	}
	if diff := helpers.Diff(gotMetrics, expectedMetrics); diff != "" {
		t.Fatalf("Metrics after data (-got, +want):\n%s", diff)
	}
}
BIN
inlet/flow/decoder/sflow/testdata/data-1140.data
vendored
Normal file
Binary file not shown.