* Add limit type field selection
* Take LimitType into account when generating the SQL request for the Line graph
Also, add LimitType to the default configuration for the console
* Take LimitType into account when generating the SQL request for the Sankey graph
* Refactor the SQL query used by line and sankey
* Add limitType description to the documentation
* Order by max in graphLine when limitType max is used
* Fix query when using top max
Revert some modifications, as they were no longer relevant once the query was fixed.
* Rework the way the line graph type sorts by max
* Add configuration validation for LimitType
---------
Co-authored-by: Dimitri Baudrier <github.52grm@simplelogin.com>
Done with:
```
git grep -l 'for.*:= 0.*++' \
| xargs sed -i -E 's/for (.*) := 0; \1 < (.*); \1\+\+/for \1 := range \2/'
```
And a few manual fixes due to unused variables. There is something fishy
in the BMP rib test. Add a comment about that. The two forms are not
equivalent: with range, the random bound is evaluated once, while in the
original loop it was evaluated at each iteration. I believe the intent
was the range-like behavior.
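For illustration, a minimal standalone sketch of the non-equivalence
(the actual BMP rib test code differs):
```
package main

import (
	"fmt"
	"math/rand"
)

func main() {
	a := 0
	// Classic form: the condition is re-evaluated at each
	// iteration, so a random bound is a moving target.
	for i := 0; i < rand.Intn(10); i++ {
		a++
	}

	b := 0
	// Go 1.22 range-over-int form: rand.Intn(10) is evaluated
	// once, before the loop starts.
	for range rand.Intn(10) {
		b++
	}

	fmt.Println(a, b) // the two counts frequently differ
}
```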
This is useful to quickly detect interfaces that are close to
saturation. It would usually require grouping by exporter name and
interface name, and it may not make sense for some graph types (like
stacked 100%). It is useful with Lines and Grid.
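A hedged sketch of the idea in Go; `LimitType`, its values, and the SQL
shape are illustrative stand-ins, not the project's actual query
builder:
```
package main

import "fmt"

// LimitType picks the aggregate used to rank the top N series; the
// type name and values here are assumptions for illustration.
type LimitType string

const (
	LimitTypeAvg LimitType = "avg" // rank series by average rate
	LimitTypeMax LimitType = "max" // rank series by peak rate
)

// orderBy returns the ranking aggregate. Ranking by MAX surfaces
// interfaces that peak near saturation even when their average
// rate stays low.
func orderBy(t LimitType) string {
	if t == LimitTypeMax {
		return "MAX(bps)"
	}
	return "AVG(bps)"
}

func main() {
	// Hypothetical query shape: group per exporter/interface, keep
	// the series that rank highest under the chosen aggregate.
	fmt.Printf("... GROUP BY ExporterName, InterfaceName ORDER BY %s DESC LIMIT 10\n",
		orderBy(LimitTypeMax))
}
```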
This is a first step to make it accept configuration. Most of the
changes are quite trivial, but I also ran into some difficulties with
query columns and filters. They need the schema for parsing, but parsing
happens before dependencies are instantiated (and even if it was not the
case, parsing is stateless). Therefore, I have added a `Validate()`
method that must be called after instantiation. Various bits `panic()`
if not validated to ensure we catch all cases.
The alternative of making the component manage a global state would
have been simpler, but it would break once we add the ability to add or
disable columns.
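A minimal sketch of the pattern, under assumed names (`Schema`,
`Column`); the real component is more involved:
```
package main

import "fmt"

// Schema is a stand-in for the dependency needed to resolve columns.
type Schema struct{ columns map[string]bool }

// Column is parsed from the configuration before the schema exists,
// so parsing alone cannot check it.
type Column struct {
	name      string
	validated bool
}

// UnmarshalText parses the column name without any schema access:
// parsing is stateless and happens before dependencies are built.
func (c *Column) UnmarshalText(text []byte) error {
	c.name = string(text)
	return nil
}

// Validate must be called once the schema is instantiated.
func (c *Column) Validate(s *Schema) error {
	if !s.columns[c.name] {
		return fmt.Errorf("unknown column %q", c.name)
	}
	c.validated = true
	return nil
}

// Name panics when Validate() was skipped, so a missed call site
// shows up immediately instead of producing a bad query.
func (c *Column) Name() string {
	if !c.validated {
		panic("column used before validation")
	}
	return c.name
}

func main() {
	s := &Schema{columns: map[string]bool{"SrcAS": true}}
	var c Column
	_ = c.UnmarshalText([]byte("SrcAS"))
	if err := c.Validate(s); err != nil {
		panic(err)
	}
	fmt.Println(c.Name())
}
```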
This is needed if we want to be able to mix several tables inside a
single query (for example, flows_1m0s for one part of the query and
flows_5m0s for another part to overlay historical data).
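As a hedged illustration of the query shape this enables (the helper
and the SQL are assumptions; only the table naming follows the example
above):
```
package main

import (
	"fmt"
	"time"
)

// tableFor picks the flows table matching a resolution; the naming
// scheme matches the example above, the helper itself is assumed.
func tableFor(resolution time.Duration) string {
	return fmt.Sprintf("flows_%s", resolution) // flows_1m0s, flows_5m0s, ...
}

func main() {
	// Hypothetical query shape: recent data at fine resolution,
	// historical overlay at a coarser one, in a single query.
	query := fmt.Sprintf(`
SELECT time, SUM(bps) FROM %s WHERE time >= {start} GROUP BY time
UNION ALL
SELECT time, SUM(bps) FROM %s WHERE time <  {start} GROUP BY time`,
		tableFor(time.Minute),
		tableFor(5*time.Minute))
	fmt.Println(query)
}
```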
Also, the way we handle time buckets is now cleaner. The previous way
had two stages of rounding and was incorrect. We were discarding the
first and last value for this reason. The new way only has one stage
of rounding and is correct. It tries hard to align the buckets to the
specified start time, so we no longer need to discard these values. We
still discard the last one because it could be incomplete (when the end
is "now").