* AI: Added support for non-BHWC models
TensorFlow models use BHWC by default; however, converted models may
expect BCHW input. The input layout is now configurable (although the
restriction of being 4-dimensional still applies) via the Shape
parameter on the input definition. Also, model introspection will try
to deduce the input shape from the model signature.
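A minimal sketch of how the 4-D input shape could be read from a loaded
graph with the TensorFlow Go bindings; the deduceInputShape helper, the
model path and the input operation name are illustrative, not the
project's actual code:

```go
package main

import (
	"fmt"

	tf "github.com/tensorflow/tensorflow/tensorflow/go"
)

// deduceInputShape reads the shape of the given input operation and checks
// that it is 4-dimensional (BHWC or BCHW); dimensions that are not statically
// known, such as the batch size, are reported as -1.
func deduceInputShape(graph *tf.Graph, inputOpName string) ([]int64, error) {
	inputOp := graph.Operation(inputOpName)
	if inputOp == nil {
		return nil, fmt.Errorf("input operation %q not found", inputOpName)
	}
	shape := inputOp.Output(0).Shape()
	if shape.NumDimensions() != 4 {
		return nil, fmt.Errorf("expected a 4-dimensional input, got %d dimensions",
			shape.NumDimensions())
	}
	dims := make([]int64, 4)
	for i := range dims {
		dims[i] = shape.Size(i) // -1 when the dimension is not statically known
	}
	return dims, nil
}

func main() {
	// Path and tag are placeholders for whatever SavedModel is being loaded.
	model, err := tf.LoadSavedModel("testdata/model", []string{"serve"}, nil)
	if err != nil {
		panic(err)
	}
	defer model.Session.Close()
	// "serving_default_input" is an assumed operation name for illustration.
	fmt.Println(deduceInputShape(model.Graph, "serving_default_input"))
}
```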
* AI: Added more tests for enum parsing
ShapeComponent was missing from the tests
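As a rough illustration of the kind of coverage this adds, here is a
self-contained table-driven sketch; ShapeComponent, its constants and
ParseShapeComponent are stand-ins defined inside the sketch, not the
identifiers actually used in internal/ai/tensorflow:

```go
package tensorflow_test

import (
	"fmt"
	"strings"
	"testing"
)

// ShapeComponent and ParseShapeComponent are defined here only so the sketch
// is self-contained; the real definitions live in the package under test.
type ShapeComponent int

const (
	Batch ShapeComponent = iota
	Height
	Width
	Channel
)

func ParseShapeComponent(s string) (ShapeComponent, error) {
	switch strings.ToLower(s) {
	case "batch":
		return Batch, nil
	case "height":
		return Height, nil
	case "width":
		return Width, nil
	case "channel":
		return Channel, nil
	}
	return 0, fmt.Errorf("unknown shape component %q", s)
}

// TestParseShapeComponent is a table-driven test covering every enum value,
// including mixed-case spellings.
func TestParseShapeComponent(t *testing.T) {
	cases := map[string]ShapeComponent{
		"batch":   Batch,
		"Height":  Height,
		"width":   Width,
		"CHANNEL": Channel,
	}
	for in, want := range cases {
		got, err := ParseShapeComponent(in)
		if err != nil {
			t.Fatalf("ParseShapeComponent(%q): %v", in, err)
		}
		if got != want {
			t.Errorf("ParseShapeComponent(%q) = %v, want %v", in, got, want)
		}
	}
}
```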
* AI: Modified external tests to use the new URL
The path has been moved from tensorflow/vision to tensorflow/models
* AI: Moved the builder to the model so it can be reused
This should reduce the number of allocations.
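A minimal, generic sketch of the reuse pattern, assuming the builder is
something with Reset semantics; Model and buildInput are illustrative
names, and bytes.Buffer merely stands in for the project's actual
builder type:

```go
package main

import "bytes"

// Model keeps a single builder and resets it on every call instead of
// allocating a fresh one per inference.
type Model struct {
	builder bytes.Buffer // stand-in for whatever builder the model reuses
}

// buildInput reuses the model's builder; Reset drops previous contents while
// keeping the already allocated backing array.
func (m *Model) buildInput(data []byte) []byte {
	m.builder.Reset()
	m.builder.Write(data)
	return m.builder.Bytes()
}

func main() {
	m := &Model{}
	_ = m.buildInput([]byte("first call allocates"))
	_ = m.buildInput([]byte("later calls reuse the same buffer"))
}
```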
* AI: Fixed errors after merge
Mainly incorrect paths and duplicated variables
New parameters have been added to define the input of the models (see
the sketch after this list):
* ResizeOperation: by default a center-crop was performed; the resize
operation is now configurable.
* InputOrder: by default RGB was used as the channel order for the
values of the input tensor; it can now be configured.
* InputInterval has been changed to InputIntervals (a slice), so every
channel can have its own interval conversion.
* InputInterval can now also define stddev and mean, because sometimes
the stddev and mean of the training data should be used instead of
adjusting the interval.
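A minimal sketch of how these parameters could fit together as Go
types; the type and field names are assumptions based on the parameter
names above, and the mean/stddev values are the common ImageNet
normalization constants, used only as example data:

```go
package main

import "fmt"

// ResizeOperation selects how the source image is fitted to the input size.
type ResizeOperation string

const (
	ResizeCenterCrop ResizeOperation = "center-crop" // previous default
	ResizeStretch    ResizeOperation = "stretch"
)

// InputOrder selects the channel order of the input tensor values.
type InputOrder string

const (
	OrderRGB InputOrder = "rgb" // previous default
	OrderBGR InputOrder = "bgr"
)

// InputInterval describes how raw channel values are converted: either into
// a target [Min, Max] interval, or by normalizing with the training data's
// mean and standard deviation.
type InputInterval struct {
	Min, Max     float32
	Mean, StdDev float32
}

// InputDefinition gathers the configurable parts of a model input.
type InputDefinition struct {
	Shape     []string // e.g. {"batch", "height", "width", "channel"} for BHWC
	Resize    ResizeOperation
	Order     InputOrder
	Intervals []InputInterval // one entry per channel
}

func main() {
	def := InputDefinition{
		Shape:  []string{"batch", "channel", "height", "width"}, // a BCHW model
		Resize: ResizeStretch,
		Order:  OrderBGR,
		Intervals: []InputInterval{
			{Mean: 0.485, StdDev: 0.229},
			{Mean: 0.456, StdDev: 0.224},
			{Mean: 0.406, StdDev: 0.225},
		},
	}
	fmt.Printf("%+v\n", def)
}
```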
Now, when loading labels, the internal/ai/tensorflow package will look
for all files matching the glob label*.txt and will return the set of
labels whose count matches the expected number. Some models add a first
label called background, which is a bias.
Also, a new parameter has been added to models to allow a second path
in which to look for the label files. This path is set to the nasnet
asset in internal/ai/vision.
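A minimal sketch of this lookup: every file matching label*.txt in the
model directory, and then in an optional secondary path, is checked
against the expected label count. The function name, the example paths
and the way the leading background label is handled are assumptions,
not the project's actual behaviour:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"path/filepath"
)

// findLabels scans the given directories for label*.txt files and returns
// the first label list whose size matches the expected number.
func findLabels(expected int, dirs ...string) ([]string, error) {
	for _, dir := range dirs {
		matches, err := filepath.Glob(filepath.Join(dir, "label*.txt"))
		if err != nil {
			return nil, err
		}
		for _, path := range matches {
			labels, err := readLines(path)
			if err != nil {
				return nil, err
			}
			switch len(labels) {
			case expected:
				return labels, nil
			case expected + 1:
				// Some models prepend a "background" label; dropping it here
				// is an assumption made for this sketch.
				return labels[1:], nil
			}
		}
	}
	return nil, fmt.Errorf("no label file with %d labels found", expected)
}

// readLines returns the file's contents as one label per line.
func readLines(path string) ([]string, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, err
	}
	defer f.Close()
	var lines []string
	s := bufio.NewScanner(f)
	for s.Scan() {
		lines = append(lines, s.Text())
	}
	return lines, s.Err()
}

func main() {
	// Both paths are placeholders for the model assets and the secondary path.
	labels, err := findLabels(1000, "model/assets", "assets/nasnet")
	fmt.Println(len(labels), err)
}
```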
By inspecting existing models we saw that logits are often returned
instead of probabilities. PhotoPrism uses probabilities to rank the
quality of the results, so we need to transform those logits. Our
approach is to add a new layer to the graph at runtime that performs
the softmax operation.
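A minimal sketch of how such a layer could be appended with the
TensorFlow Go bindings; appendSoftmax, the model path and the "logits"
operation name are illustrative and this is not the project's actual
implementation:

```go
package main

import (
	"fmt"

	tf "github.com/tensorflow/tensorflow/tensorflow/go"
	"github.com/tensorflow/tensorflow/tensorflow/go/op"
)

// appendSoftmax adds a Softmax node on top of the model's logit output so
// the session can be asked for probabilities instead of raw logits.
func appendSoftmax(graph *tf.Graph, logitsOpName string) (tf.Output, error) {
	logitsOp := graph.Operation(logitsOpName)
	if logitsOp == nil {
		return tf.Output{}, fmt.Errorf("operation %q not found in graph", logitsOpName)
	}
	// NewScopeWithGraph lets us add operations to the already loaded graph.
	scope := op.NewScopeWithGraph(graph)
	probabilities := op.Softmax(scope.SubScope("runtime_softmax"), logitsOp.Output(0))
	if err := scope.Err(); err != nil {
		return tf.Output{}, err
	}
	return probabilities, nil
}

func main() {
	model, err := tf.LoadSavedModel("testdata/model", []string{"serve"}, nil)
	if err != nil {
		panic(err)
	}
	defer model.Session.Close()
	// "logits" is a placeholder; real models expose their own output names.
	out, err := appendSoftmax(model.Graph, "logits")
	fmt.Println(out, err)
}
```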