Description
Following the creation of the framework for evaluating and comparing feature branches and for catching regressions prior to letting them into the master branch/release versions, we need to specify the critical commands we want to keep track of.
We already have automation in place to enable redis-benchmark-related benchmarks on CI/LOCAL machines, meaning we're only missing the use-cases that are critical to Product/Eng. With that in mind, @DvirDukhan and @K-Jo, please specify on this issue the benchmarks you want to see on CI, given the two types of benchmarks:
- per-command ones:
  - For this specification, you only need to state the command and a dataset to be preloaded (RDB). A sketch of how such an entry could be exercised locally is shown after this list.
  - Example:
    - Command: `AI.TENSORSET mytensor FLOAT 2 2 VALUES 1 2 3 4`. Rdb: not required
    - Command: `AI.TENSORGET mytensor`. Rdb: "https://s3.amazonaws.com/benchmarks.redislabs/redisjson/performance.docs/performance.docs.rdb"
- per-benchmark-suite ones (these will be the last ones to be added to CI):
  - aibench vision: https://github.com/RedisAI/aibench . Specify variations, etc.
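
To make the per-command type concrete, here is a minimal local sketch of how one of the entries above could be driven with a stock redis-server/redis-benchmark install. The module path and data directory below are placeholders, and the actual CI harness may wire this up differently:

```sh
# Minimal local sketch (not the CI harness itself).
# Assumptions: RedisAI module at /path/to/redisai.so, data dir /var/lib/redis.

# Preload the dataset by pointing redis-server at the downloaded RDB.
wget "https://s3.amazonaws.com/benchmarks.redislabs/redisjson/performance.docs/performance.docs.rdb" \
     -O /var/lib/redis/dump.rdb
redis-server --loadmodule /path/to/redisai.so \
             --dir /var/lib/redis --dbfilename dump.rdb --daemonize yes

# Per-command benchmarks: redis-benchmark can replay an arbitrary command.
redis-benchmark -n 100000 -c 50 AI.TENSORSET mytensor FLOAT 2 2 VALUES 1 2 3 4
redis-benchmark -n 100000 -c 50 AI.TENSORGET mytensor
```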
As soon as you guys define the benchmark requirements, I'll provide the initial set of examples in a kick-off PR. Afterwards the team should add as many benchmarks as needed...