support init mode for metric model #278
Conversation
…async until the data is accumulating enough
7581052 to ffeb399
/LGTM
initMode = predictionconfig.ModelInitMode(initModeStr)
}
historyLength, exists := config["cpu-model-history-length"]
How about moving the constants to a common file, like EVPACaller-%s-%s, cpu-model-history-length, 24h, etc.?
Yes, we can do it later
marginFraction = "0.15"
}
initModeStr, exists := props["mem-model-init-mode"]
This part for memory is almost the same as the CPU one; how about extracting a function?
}
historyLength, exists := props["mem-model-history-length"]
if !exists {
Why are the history lengths for CPU and memory not the same? CPU is 24h but memory is 48h.
This is just an empirical value borrowed from VPA. Memory is an incompressible resource, so using longer history data is safer and more robust. CPU is a compressible resource, and it generally has a daily cycle because of people's traffic patterns.
type ModelInitMode string

const (
	// means recover or init the algorithm model directly from history datasource, this process may block because it is time consuming for data fetching & model gen
Can we make clear which mode is the default?
This param is specified by the caller or user; if it is not specified, the default is history mode, which keeps the original logic.
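The fallback described above could be sketched as follows. The ModelInitMode type and the two mode constants mirror what the diff shows; the resolveInitMode helper and the exact constant values are assumptions for illustration.

```go
package main

import "fmt"

// ModelInitMode mirrors the string-typed mode in the PR diff.
type ModelInitMode string

const (
	// ModelInitModeHistory recovers/inits the model directly from the
	// history datasource; this may block, since fetching data and
	// generating the model is time consuming.
	ModelInitModeHistory ModelInitMode = "history"
	// ModelInitModeLazyTraining accumulates realtime data asynchronously
	// and starts predicting once enough data is available.
	ModelInitModeLazyTraining ModelInitMode = "lazytraining"
)

// resolveInitMode applies the default described in the thread: when the
// caller/user does not specify a mode, fall back to history mode.
func resolveInitMode(specified string) ModelInitMode {
	if specified == "" {
		return ModelInitModeHistory
	}
	return ModelInitMode(specified)
}

func main() {
	fmt.Println(resolveInitMode(""))             // history (the default)
	fmt.Println(resolveInitMode("lazytraining")) // lazytraining
}
```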
var initError error
switch c.initMode {
case config.ModelInitModeLazyTraining:
Lazy training mode does not init from history?
Yes. It means accumulating data from the realtime data source; then, when the data length is enough, it can predict. I have not thought of a better name.
…async until the data is accumulating enough
What type of PR is this?
Feature
What this PR does / why we need it:
Support initializing the percentile model in different modes. By default it always uses history mode, keeping the original logic.
For recommendation, it is a one-off task using the original algorithm, so it is not impacted.
For TSP, it has its own params and caller, so it is not impacted.
For EVPA, use lazy-training mode, which accumulates data until the window is full enough before predicting. This way the prediction is more robust and safe.
Which issue(s) this PR fixes:
Fixes #212
Special notes for your reviewer: