In the current eager cache implementation for tripy, we normalize the `Trace` of a tensor and use its string representation as part of the key for cache inserts/lookups that retrieve cached executables.
However, if some storage ops that are not lifted into the trace as inputs (e.g. the output storage op placeholder for trace outputs) hold high-precision float values, there is a non-zero probability that the trimming/rounding of those floating-point values in the trace string (part of the key) leads to incorrect cache behaviour (false positives).
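A minimal sketch of the false positive, assuming the trace string formats float constants with limited precision (the `trace_str` helper and its `.4f` formatting are illustrative assumptions, not tripy's actual code):

```python
def trace_str(value: float) -> str:
    # Assumed formatting: floats rounded to 4 decimal places in the trace string.
    return f"storage(op=constant, value={value:.4f})"

a = 0.1234561
b = 0.1234569  # differs only beyond the 4th decimal place

assert a != b
# False positive: two distinct traces produce an identical cache key.
assert trace_str(a) == trace_str(b)
```

Any two constants that agree up to the rounded precision would hit the same cache entry, even though their traces should compile to different executables.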
Even though the odds of this happening are low for now, this issue can be addressed by either:

- hashing the trace instead of taking its string representation (this requires all children of the trace at all levels to be hashable, i.e. all lists need to be turned into a hashable list type), or
- lifting all storage ops as inputs, which first requires solving shape-of-shape inference in MLIR-TRT.
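The first option could be sketched as follows. This is an illustrative assumption, not tripy's implementation: `make_hashable` stands in for the hashable-list conversion, using tuples rather than a dedicated `HashableList` type.

```python
from typing import Any

def make_hashable(obj: Any) -> Any:
    """Recursively convert unhashable containers so the trace can be hashed exactly."""
    if isinstance(obj, list):
        # list -> tuple (standing in for a HashableList type)
        return tuple(make_hashable(x) for x in obj)
    if isinstance(obj, dict):
        return tuple(sorted((k, make_hashable(v)) for k, v in obj.items()))
    return obj

def trace_cache_key(trace_children: list) -> int:
    # hash() sees the exact float values, so no rounding-induced collisions.
    return hash(make_hashable(trace_children))

# Values that collide under decimal rounding get distinct keys here.
assert trace_cache_key([0.1234561]) != trace_cache_key([0.1234569])
```

Because the hash is computed from the exact float objects rather than a formatted string, two traces differing only in low-order float bits can no longer collide into the same cache entry.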