There is some kind of rounding bug affecting the zero point in calibrations.
Depending on the max value, the zero point is in some cases off by a small negative fraction, even though both the raw and calibrated min values are meant to be zero.
Try the demo macro below to reproduce the issue.
The two different calibrations applied should be identical, but in the first case the zero point is off.
The arrays show that all values are calculated and shown correctly, except for raw zero / calibrate(0).
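The underlying effect can be reproduced outside ImageJ with plain double arithmetic. Below is a minimal Python sketch (hypothetical values, not the actual ImageJ code): a straight-line calibration is fitted through two points that both pass through zero, but because the intercept is assembled from rounded intermediates, the zero point can come out as a tiny nonzero value instead of exactly 0.0.

```python
# Linear calibration fitted through two points, both passing through zero:
# raw 0 -> calibrated 0, raw 65535 -> calibrated 0.1 (hypothetical values).
xs = [0.0, 65535.0]
ys = [0.0, 0.1]

n = len(xs)
xbar = sum(xs) / n
ybar = sum(ys) / n
# Least-squares slope and intercept, the usual way a straight-line fit
# is computed.
a = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
    sum((x - xbar) ** 2 for x in xs)
b = ybar - a * xbar  # intercept; mathematically this is exactly 0 here

def calibrate(raw):
    return a * raw + b

# b is built from rounded intermediates (0.1 is not exactly representable
# as a double), so calibrate(0) may be a tiny nonzero value rather than 0.0.
print(b, calibrate(0))
```

Whether the residue is exactly zero or a few ulps off depends on the particular values, which matches the report that the failure is hard to predict.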
This causes problems for me because I need to test if the calibrated value for raw zero is zero or not.
Currently, the statement
if (calibrate(0) == 0.0) then ....
fails unpredictably in these cases.
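To see why the exact comparison is so fragile, suppose the zero point has picked up a round-off error of, say, -1e-17 (a made-up magnitude, just for illustration):

```python
# Hypothetical calibration: slope 1.0, zero point carrying a tiny
# round-off error where it should be exactly 0.0.
slope = 1.0
intercept = -1e-17  # stands in for the round-off error

def calibrate(raw):
    return slope * raw + intercept

# The exact test fails even though the calibration is meant to be
# zero at raw zero:
print(calibrate(0) == 0.0)  # False
```

Any error at all, however small, flips the equality test.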
I cannot use rounding as a workaround for the zero-point test. In some of my 16-bit data the calibrated max value is already a small fraction, and the calibrated zero point may correspond to a small positive raw value. In those cases the calibrated value for raw zero is intentionally a small negative number, and I have found no robust way to distinguish this intentional small negative zero point from the round-off-error small negative zero point.