I could just collect a few ms of samples, average them, and call that the offset, but I'm wondering if there's a better way to accomplish this.
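For what it's worth, the averaging approach is a reasonable starting point. A minimal sketch, where `read_samples()` is a stand-in for whatever driver call returns raw readings (here it just simulates grounded-input noise around a known offset for illustration):

```python
import random

# Simulated offset for illustration only; on real hardware you'd read
# actual grounded-input samples instead.
TRUE_OFFSET = 0.012  # volts

def read_samples(n):
    # Stand-in for the real acquisition call: returns n grounded-input
    # readings, i.e. the DC offset plus random noise.
    return [TRUE_OFFSET + random.gauss(0, 0.001) for _ in range(n)]

def estimate_offset(n=10000):
    # Average a burst of samples taken with the input grounded;
    # noise averages out, leaving the DC offset.
    samples = read_samples(n)
    return sum(samples) / len(samples)

offset = estimate_offset()
corrected = 0.500 - offset  # subtract the offset from a later reading
```

Averaging more samples reduces the noise contribution roughly as 1/sqrt(N), so a few thousand samples usually pins the offset down well below the noise floor of a single reading.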
You need to do this for every input range you are using, because different input ranges have different offsets.
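Since each range goes through a different gain/attenuation path, calibration is typically stored per range. A minimal sketch, assuming hypothetical `ranges` identifiers and a `read_samples(range, n)` driver call:

```python
def calibrate_offsets(ranges, read_samples, n=10000):
    # Build a per-range offset table by averaging grounded-input
    # samples on each range in turn. 'ranges' and 'read_samples' are
    # placeholders for your actual range identifiers and driver call.
    offsets = {}
    for r in ranges:
        samples = read_samples(r, n)
        offsets[r] = sum(samples) / n
    return offsets

def correct(reading, rng, offsets):
    # Subtract the stored offset for the range the reading was taken on.
    return reading - offsets[rng]
```

The table only needs rebuilding when the offsets drift (e.g. with temperature), so you can calibrate once at startup and reuse it.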
I had been planning to just check the offset with the range set to 0, for the highest-resolution read of the offset, and then use that value for all ranges. I take it that's not a good idea? It seems odd that the offset would change with the input range if the probe is the same, and I want to be as accurate as I can.
Why do I need to re-cal for each range? I was running my cal with the range set to '0' (the smallest range) to get the most granular offset. It seems like that shouldn't change as long as the probe used is the same.
EDIT: Sorry for the duplicate post, didn't realize the previous one had to be approved.