How does the scaling of input signals actually work? Is it done purely in software, or is there a programmable gain amplifier in the data logger (ADC-16)?
If it is purely in software (a scaling factor), isn't it best to level-shift and amplify sensor output voltages so that they span nearly the full-scale input range?
The ADC-16 input range is plus or minus 2.5 volts, so a sensor voltage could be conditioned to be, say, plus or minus 2.4 volts.
Will this increase the resolution?
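To make the resolution question concrete, here is the back-of-envelope arithmetic I have in mind (a rough Python sketch; it assumes the ADC-16 digitises its plus or minus 2.5 volt span with a straight 16-bit conversion and no internal gain stage, which is exactly the point I'm asking about):

```python
import math

# Assumption: the ADC-16 digitises its +/-2.5 V span (5 V total)
# with 16 bits and no programmable gain amplifier.
FULL_SCALE_V = 5.0
ADC_BITS = 16
LSB_V = FULL_SCALE_V / (2 ** ADC_BITS)  # ~76 uV per code

def effective_bits(sensor_span_v: float) -> float:
    """Bits of resolution actually exercised by a sensor
    whose output spans sensor_span_v volts."""
    return math.log2(sensor_span_v / LSB_V)

# A sensor swinging only +/-0.25 V exercises about 1/10 of the
# available codes; conditioned to +/-2.4 V it uses nearly all of them.
print(f"+/-0.25 V sensor: {effective_bits(0.5):.1f} effective bits")
print(f"+/-2.4  V sensor: {effective_bits(4.8):.1f} effective bits")
```

If the conversion really works that way, the unconditioned sensor only ever sees about 12.7 effective bits, while the conditioned one sees about 15.9, which is why I'd expect the level shifting and amplification to help. Is that reasoning correct?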