Thanks for sending in your questions. I'll go over a few items below that should hopefully answer them.
To begin with, all digital storage oscilloscopes employ an analog-to-digital converter (ADC). These chips are the brain of the scope: they take the analog signal you feed in and turn it into a series of 1's and 0's that the computer (in our case) can interpret and display on the screen. These chips have two main specifications we concern ourselves with: how fast they can convert the analog signal to digital, and, more importantly for your application, at what resolution they can do it.
Vertical resolution on scopes is specified in bits: the number of bits determines how many discrete levels the ADC has available to plot a single data point. In your case, you are using the scope in 15-bit mode, which means it has 2^15, or 32768, 'dots' to place the signal vertically. In your first example, you are using the ±5V range, so from top to bottom you are displaying 10V of data. Dividing that 10V full screen by our 32768 dots, the finest step at which the scope can place a dot is every 305.18µV. In your second scenario, you brought the voltage range from ±5V down to ±500mV by using the analog offset feature. This was a good move. The ADC can now plot the same 32768 dots across a much narrower span, 1V in this instance, so we increase the precision by a factor of 10: the scope can place a dot every 30.52µV. In scenario 3, you went down to ±20mV, a 40mV span, so the ADC is plotting a dot every 1.22µV. Precision is at an all-time high. Taking this information alone, you would conclude that your best results will come from the ±20mV range, but also, more worryingly, that your scope is reading way out of tolerance. Well, there is a bit more to it than that.
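To make the arithmetic concrete, here is a quick Python sketch of the step size (the size of one ADC 'dot') on each of your three ranges; the bit depth and ranges are the ones from your scenarios:

```python
# Volts per ADC step in 15-bit mode: full scale is twice the ± range,
# and 2**15 = 32768 discrete levels span it.
BITS = 15
CODES = 2 ** BITS  # 32768

def lsb_uv(plus_minus_range_v):
    """Smallest voltage step, in microvolts, for a given ± range."""
    full_scale = 2 * plus_minus_range_v
    return full_scale / CODES * 1e6

for rng in (5.0, 0.5, 0.020):
    print(f"±{rng} V range: {lsb_uv(rng):.2f} µV per step")
```

Dropping the range by a factor of 10 shrinks the step size by the same factor, which is exactly the precision gain described above.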
Enter precision vs. accuracy. In an ideal world, the scope would achieve this kind of precision with 100% accuracy. Unfortunately, electronic circuits cannot be 100% accurate. The internal ADC has fixed input voltage ranges; it cannot handle large swinging signals (such as 20V). Instead, oscilloscopes route the signal through different front-end circuits, depending on the voltage range, to either attenuate or amplify it so the ADC gets the maximum resolution out of the measured signal. No oscilloscope can do this with 100% accuracy: the amplifiers, resistors, capacitors, and even noise picked up by the probe cables going into the scope prevent it. Hence the accuracy specification.
The PicoScope 5000 has two different accuracy specifications in 15-bit mode, depending on the voltage range you have chosen. On the ±50mV to ±20V ranges, the scope has a worst-case accuracy spec of ±1% of full scale (±0.25% is typical, but let's assume the worst for now). On the ±10mV and ±20mV ranges, this spec widens to ±5% of full scale (±2% typical).
Let's go with the assumption that your signal is a perfectly clean, noise-free DC signal at 2.214V. On the ±5V range we have a worst-case ±1% accuracy rating, which means the scope will read within ±0.05V of the signal, i.e. between 2.164V and 2.264V. That is similar to the readings you see in scenario 1 (yours show between 2.191V and 2.264V), and again assumes the signal you were measuring had no noise on it.
In scenario 2, you offset the signal by -2.2V and used the ±500mV range instead. This range still carries the ±1% accuracy rating, but because the range is lower the tolerance tightens to ±0.005V. If the actual signal is still 2.214V and we subtract the 2.2V offset, we are now measuring 0.014V. With ±0.005V accuracy, the scope should read between 0.009V and 0.019V, again similar to your readings (your screenshot shows 0.009748V and 0.0177V).
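A minimal Python sketch of the same worst-case windows for scenarios 1 and 2 (assuming, as the numbers above do, that the ±1% applies to the ± range value):

```python
# Worst-case DC accuracy window for a reading on a given ± range,
# taking the ±1% spec as a percentage of the range value (e.g. 1% of 5 V).
def accuracy_window(reading_v, range_v, pct=0.01):
    """Return (low, high) bounds in volts, rounded for readability."""
    tol = pct * range_v
    return round(reading_v - tol, 6), round(reading_v + tol, 6)

# Scenario 1: 2.214 V signal on the ±5 V range, tolerance ±0.05 V
print(accuracy_window(2.214, 5.0))   # (2.164, 2.264)

# Scenario 2: same signal offset by -2.2 V, on the ±500 mV range, ±0.005 V
print(accuracy_window(0.014, 0.5))   # (0.009, 0.019)
```

Note that the offset trick helps because the tolerance scales with the range, not with the signal: the reading got smaller, and so did the error window around it.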
In scenario 3, you changed the voltage range to ±20mV. In theory the scope should now read within ±0.001V of the true value, and you did indeed see a decrease in your pk-pk voltage measurement. However, one other thing changes here: to fit the signal into the ±20mV range, you had to AC-couple it. In AC-coupling mode the channel accepts input frequencies from about 1Hz up to its -3dB maximum bandwidth, so bear in mind, depending on your application, that it is removing the DC component of the signal at the same time.
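As an illustrative sketch only (not a model of the scope's actual front end), AC coupling behaves roughly like subtracting the DC mean of the signal, leaving just the ripple for the ADC to digitize. The 50Hz ripple and sample rate below are made-up numbers for the demonstration:

```python
# Sketch of what AC coupling does to a DC signal with a small ripple on it.
import math

fs = 10_000                      # sample rate in Hz (assumed for illustration)
t = [n / fs for n in range(fs)]  # one second of sample times
# 2.214 V DC level with a small 5 mV, 50 Hz ripple riding on it
signal = [2.214 + 0.005 * math.sin(2 * math.pi * 50 * x) for x in t]

dc = sum(signal) / len(signal)   # the DC component, about 2.214 V
ac = [s - dc for s in signal]    # what survives ideal AC coupling
print(round(dc, 3))              # 2.214
print(round(max(ac), 3))         # 0.005 (ripple only, DC level is gone)
```

This is why the ±20mV range can resolve the ripple so finely while telling you nothing about the 2.214V level itself.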
So which range/method should you use? This depends on your application. If the DC component of the signal is irrelevant, use the ±20mV range and AC-couple the signal. If the DC component is relevant, you are best off using the ±500mV range and offsetting the signal.
I hope this all makes sense. Let me know if you have any questions on any of it.