According to the specifications of this scope, the vertical resolution is 8 bits. But according to page 11 of the "picoscope-3000-series-a-api-programmers-guide.pdf", the vertical resolution is at least 9 bits.
I have made several measurements with the scope using the API functions. I retrieve the data with the following call:
//9. Transfer the block of data from the oscilloscope using ps3000aGetValues.
status = ps3000aGetValues(unit.handle, 0, (uint32_t*)&required_samples, 1, PS3000A_RATIO_MODE_NONE, 0, NULL);
I expected that the LSB byte of the 16-bit values would be zero for 8-bit resolution, or that the 7 LSB bits would be zero in the case of 9-bit resolution. To my surprise, I found that all bits change over several thousand measurements. This phenomenon occurs in block mode as well as in ETS mode.
My question now is: what is the vertical resolution of this scope?
The vertical resolution of the PicoScope 3206D MSO is 8-bit. The values returned from the driver are scaled to 16-bit (this allows for an effective increase in resolution when downsampling using averaging).
Which specific part of the Programmer's Guide are you seeing the reference to 9 bits? Are you using the latest version of the guide?
Hello Hitesh,
I am referring to the latest version of the documentation (picoscope-3000-series-a-api-programmers-guide.pdf).
On page 5 (Chapter 3.2) there is a graphic in which the digital values 7F00, 3F80, 0000, C080 and 8100 are used. When I divide 3F80h (16256d) by 256 I get 63.5. This implies that the upper 9 bits are used.
Furthermore, in my data I see all 16 bits of a received data word changing. As you can see in my piece of code, I do not use downsampling (PS3000A_RATIO_MODE_NONE).
What exactly do you mean by "the 8-bit value is scaled to 16 bits"? Is this scaling a shift-left-by-8 operation or a multiplication by 256?
Is there any progress in answering my problem? I can't continue with my measurements without knowing how the 8 bits are scaled to 16 bits. As I already mentioned, all 16 bits that I receive from the ps3000a.dll driver are changing. I don't understand the logic behind this.
The scaling is performed by a multiplication, which means that certain ADC count values will never be returned, and you may see intermediate values in the lower bits.
Please also refer to this thread for further information.
Is there a particular reason why the scaling is affecting your application?
There is a maximum ADC count value defined in the header file; all values returned by the driver are relative to it. You can then use the selected channel range and this maximum value to work out the actual voltage seen at the scope input.
I feel like we are talking about different things. I am not interested in the actual voltage being seen by the scope, I only want to know why my LSB bits are not zero.
Today I re-read the api documentation and realized that you clearly state that the downsampling is performed by dedicated hardware in the scope and not in the driver. This also means that, unless you tamper with the data in the driver, the non-zero LSB bits come from the scope, which in turn means that the scope is performing downsampling. As I stated in my original post, I explicitly use PS3000A_RATIO_MODE_NONE in my call to ps3000aGetValues.
So can you please simply explain to me, either why the scope always performs the downsampling, or where my conceptual error is in thinking that calling ps3000aGetValues with PS3000A_RATIO_MODE_NONE should disable it.
Because, if I understand your documentation correctly, downsampling should not be performed, which means that somewhere between the scope measuring the data and me obtaining it from the driver something went wrong. And that would sadly mean that the data, and in turn the whole scope, is not reliable.
Unless you request downsampling, the scope won't downsample. What you are seeing is the application of a digital correction factor that is calculated for each range when the scope is factory tested, these values being stored on the scope. This multiplication causes the low bits to be non-zero and ensures that the vertical range specifications are met.
I get your point that the scope doesn't downsample and that the raw 8-bit ADC value is multiplied by a value other than 256, so that the 8 LSB bits of the 16-bit result can also change.
In the attached picture files for Channel A and B you can easily see the discrete ADC levels, which are marked with the dotted lines. What I still don't understand are the intermediate values between the discrete ADC levels, which I see in both the A channel and the B channel.
What I further don't understand is the number of counts between two discrete levels, which differs between the two channels: Channel A shows 286 and Channel B shows 282.
For your convenience I have also attached the original measured data for Channel A and B. Because of file length limitations I had to crop the two files.
Could you explain to me why the PicoScope behaves this way, and also answer my question about how I can get reliable results from these values?
You are seeing the intermediate levels because you are plotting the full 16-bit values; if you plot just the top 8 bits, this effect will disappear.
For your scope, Channel A is 286 and Channel B is 282, because each channel has its own digital gain value, calculated and stored in the device when it goes through our final tests.
You have not yet answered the question from my last post:
"What I still don't understand are the intermediate values between the discrete ADC levels, which I see in both the A channel and the B channel."
When Channel A of my oscilloscope has a digital gain value of 286, then from a purely mathematical point of view these intermediate values can't occur. So, besides a multiplication of the ADC value by 286, something else must also happen. Furthermore, page 5 of the PicoScope 3000 Series (A API) Programmer's Guide states that the maximum values are ±32512, which is a multiple of neither 286 nor 282 (Channel B).
Could you please give me an answer to these two questions?
When you mask off the lower 8 bits of both 286 and 282, represented as 16-bit numbers, you get 256. As the ADC is 8-bit, this means these values correspond to the same level.
286 - 0000 0001 0001 1110
282 - 0000 0001 0001 1010
256 - 0000 0001 0000 0000
The lower 8 bits are actually below the noise floor of the scope, so they are not useful to consider when plotting the raw data; they are introduced by the scope due to the way the samples are processed.