Attached is a screenshot of my LabVIEW code, which attempts to read, from whatever PicoScope 3000-series device happens to be connected to the computer, the shortest sample interval it is capable of. The theory:
2) Since all I am interested in is the shortest possible time, a sample count of two should be the minimum necessary to determine that.
But the theory does not hold up: the interval value (probe 9 in the screenshot) I get from a 3205D device is 6.662E+9 ns, which is about 9 orders of magnitude off what I expect. So where has this gone wrong? What is the right way to do this? Can the equivalent of the 6000-series function ps6000aGetMinimumTimebaseStateless() be achieved for a 3000-series PicoScope?
Hi,
FYI - we have a LabVIEW section on the forum - forum20.html
Just had a look at your LabVIEW code.
We don't have any LabVIEW code examples that find the fastest timebase for the scope's current settings, but this should be straightforward.
Before calling the ps3000aGetTimebase2 function, you should call the SetChannel function for each channel, because the number of enabled channels affects the fastest timebase available.
Note that all channels are enabled by default with our APIs, so you need to explicitly disable the channels you are not using.
To find the fastest timebase, call ps3000aGetTimebase2 in a loop, starting at a timebase of 0 and incrementing it each time, until the returned PICO_STATUS is PICO_OK (0). (The function returns PICO_INVALID_TIMEBASE as the PICO_STATUS code for an invalid timebase value.)
Exit the loop at that point, and the variable timeIntervalNanoseconds will hold a valid value.
We use ps3000aGetTimebase2 in our examples (PicoScope3000aGetTimebase2.vi), but not in a loop; there it is used to report the time interval and other values for the configured timebase.
There is no psXXXXxGetMinimumTimebaseStateless() function for the 3000a API.