Help with "fast" mode and "slow" mode in the ADC-216

Guest

Help with "fast" mode and "slow" mode in the ADC-216

Post by Guest »

Initially, I was very, very impressed with the specs of the ADC-216: 333K 16-bit samples per second, at a price point around US$800, which is maybe a half to a third of the cost of the cheapest 16-bit solutions from National Instruments or DataQ, none of which can perform nearly that many samples per second.

Now, though, I must admit that I'm more than a little perplexed. From the owner's manual, I read the following:
  • p. 4: Buffer size... 32K

    p. 5: In fast mode, the computer starts the ADC-2xx to collect a block of data into its internal memory. When the ADC has collected the whole block, the computer stops the ADC and transfers the whole block into computer memory.

    p. 5: In slow mode, the ADC-2xx uses its internal memory as a FIFO: the computer can read values from the FIFO as soon as the readings are taken. In this mode, the ADC-2xx is running continuously, and there is no limit to the number of values that can be read.

    http://www.picotech.com/document/pdf/adc2xx.pdf
I found this thread at the programmer's support forum:
  • When using Picolog in fast block mode, as you are (previous post), the file size is restricted to the size of the buffer on the unit. The ADC-216 has a buffer of 32K when one channel is used and 16K for each channel when both are used.

    http://www.picotech.com/support/viewtopic.php?t=493
Here is a very troubling little blurb on page 7 of the owner's manual:
  • p. 7: Slow mode is used to collect samples at regular intervals over long periods. It is not currently implemented. adc200_get_single may be used until slow mode is available: it performs the same functions, but the user is responsible for timing when to take samples.

    http://www.picotech.com/document/pdf/adc2xx.pdf
There is a brief meta-language description of the programming algorithm ["Sequence of Calls"] at the bottom of page 8 and the top of page 9:
  • This is the procedure for reading and displaying a block of data:
    • open the ADC-200
      select ranges until the required mV range is located
      set AC/DC switches, channels, trigger and oversampling
      select timebases until the required ns per sample is located
      set the signal generator frequency (if required)
      start the ADC200 running
      wait till the ADC200 says that it is ready
      stop the ADC200
      transfer the block of data from the ADC
      display the data
    http://www.picotech.com/document/pdf/adc2xx.pdf
Finally, it appears that, from the programmer's point of view, slow mode, if it exists at all, is invoked by sending the parameter to the function, although I haven't seen any good documentation on this.

So: Does slow mode exist? How much "slower" is it than fast mode? Or is there only fast mode, which requires starting the ADC-216, sampling, stopping the ADC-216, downloading, starting it again, and so on? It's also not clear what is meant by the "32K" buffer: it appears to be either 32 KBytes [which, at 16 bits, would be 16 KSamples] or 32 KSamples [which, at 16 bits, would be 64 KBytes]. Either way, it would appear that, in fast mode, the ADC-216 can sustain the claimed maximum sampling rate of 333 KSamples per second for only about 1/10th or 1/20th of a second before sampling must stop and the unit must wait for a download to complete before it can begin sampling anew.

Could someone help me out? We don't need anywhere near the full 333 KSamples per second in our lab, or at least we don't need them just yet, but we will darn sure need sample durations of more than one tenth of a second.

I hope we can get this cleared up, because, like I said above, the basic specs on the ADC-216 are very nice, as is the price point.

Thanks!

markspencer
Site Admin
Posts: 598
Joined: Wed May 07, 2003 9:45 am

Post by markspencer »

Hi,

Unfortunately, slow mode has not been implemented; as stated in the help manual, you will have to use adc200_get_single.

The memory size of 32K means that 32,000 data readings can be taken, so the memory is:

memory size: 32K samples x 16 bits

The ADC-216 has two separate blocks of memory, each capable of storing 16,000 readings. When only one channel is in use, the two blocks of memory can be used together, giving the 32,000-reading buffer capacity. When both channels are in use, each channel uses its own memory block.

Therefore, when sampling in fast block mode at the fastest rate of 333 ksps, the buffer will fill in about 1/10th of a second with one channel active and about 1/20th of a second with both channels active.

I hope this clears up the information you needed, but if not please contact me again:

email: tech@picotech.com

Best regards,

Mark Spencer

Guest

Post by Guest »

  • Unfortunately, slow_mode has not been implemented and as stated in the help manual you will have to use get_single... I hope this clears up the information you needed, but if not please contact me again...
Hi! I tried sending you an email, but I didn't get a reply. We will probably make a decision on a vendor by Friday, so I need these answers as soon as I can get them.

Here's the spec from page 15 of the manual:
  • adc200_get_single
    void adc200_get_single (short far * buffer)
    This routine starts the adc200, collects a small number of samples and then returns the average of these samples. It is intended as a stopgap until slow sampling is implemented. buffer: a pointer to a buffer containing two integers. On return from this routine, the first entry contains a reading from channel A and the second entry contains a reading from channel B.
Three questions:
  • 1) Can I call a function that takes a far pointer from a Win32 program [running on W2K/WXP/W03], or will I be limited to DOS/16-bit environments?

    2) What kind of latency are we talking about with adc200_get_single? Suppose, for instance, that "Total Latency" is divided up like this:
    • Latency A: The time required to call the driver, which calls the port, which sends a message down the cable

      Latency B: The time required to start up the ADC-216, take "a small number of samples," "average" them [or is averaging performed by the driver?] and make a decision as to what value will be returned

      Latency C: The time required to send an answer up the cable, into the port, and through the driver, back to the program that made the call in the first place
    In theory, Latency A ought to be about the same as Latency C; at any rate, let's say that total latency looks like
    • Total Latency = Latency A + Latency B + Latency C
    Obviously you could argue that, with all the handshaking overhead, the actual process is not quite as simple as this. Nevertheless, from the point of view of the applications programmer, there is some sort of "Total Latency" involved when making a call to adc200_get_single, and the theoretical maximum continuously sustainable number of samples per second is
    • 1 / (Total Latency)
    Thus Question 2: What is the "Total Latency" when using adc200_get_single, and what is the attendant theoretical maximum continuously sustainable number of samples per second?

    3) How does the "Total Latency" of adc200_get_single compare with the "Total Latency" of starting the ADC-216 in "fast" mode, filling the entire "32K" buffer, and downloading that buffer to the computer? Or are they pretty much one and the same?
Thanks!

User avatar
markspencer
Site Admin
Site Admin
Posts: 598
Joined: Wed May 07, 2003 9:45 am

Post by markspencer »

Hi,

Answer 1:

In the adc200.h file that is provided with the ADC200 driver you will find some #define lines:

Code:

#if defined(WIN32)
#define PREF2 __declspec(dllexport) __stdcall   // BC5, MS
#define HUGE
#ifndef FAR
#define FAR
#endif
#endif  /* WIN32 -- excerpt; the full header also covers other platforms */
If the platform that you are using is Win32 (Windows 32-bit), then FAR will be defined as empty, so the function adc200_get_single will in effect be compiled as:

void adc200_get_single (short * buffer)

Answer 2:

The latency of calling get_single can be interpreted as:

Total Latency = LatencyA + LatencyB + LatencyC.

After doing some experiments into the total time it takes to ask for, receive, and return data to the calling function, on average it takes approximately 6.48 ms to set up the unit and 2 microseconds to get the data back. This will vary depending on the operating system and PC specification. The tests were carried out on:

AMD-K6 3D Processor
Windows ME
128 MB RAM

So therefore it is theoretically possible to obtain 154 samples per second.


Answer 3:

All samples were taken at timebase 0.

The latency for fast block mode falls into two categories: one method uses adc200_set_rapid and the other does not.

The method that uses set_rapid has the following performance on average, tested on the same PC as above: it takes 4.6886 milliseconds to set the unit up, with each reading taking an average of 25.096 microseconds. This gives 39660 samples per second, taking the set-up period into account.

To collect the maximum number of samples would take, on average, 807.76 ms.

The second option, not using set_rapid, produces the following results: 21.987 ms for the unit to be set up and 25.094 microseconds to download one reading. This gives 38973 samples per second.

To collect the maximum number of samples would take, on average, 824.99 ms.

Therefore, the method of data collection that produces the best result is fast mode with set_rapid enabled.

The above times were obtained by collecting between 1,000 and 31,000 samples per run, in 1,000-sample increments, with each configuration executed 100 times.

If further information is required please let me know.

Best regards,

Mark Spencer

Guest

Post by Guest »

  • So therefore it is theoretically possible to obtain 154 samples per second.
Sorry, but 154 samples per second ain't gonna cut the mustard. [That's American slang for "154 is not enough."]
  • This gives 39660 samples per second, taking into account the set up period.
Okay, I'm really confused now. Let's say that every second, we get X sets of samples, each of which comprises Y distinct samples, and we're trying to maximize the product X * Y. At the one extreme, if Y = 1, you've just indicated that X = 154. On the other hand, we know that we can get 32K samples in about one tenth of a second. So how do you get 39660? That is, could you please supply X and Y for the following equation?
  • 39660 samples per second = X sets of samples each of size Y
Then I will assume that once every 1/X seconds, I can pull Y samples, the product X * Y = 39660 will be the maximum number of samples I can pull per second, and [by the sampling theorem] 1/X will be about twice the maximum frequency I'll be able to sample.

Thanks!

markspencer
Site Admin
Posts: 598
Joined: Wed May 07, 2003 9:45 am

Post by markspencer »

Hi,

There are three distinct setups which each give three different results, as expressed in my post.

Using the block function adc200_get_values with adc200_set_rapid(TRUE), the number of samples in one second is 39660.

Using the block function adc200_get_values with adc200_set_rapid(FALSE), the number of samples in one second is 38973.

The above two are classified as fast mode.

The adc200_get_single function will give 154 samples per second; this is on my PC and could vary depending on the PC, other processes running, etc.

This is classified as slow mode.


I would suggest that you download the help manual and read the 'Principles of Operation' section, available here.

Best regards,

Mark Spencer

Guest

Post by Guest »

  • Using the block function adc200_get_values and adc200_set_rapid(TRUE) the number of samples in one second is: 39660
Yes, but the question is: Are the 39660 samples sampled precisely (1/39660)th of a second apart? My impression from your answer above is that they are not. Rather, it appears that the device is inaccessible for a very long period of time [approximately 4.6886 MILLIseconds], and then a large burst of samples is obtained in a very short period of time [only about 25.096 MICROseconds]. Because the period of dead time is orders of magnitude larger than the period of live time, it would appear to me that, by the sampling "theorem," the maximum effective bandwidth that I will be able to sample is only about one half of
  • (1 / 4.6556 milliseconds) ~ 215Hz
i.e. the maximum effective bandwidth is only about 107Hz.

If I have misunderstood your explanation, and the 39660 samples are indeed taken at well regulated intervals of (1/39660)th of a second, then please correct me - we are very interested in your product.
  • I would suggest that you download the help manual and read the 'Principles of Operation' section, available here.
Sorry, the link didn't work for me.

Thanks!

      Guest

      Post by Guest »

      • 4.6886 MILLIseconds... (1 / 4.6556 milliseconds) ~ 215Hz
        i.e. the maximum effective bandwidth is only about 107Hz...
      Oops - typed (1 / 4.6556 milliseconds) when I meant to type (1 / 4.6886 milliseconds). This gives ~213Hz, but the 107Hz is essentially unchanged.

      Thanks!

      markspencer
      Site Admin
      Posts: 598
      Joined: Wed May 07, 2003 9:45 am

      Post by markspencer »

      Hi,

      The process that occurs to get data is as follows:

      Ask for samples.

      Set the unit up.

      Sample data (this is carried out by the ADC-216, and the samples are the same time period apart).

      Download data. During this period the unit cannot sample. The download rate will be determined by your PC and by the OS giving time to your application.

      Repeat the procedure for the next cycle of data readings.

      The maximum bandwidth that you can measure is stated in the unit's specification: 166 kHz. The ADC-216 has an onboard clock that is used to clock the readings in. The bandwidth is determined by the sample rate at timebase 0, which is what all these samples were being taken at. This equates to 1 sample every 3 microseconds, or 333 ksps. As the timebase value increases, the sample interval doubles each time:

      e.g. timebase 0: 3003 ns
           timebase 1: 6006 ns
           timebase 2: 12012 ns
           timebase 3: 24024 ns

      Another factor is oversampling.

      All this is explained in the help manual. From the Home page use the 'download' button.

      Best regards,

      Mark Spencer

      Guest

      Post by Guest »

      Will you please tell me which of the following statements is true, or at least the closest to the truth?
      • CANDIDATE ONE: The maximum continuously sustainable sampling rate is 39660 samples per second, sampled at equal intervals of (1/39660)th of a second, and we provide the applications programmer both a hardware driver and a software library that will allow the applications programmer to write a program that can achieve this rate.

        CANDIDATE TWO: The maximum continuously sustainable sampling rate is 333000 samples per second, sampled at equal intervals of (1/333000)th of a second, and we provide the applications programmer both a hardware driver and a software library that will allow the applications programmer to write a program that can achieve this rate.

        CANDIDATE THREE: The maximum continuously sustainable sampling rate is X samples per second, sampled at equal intervals of (1/X)th of a second, where X = __________, and we provide the applications programmer both a hardware driver and a software library that will allow the applications programmer to write a program that can achieve this rate.
      We are very, very interested in your product, but we need to get a straight answer to this question. It's getting down to crunch time, and we have to make a decision soon.

      Thank you.

      markspencer
      Site Admin
      Posts: 598
      Joined: Wed May 07, 2003 9:45 am

      Post by markspencer »

      Hi,

      1. The 39660 figure is the number of samples that can be downloaded through the parallel port in one second. It has nothing to do with the sampling rate of the unit, which is a totally separate entity.

      2. The unit can sample at 333 ksps when one channel is active. This is totally separate from the download speed through the parallel port.

      3. The unit takes the samples at its maximum rate, if required, depending on the timebase chosen. Once all samples have been collected, the unit stops collecting and the results are downloaded to the PC. Each is a totally different process when running in block mode. I think you should consult the help manual for this product, and especially the 'Principles of Operation'.

      Kindest regards,

      Mark Spencer

      Guest

      Post by Guest »

      markspencer wrote:I think you should consult the help manual for this product and especially the 'Principles of Operation'.
      Could you give me a link to this "Principles of Operation" document? The only document I am familiar with is the Owner's Manual: That document is almost exclusively devoted to a description of the C-Programmer's Application Programming Interface, and is in fact the source of the query ["fast" mode -vs- "slow" mode] that began this thread in the first place.

      I am starting to get the impression that the ADC-216 is NOT a continuously sampling Analog to Digital Conversion device. Rather, it seems to be a burst mode device that can sample no more than a tenth of a second of data at a time [32K samples in a buffer being approximately one tenth of 333K samples per second], and that if the end user wishes to sample a phenomenon that lasts longer than one tenth of a second, then the end user must look elsewhere.

      Let me ask parenthetically: If the ADC-216 is NOT capable of continuous sampling, then what on earth was the one tenth of a second acoustical phenomenon that Charles Hansen was sampling in his review? I am not familiar with any symphonies, chamber music, tin-pan-alley standards, or gangsta rap hits that last for only one tenth of a second.

      Finally, let me frame the question thusly: If the ADC-216 is NOT capable of continuous sampling, do you sell any product that IS capable of continuous sampling [at 16 bits per sample]?

      If we can't clear up this confusion, we will be forced to purchase a product from one of your competitors.

      Thanks!

      Guest

      Post by Guest »

      Anonymous wrote:Let me ask parenthetically: If the ADC-216 is NOT capable of continuous sampling, then what on earth was the one tenth of a second acoustical phenomenon that Charles Hansen was sampling in his review? I am not familiar with any symphonies, chamber music, tin-pan-alley standards, or gangsta rap hits that last for only one tenth of a second.
      Let me expound on that point: If the ADC-216 operates as I am coming to understand that it operates, then even at the paleolithic standard of 16 bit audio sampled at 44.1KHz, a 32K buffer only holds 32/44.1 ~ three quarters of a second of audio, before the sampling must be stopped and the buffer downloaded.

      So what piece of music is there that benefits from three quarters of a second of sampling?

      Guest

      Post by Guest »

      Anonymous wrote:even at the paleolithic standard of 16 bit audio sampled at 44.1KHz, a 32K buffer only holds 32/44.1 ~ three quarters of a second of audio, before the sampling must be stopped and the buffer downloaded.
      It's twice as bad if you assume that 44.1KHz means a sampling "theorem" rate of 88.2K samples per second, but it's never been clear to me just what was meant by "44.1KHz".

      markspencer
      Site Admin
      Posts: 598
      Joined: Wed May 07, 2003 9:45 am

      Post by markspencer »

      Hi,

      We provide details and examples of the ADC-216 being used for testing audio equipment:

      http://www.picotech.com/applications/spectrum.html

      However, if you would like to discuss this, please contact me by telephone or via email, as we do not seem to be able to provide a satisfactory answer on the forum:

      Tel: 01480 396395
      email: tech@picotech.com

      Best regards,

      Mark Spencer
