fabian wrote:
> I use the PicoScope 6404D for capturing data using the SDK and LabVIEW. As my application needs fast data acquisition, I need to use the fastest timebase possible (200 ps).

Congrats! That's a very powerful piece of kit: very high acquisition speed (5 GS/s) combined with very deep memory (2 GS).
fabian wrote:
> When I use block mode I can collect a block of data with a predefined number of samples, transfer the data to the PC, and restart the acquisition. The problem is that there is a time gap of about 10-20 ms while transferring the data to the PC. That is a really long time and is not suitable for my application. Is there a way to reduce this gap to, say, 1 ms or even less?

You didn't say whether you are using USB 2.0 or USB 3.0. If the former, you can speed transfers up substantially by switching to USB 3.0. You also didn't mention how many samples you were capturing per block. It takes some finite amount of time to transfer data from the internal buffer to the PC, via the USB link and the driver. The only way to reduce that gap is to reduce the size of the blocks being transferred. There is a limit to this, though, because there is a fixed overhead on any transfer that won't scale down with smaller block sizes. (PicoTech would have to comment on what this lower bound is.)
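To illustrate why shrinking the block only helps up to a point, here's a toy model; the 1 ms overhead and 130 MB/s throughput are numbers I'm assuming for illustration, not Pico specs:

```python
# Toy model of USB block-transfer time: a fixed setup overhead per
# transfer plus a size-dependent part. Both constants are assumptions
# for illustration, not measured PicoScope figures.

OVERHEAD_S = 0.001        # assumed fixed setup cost per transfer (1 ms)
BANDWIDTH_BPS = 130e6     # assumed sustained USB throughput (130 MB/s)

def transfer_time(block_bytes):
    """Total time to move one block to the PC."""
    return OVERHEAD_S + block_bytes / BANDWIDTH_BPS

# Shrinking the block reduces total time, but only down to the overhead
# floor, which is why tiny blocks stop paying off:
for size in (1_000_000, 100_000, 10_000, 1_000):
    print(f"{size:>9} bytes -> {transfer_time(size) * 1000:.2f} ms")
```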
fabian wrote:
> Besides normal block mode there is also rapid block mode, which seems to reduce the gap. The programmer's guide says about this mode: "It reduces the gap from milliseconds to less than 1 microsecond." It would be nice if that were true, but when I tried it, LabVIEW nevertheless reports that the transfer takes about 15 ms.

Rapid block mode does reduce the time between consecutive blocks within one acquisition, and the time between each burst is extremely small, as they claim. But it has no impact on how long it takes to then get that data out of the internal buffers and into the PC. If it's a large chunk, it will take just as long to transfer.
fabian wrote:
> I haven't fixed the block length yet, so I'm very flexible with it. I tried very different lengths, from 100 up to 1,000,000 samples; 100 seems to be the minimum. The transfer time I mentioned was measured in LabVIEW by capturing a timestamp before and after the transfer.

That is a good range of sizes to have tested. And yes, taking timestamps in LabVIEW before and after is a valid way of checking the inter-packet latency.
fabian wrote:
> This always returns around 15 ms for 10,000 samples or fewer. I'm not really sure this is a good way of measuring the transfer time, but it looks as though the 15 ms is some kind of fixed time required by the USB protocol.

Yes and no. There are two components to that time: one is the actual transfer time, the other is setup overhead. Over USB 3.0 the speed can vary between 80 and 180 MB/s, depending on a variety of conditions. Say it were 130 MB/s: with a block size as large as 1 MB, the actual transfer time would be about 8 ms. With just 100 kS it would be under 1 ms, and with 10 kS under 0.1 ms.
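Those estimates are easy to reproduce, assuming one byte per sample as in the figures above:

```python
# Reproduce the back-of-the-envelope transfer times quoted above,
# assuming 130 MB/s sustained USB 3.0 throughput and 1 byte per sample.

bandwidth = 130e6  # bytes/s, assumed

for samples in (1_000_000, 100_000, 10_000):
    t_ms = samples / bandwidth * 1000
    print(f"{samples:>9} samples -> {t_ms:.3f} ms")
# 1 MS comes out near 8 ms, 100 kS under 1 ms, 10 kS under 0.1 ms,
# so a measured floor of ~15 ms must be dominated by setup overhead.
```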
fabian wrote:
> I've also tried rapid block mode. Going by the documentation, it seems to be the best mode for me.

I think it could be, depending on your expectations. One major question: when you're capturing at 200 ps/sample, for how long a duration do you need to capture? That determines how many segments you will have available, and thus the total time interval you can span.
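To put rough numbers on that trade-off (assuming the full 2 GS buffer is usable, which is optimistic; the real limit depends on model and channel configuration):

```python
# How long can you capture at the fastest timebase, for a given number
# of memory segments? Assumes the full 2 GS buffer on one channel,
# which is a simplification.

TOTAL_SAMPLES = 2_000_000_000   # 2 GS buffer (assumed fully available)
SAMPLE_INTERVAL_S = 200e-12     # 200 ps per sample

for n_segments in (1, 100, 10_000):
    samples_per_seg = TOTAL_SAMPLES // n_segments
    duration = samples_per_seg * SAMPLE_INTERVAL_S
    print(f"{n_segments:>6} segments -> {samples_per_seg:>13} samples each, "
          f"{duration * 1e6:.1f} us per segment")
```

More segments buys more trigger events, but each one covers a proportionally shorter window.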
fabian wrote:
> My only problem with this is specifying the time gap. I understood the mode as capturing data into different internal buffers that can be read out one by one: while one buffer is being read out, the PicoScope captures data into another buffer. Is this right?

Unfortunately, no, although I can understand why you might have that impression. The PicoScopes can either be acquiring data or transferring it to the PC, not both at once. (This is because the same FPGA engine that handles captures also aggregates and feeds data to the USB port, and it only operates in one mode at a time.)
fabian wrote:
> How long is the time gap between the last sample in buffer block 1 and the first one in buffer block 2? I don't know how to get this information.

That is a very good question! While PicoTech likes to brag about how low it can be, they seem very shy about revealing how large it can be, or about any relationship between capture rates and inter-segment gaps. I have inquired here on several occasions, and I am still waiting for an answer to that same question. I'm not sure if they simply don't know, or prefer not to say.
Mark_O wrote:
> Unfortunately, no. Although I can understand why you might have that impression. The PicoScopes can either be acquiring data, or transferring it to the PC. Not both at once. (This is because the same FPGA engine that handles captures, also aggregates and feeds data to the USB port. And it only operates in one mode at a time.)

Is this really true? I had understood it the other way, as I already described. If you are right, what is the benefit of rapid block mode then? It wouldn't really matter whether I partition the memory and fill it up, or fill it up as one big memory.
So what rapid block mode does is partition the on-board memory into segments and capture bursts back to back, without transferring any of it to the PC. By deferring the transfer until later, it reduces the time between two consecutive captures to the smallest possible amount: as little as <1 µs in some cases, we have been told.
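A quick timing budget shows the payoff. The 15 ms and 1 µs figures are the rough values from this thread, not guaranteed specs:

```python
# Compare total dead time between bursts for plain block mode (transfer
# after every block) vs rapid block mode (re-arm only, transfer deferred).
# The gap figures are rough values reported in this thread.

N_BURSTS = 100
GAP_BLOCK_MODE_S = 15e-3   # measured transfer gap after every block
GAP_RAPID_S = 1e-6         # claimed re-arm gap, transfer deferred

dead_block = (N_BURSTS - 1) * GAP_BLOCK_MODE_S
dead_rapid = (N_BURSTS - 1) * GAP_RAPID_S
print(f"block mode:  {dead_block * 1000:.1f} ms of dead time")
print(f"rapid block: {dead_rapid * 1000:.3f} ms of dead time")
```

For 100 bursts that is roughly 1.5 s of dead time versus about 0.1 ms, which is the whole point of deferring the transfer.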
Mark_O wrote:
> That is a very good question! While PicoTech likes to brag often about how low it can be, they seem very shy about revealing how large it can be. Or any relationship between capture rates and inter-segment gaps. I have inquired here on several occasions, and I am still waiting for an answer to that same question. I'm not sure if they simply don't know, or prefer not to say.

I talked to PicoTech yesterday and they told me that they really don't know how long the gaps are. I tried to measure the gap and find an average value, but it seems to fluctuate strongly around 1 µs.
fabian wrote:
> Is this really true? I had understood it the other way, as I already described. If you are right, what is the benefit of rapid block mode then? It wouldn't really matter whether I partition the memory and fill it up, or fill it up as one big memory.

The benefit is that if the data being sampled is sparse, all the (potentially) large gaps get skipped. This can increase the acquisition window by many orders of magnitude. On the other hand, if the data is NOT sparse and all the gaps between segments are very small (your situation?), then you are right: it doesn't help at all.
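For a sparse signal, the gain is easy to quantify. A sketch with assumed burst parameters (and ignoring the scope's maximum segment count, which may cap this in practice):

```python
# Why segmenting helps with sparse signals: capture only the bursts and
# skip the dead time between them. Burst parameters are assumptions.

SAMPLE_RATE = 5e9        # 5 GS/s
MEMORY = 2_000_000_000   # 2 GS buffer (assumed fully available)
BURST_LEN_S = 10e-6      # each event lasts 10 us (assumption)
BURST_PERIOD_S = 10e-3   # events arrive every 10 ms (assumption)

# One contiguous capture: the buffer fills in 0.4 s of wall time.
contiguous_window = MEMORY / SAMPLE_RATE

# Segmented: one segment per burst, dead time between bursts skipped.
samples_per_burst = round(BURST_LEN_S * SAMPLE_RATE)
n_segments = MEMORY // samples_per_burst
segmented_window = n_segments * BURST_PERIOD_S

print(f"contiguous window: {contiguous_window:.1f} s")
print(f"segmented window:  {segmented_window:.0f} s "
      f"({segmented_window / contiguous_window:.0f}x longer)")
```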
fabian wrote:
> I talked to PicoTech yesterday and they told me that they really don't know how long the gaps are.

OK, thanks for that information! Good to know.
fabian wrote:
> I tried to measure the gap and find an average value, but it seems to fluctuate strongly around 1 µs.

Well, in your specific case (sampling at the maximum possible rate), they do indicate it is <= 1 µs. So you did confirm that, which is good.
Martyn wrote:
> It is not true that we don't know the inter-gap time,

OK. Even though Fabian reported yesterday that someone at Pico told him you didn't know. I'm glad to hear that you do.
Martyn wrote:
> ...but it is true that it varies depending upon the setup of the scope, and also between the different scope ranges.

OK. That is suitably vague.
Martyn wrote:
> In simple terms it takes a fixed number of clock cycles to write the last sample to memory, change to a new block, re-arm the trigger, and restart the capture; so in theory it would be possible to give figures. We have chosen not to publish this information,

So I was correct, then, when I hypothesized that you prefer not to say.
Martyn wrote:
> ...To ask them to provide additional figures to cover all timebases, and all scope models, would take them away from more important work...

Thanks for setting up a straw man. Did you enjoy knocking him down?
Martyn wrote:
> If you are concerned that you are going to miss data due to the re-arming of the trigger...

Ah, what engineer would NOT be concerned about such a thing? Is that not a pivotal question for using segmented acquisitions in the first place?
Martyn wrote:
> ...then it would be better to reconfigure the scope to capture more data in the given block, or even to capture just one block of data using all of the memory in the scope.

In other words, don't even bother using one of the more powerful (and much-advertised) capabilities? That doesn't make any sense to me.
Martyn wrote:
> In most situations it is just a case of understanding the data and setting the scope up appropriately.

I understand the data perfectly. What I don't understand is the capabilities and limitations of the test instrument, which is why I came here in search of technical information. Is this the wrong place to ask questions? Because that is the impression you are conveying.
Martyn wrote:
> ...you can always look at our competitors for a solution that more accurately meets your requirements.

I think you need to remind me... what company do you work for? PicoTech may be interested in the answer to that.
Martyn wrote:
> A sample rate of 10 MS/s would suggest that you are looking at 1 MHz clocked data, possibly 2 MHz, to give a good representation of the signal. Now consider how many samples it would take to cover your bit stream, and you are likely to still be in the low kS, even with 64-bit data. If you change the sampling to 100 MS/s, this would still only mean mid kS, but you are now looking at re-arm times of around 1 µs, or close to the clock rate of the data stream. Unless there is data collision going on, for which I would suggest using a single block capture for analysis, you should not be missing any data blocks, and therefore re-arm time is no longer an issue.

My actual use cases involve data streams that I sample at rates varying from 1 MS/s to 16 MS/s, which is why I picked 10 MS/s as a representative intermediate value. At the high end of that range, a re-arm time of up to 2 µs worst case would probably be acceptable. At the low end things are looser, and up to 8 µs would likely be tolerable.
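The sample-count reasoning above can be spelled out; 64-bit words on a 1 MHz bit clock are the assumed signal:

```python
# How many samples does one 64-bit word take at a given data clock and
# sample rate, and how does a ~1 us re-arm gap compare to a bit period?
# Figures follow the assumptions discussed in this thread.

WORD_BITS = 64
DATA_CLOCK_HZ = 1e6        # 1 MHz bit clock (assumption)

def samples_per_word(sample_rate_hz):
    word_duration = WORD_BITS / DATA_CLOCK_HZ   # 64 us per word
    return round(word_duration * sample_rate_hz)

print(samples_per_word(10e6))    # samples needed at 10 MS/s
print(samples_per_word(100e6))   # samples needed at 100 MS/s

# A ~1 us re-arm gap is one full bit period at 1 MHz, so whether that
# gap matters depends entirely on whether a bit can arrive during it.
rearm_bits_lost = 1e-6 * DATA_CLOCK_HZ
print(rearm_bits_lost)
```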
Martyn wrote:
> In this case, by changing the settings of the scope, because the extra memory depth allows you to do so, it is possible to overcome any perceived limitations in trigger re-arm times.

Thanks, but there are two significant problems with those statements.