Hi there again,
On to my last hurdle =). My project involves making an interface screen where data points are acquired and a graph displays the incoming signals for the user - pretty much mimicking the PicoScope 6 interface. I can vary the time divisions, vertical scale, etc.
My approach has been to use block-mode transfer, and this seems to be working so far. Please correct me if I should be using one of the other approaches (rapid block or streaming). I believe that as the user switches to slower sample rates streaming would eventually be needed - but I'll leave that for later =)
My question concerns the number of sample points I am using. I am currently capturing between 1000-2000 samples (one single buffer), but I suspect this is causing me to occasionally miss, or be late catching, a trigger - hence the rare glitch when viewing a stable signal. As I increase the sample count (say 10K, 100K, 1M), my draw functions become VERY slow, since they contain loops that must process each data point. Am I missing something here?! I know that grabbing the data at 1M samples is doable, because when I comment out my draw functions, my debug output shows the capture itself is speedy. Is there some 'special' way to implement the output so I'm not limited to small sample sizes? Would switching to rapid block mode with segments accomplish this (i.e. drawing small chunks at a time)? I have been having trouble getting the segments/no. of captures/buffers working under rapid mode, so I wanted to check on this forum before I attempt it again. I'm not sure which approach is correct, and any suggestions would be extremely helpful!
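In case it helps anyone answering: one thing I've read about (not tried yet) is that plotting packages usually can't usefully draw a million points on a screen that's only a couple of thousand pixels wide, so a common trick is min/max decimation - reduce each capture to a few thousand points before drawing, keeping the min and max of each bin so spikes and glitches still show up. Here's a rough numpy sketch of what I mean; `minmax_decimate` is just my own made-up helper name, nothing from the Pico API:

```python
import numpy as np

def minmax_decimate(samples, max_points=2000):
    """Shrink a large capture to at most max_points for plotting.

    Splits the buffer into bins and keeps the min and max of each
    bin, so narrow peaks survive the reduction. Plain numpy sketch;
    not tied to any PicoScope API call.
    """
    samples = np.asarray(samples)
    n = len(samples)
    if n <= max_points:
        return samples
    bins = max_points // 2               # two outputs (min, max) per bin
    usable = (n // bins) * bins          # drop the ragged tail, if any
    chunks = samples[:usable].reshape(bins, -1)
    lo = chunks.min(axis=1)
    hi = chunks.max(axis=1)
    # interleave min/max so the drawn line sweeps through both extremes
    out = np.empty(bins * 2, dtype=samples.dtype)
    out[0::2] = lo
    out[1::2] = hi
    return out
```

So a 1M-sample block would be reduced to ~2000 points before the draw loop ever sees it, which I assume is roughly what PicoScope 6 does internally. Is that the right idea?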
Programming-wise I know it must be doable, since the PicoScope 6 software displays its output smoothly even with very high sample counts.
Thanks once again