Another question: is there a way to read out the captured traces with Python, in order to save each trace as a numpy array?
If all the captures fit in the scope memory, you can use rapid trigger mode.
You then have all 10k captures in a single .psdata file, which you can export to a MATLAB file.
You can then read the .mat file from Python (e.g. with scipy.io.loadmat) into numpy.
https://stackoverflow.com/questions/874 ... -in-python
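As a sketch of that last step (the variable name 'A' for channel A is an assumption, based on the conversion script further down in this thread; here a demo .mat file is created first so the example is self-contained):

```python
import numpy as np
import scipy.io

# For demonstration, create a .mat file shaped like the export
# (one variable per channel; the name 'A' is an assumption).
scipy.io.savemat('demo.mat', {'A': np.arange(5.0)})

# loadmat returns a dict mapping MATLAB variable names to ndarrays
data = scipy.io.loadmat('demo.mat')

# MATLAB arrays come back 2-D (1 x N); squeeze to a flat trace
trace = np.asarray(data['A']).squeeze()

np.save('trace_0001.npy', trace)  # each trace saved as a numpy array
```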
What do you mean by rapid trigger mode? Is it some kind of "sequence" mode, as on standard scopes, where I record only one trace containing all my samples and then have to split it into my 10k different captures? How do I activate that mode?
What scope memory are you referring to? How do I determine whether my scope is capable of recording that many traces? One trace requires around 280 kB.
How do I convert the already recorded .psdata files into MATLAB files? I recorded 10k separate files and would like to create one numpy array out of them.
Then you can export to MATLAB and have an array of arrays in MATLAB.
The 3205D MSO has 256 MSamples of memory for all channels together.
So if you use 1 channel you can get 1000 captures of 250k samples;
if you use 2 channels you can get 500 captures of 250k samples.
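As a back-of-the-envelope check of those figures (using a decimal 256,000,000 samples; the quoted 1000 is the same calculation rounded down):

```python
# Total buffer of the 3205D MSO, shared by all active channels
TOTAL_SAMPLES = 256_000_000
SAMPLES_PER_CAPTURE = 250_000

def max_captures(active_channels: int) -> int:
    """How many captures of 250k samples fit in the shared buffer."""
    return TOTAL_SAMPLES // (active_channels * SAMPLES_PER_CAPTURE)

print(max_captures(1))  # 1024, i.e. roughly the 1000 captures quoted above
print(max_captures(2))  # 512
```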
Picoscope /c *.psdata /f mat /b all
The problem is that the traces aren't converted in order: for example, 20220811-0001 (1).mat is number 1, but 20220811-0001 (10).mat is number 2 and 20220811-0001 (100).mat is number 3. I'm not sure whether the script I used reads the data in the wrong order, whether the captured traces are wrongly named, or whether the .mat-to-Python script converts the data in the wrong order.
import glob
import scipy.io

mat_files = glob.glob('**/*.mat', recursive=True)
alldata = []
for fname in mat_files:
    data = scipy.io.loadmat(fname)
    # Append only array A to the list, as it contains the data from channel A
    alldata.append(data['A'])
Could you tell me how to fix that?
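The glob order is lexicographic, which is why "(10)" sorts before "(2)". One way to fix it (a sketch; the regex assumes the "(n)" suffix in the file names shown above) is to sort on the numeric part of each name:

```python
import glob
import re

def capture_index(fname: str) -> int:
    # Extract the capture number from a name like '20220811-0001 (10).mat';
    # files without a '(n)' suffix sort first
    m = re.search(r'\((\d+)\)', fname)
    return int(m.group(1)) if m else 0

# Numerically ordered list of .mat files
mat_files = sorted(glob.glob('**/*.mat', recursive=True), key=capture_index)
```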