Hi Gerry,
Thanks for clarifying what the 7 files produced under Average Display Mode represent.
The oscillators I am studying with regard to phase noise resonate at 10 MHz. I think it would be helpful to describe the problem very briefly. I can point you to a more detailed description if you are interested, but it is not an easy topic.
Oscillator phase noise appears as frequency fluctuations in the oscillator output. That is, a 10 MHz oscillator will not produce a perfect 10 MHz signal over time. At some time t, it might produce a signal that instantaneously represents, say, 10,000,001 Hz. The cause of this deviation is fairly complex and is modeled as stochastic noise processes summing together to form a complicated signal that modulates the 10 MHz "carrier" signal.
There are several approaches to measuring oscillator phase noise. The one I am using employs a phase detector to instantaneously map oscillator frequency fluctuations into a voltage signal. Again, the details of how the phase detector works are not important to this discussion, but I can point you to more information if you are interested.
Phase noise for a 10 MHz oscillator generally arises from frequency fluctuations in the range 1 Hz - 100 kHz (it could go higher, say to 200 kHz, but most 10 MHz oscillator phase noise specs I have read stop at 100 kHz or lower). The output of the phase detector is correspondingly a voltage signal with a spectrum in which the majority of the power resides in the 1 Hz - 100 kHz range. This spectrum is measured by applying the phase detector voltage signal to a spectrum analyzer.
Oscillator phase noise is normally given in units of dBc/Hz, that is, decibels below the oscillator ("carrier") signal level, with the power in each frequency bin normalized to a 1 Hz bandwidth. The voltage signal produced by the phase detector is not relative to the level of the "carrier" signal, so it is necessary to post-process the spectrum analyzer output to transform it into the desired form. Other adjustments are also necessary, but, once again, I will elide their description in order to keep this post to a reasonable length.
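Just to make the post-processing concrete, the carrier-referencing step by itself is simple; here is a minimal Python sketch (the function name and the +13 dBm carrier value are placeholders of mine, and the other detector-related adjustments mentioned above are omitted):

# Sketch only: express a measured bin power relative to the carrier level.
def dbm_to_dbc(bin_power_dbm, carrier_dbm):
    # carrier_dbm is the separately measured carrier power in dBm.
    return bin_power_dbm - carrier_dbm

# e.g. a bin measured at -90 dBm against a +13 dBm carrier is -103 dBc
print(dbm_to_dbc(-90.0, 13.0))

The 1 Hz normalization, which is the part I actually have questions about, is covered next.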
Not only must the power levels in the spectrum analyzer output be adjusted to accommodate differences between the phase detector voltage output spectrum and the desired phase noise spectrum, they must also be adjusted to normalize the phase noise spectrum to 1 Hz bins. This is perhaps easiest to explain with an example.
Before purchasing my PicoScope, I used a Siglent SSA3032X spectrum analyzer to process the phase detector output. The minimum RBW that the Siglent supports is 10 Hz, which I used when producing the spectrum from the output signal. In order to normalize the spectrum to 1 Hz, it is necessary to view each 10 Hz bin produced by the Siglent as the sum of ten 1 Hz bins. This means the power level in each 10 Hz bin must be reduced by a factor of 10 (10 dB) to achieve the desired normalization.
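In code form, that normalization is just a 10*log10(RBW / 1 Hz) subtraction; a sketch (hypothetical function name, and it assumes the bins contain noise-like power rather than discrete spurs, since bandwidth normalization only applies to noise):

import math

def normalize_to_1hz(bin_power_dbm, rbw_hz):
    # Treat each RBW-wide bin as the sum of rbw_hz one-hertz bins of noise.
    return bin_power_dbm - 10.0 * math.log10(rbw_hz)

# The Siglent case: a 10 Hz RBW gives a 10 dB reduction.
print(normalize_to_1hz(-100.0, 10.0))   # -110.0 dBm in a 1 Hz bandwidth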
This background information sets up some questions I have about how to apply my PicoScope to the problem of measuring oscillator phase noise. First, the power levels of the phase detector output in the 1 Hz - 100 kHz range generally go no lower than -100 dBm. So, as long as the noise floor of the spectrum analyzer is below this, its measurement of the phase detector output should provide valid results.
The classic way to measure an SA's noise floor is to put a terminator on its input and display the resulting spectrum. I did this for my Siglent SA by putting a 50 ohm terminator on its input and selecting an RBW of 10 Hz. I also did this with the PicoScope and, for a span of 1 - 100 kHz with 2,097,148 samples, observed a noise floor of about -130 dBm. But the input impedance of the Siglent is 50 ohms, whereas the input impedance of the PicoScope is 1 Mohm, so putting a 50 ohm terminator on its input doesn't really make sense. Consequently, I left the input unterminated, which produced about the same -130 dBm noise floor (actually, it was marginally lower). Is there a better way to measure the PicoScope's noise floor for these span and sample parameter values?
The frequency bin width for the span and sample count given above is 95.37 mHz. This means normalizing to 1 Hz requires not decreasing the power levels but increasing them. That is, it is necessary to add bins together, in this case about 11 of them, to normalize to 1 Hz. Of course, I could reduce the number of samples, but since 100 kHz is not a power of 2, I would have to combine frequency bins in any case.
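To be concrete, the naive bin combination I have in mind looks like this (a Python sketch; the function name and grouping are mine, and it deliberately ignores the windowing issue raised next):

import numpy as np

def combine_adjacent_bins(bins_dbm, bins_per_group):
    # Convert dBm to linear power, sum groups of adjacent bins, convert back.
    linear = 10.0 ** (np.asarray(bins_dbm) / 10.0)
    usable = (len(linear) // bins_per_group) * bins_per_group
    summed = linear[:usable].reshape(-1, bins_per_group).sum(axis=1)
    return 10.0 * np.log10(summed)

# With 95.37 mHz bins, 1 Hz / 0.09537 Hz is about 10.5, so the group size
# would be 10 or 11 (or the spectrum could be resampled to an exact fit).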
I have done some research on adding together adjacent FFT frequency bins, and it appears this may not generate the correct result. In particular, it may be necessary to take into account the windowing applied when generating those bins. In the subsection "Estimating Power and Frequency" of section 4 of the paper found here, it is suggested that the sum must be divided by the noise power bandwidth of the window. This is stated without justification, so I am wondering if this advice has some concrete reasoning behind it.
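For reference, if the correction does turn out to be the window's noise power bandwidth, it is easy to compute; a sketch assuming the standard definition of equivalent noise bandwidth expressed in FFT bins:

import numpy as np

def window_enbw_bins(window):
    # Equivalent ("noise power") bandwidth of a window, in FFT bins:
    # ENBW = N * sum(w^2) / (sum(w))^2
    w = np.asarray(window, dtype=float)
    return len(w) * np.sum(w ** 2) / np.sum(w) ** 2

# A Hann window gives about 1.5 bins; a rectangular window gives 1.0.
print(window_enbw_bins(np.hanning(4096)))
print(window_enbw_bins(np.ones(4096)))

If that is right, the summed power from the previous sketch would be divided by this factor (equivalently, 10*log10(ENBW) subtracted in dB), presumably because the window smears each spectral component across neighbouring bins, so a straight sum counts that leakage more than once. But I would like to confirm that reasoning.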
I realize this is a pretty specialized question, so you may not have any thoughts to contribute, but if you do I would very much appreciate hearing them.
Cheers,
Dan