In an article on WUWT, “Solar Activity – Past, Present, Future“, Leif Svalgaard describes the problems of historical sunspot records because of changes in instrumentation and observers. He produces a new series which attempts to correct for the various problems. He has made the data available to me to perform a cycles analysis. This proves to be interesting as it enables linking our understanding of the telescopic record with the proxy records from C14 and Be10.

Before performing a cycles analysis on the adjusted sunspot numbers, I took their square root. This makes the 11 year cycle more symmetrical and makes the typical fluctuations near minima and near maxima about equal in size. It does not have much effect on this particular analysis. Here is what the series looks like then:

It can be seen that the range of each cycle since about 1750 is quite similar. This is the result of using square root of sunspot numbers.
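The equalising effect of the square root can be illustrated with a toy model in which the observed count behaves like the square of a smoothly varying underlying level plus noise. Everything here is synthetic and illustrative (the 11.05 year period, the level range and the noise size are assumptions, not fitted to real data):

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(3000) / 12.0                       # ~monthly steps, in years
level = 7.0 + 6.0 * np.sin(2*np.pi*t/11.05)      # underlying (linear) level
ssn = (level + rng.normal(0.0, 1.0, t.size))**2  # observed as a square

lo, hi = level < 3.0, level > 11.0               # near minima / near maxima
print(ssn[lo].std(), ssn[hi].std())              # very unequal scatter
print(np.sqrt(ssn)[lo].std(), np.sqrt(ssn)[hi].std())  # roughly equal
```

Before the square root, the scatter near maxima is several times larger than near minima; after it, the two are comparable, which is the evening-out described above.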

Next, a spectral analysis was done on this series using CATS. This allows finer resolution than with such tools as Fourier analysis or FFT. The location of peaks can be determined with high precision as shown here:

When cycles analyses are performed on shorter spans of the sunspot record, the second highest peak here often appears only as a bump on the shoulder of the highest peak. With the longer record the peaks can be resolved clearly.

A number of researchers, myself included, have suggested that three of the periods found here might be related to planetary motions affecting the Sun. The periods are Jupiter’s period of 11.86 years, the Jupiter-Saturn conjunction period of 9.93 years, and the Jupiter-Venus-Earth syzygy cycle of 11.07 years (which also happens to correspond to the Jupiter + Neptune frequency). The increased precision of these estimates is almost able to rule out some of these suspected matches. In particular, the 10.01 year period should have an uncertainty of +/-0.025 years, yet it differs from 9.93 years by 0.08 years. Hard to say.

In order of strength the cycles periods are:

11.05, 10.49, 10.01, 11.79 years.

The interesting thing about these periods is the beats between them.

11.05 and 10.49 years gives 207 year beats.

11.05 and 10.01 years gives 106 year beats.

The other various pairs give beats of 220, 177, 95 and 66 years.
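Since a beat arises from the difference of two frequencies, these figures can be checked directly; a minimal sketch in Python (plain arithmetic on the four periods listed above):

```python
from itertools import combinations

def beat_period(p1, p2):
    """Beat period of two cycles with periods p1 and p2: 1 / |1/p1 - 1/p2|."""
    return 1.0 / abs(1.0 / p1 - 1.0 / p2)

periods = [11.05, 10.49, 10.01, 11.79]  # years, in order of strength
for a, b in combinations(periods, 2):
    print(f"{a} and {b} years -> {beat_period(a, b):.1f} year beat")
```

The six pairs reproduce the quoted beats to within a year or two (207.0 for the first pair, 106.4 for the second).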

See the previous articles on C14 cycles analysis and Be10 cycles analysis, as these two series are considered to be proxies for the sunspot cycle. The strongest beats are very close to, and the others generally cluster around, the C14 and Be10 periods of 208 and 104 years. This is very suggestive. The modern sunspot record also has a high autocorrelation at a lag of 210 years.
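The 210 year figure can be made plausible with a toy signal: 19 cycles of 11.05 years and 20 cycles of 10.49 years both come to about 210 years, so a sum of the two nearly repeats at that lag. A sketch with synthetic data (not the actual SSN series):

```python
import numpy as np

def lagged_autocorr(x, lag):
    """Pearson correlation of a series with itself shifted by `lag` samples."""
    return np.corrcoef(x[:-lag], x[lag:])[0, 1]

t = np.arange(0, 600, 1.0)  # years, annual sampling
x = np.sin(2*np.pi*t/11.05) + np.sin(2*np.pi*t/10.49)
print(lagged_autocorr(x, 210))  # close to 1: both components nearly repeat
```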

We can use the 104 and 208 year cycles to make a crude cycles forecast. For this purpose the wilder fluctuations pre-1750 have been omitted. The result:

It can be seen that the 104 and 208 year lagged sunspot numbers give a reasonably good fit to the present weak cycle 24.

The most important conclusion is that the 104 and 208 year cycles are closely related to the beat cycles of the closely spaced strong cycles near 11 years. This type of behaviour is quite commonly found in cycles analysis.

This suggests that we will not return to strong solar cycles again until the 2040s.

Very interesting. I’ve been doing something similar on different data.

Is your CATS analysis looking at lagged autocorrelation? Your second plot seems to suggest that.

[square root] ” This has the result of making the 11 year cycle more symmetrical, and making typical fluctuations near minima and near maxima of about equal amounts. It does not have much effect on this particular analysis. ”

Well, since the 11y cycle is basically the N-S phases of a 22y magnetic cycle, I was wondering how this could be ‘unfolded’ to analyse the underlying cycles in the causal magnetic variations. The signal has effectively been flipped.

It is worth considering whether it has been flipped by being squared or by an effect that is insensitive to the polarity.

The two halves are similar but not identical since they are probably subject to longer modulations within the 22y span.

A couple of questions on taking the root:

If it does not have much effect, why do it?

Does this have physical meaning? Is the data being examined the square of something else physically?

One thing I was intending to look at today was how the frequency content of Svalgaard’s modified series compares to the current ramped up version.

Since the current TSI model is much the same but with the “average” added underneath, I would expect it has strong similarity. It is also very likely that this “average” is a monthly running mean, which will nicely screw up the periodic signal to start with. Adding in such a mean would dilute the shorter circa 11y components and seems a bit arbitrary to start with.

I was very sceptical of this attempt to rewrite the record, but I don’t see anything too unfounded in Svalgaard’s method.

A few people working on cyclic analysis are up in arms about all this, but I suspect this reworked time series may show stronger cyclic signals than the old TSI reconstructions.

Have you compared the two using CATS?

Best regards,

Here is a comparison of the Svalgaard and classic TSI reconstructions, smoothed with a 10 y Gaussian and normalised to zero mean and unit std. dev.

file:///back/coredata/TSI_comparative.png

As I expected, the new rendition seems to show cyclic variations much more clearly.

Sorry, the link should be:

“The other various pairs give beats of 220, 177, 95 and 66 years.”

Interesting in relation to the M.O. Hadley processing of SST:

Greg, thanks for your various comments.

CATS can do lagged (auto)correlation, but figure 2 is simply a spectrum (well, a small part of a spectrum really). It is like a Fourier analysis except that it calculates values in between. Fourier analysis will only have a whole number of cycles in the time span under study. CATS will do whatever closer spacing you want, this graph has 20 times closer spacing.
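CATS’ spectral method is Ray’s own design, so the following is not the actual CATS algorithm, only a generic analogue with the same property: a least-squares sine/cosine projection can be evaluated on a period grid as fine as you like, rather than only at the whole-number-of-cycles frequencies a Fourier analysis gives:

```python
import numpy as np

def amplitude_spectrum(x, t, periods):
    """Sine/cosine projection of x(t) at each trial period.
    Unlike an FFT, the trial periods need not fit a whole number of
    cycles into the record and can be spaced as finely as desired."""
    x = x - x.mean()
    amps = []
    for p in periods:
        w = 2.0 * np.pi / p
        a = 2.0 * np.dot(x, np.cos(w * t)) / len(x)
        b = 2.0 * np.dot(x, np.sin(w * t)) / len(x)
        amps.append(np.hypot(a, b))
    return np.array(amps)

# Recover an 11.05 year cycle on a grid much finer than Fourier spacing:
t = np.arange(0.0, 260.0, 1.0)
x = np.sin(2 * np.pi * t / 11.05)
periods = np.arange(9.5, 12.5, 0.01)
amps = amplitude_spectrum(x, t, periods)
print(periods[np.argmax(amps)])  # close to 11.05
```

With a 260 year record the Fourier spacing near 11 years is about 0.5 years between trial periods; the 0.01 year grid here is far finer than that.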

See the article on the Lambda function https://cyclesresearchinstitute.wordpress.com/2010/07/16/the-lambda-function/ for why I use square roots of sunspot numbers. It makes a big difference for studying shorter term cycles like the 155 day cycle, but next to no difference in the 11 year cycle vicinity. It is used only for purist reasons. The lambda function allows a parameter which seems best at around 0.4 to 0.5 for sunspots. Taking square roots is equivalent to 0.5, so I didn’t need to explain the lambda function.

It seems to me that there may well be a genuine process of squaring which makes the positive and negative phases of the magnetic field all become positive sunspot numbers. But I have not attempted to produce the alternate + and – phases. This has been done, e.g. by Edward R Dewey and interesting cycles are found as a result. I have repeated this myself with estimated SSNs because it is difficult to calculate near the zero crossing points. Maybe I will report on this some time.

Interestingly, this analysis does show a cycle of period 21.34 years, which is quite far from the 22.1 years expected for double the strongest cycle. I have sometimes wondered if the N-S polarity might sometimes fail to reverse (e.g. when the big sunspot pauses occur). The N-S reversals cannot follow all 4 of the strong cycles. Maybe they follow none.

I see that Rog Tallbloke is very skeptical of Leif Svalgaard’s reconstruction, thinking that it is an attempt to make the hockey stick flat. I must admit that I have long suspected that the older SSNs might be too low, so I think that this reconstruction is pretty good, at least from 1750 onward. This shows up as very even swings either way from the centre of the trend after 1750. That is a good test of validity.

I don’t understand what the Hadley adjustment is doing.

Regards

Ray

I have not directly compared the two SSN records using CATS. It is easy to do though. As long as I have the same period for each it should make sense.

“I don’t understand what the Hadley adjustment is doing.”

I don’t think Hadley do either, since they never bothered to check. Half the story seems to be that their process for regridding to 5×5 is killing the longer-period components. It’s not documented in detail and they don’t seem to have assessed the frequency effects of what they do.

Then there’s a pick list of various hypothetical bucket and other “corrections” that all put together make up the plot I linked.

As I noted in this article, they are removing the majority of the variation from the majority of the record. (John Kennedy was unable to refute that statement, despite trying.)

http://judithcurry.com/2012/03/15/on-the-adjustments-to-the-hadsst3-data-set-2/#comment-188237

I’ll have to try CATS. I did something similar in that analysis, but CATS may be more flexible and allow me to redo the frequency analysis more effectively. It looks like a good tool.

I posted this at tallbloke’s site:

A full freq analysis would be worth doing. It seems TB thinks Svalgaard is up to no good. He may be partially right but I think dropping ‘riding on the running mean’ is a good move.

” Fourier analysis will only have a whole number of cycles in the time span under study. CATS will do whatever closer spacing you want, this graph has 20 times closer spacing.”

I do this by choosing incrementally smaller windows. Are all the methods applied by the software documented? Having confidence in the result requires understanding what techniques are applied, at least at a conceptual level.

There is a manual with CATS which lists all the methods. Most of them are standard techniques, originally in the SSP (Scientific Subroutine Package) put out by IBM in the 1960s, such as regression, factor analysis, canonical correlation etc. Things like autocorrelation or cross correlation are quite standard. The spectral analysis is my own design but seems to be equivalent to, or the same as, another method in common use. It is explained, but if the explanation isn’t adequate, you can tell me off and I will fix it. :-)

The interface is a bit old fashioned, but the macro facility allows complicated things to be done simply so that after a while a person can get to be very productive.

I have done a full frequency analysis for SSN, but just redid the part near 11 years because this is the interesting part having 4 cycles that interact with each other. I had always believed that 3 of these were planetary cycles, but that would not lead to a 104 year cycle but a 96.5 year cycle. The evidence of C14 and Be10 proxies is that the cycle is close to 104 years, so it cannot be planetary cycles. This is clearly proven now.

Then there is the new paper based on torques which claims to get a 208 year planetary cycle. I cannot see how that happens, but I await demonstration. It is either totally brilliant or it is fraud/confusion. Nothing in between.

Hi guys,

Leif overdoes it with his bumping up of SSN prior to 1840 so far as I can tell.

Usoskin et al wrote an interesting paper about Leif’s technique of using the magnetic data. Also, I think that using a modelled and contested TSI record to create a trend back projected over 350 years is highly contentious.

Let’s see if we can find a better way to do it.

This is a great study though, and I’m not too worried about the differences between the periods and the Jupiter orbital and J-S synodic. Leif wouldn’t have to be far wrong for these to disappear. Ray will be pleased to see the 10.49 value I’m sure, as this matches the period he found most effective as a basis for his z-axis studies.

Tallbloke, I don’t have an opinion about TSI record implications. I think that extra information is needed to determine that even with these SSNs. But I always suspected that old sunspot numbers might need boosting.

The changes made will not really affect the periods in the sunspots; it is the longer time base that made these sharper. To me it is really important that this produces the 208 and 104 year periods. But the fact that it does means that at least some of these periods cannot be planetary ones. This is a very important conclusion.

Yes, the 10.49 does match the z-axis studies on Sun as the true natural solar cycle period. Well spotted Rog.

Ray Tomes: “I have done a full frequency analysis for SSN, but just redid the part near 11 years because this is the interesting part ”

would you like to post both freq analyses as data somewhere?

I would like to look at yours in detail. Since the older TSI recon was sat on top of its own runny mean, there would have been significant false signals getting in there and a generally poorer S/N ratio.

I’d like to see exactly what kind of difference it makes. Whether it shifts the precise frequency of the peaks and how it improves resolution.

I had a quick look at CATS but it looks like quite a steep learning curve before I get anything. Maybe if you have this processing as a macro I could just plug into CATS, that would be good too.

Thanks.

Hi Greg,

Perhaps if we want to get into discussion of how to do things in CATS, then the CRI forum is better than the blog. Would you mind joining at http://cyclesresearchinstitute.org/forum/index.php ? There is a special CATS section there.

I am happy to do a full spectrum for the original data and the adjusted SSN data. I do not expect the location of ~11 year peaks in the spectrum to shift much. I would expect the longer cycles (say over 50 years) to be affected most. I need an SSN series that runs for the same time period as the adjusted one to work with. My present original SSN data starts later on. Can you refer me to a data set for this so that I use one that you are happy with? Again, going to the CRI forum for these results is better because graphs can be included in posts there.

Regards

Ray

Rog, I should add some further thoughts. SSN correlates with TSI (Total Solar Irradiance for those that do not know) over the 11 year cycle as we know. When we go to much longer periods (centuries) there is no requirement that the relationship be the same. It is quite possible that variations in SSN over these time spans correspond to much larger variations in TSI. It seems that the assumption is being made that the relationship is constant and there is no reason to assume that. I say this because I think that Leif’s adjustments to SSN are probably correct, but the conclusions about TSI are not necessarily so. Regards, Ray

Hi Ray,

the source of the data I used was included in the graph I linked above.

Running means distort all frequencies (the second lobe of the frequency response is actually negative).
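The negative lobe is easy to exhibit from the filter’s frequency response; a sketch assuming a 12-point (e.g. monthly) running mean:

```python
import numpy as np

N = 12                                    # window length, e.g. 12 months
f = np.linspace(1e-6, 0.5, 2000)          # frequency, cycles per sample
# Zero-phase frequency response of an N-point running mean (Dirichlet kernel):
H = np.sin(np.pi * f * N) / (N * np.sin(np.pi * f))

# Between the first two zeros (f = 1/N and f = 2/N) the response is negative,
# so those frequencies are not just attenuated but passed with inverted sign:
lobe = (f > 1.0/N) & (f < 2.0/N)
print(H[lobe].min())  # about -0.22
```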

While this should not remove any frequencies totally, it will very likely mean some are no longer present at a significant level while others that maybe were not important start to stand out. This will affect the circa 11y peaks as much as anything else.

Also, adding a runny mean (a distorting low-pass filter) into the base data will decrease the S/N ratio across the board. This is why I suspect Svalgaard’s removal of the mean is probably beneficial. It would be good to see this in numbers rather than just in the general terms I argue here.

My initial plot above looks to have about twice the amplitude on both 60y and 10y scales.

Ray’s: “It is quite possible that variations in SSN over these time spans correspond to much larger variations in TSI.”

That is a valid point. And it could equally be much smaller. Hopefully this can be examined by looking at other solar activity proxies.

Ray, I think your observation of noise levels near peaks and troughs indicating the need to take the square root is probably correct. It suggests that what we are observing in counting sunspot number / group number is not the underlying variable but a manifestation related to its square.

Is this simply because it is based on what counts as a “visible” spot? The visibility test depends upon the solid angle it subtends at the telescope aperture. It is an area-related test.

Since sunspots are roughly round, is taking the root simply turning this into a radius criterion?

What physical process or physical quantity determines the radius?

I think this is the right way to go but it would be good to have a physical reason.

You say it does not matter much. A common reply would be “well why are you doing it then?”

I think it does matter. It does not affect the peaks much but it makes a big difference to the line of the minima. Also my experience is that it can also make significant changes to the spectral analysis even when the changes to the time series are not that striking.

e.g. look at the spectra of trade wind speed and wind speed squared:

I’m not too up on solar physics; perhaps you could ask Leif what physical quantity could relate to the number of detectable features.

Hi Greg. I should expand on “it doesn’t matter very much”. For the purpose of analysis it doesn’t matter why taking the square root helps, and the reason I do it is that it makes short term oscillations more consistent over time. But of course as scientists we always want to know why it is so. There are many reasons why it might be, some of which you have proposed. It is something to be taken further. Although I used Leif’s data, the results would be very similar for any other sunspot data.

Pingback: SIDC/SILSO: The 400-year sunspot number series is being completely revised | wobleibtdieglobaleerwaermung

Thanks Ray, I’m having another look at this.

It is not a question of scientists wanting to know the cause; that is a secondary issue that can and should be guided by the data analysis. What I want to know is why “making short term oscillations more consistent over time” is the criterion for selecting what is ‘best’.

There is an implicit and unstated assumption here and I’d like to know what it is. Why should we assume that the 11y modulation of the HF is not part of the signal? It seems to me that this criterion implicitly assumes that the HF is ‘noise’ and thus should be equally distributed. There may be another logic behind this but, whatever it is, it needs to be stated clearly what the assumptions are and why they apply.

This is a very important question since all spectral decomposition techniques require that the quantity being analysed can be combined linearly, and this is not the case for something that is measuring the square of some physical quantity that is linear. For example, energy is a conserved, linearly additive quantity, so doing spectral analysis on wind speed is not valid; it will give false results. The square of wind speed is more appropriate. The same is true of SSN. It’s not an arbitrary choice; it should be done for a reason.

Now if one method gives a simpler, more parsimonious description, then this may be useful in inferring something about the underlying process and its causes. This may help decide which estimation of the central peak is more accurate, or which choice of assumptions is borne out by the data. But to do that the assumptions must be clearly stated.

This is central even to basic analysis such as least squares regression, which is widely used without the underlying conditions being met, or often even being recognised, by those doing the analysis. Misuse of LSQ is one of the main reasons for exaggerated estimations of climate sensitivity; this is purist pedantry.

Now you seem pretty familiar with these techniques and apparently have long experience of applying them. Can you help in explaining what assumptions lead to “making short term oscillations more consistent over time” being better?

Thanks for any clarifications you can make about this choice.

oops: this is not purist pedantry !

Hi Greg, thanks for your thoughts above. Yes, of course we want to understand why taking the square root is better. So first the desire for the shorter term oscillations to be more consistent.

If cycles analysis is meaningful then we desire a situation where the sum of a bunch of sine waves gives something close to the observed data. There are a number of clues that we haven’t achieved that. If the shorter term periods (higher frequencies) have additional weaker components either side with frequencies that differ from the main frequency by the frequency of a big long slow cycle, then this tells us that the data is not in the simplest form that leads to just summing cycles.

Let me do that in reverse to demonstrate.

Let x = 100*sin(2*pi*t/11.05) + 20*sin(2*pi*t/0.4235), with t in years.

This is a simplistic 11 year cycle with a 155 day cycle. These are real cycles but there are many more present. Now suppose that we actually measure y which is equal to x^2.

Then we will detect the two actual cycles, but we will see that the 155 day cycle gets stronger and weaker during each 11 year cycle. In the literature you will find that people report it only near solar maxima, whereas it is in fact continuously present. Our cycles analysis will also show cycles with frequencies (in 1/years) of 1/0.4235 +/- 1/11.05, which correspond to periods of 149 and 161 days. You can see a real world example of this on my page http://ray.tomes.biz/cy301.htm and a further 3 cycles at or near half 155 days in the same graph.
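That sideband arithmetic can be checked numerically on the stated construction y = x² (the sampling choices below are arbitrary assumptions):

```python
import numpy as np

t = np.arange(0, 300, 1/48.0)               # years, ~weekly samples
x = 100*np.sin(2*np.pi*t/11.05) + 20*np.sin(2*np.pi*t/0.4235)
y = x**2                                    # the "measured" quantity

spec = np.abs(np.fft.rfft(y - y.mean()))
freq = np.fft.rfftfreq(t.size, d=1/48.0)    # cycles per year

# Squaring creates sum and difference frequencies around the 155 day line:
for fs in (1/0.4235 + 1/11.05, 1/0.4235 - 1/11.05):
    k = np.argmin(np.abs(freq - fs))
    print(f"period {365.25/freq[k]:.0f} d, amplitude ratio "
          f"{spec[k]/np.median(spec):.0f}")
```

Two strong lines appear at roughly 149 and 161 days, either side of the 155 day cycle, even though x itself contains no such periods.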

According to Occam’s razor we should prefer the simplest explanation. In this case it is that there are two cycles only, and we are measuring something other than the ideal quantity.

Additional supporting evidence is that the seemingly random fluctuations are also made more even by taking square root of measured sunspots. This does not prove a physical process, but gives hints as to something going on at a deeper level. Also we can observe that alternate sunspot cycles have reverse polarity, so again this would lead to sunspots as measured being the square of something more fundamental.

Have fun! Ray

Thanks for the reply, Ray. This is the sort of analysis that I do on most climate data, so I get what you’re talking about. This is what I was saying above: if such an analysis provides a more parsimonious description, it may justify the sqrt operation.

My initial concern was that this seemed to be done for no logical reason other than that it was somehow “better”, without any reason or criterion for that judgement being stated.

What you describe is interesting and suggests that there are two different physical processes happening. I will look into those short cycles in detail later.