The Astrological Association Journal of Research in Astrology

Comments to Understanding Astrology

by Dr Kyösti Tarvainen

I will comment only on the passages in the book where my research is discussed. First come some general comments, followed by observations on individual studies.

1. A definite indication of the existence of astrological influences is ignored

Already in 2015, I pointed out to the authors that the following matter is a definite indication of the existence of astrological influences: a computer can determine estimates for aspect orbs that are close to the values astrologers recommend. This computer orb estimation has succeeded for major and minor aspects, midpoints, synastry aspects, transits and solar arcs (Correlation 32(2): 55−61).

The sceptics’ hypothesis is that there are no astrological influences: they claim that astrologers just see faces in the clouds (in 16 places, see the Subject index). How is it then possible that, in the case of aspect orbs, the computer independently and mechanically sees the same faces in the clouds as astrologers have seen for 2,000 years?

Scientists abandon a hypothesis when even a single case contradicts it.

2. Confusion on p-values and effect sizes

For studying scientific questions, p-values are used. When applying them in astrology, the technical starting point (the null hypothesis) is that the astrological factor under consideration has no effect. The p-value is then the probability that the obtained observations deviate from the expected values at least as much as they do by chance alone. A large p-value naturally points to the likelihood of no astrological effect − the observed deviations have happened by chance. A small p-value, on the other hand, points to the opposite possibility, that there is an astrological effect. A small p-value is said to be ‘statistically significant’; in science, it is customary to use this expression when p ≤ 0.05.
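As a concrete illustration (my own, not taken from the book), consider a simple count experiment where 50 hits out of 100 would be expected by chance; the exact p-value for an observed excess can be computed directly:

```python
from math import comb

def binom_p_one_sided(k, n, p0=0.5):
    """Exact one-sided p-value: the probability of observing k or more
    hits out of n if only chance (hit probability p0) is at work."""
    return sum(comb(n, j) * p0**j * (1 - p0)**(n - j) for j in range(k, n + 1))

# 65 hits observed where 50 would be expected by chance
p = binom_p_one_sided(65, 100)
print(f"p = {p:.5f}")  # well below 0.05, i.e. statistically significant
```

With 65 hits the p-value is well below 0.05; with exactly 50 hits it would be about 0.5, i.e. entirely consistent with chance.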

Now, the authors claim on page 709 that, instead of p-values, it is better to use Bayes Factors, which express the ratio of two probabilities: the probability of the data if there is an effect, and the probability of the data if there is no effect. Strangely, the sceptics prefer this more complicated method, which also requires considering the possibility that there is an effect – a possibility whose probability, according to the authors’ preconceptions, is zero. The explanation is that Bayes Factors tend to be bigger than p-values: in the Bayes Factor diagram on page 709, the biggest Bayes Factor is 100, and there is no value smaller than 0.01.
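For readers unfamiliar with Bayes Factors, the ratio the authors describe can be sketched for a simple binomial count; the uniform prior under the ‘effect’ hypothesis is my illustrative choice here, not the authors’ computation:

```python
from math import comb

def bayes_factor(k, n):
    """BF = P(data | some effect) / P(data | no effect) for k hits out
    of n.  Under 'no effect' the hit probability is 0.5; under 'some
    effect' it is unknown, and with a uniform prior on it the marginal
    probability of the data works out to 1 / (n + 1)."""
    p_effect = 1 / (n + 1)
    p_no_effect = comb(n, k) * 0.5**n
    return p_effect / p_no_effect

print(bayes_factor(65, 100))  # greater than 1: data favour an effect
print(bayes_factor(50, 100))  # less than 1: data favour no effect
```

A Bayes Factor above 1 favours the effect hypothesis, below 1 the no-effect hypothesis; the scale differs from that of p-values, which is why the two are not directly comparable.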

In addition, the authors confuse p-values with effect sizes. The authors dismiss all my statistically significant studies because the effect sizes are small. However, in astrological studies, effect sizes naturally tend to be small, since so many other astrological and non-astrological factors are always at play. I have not been interested in effect sizes and have not determined them, because the interesting scientific question is whether there are astrological influences and which factors work – whatever their mathematical effect size.

As an example of the sceptics’ practice of ignoring statistically significant results because the effect size is small, we may consider the extraversion study mentioned on page 266. The authors refer to my statement, ‘the obtained p-values of 0.02 and 0.003 gives strong support for a link between astrological signs and extraversion’.

The authors reject this conclusion by saying, ‘But it doesn’t because he considers only p values, and his sample sizes are far too small to reliably measure effect sizes that are tiny even by Gauquelin standards’.

This is an unscientific statement, since only p-values are needed to determine statistical significance. Effect sizes are another matter.

When rejecting a study with a small effect size, the authors often say that the effect size is uselessly small. This is also an unscientific statement since, in astrology, we are dealing with mind-boggling influences whose basic functioning has yet to be discovered. Therefore, the scientific question now is the validity of these effects, not their usefulness.

Furthermore, astrological studies have problems in operationalising the psychological character traits being measured. For example, in the extraversion study, ‘extroverted professional groups’ were considered, but such groups also include introverts, which decreases the obtained mathematical effect size.

Therefore, the real effect size of an astrological factor must be discerned in practice. In the case of extraversion, we can see a strong effect of astrology’s positive (masculine) signs. For example, if an individual has the three major astrological factors – the Sun, Moon and Ascendant – in positive signs, the person is very likely clearly extraverted; a well-known example is President Trump. I challenge the authors to present an introverted person who has these three factors in positive signs.

3. Unscientific argumentation

The authors use an unscientific, misleading method of argumentation, which can be called the sceptics’ ‘strawman’ technique. When criticising an astrological study, they often add considerations of their own which do not concern the astrological research itself. In these extra considerations, the authors invariably find inconsistencies, whereby the reader is often misled into believing that these inconsistencies lie in the astrological study itself. For example, in the extraversion study mentioned above, the authors added their own effect-size considerations outside the study and obtained inconsistencies.

One way to add such misleading extra comments is the sceptics’ ever-reliable technique of ‘divide and discredit’. Astrological studies need a great deal of data, since many astrological and non-astrological factors exist. Therefore, when the data is divided into smaller parts, the result seen in the whole data is usually not seen consistently in these parts.

For example, when considering (page 605) the study in which the validity of statements in the handbook of Sakoian and Acker was tested in the Gauquelins’ twelve professional groups, the authors did not inform the reader of the p-value of 0.001 for the whole data. Instead, the authors consider the 12 groups separately, where the astrological influences were not consistently seen in a statistically significant way.

A similar technique is applied in the same section’s consideration of the study concerning Henning’s potentials: the p-value of 0.03 for the whole data is not given to the reader, who is only informed that the results are not statistically significant in every group.

Correspondingly, I am accused of not dividing the data into two parts to observe whether the results replicate. However, this is not ordinarily possible in astrological studies, since so much data is needed. It is more secure to use all the available data and then replicate the study when new data becomes available. The synastry study mentioned later included about 20,000 couples, or 40,000 persons, and I thought that this data could be divided into two parts. Indeed, the graphical results in both parts clearly showed the validity of classical synastry. But to get a numerical confirmation with a significant p-value, the whole data set needed to be considered.

In fact, almost all my studies have been a kind of replication study – a replication of traditional astrological knowledge. At the beginning of each study, I have set up a research hypothesis based on the astrological tradition or on prominent astrologers. Because of this kind of replication, it has been natural to use one-tailed statistical tests.
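The choice matters in practice: with a symmetric null hypothesis, the one-tailed p-value is half the two-tailed one, which can decide whether a result crosses the 0.05 threshold. A small sketch with hypothetical numbers:

```python
from math import comb

def upper_tail(k, n):
    """P(X >= k) for X ~ Binomial(n, 0.5), i.e. pure chance."""
    return sum(comb(n, j) for j in range(k, n + 1)) / 2**n

k, n = 60, 100
p_one = upper_tail(k, n)        # direction fixed in advance by the hypothesis
p_two = min(1.0, 2 * p_one)     # no direction specified beforehand
print(f"one-tailed p = {p_one:.4f}, two-tailed p = {p_two:.4f}")
```

Here the one-tailed p-value is significant at the 0.05 level while the two-tailed one is not; the one-tailed test is legitimate precisely because the direction of the effect was fixed by the tradition before looking at the data.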

One aspect of the authors’ unscientific argumentation is that the academic credentials of the authors are given, but not consistently those of astrology researchers. For example, the authors call me an astrologer at some points − even though I have twice informed them that I am not an astrologer. I never became an astrologer, since in my mathematical and physical studies I was accustomed to greater accuracy than is possible in astrology. The authors call me an astrologer to reduce the reader’s confidence in my research – the authors inform the reader on page 918 that astrologers do not follow where the facts lead.

In the section concerning the astrology book and Henning’s potentials, it is said that I have not considered the crud factor (page 708). However, the crud factor relates to correlations between humans, not between physical planets and humans.

After the above general comments, I briefly consider individual studies.

4. Study of mathematicians

Again, the authors do not give the reader the main results of the study on mathematicians (page 735). The p-value for the 25 astrological factors considered and 2,759 mathematicians was 0.001. Even for the small group of 99 mathematicians who received Nobel-like prizes, the p-value was statistically significant, 0.04.

The authors present a surprising claim, ‘There is no evidence that 2000 years of experience could have determined anything about aspects and orbs or indeed anything about astrology, see Chapter 3’.

However, there are written documents on how the development of astrology has proceeded in recent centuries: how Kepler invented the minor aspects, how the astrological interpretation of the new planets Uranus, Neptune and Pluto developed in astrological literature, how the study of midpoints and solar arcs developed in the 20th century. Now, we have statistically significant confirmation for all these new planets and astrological factors.

There are many other weird comments. For example, the first comment gives the reader the impression that the study tries to predict who will become a famous mathematician. In reality, the study merely examines whether famous mathematicians have more than the expected number of astrological factors favourable for mathematicians – concerning, for example, logical thinking and the power of concentration needed when tackling mathematical problems.

At the end of this section, the authors consider the possibility that people became mathematicians due to prevailing astrological beliefs. They have determined the study’s effect size using the ‘divide and discredit’ technique: based on the figure in the middle of page 736, they have calculated the mean effect size in parts of the data. However, the effect size for the whole data (n = 2,759 and p = 0.001; see the diagram on page 708) is about ten times bigger than the one given by the authors.

This correct effect size means that the mathematicians’ results could be explained if about 3% of the people who started to study mathematics made this decision based on the factors favourable for mathematicians in their astrological charts. In science, every claim has to be substantiated. I challenge the authors to name just one person who decided to become a mathematician based on his or her astrological chart.

5. Studies using the Gauquelin data (page 711)

I was the first researcher to use the whole Gauquelin data when it became available on the net, and I carried out eleven statistical studies concerning the validity of traditional astrology. In each study, a hypothesis was set up based on traditional astrology or on prominent astrologers’ statements. All hypotheses received statistically significant confirmation.

I also gathered data on the prominent mathematicians considered above, on Finnish theologians and on Finnish lawyers. The results for mathematicians and theologians were statistically significant. The results for lawyers were not, possibly because lawyers’ astrological factors were specified in the US, where the court system and the selection of law students differ from those in Finland. Two further tests were set up for this lawyer data.

Fisher’s meta-analysis method gives an approximate combined p-value of 0.000,000,000,000,000,000,001 (about 10^-21) for all my studies testing traditional astrology. The authors describe Fisher’s method but do not give this combined p-value.
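Readers can check such combinations themselves: Fisher’s method sums the logarithms of the individual p-values and refers the sum to a chi-square distribution. A minimal sketch (the input p-values below are illustrative, not those of my studies):

```python
from math import exp, log

def fisher_combined_p(pvalues):
    """Fisher's method: under the joint null hypothesis,
    -2 * sum(ln p_i) follows a chi-square distribution with 2k degrees
    of freedom (k = number of studies).  For even degrees of freedom
    the chi-square survival function has the exact closed form below."""
    k = len(pvalues)
    half = -sum(log(p) for p in pvalues)   # statistic / 2
    term, total = 1.0, 1.0
    for i in range(1, k):                  # sum_{i=0}^{k-1} half**i / i!
        term *= half / i
        total += term
    return exp(-half) * total

print(fisher_combined_p([0.04, 0.001, 0.03]))  # far smaller than each input
```

Several individually modest p-values thus combine into a much smaller joint p-value, which is exactly why omitting the combined value hides the overall strength of the evidence.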

On page 712, the authors list eight of my studies. They do not mention in this list or other places in the book the following studies published by 2020:

On Lawyer’s Astrological Factors. J. of Research of the AFA, Vol. 17, pp. 1−10. N = 17,765; p = 0.38, 0.04, 0.04.
A study of the Part of Fortune. J. of Research of the AFA, Vol. 18, pp. 9−14. N = 17,560; p = 0.08 (houses), 0.02 (signs).

I have also sent the authors the following after the 2020 published studies:

A study of midpoints in theologians’ charts. Correlation 33(2): 47−53. N = 6,285; p = 0.01.
Guessing aspects from interviews and obituaries. Correlation 34(1): 77−87. N = 243; p = 0.000005.
How aspects to a Pisces Sun enhance fame. Correlation 34(2): 11−15. N = 1,580; p = 0.04.

The authors do their best to knock down my research. First, the summary figure on page 712 places eight studies in a shaded area, which carries a warning to the reader: ‘Be suspicious of any result falling in this shaded area’. The upper limit of this warning area is an effect size of 0.06. However, there is no explanation of how this value of 0.06 was determined. When asked, the authors said that the value of 0.06 is based on effect sizes in personality tests, which involve self-attribution. However, I have used birth data, which cannot have the same kind of self-attribution effects that personality tests can have when people give information about their own personality.

The figure’s legend refers to ‘a test for potential artifacts’ in section 7.7.2016.3, but there is no such test – such a general artefact test is downright impossible. The Gauquelin data includes errors in the birth hours: many times are rounded to the hour, and midnight times are avoided. However, these errors in birth times cannot produce statistically significant results in eleven different studies. In Correlation 32(2): 69−74, there is a study by Vincent Godbout and me on the effects of the rounding errors in the Gauquelin data.

The following is a list of the claims the authors present. (1) The authors inform readers that Fisher’s meta-analysis method, which produces the combined p-value above, ‘is just a statistical game’. (2) The authors write that Fisher’s meta-analysis says nothing about their [the individual studies’] genuine significance. If this term means statistical significance in the single studies, this claim is wrong, since only the lawyers’ study was not statistically significant. (3) The authors write that the results should have been checked via Bayes Factors (see above). (4) The authors tell the reader that, for big data sets, the obtained p-values are ‘both unsurprising and unexceptional (range 0.05 to 0.001)’. I challenge the authors to play with eleven different non-astrological models in the Gauquelin data to observe whether they get a statistically significant result every time. (5) They complain that I did not divide the data to see whether the results replicated (see above). (6) The authors point to a small effect size in the synastry example (see above).

To make sure that the reader understands that my studies have been a total failure, the authors present two strawman arguments at the end of this section. One odd strawman argument concerns my ‘failure’ to solve the ‘serious problem’ of why so much data and computing power are needed in my ‘failed’ studies. The second irrational strawman argument is that my multiattribute approach does not explain why astrologers can get apparently correct readings from a wrong chart. After this, the authors state, ‘In other words, there is nothing here that contributes to what research requires for real progress’.

Speaking of progress, astrology researchers have made real advances in astrological concepts, techniques and interpretations, as summarised in The Astrological Journal, September/October 2018, Volume 60, No. 5, 40−47, and in Correlation, 33(2): 55−64, 2021.

The most remarkable progress is that three researchers have confirmed, in over ten statistical studies, that the tropical zodiac works better than the sidereal one in Western astrology. The strongest argument against astrology in the academic world has been that different zodiacs are used in the West and the East: in science, facts are the same all over the world. Now we have statistical confirmation of the tropical zodiac.

The authors refer to one old study concerning these two zodiacs, but they don’t mention these new studies confirming the tropical zodiac. Similarly, they do not mention other technical advancements in these two articles.

6. Dean’s time twin study 

I have earlier shown by simple simulations (Correlation, 32(1): 75−79, 2018) that the serial correlation method used by Dean in his time-twin study (pages 804 and 719) does not work for detecting astrological effects. When Dean’s serial correlation method was applied to realistic astrological effects, it could not discern them: in the simulations, the obtained correlation coefficient fluctuated near zero with big p-values.

Dean does not deny my earlier simulations, but he now presents a signal example (the rightmost in the figure on page 719) for which the serial correlation coefficient is 0.8. Taking into account the time interval of 4.8 minutes in the time-twin data, the signal example has a positive value at five time points during about 20 minutes; then, within ten minutes, it reaches a negative value; and then it returns to the positive value. No astrological factor behaves in this cyclical way; thus, this signal example does not prove that the serial correlation method would detect astrological effects.

Astrological factors are sometimes on, sometimes off. Among many simulations, I considered the case when Mercury is in the first house. People born during such a period may be expected to differ, on average, from those born without Mercury there. For the statistical study of these kinds of real astrological effects, the t-test is natural (when I mentioned that Dean did not use the t-test, I meant that he did not use it as the primary test). The simulations showed that the t-test worked well.
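The point can be illustrated with a simplified stand-in for those simulations (the block lengths, effect size and noise level here are assumed for illustration): an on/off factor shifts the group means, which the t-test detects easily, while the lag-1 serial correlation of the birth-ordered traits stays near zero.

```python
import math
import random
import statistics

random.seed(1)

N = 10_000
EFFECT = 0.3   # assumed small shift in the trait while the factor is "on"

# Births in time order; the factor switches on and off in blocks,
# mimicking a factor such as "Mercury in the first house".
states, traits = [], []
on = False
for _ in range(N):
    if random.random() < 0.02:            # occasional switch of the factor
        on = not on
    states.append(on)
    traits.append(random.gauss(EFFECT if on else 0.0, 1.0))

def lag1_corr(x):
    """Serial (lag-1) correlation between neighbouring births."""
    a, b = x[:-1], x[1:]
    ma, mb = statistics.fmean(a), statistics.fmean(b)
    cov = sum((p - ma) * (q - mb) for p, q in zip(a, b))
    return cov / math.sqrt(sum((p - ma) ** 2 for p in a) *
                           sum((q - mb) ** 2 for q in b))

# Two-sample t-test comparing "on" births with "off" births
# (normal approximation to the p-value, adequate at this sample size).
grp_on = [t for s, t in zip(states, traits) if s]
grp_off = [t for s, t in zip(states, traits) if not s]
t_stat = (statistics.fmean(grp_on) - statistics.fmean(grp_off)) / math.sqrt(
    statistics.variance(grp_on) / len(grp_on)
    + statistics.variance(grp_off) / len(grp_off))
p_t = math.erfc(abs(t_stat) / math.sqrt(2))   # two-sided p-value

r = lag1_corr(traits)
print(f"serial correlation r = {r:.3f}, t-test p = {p_t:.2g}")
```

In runs of this kind the serial correlation remains close to zero even though the group-mean difference is highly significant, which is the behaviour my simulations reported for realistic on/off astrological effects.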