The Astrological Association Journal of Research in Astrology

Oshop’s Responses to Criticisms in the Book, Understanding Astrology.

On pages 703 to 704 of the massive compendium Understanding Astrology: A critical review of a thousand empirical studies 1900-2020, the critics Dean, Mather, Nias, Smit, and Kelly were kind enough to consider some work of mine. In that work, I showed that the rate of misspellings in Amazon reviews peaks cyclically, matching the windows of time when Mercury is retrograde.

Comprehension of that material is hampered by the fact that it was the first in a series of disparate but linked online presentations, with marked, distinct phases over the course of a few years. The critics appear to have referred only to the initiation of the process. Nonetheless, I wish to show below that their brief analysis seems, at best, insincere.

I will start with the culminating criticism and work backward to the initial one.

Critique 1: “Simple expectation leads to self-fulfilling behavior.”

Response 1: Misspellings can be objectively measured even though they originate in the mind. The expectation that consistent, confounding motivations operate in the mind during the act of misspelling is quite speculative. See also response 4. Moreover, I necessarily relied on large, free, public bulk data. Is their use always off the table?

Critique 2: There is no survey of the literature.

Response 2: The reviewers refer only to the first wave of the experimental process, self-published on my personal website, and not to the culminating publications, which do include a literature survey and which are referenced in that same location. [3]

Critique 3: “Why are there no replications?”

Response 3: I too would love to see replications. The onus of producing them, however, is not usually placed on the principal investigator, especially in the initial stages of an investigation.

Critique 4: “A harmonic of the retrograde period would seem to implicate occasions when Mercury is not retrograde….”

Response 4: Yes, indeed it does. Fascinating, no? If one provisionally accepts critique 1 untested, fewer misspellings in non-retrograde times would imply dogged, assiduous, constant checking by every Amazon reviewer of that day’s Mercury retrograde status. Absurd.

Critique 5: “How do we know the effect has not contaminated subsequent years?”

Response 5: In subsequent studies, I used the full 14.5 years of data released by Amazon and found the effect repeated throughout. [1, 2, 3] In the first wave of investigation, I was limited by the computing power available for such a large database and elected to test only the most recent five years.

Critique 6: “What proportion of reviewers might have known about Mercury retrograde…?”

Response 6: This is simply a rephrasing of critique 1.

Critique 7: What about weekend and seasonal effects?

Response 7: In the data, one does clearly see changes in the volume of reviews seasonally (e.g., over the Western winter holiday season) and weekly (e.g., over the weekend). That is why, even in the early study referenced in the book, I considered instead the average rate of misspellings on a given day. [4]

In these average rates, the Fourier transform does not show seasonal or weekly peaks relative to the strong fundamental wave matching Mercury retrogrades.
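As an illustration of this kind of check (a minimal sketch only; the data here are synthetic, with a ~116-day cycle injected, and none of the variable names come from the original study), one can compute a per-day misspelling rate, so that swings in review volume do not masquerade as swings in misspelling frequency, and then inspect its Fourier spectrum:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic daily data: five years of reviews with a ~116-day cycle
# (roughly Mercury's synodic period) injected into the misspelling rate.
n_days = 5 * 365
t = np.arange(n_days)
base_rate = 0.02 + 0.005 * np.sin(2 * np.pi * t / 116.0)
reviews_per_day = rng.integers(5000, 15000, size=n_days)
misspelled = rng.binomial(reviews_per_day, base_rate)

# Average rate per day: dividing by daily volume removes weekly and
# seasonal swings in the *number* of reviews.
rate = misspelled / reviews_per_day

# Fourier spectrum of the mean-removed rate series.
spectrum = np.abs(np.fft.rfft(rate - rate.mean()))
freqs = np.fft.rfftfreq(n_days, d=1.0)  # cycles per day

# Locate the dominant nonzero-frequency peak.
peak_period = 1.0 / freqs[1:][np.argmax(spectrum[1:])]
print(f"Dominant period: {peak_period:.1f} days")
```

With the injected cycle, the dominant spectral peak lands near the ~116-day period rather than at weekly or annual frequencies.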


Importantly, refer to the correlogram of the data, which depicts this fact more clearly and quickly. [2]

[Figure: correlogram of the misspelling-rate data (Oshop)]
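A correlogram plots the autocorrelation of a series against lag; a genuinely periodic series shows a strong local maximum near one full period. A minimal sketch of how such a correlogram is computed, on synthetic data with a ~116-day cycle (not the study's data), might look like this:

```python
import numpy as np

rng = np.random.default_rng(1)
n_days = 5 * 365
t = np.arange(n_days)

# Synthetic rate series: a ~116-day cycle plus noise.
rate = 0.005 * np.sin(2 * np.pi * t / 116.0) + rng.normal(0, 0.002, n_days)

# Sample autocorrelation for lags 0..400 days (a simple correlogram).
x = rate - rate.mean()
acf = np.array([np.sum(x[: n_days - k] * x[k:]) for k in range(401)])
acf /= acf[0]  # normalize so acf[0] == 1

# The first major peak (searched away from lag 0) sits near the period.
lag_peak = 50 + np.argmax(acf[50:200])
print(f"First major ACF peak near lag {lag_peak} days")
```

Plotting `acf` against lag gives the correlogram itself; regularly spaced peaks at multiples of ~116 days are the visual signature of the cycle.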

Critique 8: “Why does the data in the second half plunge…?”

Response 8: That is a trend. It is not usually the responsibility of time-series analysis to explain a trend, only to describe it mathematically, which is done at every stage of my publications. (I would speculate that the downward trend may stem from the increased capability of online spell-checkers over the 14.5 years, 1999 to 2014, covered by the data in the study.)

Critique 9: Why not use t-tests?

Response 9: T-tests may not be useful here because the data are skewed (nonnormal) and the variances of the two groups (Mercury-retrograde periods vs. direct periods) are significantly unequal. At a minimum, detrending was needed to check the data's suitability for t-tests, and the objection to the use of detrending is wrapped into critique 11, so there is some inconsistency within the critics’ thinking.

Critique 10: What about careful spelling of strange words? What about English vs. American misspellings?

Response 10: Again, in the subsequent but linked final publication, this was neatly resolved by selecting, from the random sample, only the 100 most common explicitly wrong words, that is, words not found in the computerized dictionary, a technical choice that obviates these concerns. [3]

The word cloud in that referenced final publication shows the complete list of these words. They include such classics as “recomend,” “definately,” and “recieved.” None of these 100 is subject to English vs. American variants, as you can see for yourself. Any instances of special spelling (e.g., due to a hypothetical brand name that happens to contain exactly the misspelled word) are arguably swamped by legitimate misspellings of these 100 wrong words across millions of reviews.
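The selection step described above can be sketched in a few lines. This is a toy illustration only: the mini-dictionary and sample reviews below are invented, whereas the actual study used a full computerized dictionary and a random sample of millions of reviews.

```python
import re
from collections import Counter

# Hypothetical mini-dictionary and review sample (illustration only).
dictionary = {"received", "definitely", "recommend", "the", "i", "it",
              "book", "good"}
reviews = [
    "I definately recomend it",
    "recieved the book, definately good",
    "I recieved it",
]

# Count every lowercase word that is absent from the dictionary.
counts = Counter(
    word
    for review in reviews
    for word in re.findall(r"[a-z]+", review.lower())
    if word not in dictionary
)

# Keep only the most common out-of-dictionary words (top 100 in the study).
top_words = [w for w, _ in counts.most_common(100)]
print(top_words)
```

Restricting the analysis to a fixed list of high-frequency, unambiguously wrong words is what removes rare or regionally variant spellings from consideration.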

Critique 11: The use of a moving average

Response 11: The moving average was used to detrend the data, a technique so common it has its own acronym (DMA, detrending moving average). [5] Such detrended data are indeed used in Fourier analysis, as I have done. [6]
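The procedure can be sketched as follows (synthetic data, not the study's; the window length and cycle period are illustrative assumptions): subtract a centered moving average to remove the slow trend, then run the Fourier analysis on the residual, where the cycle, not the trend, dominates the spectrum.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5 * 365
t = np.arange(n)

# Synthetic series: slow downward trend, a ~116-day cycle, and noise.
series = (0.03 - 1e-5 * t
          + 0.004 * np.sin(2 * np.pi * t / 116.0)
          + rng.normal(0, 0.002, n))

# Detrend by subtracting a centered moving average (window ~ one year).
window = 365
kernel = np.ones(window) / window
trend = np.convolve(series, kernel, mode="same")
detrended = (series - trend)[window // 2 : -(window // 2)]  # drop edges

# Fourier analysis of the detrended residual.
spectrum = np.abs(np.fft.rfft(detrended - detrended.mean()))
freqs = np.fft.rfftfreq(len(detrended), d=1.0)
peak_period = 1.0 / freqs[1:][np.argmax(spectrum[1:])]
print(f"Dominant period after detrending: {peak_period:.1f} days")
```

A centered moving average reproduces a linear trend exactly in the interior of the series, so subtracting it leaves the cyclic component and noise while removing the drift that would otherwise swamp the low-frequency end of the spectrum.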

Summary: The critics favored reviewing the early throes of the study over the culminating publication, in doing so bypassed other published stages, and also skipped over prior final publications of other strong studies. [7] This suggests to me an agenda: an ironic, carefully curated cherry-picking to suit a prima facie foregone conclusion, one stated in the first pages of their efforts: “Astrology is a war zone” and “Coping with confusion.”

As Maya Angelou wisely put it, “When someone shows you who they are, believe them the first time.”