What’s New?

Geocentrism Debunked is your “one stop shop” for arguments debunking neo-geocentrism, the belief that the Earth is the motionless center of the entire universe.

So what’s new? (Updated 07/31/2020)

  • “When Is an Apple Not an Apple?” by Dr. Alec MacAndrew is a critique of Dr. Wolfgang Smith’s attempt to explain the enigmas of quantum mechanics.

The Rest of the Site:

If you’re interested in what sort of folks believe in geocentrism and whether the leaders in this group seem qualified and trustworthy to be advancing such a novel view, be sure to check out Some Background on the New Geocentrists.

Here are some additional titles that highlight their fundamental lack of competence:

The new geocentrists claim that their views are compatible with modern scientific observation and they certainly use a lot of sciency talk, but as the articles outlined in Geocentrism as Bad Science show, it’s all smoke and mirrors.  And here are some more articles to fill in just how bad this so-called “science” really is:

Now for those who know much about the movement, it’s clear that the vast majority of neo-geocentrists hold the view on account of their theological and biblical convictions; the science then gets shoehorned in later.  But strict geocentrism is just as bad and wrong-headed from a theological and biblical perspective, as laid out in the articles summarized in Geocentrism as Bad Theology.  Some articles focusing on specific details are:


When Is an Apple Not an Apple?

Thoughts on Wolfgang Smith’s Metaphysical Approach to Quantum Physics

by Dr. Alec MacAndrew

Wolfgang Smith has initiated a collaboration with Rick DeLano culminating in their planned 2019 film The End of Quantum Reality. As I understand it, much of the film has its roots in Dr Smith’s earlier works on the intersection between physics, metaphysics and religion, such as his 1995 work (reissued and updated in 2005), The Quantum Enigma. Smith also espouses geocentric, Young Earth Creationist and other pseudoscience views, not to mention bizarre and syncretic ideas such as numerology, astrology, and Hindu esoterism, but I intend to concentrate in this piece on his metaphysical approach to quantum physics, as set out most comprehensively in The Quantum Enigma. He bases the quantum physics aspects of his 2019 book, Physics and Vertical Causation: The End of Quantum Reality on the earlier work, to which he adds the geocentric, YEC and anti-relativity views.

Whereas many of the neo-geocentrists are sadly unequipped to deal with the material they address, Dr Smith was educated in physics and mathematics, has published papers on mathematics, and has been on staff in some capacity at one or two good universities. Therefore, his opinion on the meaning of quantum mechanics at the most profound level is, at least on the face of it, worthy of consideration.

Cartesian Dualism and Modern Physics

In The Quantum Enigma, Smith presents what is, at root, a teleological thesis based on an idiosyncratic interpretation of quantum mechanics. Smith believes that the world started to go wrong with Galileo, and that since then the ancient wisdom, the hylomorphic concept and the connection with the transcendent have been lost[1]. He charges Newton, Descartes, Darwin and Einstein with these crimes, but he reserves his strongest vitriol for Descartes, and particularly for Cartesian mind-body dualism, or the ‘bifurcation’ of reality. He lays the foundation for the rest of the book with an exposition of what he believes to be this fundamental misconception in modern thinking. As we shall see, his solution introduces a fundamental and unwarranted bifurcation of reality in its own right.

According to Smith, moderns follow the Cartesian error, which is to separate reality into mind and matter, the internal and external reality, res cogitans and res extensa. Cartesian philosophy implies that we cannot truly know the world, because we are trapped in our minds and we can never know whether the impressions our minds receive via our senses are true reflections of the external world (or indeed whether there is an external world at all). Nor can we ever know whether the impressions one person receives correspond to another’s, ultimately a question of what modern philosophers call qualia, the experience of qualities in the external world.

Smith never properly explains how accepting the idea of mind-body dualism (the Cartesian assumption as he calls it), either consciously or unconsciously, leads to all the errors of modern thinking that he claims to exorcise, and how abandoning it allows the scales to fall from our eyes and all that is paradoxical in quantum mechanics to become clear. Smith’s inability or unwillingness to set out his case clearly against “the Cartesian assumption” is frustrating. We see that he is railing against something, but he identifies neither the thing nor the rationale for his opposition, nor what benefits would accrue from giving it up. His chronic lack of clarity on this point dates back at least to the 1995 edition of The Quantum Enigma and is still present in the 2019 Physics and Vertical Causation. In fact, so far as the latter work goes, a reader attempting to understand Smith for the first time with no exposure to his earlier works will struggle to discern a coherent train of thought that commences with the fact of Descartes’s philosophy and concludes with Smith’s assertion that eliminating it with the help of quantum mechanics would solve any number of philosophical and scientific puzzles faced by modern thinkers.

Of course, one recognises the controversy in modern philosophy regarding Cartesian dualism, expressed in various ways by critics such as A N Whitehead, David Griffin and Paul Churchland. Smith conjures with the name of Whitehead[2] without describing or otherwise engaging with Whitehead’s process philosophy or exploring how Whitehead proposed to resolve the question of Cartesian dualism. For Smith, Whitehead is more valuable as a stick with which to beat Descartes than as a philosopher whose ideas have value in their own right. Of Griffin and Churchland, there is no mention. So, although substance dualism lacks neither critics nor difficulties, which we are free to explore for ourselves, we are bound to read Smith as he writes, and, despite castigating “the Cartesian error”, he is vague in The Quantum Enigma, to the point of incoherence, about the philosophical problems that he believes are entailed by Cartesian thinking.

For example, one struggles to see how the phenomenology of Husserl, whose name Smith also wields like a magic wand, can be invoked as a talisman against Cartesian thinking, since Husserl takes Descartes’s methods as a springboard and creates an entire philosophy of phenomena from first principles, much as Descartes created an epistemological philosophy from first principles[3]. Perhaps Husserl can be regarded as having created a devastating critique of “the Cartesian premises”, and perhaps not. Smith shines no light on the question. And this characteristic of Smith, rarely to engage properly with others and never to engage with those who raise difficulties for his thesis, pervades all his work.

On this matter of Cartesian dualism and its relevance to the interpretation of quantum mechanics, Smith follows, to a great extent, the argument set out by Werner Heisenberg in his 1962 book, Physics and Philosophy[4]. The differences between them are that Heisenberg’s exposition of the issue is infinitely clearer and better constructed than Smith’s; that Heisenberg examines and finds wanting the Cartesian proposition of the separation between the observer, the I, and the rest of the world, rather than the mind-body, res cogitans/res extensa dualism which is its consequence; that Heisenberg concludes that the empirically determined facts of quantum mechanics undermine the idea of an objectively existing external res extensa that can be known fully and without ambiguity; and that Heisenberg, unlike Smith, is clear that abandoning Cartesian thinking does not, to any extent, resolve the fundamental paradoxes of quantum mechanics.

It turns out, anyway, that Smith is tilting at windmills, because Cartesian dualism is far from being the predominant paradigm amongst modern physicists and philosophers. Amongst the scientific community, realism and monism prevail. Realism is the doctrine that our senses, with or without the help of instrumentation, give us a more or less accurate representation of a world that exists in reality and independently of our perceiving it, and that truth is the correspondence between any proposition or model in question and reality. Unless one believes that both direct perception by the senses and measurement using instruments provide knowledge about the state of reality, it seems pointless to do physics at all. Certainly, it would be hard, if not impossible, to justify a solipsistic physics, and so scientists overwhelmingly subscribe to realism rather than idealism.

Monism, as conceived by modern scientists and philosophers, stands opposed to Cartesian dualism and holds that mind and body are not separate entities. A popular version of monism, which relies on the concept of emergence, describes mind as an emergent property or process of body, or more precisely of brain (the philosophical stance known as physicalism). According to this hypothesis, mind is what brain does. Smith seems to think that modern scientists increasingly hold that mind is quite separate from matter[5], quoting the neurosurgeon Wilder Penfield. In any case, he fails to acknowledge that much expert neuroscientific opinion does not now support Penfield or the dualism hypothesis, and the evidence is extensive and growing that neural activity suffices to account for the activity of the mind, based on the observation of strict supervenience, the effects of neurosurgery and brain pathology, and other neuroscientific evidence. His belief that physicists are overwhelmingly in thrall to Cartesian dualism cannot possibly arise from any actual acquaintance with people working in this field. One wonders how he arrived at the idea that Cartesian thinking has modern physics and philosophy in its grip. He can have reached that conclusion only with the help of dated and secondary sources, Whitehead and Heisenberg perhaps, as he certainly has not tested it for himself.

Although most working scientists hold realism, emergentism and monism pragmatically, without examining them critically or thinking through the implications of their stance, nevertheless there exist well-argued and well-developed philosophical grounds for them. Realism in the philosophy of science has been explored by Russell, Wittgenstein and Popper. Neutral monism, a metaphysical school based on the work of Mach, James and Russell, is continued today by, for example, the work of Chalmers, of Ladyman and Ross, and especially of Erik Banks, who has developed an updated version of neutral monism that he calls realistic empiricism. Emergentism has a history that dates back as far as John Stuart Mill. More recently, C. D. Broad is widely credited with developing the standard, modern description of the position. Various forms of emergentism are currently being proposed and vigorously debated by metaphysicians such as Kim, McLaughlin, O’Connor and Humphreys. This is not the occasion for a discussion of these ideas in detail; it is sufficient to note that this extensive literature exists and that Smith does not refer to any of these influential thinkers; in fact, he does not even acknowledge the existence of these schools, which are obviously opposed to his assertion that substance dualism prevails in modern thinking. Let me be clear that I am not seeking to take sides in the dualism-monism debate, or to defend either substance dualism or monism here. It is not a matter of which of these alternatives in all their variations is true – that is not the point. It is enough to realise that the claim that Cartesian dualism currently prevails with philosophers and scientists is simply untrue. So, it transpires that Smith’s entire critique of “the Cartesian assumption” is uncalled for and misdirected, leaving the reader with a powerful sense that the first two chapters of the book have gone awry.[6]

I note in passing that, on the question of dualism, he seems to be arguing against himself, revealing a surprisingly muddled way of thinking about the problem of the processes that enable perception. Having examined the “ghost in the machine” or the “Cartesian theatre” doctrine of visual perception, the idea that there is a separate mind which observes an image in the brain produced by the visual neuro-system, and having rightly found that concept wanting, he concludes, “…the missing piece of the puzzle must be strange. Call it mind or spirit or what you will…”[7], thereby coming full circle and plumping for a solution that is indistinguishable from mind-body substance dualism of a distinctly Cartesian kind.

The Physical and the Corporeal

Smith then turns his attention to the concepts of quality and quantity. He characterises the quality of objects as being those attributes which are directly perceived, such as colour, smell and so on. Conversely, a quantitative attribute is one that can be measured, the measurement yielding a mathematical, or at least numerical, result, such as the reading of a pointer on a scale. (In discussing this point, Smith conflates the concepts of mass and weight, as defined within physics, a surprising thing for a “physicist” to do. His claim that mass is a contextual attribute is also wrong – non-relativistic mass is an invariant and non-contextual attribute of an object.) Qualities are not measurable and must be directly perceived. He then proceeds, at some length, to propose that quantity is an attribute of physical objects, and that these are the proper and only objects studied by physics, whereas qualities are the properties of what he calls corporeal objects and can be directly perceived through an intellective and, in his view, momentous act. To use his jargon, the intellective faculty perceives reality instantly and directly, as opposed to the rational mind which reasons from data to conclusions about reality. It is important to recognise that the term ‘perception’ extends to all the senses and is not limited to visual perception, although the bulk, if not all, of Smith’s examples turn on vision.

This distinction, between the physical and the corporeal, is, in his scheme, a categorical one, his claim being that physical and corporeal objects are ontologically distinct. So far as a single object goes, for example an apple, there is both a physical object (which he denotes by ‘SX’) and a corporeal object (which he denotes by ‘X’) which are correlated, in that the corporeal object is the manifestation or presentation of the physical object on a higher ontological plane. Physics exclusively studies physical objects and their quantities and is incapable of discerning qualities which pertain only to corporeal objects. He associates the physical with Aristotle’s matter and the Scholastics’ substance; and the corporeal with the matter given form, the substantial form according to the hylomorphic principle. His claim is that physics is incapable of acknowledging or studying qualities and can therefore know nothing of the essence or “quiddity” of a thing, which is manifested in its qualities. It cannot tell us what a thing is because its essence and its qualities are outside the scope of physics. For Smith, therefore, true higher knowledge of a thing can be gained only by consideration of its qualities by direct perception without the aid of instruments[8]; physics can only access quantities and is limited to consideration of the substance or the matter of things, their essence and form being inaccessible and invisible to physical methods. Smith’s distinction between the physical and the corporeal, between quantity and quality, is crucial for his scheme, as he goes on to argue that the collapse of the quantum mechanical wave function, which I will discuss below, is nothing other than the physical becoming manifest in the corporeal plane, and so proceeding from potency to act.

Does this distinction between ‘physical’ and ‘corporeal’, as defined by Smith, stand up to scrutiny? Well, he does rather gloss over the point, not by devoting insufficient words to it – words there are aplenty – but by failing to develop his proposition with clear, unambiguous definitions and with lucid illustrations that generalise his point. He is obviously happier to exploit the distinction than to establish its existence in the first place. His muddle can be illustrated by reference to his own examples of which attributes constitute quantities and which constitute qualities.

For him a physical attribute is anything quantifiable and measurable by physicists, in every instance by using an instrument, even if it is merely a tape measure. To illustrate this definition he offers, as an example, the attribute of mass. Physicists measure mass, he says, by placing an object on a set of scales, the necessary instrument, and by reading a pointer. (Actually, in this example what is being measured is weight, and mass is inferred from Newton’s second law.) Weight is not directly perceptible, or so he claims. But surely we can directly perceive the difference between the weight, and hence the mass, of a feather and of a cannonball, although our direct perception of weight is not quantitatively precise (as all direct perceptions are, to a greater or lesser degree, imprecise). Indeed, our ability to discriminate between different weights becomes better when the two objects in question can be compared side by side. Because of the possibility of direct perception of weight by hefting an object, using the kinaesthetic and other senses, then, by his own definition, mass must be a quality as well as a quantity[9].
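To spell out the inference (elementary physics, nothing specific to Smith): the scales actually register the gravitational force on the object, its weight, and the mass follows by dividing by the local gravitational acceleration.

```latex
% The scale reads the weight W (a force); the mass follows from Newton's second law
% applied to free fall, with g \approx 9.81\,\mathrm{m\,s^{-2}} near the Earth's surface.
W = m\,g \qquad\Longrightarrow\qquad m = \frac{W}{g}
```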

When a baker is preparing bread, he might weigh the ingredients, flour and water, with kitchen scales, an instrument. An experienced baker could equally well judge the right weight of the ingredients directly, kinaesthetically, without using scales. According to Smith’s proposal, in the first case the weight of flour and water is measured and quantitative and thus a physical attribute, and in the second case the weight is directly perceived and qualitative and thus a corporeal attribute, and the two are ontologically separate. On the face of it, this is ridiculous. So weight is an attribute which can be measured, and which can be directly perceived.

His example of a quality is colour, specifically that of a red apple. He states quite clearly that a colour is a quality because

“…redness…unlike mass, is not something to be deduced from pointer readings, but something, rather, that is directly perceived. It cannot be quantified, therefore, or entered into a mathematical formula, and consequently cannot be conceived as a mathematical invariant.”

Well, redness can obviously be directly perceived and distinguished from other colours, provided one is not colour blind. Even for a “healthy” observer, where red shades into orange, there might be some dispute as to whether the object is better described as red or orange; where the red is not fully saturated, the disagreement might be between red and pink; whereas for objects that reflect blue as well as red light, the discussion might be whether the object is red or purple (magenta). So, redness is not a single trivially perceived quality, but it merges imperceptibly (and sometimes controversially) into other colour qualities, such as pink, orange, and purple. The reason for these gradual transitions is that objective colours lie on a continuum[10], as do the wavelengths of light which correspond to the colours. As in the case of comparing the masses of objects, the colours of objects can be more easily discriminated by direct perception if they can be compared side by side. And crucially, and in direct contradiction to his statement above, the colour of an object can be more accurately determined by quantifying it with an instrument and deducing it from pointer readings, or their equivalent.

The colour of an object can be accurately determined by a spectrophotometer, which measures the intensity of light reflected from it as a function of wavelength and presents the result as a (Cartesian) graph of reflectivity versus wavelength. The perception of colour is contextual – a white object will appear red in a red light, but here again, by measuring the reflected light, the instrument will accurately determine the colour of that object in that context. Colour is directly correlated with wavelength(s) of electromagnetic radiation: for example, a 633nm laser produces red light, a 442nm laser produces deep blue and a 528nm laser green. Photographers know how important colour calibration is throughout the workflow in order to ensure that the photographic print is as faithful a copy of the original scene as possible. Photographers, and digital photographic equipment, use various quantitative schemes to represent colour, including RGB (red-green-blue) and HSL (hue-saturation-luminance) values. I can say the apple is dark red, or I can say it is #7d122b in hex RGB. The latter is much more precise than the former, but both describe the objective attribute of the apple’s colour.
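To make the quantitative handling of colour concrete, here is a minimal Python sketch of my own; it merely unpacks and repacks the hex RGB value quoted above, to show that the colour specification is nothing but three numbers.

```python
# Minimal sketch: a colour handled as pure quantity.
# The hex value is the one quoted in the text; it is not a measurement of any particular apple.

def hex_to_rgb(hex_code):
    """Convert a '#rrggbb' string to an (R, G, B) triple of 0-255 integers."""
    h = hex_code.lstrip("#")
    return tuple(int(h[i:i + 2], 16) for i in (0, 2, 4))

def rgb_to_hex(r, g, b):
    """Convert an (R, G, B) triple back to a '#rrggbb' string."""
    return f"#{r:02x}{g:02x}{b:02x}"

dark_red = hex_to_rgb("#7d122b")
print(dark_red)               # (125, 18, 43): three numbers, nothing more
print(rgb_to_hex(*dark_red))  # round-trips to '#7d122b'
```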

In one sense, colour can be regarded as an objective attribute, that is, it belongs to the object. Colour is also perceived and so, in this sense, can be regarded as subjective. There is an obvious correlation between the two aspects. In the human colour vision pathway, different colours stimulate the three types of cone receptor to different extents, and the absence of one or more types of cone causes defective colour vision, manifested as an inability to discriminate colours (most commonly red and green) which are clearly differentiated by observers possessing all three types of cone. The effective quantification of colour within the human visual system is therefore seen to be a crucial part of the process of visual perception, which culminates in the brain state we refer to as red or whatever colour we perceive. According to this perspective, colour is quantified not just with instrumentation, but within the visual system as an essential feature of colour perception. The colour perceived therefore depends on the objective colour attributes of the thing perceived, and on the physics and biology of the visual pathway; and is analogous in this respect to the perception of any objective qualitative attribute via the relevant sensory pathway. Nevertheless, the objective colour, the attribute possessed by the object, is not affected by the inability of someone with defective vision to perceive it “correctly”, and, in fact, we can be sure that someone has defective vision if they are unable to distinguish quantifiably distinct colours that can be distinguished by those with normal vision; and moreover, defective vision diagnosed by testing is usually found to be caused by an underlying physiological defect, in this case the lack of a type of cone receptor. Because of the possibility of measuring and quantifying objective colour, then, by Smith’s own definition, the attribute of colour must be a quantity as well as a quality.

It seems, from the foregoing, that the cases of mass and colour are equivalent. Mass can be measured quantitatively but can also be directly perceived. Colour can be perceived directly but can also be measured and quantified. The critical distinction that is at the foundation of Smith’s thesis is therefore a distinction without a difference, on the grounds of his own examples. In fact, we are done here, having reached page 12 of a book of more than 100 pages – if the foundation is rotten the building cannot stand.

It is true, of course, that there are real attributes of objects, which, as humans with our human sensory apparatus, we cannot directly perceive. Examples include the magnetisation and electrostatic charge of objects, which can be observed with instruments, but which we cannot sense directly. This limitation, however, lies not in the ontological status of the attributes, but in the sensory apparatus that has evolved in human beings. Other creatures do have the ability to perceive these attributes directly. For example, homing pigeons can perceive the direction of the Earth’s magnetic vector and use this percept to navigate. Many species of fish, as well as the platypus, directly perceive electric fields, and the perturbation in electric fields caused by the presence of objects. So, the issue turns out to be related purely to the available sensory apparatus of the organism, rather than to a fundamental ontological difference between the different sorts of attribute. Moreover, Smith’s essential claim is that qualitative attributes cannot be studied by physical methods, and it is this limitation of physics which he puts forward to uphold his view that corporeal objects, which, according to him, alone possess qualitative attributes, are on a different and higher ontological plane from that plane which is studied by physics. However, we have seen that the idea that physics cannot study qualities is wrong – physics is able to study any objective attribute, colour, timbre, taste, smell, tactile feel and so on, that he asserts to be purely qualitative.

Let us consider a potential criticism of my analysis above. It is too facile, too simple to be taken seriously, the critique goes. I am making a category error in claiming that redness can be measured and quantified, because redness is a subjective property, it is, in modern parlance, a quale, the experience of redness, and no-one knows in detail how qualia arise and whether one person’s experience is similar to or corresponds to another’s. However, that is not a criticism that Smith or the Smithians can raise, because to do so would be to fall headlong into the philosophy they deplore, the Cartesian pit. When Smith discusses quantities and qualities, we can take it that he is referring to objective attributes, attributes that belong to the object, which can be, but need not be, perceived, and which persist in the absence of perception. In fact, he explicitly states this to be so. He writes: “So far as objectivity and observer-independence are concerned, therefore, the case for mass and for color stand equally well; both attributes are in fact objective and observer-independent in the strongest conceivable sense.”[11] So, introducing the complication of qualia, the ineffable personal experience of attributes, or any other observer-related consideration, makes no difference to my argument above.

Next, let us look at colour from a scientist’s point of view, and see how a scientist’s perspective of, say, a red apple informs us about its attributes, and what that knowledge tells us about the essence of the thing. Of course, the scientist can perceive the colour of the apple directly just like the next person, so that quality is obviously available to him along with its aesthetic and symbolic associations in all their richness, but he is able to see much more. He understands, for example, that the colour arises from the reflection of a particular part of the visual electromagnetic spectrum (or to put the inverse case, by the absorption of the part of the spectrum other than red), by certain pigments in the skin of the apple. The pigments in question are anthocyanins, which appear in many varieties of ripe fruit (and incidentally in the leaves of some deciduous plants in the autumn, giving them that rich New England fall colour). The scientist knows the molecular structure of various anthocyanins in detail and understands how the interaction of that structure with white light results in the reflection of red light and the absorption of the other colours by resonance at the absorbed frequencies. He knows that different anthocyanins have somewhat different structures, resulting in electronic resonances which cause the pigments to reflect different shades of red and purple. He understands that the perceived colour is also a function of the concentration of the pigment molecules in the skin of the fruit. He understands the biochemical pathways that the plant uses to make anthocyanins, and he even understands how DNA, which is active in every cell of every apple tree, dictates the production of different anthocyanins in different strains of fruit, and which genes are responsible. He knows that fruit has evolved to aid seed dispersal and that the colour of ripe fruit has evolved as a signal to animals (and man) that the fruit is ripe, thereby optimising the seed dispersal. He understands that anthocyanin synthesis pathways originally evolved in plants to protect them from light-induced damage at various stages of their growth, so it seems that the protective pigment was co-opted to a new signalling purpose later in evolution. It seems incontestable that understanding these other aspects of the redness of apples can only add a deeper and richer knowledge of the apple to that provided by direct perception of its qualities, and that this additional knowledge tells us more about the process, the end and the essence of an apple; that is, how it works, what it is for, and what it is.

Richard Feynman eloquently made this point about flowers[12], and Richard Dawkins’s Unweaving the Rainbow[13] is an extended essay on how science enriches rather than impoverishes one’s relationship to and understanding of the natural world (the title is an ironic quotation from Keats’s poem Lamia, where it is claimed that Newton’s discovery of how a rainbow is formed robs it of its mystery: “Philosophy will clip an Angel’s wings, Conquer all mysteries by rule and line,  Empty the haunted air, and gnomed mine – Unweave a rainbow”. This sort of misapprehension also seems to have been at the root of Goethe’s demonstrably erroneous theory of optics, which was conceived as a crusade against Newtonian optics).

In a passage critical for Smith’s distinction between corporeal and physical objects, he tells the parable of the billiard ball[14]. The billiard ball that we perceive is a corporeal object, X, he claims, and associated with it there is a physical object, SX, which can be represented by, for example, “a rigid physical sphere of constant density” (by “constant density”, I take it that he means “uniform density” which means something quite different). He continues: “The crucial point, in any case, is that X and SX are not the same thing. The two are in fact as different as night and day, for it happens that X is perceptible, while SX is not.” In order to prove this point, he attempts to show that SX, the physical object, is not perceptible, because “It can also be represented in many other ways. For instance as an elastic sphere…”. He goes on to make a further claim, that the physical object is not perceptible because it is composed of atoms or subatomic particles, and collections of atoms cannot be perceived. He offers no evidence or argument for the latter statement but states it as a truism, obviously intending the reader to take it as self-evident. What we perceive, he insists, is an object, the identity of which is indisputable, a red or green billiard ball (and it seems that here, and elsewhere, he limits perception illegitimately to the visual sense).

But this is to beg the question: before Smith can characterise corporeal and physical objects and discuss the distinction between them, he must show that there are, in reality, two associated but ontologically distinct objects, a task which he shirks. For when we perceive a billiard ball, we know it as such, precisely because it is, and is perceived to be, within practical limits, a smooth spherical ball, of a certain uniform density, with a certain hardness and elasticity, and with a certain diameter. In other words, in modal terms, these attributes, shape, density, uniformity, size are essential rather than accidental properties of the billiard ball. A ball that is not spherical, or that has non-uniform density, or is 3mm in diameter, or is made of sponge or iron, is not a billiard ball at all and a person familiar with billiard balls would not perceive it to be one. In order to perceive the ball as a billiard ball, with an identity beyond dispute, one must perceive that it possesses these essential attributes. The attributes, sphericity, density, uniformity, size do not constitute a separate object on a different ontological plane – they are essential attributes of the one perceptible object. Nor are they objects in their own right, as Smith seems to treat the rigid sphere. As for his diktat that collections of atoms are not perceptible, one reflects that everything we perceive is, indeed, a collection of atoms, and that therefore collections of atoms are, after all, perceptible as the object they constitute, just as collections of grains of sand are perceptible as a beach. For all the bluster of the parable of the billiard ball, what is left is a bare assertion, decorated with the conjuring words corporeal and physical, amounting to no more than a magical spell or incantation which evidently holds his disciples in thrall.

It seems that Smith was influenced, in his idea that the physical sciences can access only quantity, corresponding to substance, by the French esotericist and metaphysician René Guénon[15]. Guénon was one of the founding triumvirate of the philosophia perennis school, along with Ananda Coomaraswamy and Frithjof Schuon. He is known as an arch-Traditionalist, hermeticist, gnostic, freemason, Sufi, symbolist and numerologist, whose over-arching notion was that the world goes repeatedly through a multi-millennial cycle and has just now reached the lowest point in the aftermath of the Age of Reason and the Enlightenment. Guénon sneers at what he calls “profane science”, apparently a degenerate residue of the “ancient traditional sciences”, and at “profane arithmetic” and geometry in the modern sense. His sacred geometry is the foundation of arcane symbolism, his sacred number science is nothing more than rank numerology, and his ancient traditional sciences include astrology, alchemy and other activities associated with hermeticism and occultism. This is all mumbo-jumbo, hardly worthy of serious consideration. To the extent that he relies on it, it taints Smith’s argument[16]. But then Smith is foremost an esoteric Traditionalist in the mould of Guénon and only secondarily a Christian philosopher interpreting Thomism for the 21st century – it is unsurprising to discover that he subscribes to and incorporates into his work much of this nonsense[17].

So, Smith’s argument for his imagined distinction between the physical object and the corporeal object, such as it is, turns out to be illusory. There can be no ontological distinction between these objects, because there is only one object, to which his own examples attest, and his idea that a corporeal object is the presentation of a physical object on a higher and distinct ontological plane is seen to be a mere fancy, a superfluous bifurcation of his own. In every case, including the examples proffered by Smith, there is only one object, with attributes of many kinds, some of which we perceive directly through our senses, some of which we observe and measure with instruments, and some of which, including his own examples of mass (or weight) and colour, we can directly perceive and measure with instruments. The exact correlation between our direct sense data and the results of measurement of any attribute confirms that our senses and instruments are accessing the same attribute of an ontologically single object. We kinaesthetically perceive feathers to be light and cannonballs to be heavy and this is borne out by the position of the pointer on the scales. We see a red object and the spectral peak of its reflected light lies between 625nm and 740nm. So, the scientists’ instruments can be regarded as tools that extend the reach of the senses, much as tools of manipulation, levers, knives, saws, hammers and so on extend the reach of the arms, with neither category of tool conferring any special ontological status onto the objects on which they act.

Introducing the Quantum World

How can we get to the crux of Smith’s book if it falls at such an early hurdle? Could it be argued that the physical is distinguished from the corporeal object not because the physical object is devoid of qualities and is pure quantity, as Smith would have it, but by some other criterion, such as the size of the object? Might this be a way to save Smith’s project? Could it be that the critical distinction that Smith is searching for is between the quantum and the macroscopic world? In Physics and Philosophy, Heisenberg explores this distinction and proposes that the relationship of the quantum domain to the macroscopic domain is very like the Aristotelian relationship of potentiae to actualities[18].

Following Heisenberg, can we identify the macroscopic world with Smith’s corporeal plane, and quantum objects with the physical plane? With some reservations, this might be a valid distinction. It is, at least, worth exploring. Of course, Smith does not accept that the physical and the corporeal can be classified in this way, insisting that the distinction between physical and corporeal extends to macroscopic objects, that there exists for every directly perceived corporeal object X, a physical object SX, the potential not-thing that is studied by physics.[19] He quarrels with Heisenberg’s conflation of X and SX for macroscopic objects, or rather, Heisenberg’s implicit rejection of the existence of SX. But we have seen that his definitions of physical and corporeal, which encompass macroscopic objects, as well as quantum objects, are incoherent, so I am bound to find a definition that makes some sort of sense if we are to continue to follow his argument.

The chief reservation in accepting this amended definition of the physical and corporeal is that there is no distinct size-related boundary on one side of which objects behave as quantum objects, and on the other side as macroscopic objects. The key question here is whether there is a maximal limit to the size of quantum objects. Objects as large as molecules of 114 atoms have been observed to behave as quantum objects in the Young’s double slit experiment described below. It seems that there is no limit in principle to the size of an object which can be made to behave as a quantum object (although, in practice, this becomes more difficult the larger the object, and for macroscopic objects, practically impossible), and quantum theory supports the view that quantum behaviour is not limited in principle by size except by interaction with the environment. Smith implicitly acknowledges that this perspective is correct by discussing the meaning of the de Broglie wavelength for a macroscopic object. Be that as it may, we can continue to follow his argument if we define the physical domain to be restricted to obvious quantum entities such as photons or electrons, which fall indisputably on one side of the ill-defined boundary, which clearly behave unlike classical macroscopic objects, and which possess attributes which cannot be directly perceived, such as inherent spin, isospin, parity and colour charge, all of which are quantised attributes and therefore tightly linked to their quantum nature. This does not do any violence to Smith’s argument, as he relies on examples of the behaviour of just such quantum particles to develop his hypothesis.
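A rough numerical illustration (my own round figures, not Smith’s) shows why quantum behaviour becomes practically unobservable for macroscopic bodies: the de Broglie wavelength λ = h/mv collapses to absurdly small values as the mass increases.

```python
# Illustrative de Broglie wavelengths, lambda = h / (m * v).
# The masses and speeds below are round-number choices of my own, purely for illustration.

PLANCK_H = 6.626e-34       # Planck's constant, J s
ELECTRON_MASS = 9.109e-31  # kg

def de_broglie_wavelength(mass_kg, speed_m_s):
    return PLANCK_H / (mass_kg * speed_m_s)

print(de_broglie_wavelength(ELECTRON_MASS, 1e6))  # electron at 10^6 m/s: ~7e-10 m, atomic scale
print(de_broglie_wavelength(0.17, 1.0))           # ~170 g billiard ball at 1 m/s: ~4e-33 m, hopelessly small
```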

A summary of Smith’s hypothesis goes as follows: he identifies quantum objects, and the quantum world in general, with the scholastic concept of potency; and the collapse of the wave function[20] with a change of state from potency to act through a transformation from the quantum or physical plane to the corporeal, directly perceivable plane. We shall expand on these points later. One major interpretational difficulty that quantum objects present is known as the measurement problem, and Smith relies on the single-particle Young’s double slit experiment to illustrate it. A description of this pivotal experiment, using light as an example, follows. The principle applies to any pure quantum object – fundamental particles such as electrons, neutrons and protons, atoms, atomic nuclei, small molecules and so on.

In the classical Young’s double slit experiment a beam of temporally coherent monochromatic light is passed through two parallel slits. The width of each slit is a few times the wavelength of the light or less. With both slits open, a pattern of dark and bright bands or fringes, in the same orientation as the slits, can be detected on a screen placed beyond the plane of the slits. The separation between the fringes can be shown to be inversely proportional to the separation of the slits (the closer the slits are, the broader the fringes are), and proportional to the distance from the slits to the screen, and the phenomenon can be modelled precisely by interpreting the fringes to arise from the constructive and destructive interference of wavefronts arising from the two slits.  If we close one of the slits, so that light can pass only through the other, then the fringe structure disappears and is replaced by a single bright area without fringes. This area is coincident with the region of the screen where the fringes previously appeared[21]. The disappearance of the fringes is readily explained by the fact that, in this case, the wavefront incident on the screen arises from only one slit, so that interference no longer occurs.  This experiment demonstrates the fact that light can be considered to consist of waves, which is perfectly consistent with classical electromagnetic theory in which light is regarded as an electromagnetic wave.
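In the small-angle regime these proportionalities take the simple form Δy = λL/d, where λ is the wavelength, L the distance from the slits to the screen and d the slit separation; a quick sketch with illustrative numbers of my own:

```python
# Fringe spacing in the small-angle approximation: delta_y = wavelength * L / d.
# Illustrative values: a 633 nm (red) source, slits 0.1 mm apart, screen 1 m away.

def fringe_spacing(wavelength_m, screen_distance_m, slit_separation_m):
    return wavelength_m * screen_distance_m / slit_separation_m

print(fringe_spacing(633e-9, 1.0, 1e-4))  # ~6.3 mm: halving d doubles the spacing, doubling L doubles it too
```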

But as Einstein demonstrated, light is quantised: it comes in minimal packets or quanta called photons, each of which has an energy proportional to the frequency of the waves mentioned above times a constant (known as Planck’s constant). So light can also be regarded as consisting of a stream of particles. If we reduce the intensity of light in Young’s apparatus to a value so low that individual photons can be detected, and replace the passive screen with a screen that records the arrival of each photon by, for example, a localised flash, then we will see what appear initially to be randomly located flashes as each photon traversing the apparatus is detected at the screen. However, if we record the position of each flash, then we find that, over time, there is a greater concentration of flashes in those areas of the screen where the classical interference fringes were previously bright, and fewer or no flashes where the classical interference fringes were dark. In fact, after a large number of flashes have been recorded, the area density of flashes (number of flashes per unit area) follows the same function of position across the screen as the intensity in the classical case[22]. Furthermore, if we close one slit (or acquire “which path” information by observing which slit the photons pass through, which can only be done by absorbing the photons at one or other slit, in effect blocking it), the fringes disappear as in the classical case. So it appears that light particles have wave-like properties that allow them to interfere with one another.
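Planck’s relation mentioned at the start of this paragraph is easily made concrete; the wavelength below is merely an illustrative choice of red light:

```python
# Photon energy: E = h * f = h * c / wavelength (Planck's relation).
# 633 nm is used purely as an illustrative red wavelength.

PLANCK_H = 6.626e-34    # J s
LIGHT_SPEED = 2.998e8   # m / s
EV = 1.602e-19          # joules per electron-volt

energy_joules = PLANCK_H * LIGHT_SPEED / 633e-9
print(energy_joules)       # ~3.1e-19 J per photon
print(energy_joules / EV)  # ~2.0 eV: the indivisible quantum of energy carried by one red photon
```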

If we reduce the intensity even further, so that we are sure that only one photon is present within the entire apparatus at any one time, surprisingly, we observe the same thing as before. After a large number of photons are detected, fringes appear exactly as before as a modulation of the area density of detected photons; and disappear if one or other slit is closed (or we measure through which slit each photon passes, which can be done only by effectively blocking one of the slits). It seems that a photon can interfere with itself but can only do so if both slits are open and we do not know which of the slits each photon passes through. If we put a detector at the slits, then we only ever detect a photon passing through one slit at a time. Yet the fact that the fringes disappear when one slit is closed so that all the photons pass through the other, seems to indicate that photons “know” when passing through one slit whether the other is open or closed, and that when both are open, they somehow pass through both, even though, when detected, they are only ever localised in one or the other. When the position of a photon is measured it is localised, but before it is measured it appears that one cannot say anything definite about its position, and in fact it seems that it is not a meaningful question to ask where the photon is before it is measured. This is the measurement problem, as conceived according to the Copenhagen interpretation of quantum mechanics.
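The statistics just described are easy to mimic numerically. The following toy sketch (my own, with illustrative parameters and idealised narrow slits, not a model of any real apparatus) draws single ‘detections’ at random from the two-slit probability density; individually random flashes nevertheless accumulate into fringes.

```python
import math
import random

# Toy model of the single-photon double-slit build-up (idealised narrow slits, no
# diffraction envelope). Parameters are illustrative: 633 nm light, slits 0.1 mm
# apart, screen 1 m away.
WAVELENGTH = 633e-9      # m
SLIT_SEPARATION = 1e-4   # m
SCREEN_DISTANCE = 1.0    # m
HALF_WIDTH = 0.02        # m; we look at +/- 2 cm around the optical axis

def two_slit_probability(y):
    """Relative probability of a detection at screen position y with both slits open."""
    phase = math.pi * SLIT_SEPARATION * y / (WAVELENGTH * SCREEN_DISTANCE)
    return math.cos(phase) ** 2

def detect_one_photon():
    """Draw one detection position by rejection sampling from the fringe pattern."""
    while True:
        y = random.uniform(-HALF_WIDTH, HALF_WIDTH)
        if random.random() < two_slit_probability(y):
            return y

# Each detection is an individually random 'flash'; binning many of them reveals the fringes.
counts = [0] * 40
for _ in range(20000):
    y = detect_one_photon()
    counts[min(int((y + HALF_WIDTH) / (2 * HALF_WIDTH) * 40), 39)] += 1
print(counts)  # alternating high and low counts: the interference pattern built from single events
```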

The single photon Young’s experiment does not have a simple classical explanation. Quantum particles, such as photons, electrons and so on, do not behave in a way that can be described or explained in purely classical terms. Quantum objects display several other strange effects, including entanglement (measuring one attribute of a member of an entangled pair of quantum particles fixes the attribute in the other particle, regardless of how far apart these particles are), the apparent loss of direct causality in quantum processes such as nuclear decay (individual instances of the decay of atomic nuclei with the emission of particles, such as electrons, alpha particles or electromagnetic radiation, do not appear to have a proximate cause, although the decay does have a well-defined half-life which precisely quantifies the statistical probability of decay), and quantum tunnelling whereby quantum particles can “tunnel” through potential barriers in a way that is forbidden in the classical world. One possibility is that these apparently strange effects can be explained by local hidden variables, also referred to as local reality. (Local hidden variables are classically deterministic effects which affect the particles, but which cannot be detected experimentally and are not accounted for in theory. Local hidden variables would imply that there are physical effects not described by quantum mechanics, making it an incomplete theory.) John Bell’s theorem distinguishes between the predictions of classical physics combined with local hidden variables and those of quantum mechanics. Experiments have demonstrated, almost but not quite beyond doubt, that the predictions of quantum mechanics are correct, and that therefore local hidden variable interpretations are ruled out.
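To give a flavour of what Bell-type experiments actually test (textbook material, not anything drawn from Smith), the sketch below evaluates the CHSH quantity S for the singlet-state correlation E(a, b) = −cos(a − b) at the standard optimal analyser angles; any local hidden variable account must satisfy |S| ≤ 2, whereas quantum mechanics predicts, and experiments observe, values approaching 2√2 ≈ 2.83.

```python
import math

# CHSH test: local hidden variable theories require |S| <= 2;
# quantum mechanics for the singlet state predicts E(a, b) = -cos(a - b),
# which gives |S| = 2 * sqrt(2) at the analyser angles below.

def singlet_correlation(a, b):
    return -math.cos(a - b)

# Standard optimal analyser angles (radians)
a, a_prime = 0.0, math.pi / 2
b, b_prime = math.pi / 4, 3 * math.pi / 4

S = (singlet_correlation(a, b) - singlet_correlation(a, b_prime)
     + singlet_correlation(a_prime, b) + singlet_correlation(a_prime, b_prime))
print(abs(S))            # ~2.828, exceeding the local-realist bound of 2
print(2 * math.sqrt(2))  # the quantum (Tsirelson) bound
```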

The results of the experiments which all but confirm the quantum-mechanical violation of Bell’s inequalities mean that either deterministic reality (the ability to speak meaningfully of the state of a quantum object before it has been measured, known in the trade as counterfactual definiteness) must be abandoned, or that deterministic reality is non-local (influences propagate faster than the speed of light). Currently, a majority of physicists choose the former option, because the latter, in its simplest flavours, violates the axioms of special relativity. Those who choose to abandon counterfactual definiteness also abandon the notion that, knowing the state of an attribute of a system at any time, it is possible in principle to predict its future and past states to an arbitrary level of precision.

The Interpretations of Quantum Physics

The most popular interpretation of quantum mechanics, the Copenhagen interpretation, abandons determinism and counterfactual definiteness. According to the Copenhagen interpretation, a quantum object (photon, electron and so on), before it is detected or measured, is in a superposition of states, and the Schrödinger wave equation describes the evolution of the wave function, from which the probability of finding the object in any given state is derived. When the quantum particle is detected or measured, the wave function is said to collapse to a single state (an eigenstate) with a probability density given by the squared modulus of the wave function at that time – so in the case of the Young’s double slit experiment, the photon or electron is detected at one or other slit, or at a particular location on the image screen, with a probability which matches the intensity or energy distribution of the classical Young’s experiment. There is a problematic role for the observer, or at least for the measurement apparatus, in causing the collapse. Smith’s interpretation of this phenomenon is that the state of superposition corresponds to the scholastic state of potency and that the collapse of the wave function corresponds to the state being actualised, a transition from potency to act, from physical to corporeal. Smith argues that the quantum world is not in act, does not actually exist, and that therefore physics, in this sense, studies entities which do not exist, but which are merely potentiae.
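Stated compactly, in standard notation (textbook material rather than anything peculiar to Smith), the Copenhagen picture he relies on says that a system in a superposition of eigenstates collapses, on measurement, to one of them with a probability given by the Born rule:

```latex
% Superposition prior to measurement, and the Born rule for the outcome probabilities
|\psi\rangle = \sum_i c_i\,|\phi_i\rangle , \qquad
P(\text{outcome } i) = |c_i|^2 , \qquad \sum_i |c_i|^2 = 1
```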

Here, and elsewhere, Smith erroneously equates quantum physics with physics in general – and he commits the fallacy of composition in ascribing features of quantum physics to the whole of physics. When one studies statistical thermodynamics, electromagnetism, geometrical or physical optics, plasma physics, classical mechanics, relativity or astrophysics, one is undoubtedly doing physics, but little or no consideration of weird quantum effects is needed.

Be that as it may, Smith’s argument is that physics is powerless to study the world as it is, what he calls the corporeal world, the world that we perceive directly, because the physical world that physics does study can now be seen to be non-deterministic or non-local, or both. According to him, the project to explain the world fully, initiated by Galileo, and pursued by Newton, Descartes and countless physicists since, is doomed to failure because the actual world is on a different and higher ontological plane from the objects studied by physics and because the events in the world can no longer be seen as the consequence of deterministic interactions of fundamental particles (atoms in the philosophical sense). This claim depends, of course, on whether Smith has given a correct and comprehensive description of the relevant aspects of quantum mechanics on which he relies.

There are notoriously many interpretations of quantum mechanics, few of which make unique predictions that can be tested by experiment, and which therefore are not scientific hypotheses in the strict sense. This has led some commentators to lose patience with attempts to interpret the underlying meaning of the very accurate quantitative predictions of quantum mechanics, prompting the famous quip “Shut up and calculate” (usually attributed to Feynman, though probably coined by David Mermin), and Hawking’s remark, “When I hear of Schrödinger’s cat, I reach for my gun”. Nevertheless, interpretations abound, and although the Copenhagen interpretation on which Smith bases his thesis is the most popular (it is also known as the “standard” interpretation), it is far from being the only one. There are other interpretations, equally conforming to the predictions of non-relativistic quantum mechanics, which do not depend on the superposition of states and the collapse of the wave function[23].

Take the de Broglie-Bohm interpretation (also known as the pilot wave interpretation). According to this interpretation the quantum object is always in a single definite state (it is counterfactually definite), it is deterministic, and there is no role for an observer in collapsing a superposition of states (there is, in this scheme no superposition of states – quantum objects are always in a defined state). Smith does not discuss this interpretation at all in The Quantum Enigma (except to identify the Bohmian concept of a universal wavefunction, which is a necessary element of Bohmian mechanics, with what he calls Nature, the underlying ground of reality, without acknowledging that a universal wavefunction forms no part of the Copenhagen interpretation on which his argument rests) and fails to notice that the de Broglie-Bohm interpretation does not comport with his metaphysics. I note in passing that this is Smith’s normal method of argumentation – he ignores or gives short shrift to facts that stand against his proposition[24]. He does mention the de Broglie-Bohm interpretation in his 2019 book, Physics and Vertical Causation: The End of Quantum Reality, but dismisses it in less than a paragraph on the grounds that the collapse of the wave function is instantaneous, thus outside time, and therefore cannot be described by the Bohmian “differential  equations”. In doing so, he rather misses the point that Bohmian mechanics does not rely on this collapse, this event outside time, at all.[25]
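For readers unfamiliar with the Bohmian scheme, its core (again, standard textbook material, not Smith’s gloss on it) is that the particle always has a definite position Q(t), carried along by the wave function according to the guidance equation; the wave function itself evolves by the Schrödinger equation and never collapses:

```latex
% de Broglie-Bohm guidance equation (single spinless particle): the definite
% position Q(t) is steered by the wave function psi, which always evolves by
% the Schroedinger equation and never collapses.
\frac{dQ}{dt} \;=\; \frac{\hbar}{m}\,
\mathrm{Im}\!\left(\frac{\nabla \psi}{\psi}\right)\Bigg|_{x = Q(t)}
```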

Other interpretations which do not call for the collapse of a superposition of states and which are compatible with counterfactual definiteness include Everett’s Many Worlds interpretation, Cramer’s Transactional interpretation and Nelson’s Stochastic interpretation. The first two of these are also deterministic. I don’t propose to discuss these in any detail – a full discussion of all the interpretations of quantum mechanics with their ongoing developments would fill an entire book or more. There is a vast literature on the subject. For our purposes, it is enough to note that any viable interpretation must and does make the same correct empirical predictions as the Copenhagen interpretation, for example with regard to experiments such as the Young’s double slit, the quantum eraser, delayed choice experiments, and EPR type (entangled particle) experiments. The fact is that these interpretations cannot be distinguished empirically, and so are not strictly physical theories[26]. Of course, people are attempting to develop interpretations which can be distinguished empirically, but as things stand, the choice of interpretation is largely a matter of personal preference. Smith has chosen to build his metaphysical thesis on the Copenhagen interpretation, and there is nothing inherently wrong with that, but he fails to expose this limitation to his readers, most of whom will not be aware of it. It does not matter for our purposes which, if any, of these interpretations is correct. What matters is that interpretations exist which satisfy the empirical constraints, but which do not comport with his metaphysics. Building his thesis around one interpretation lessens its import – if his argument were to follow necessarily from the observations, rather than from one interpretation amongst many, it would carry more weight than it does. As it stands, it is little more than the Copenhagen interpretation spiced up with some Heisenberg and Aquinas – an interpretation of an interpretation.

Decoherence

Furthermore, Smith entirely ignores the relevant phenomenon of decoherence, the existence of which is uncontroversial. Decoherence is the loss of the phase relationship between different states of the quantum subsystem by interaction with the environment, or with the measurement apparatus. It is the bane of quantum computing, which requires the phase relationships to be maintained in the face of thermal and other environmental perturbations. The effect of decoherence is to reduce the quantum probabilities of the system to classical probabilities, and the theoretical basis of the process is well understood via, for example, von Neumann’s density matrix description. To be clear, because decoherence results from the entanglement of the quantum system with the environment, it doesn’t solve the measurement problem per se. Nevertheless, instantaneous and discontinuous wave function collapse as envisaged in the Copenhagen interpretation does not occur during decoherence, for example when a quantum particle is absorbed by a measurement apparatus. Instead, the entire system, the quantum subsystem plus the environment, can be regarded as being entangled in a superposition of states with a vastly higher number of degrees of freedom. However, decoherence does explain the appearance of wave function collapse, since the unitary evolution of the quantum subsystem (the uniquely defined, reversible evolution from a past to a future state described by the Schrödinger equation or the density operators) is interrupted by interaction of the quantum subsystem with the environment (or absorption of the quantum subsystem by the environment or detection apparatus) in a non-unitary manner. The pure quantum state of the quantum subsystem is therefore irreversibly lost, and the quantum subsystem falls into a mixed state. This interaction can be described either by the wave function or the density matrix formalism, and results in the quantum probabilities described by the subsystem wave function before environmental interaction being reduced to classical probabilities after interaction, a process known as einselection. It also explains how a quantum subsystem entangled with a measuring apparatus, or any macroscopic object interacting with its environment, behaves as a classical statistical ensemble rather than a quantum superposition and thus appears to have collapsed into a state with a precise value for measured observables for each element of the subsystem. The localisation of macroscopic objects resulting from decoherence rapidly approaches the de Broglie wavelength with increasing object size, and the localisation occurs extremely rapidly, so that the superposition of states for a macroscopic object (or for a system of quantum objects absorbed by a measurement apparatus) cannot be practically observed either in time or space.
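The standard toy illustration of what decoherence does (my sketch, not Smith’s) is a two-state system whose interaction with the environment suppresses the off-diagonal, interference-carrying terms of the reduced density matrix, leaving what behaves like a classical mixture:

```latex
% Reduced density matrix of a two-state system undergoing decoherence:
% the off-diagonal coherences decay with a characteristic time tau_D,
% leaving the diagonal (classical) probabilities |a|^2 and |b|^2.
\rho(t) \;=\;
\begin{pmatrix}
|a|^2 & a b^{*}\, e^{-t/\tau_D} \\[2pt]
a^{*} b\, e^{-t/\tau_D} & |b|^2
\end{pmatrix}
\;\xrightarrow[\;t \gg \tau_D\;]{}\;
\begin{pmatrix}
|a|^2 & 0 \\[2pt]
0 & |b|^2
\end{pmatrix}
```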

Although the precise significance of decoherence for resolving some of the philosophical problems of quantum mechanics is still a matter of debate, the fact remains that decoherence is an empirically verified physical phenomenon which creates difficulties for Smith’s description of reality. For example, as we have seen, the assumption that macroscopic objects obey quantum laws (albeit with a vastly larger number of degrees of freedom), combined with the effect of decoherence in transforming the quantum probabilities of an ensemble into classical probabilities, explains the localisation and other classically precise attributes of macroscopic objects; the localisation and other attributes become more precise and emerge more quickly the larger the object, on timescales and with a precision indistinguishable from the classical case for objects larger than a few tens of nanometres. This is in stark contradiction to Smith’s suggestion that physical and corporeal objects are ontologically distinct, and that the collapse of the wave function is the actualisation of a potency resulting in an instantaneous change of ontological plane from the physical to the corporeal. Instead, what we see is that the same quantum description applies to both quantum and macroscopic objects – to physical and corporeal objects in Smith’s language – with the classical probabilities and the localisation of corporeal objects explained by interaction with the environment and the influence of decoherence. It is a pity that Smith chose not to reveal this difficulty to his readers and declined to address the problem that the phenomenon poses for his thesis.

Do nucleons, electrons and atoms exist?

Let us now turn to another question: Smith’s proposal that what he calls corporeal entities are not constituted by particles at all – that quantum particles cease to exist as particles once they are incorporated into a corporeal object[27]. I find this argument startling and entirely unconvincing. Take, for example, common salt, sodium chloride. A naturally occurring mineral crystal of sodium chloride, a halite crystal, more than large enough to be seen and therefore a corporeal object in Smith’s terms, has a basically cubic shape discernible by eye, which is one important and defining attribute of its substantial form. This is not an arbitrary property dictated from on high; there is a reason for it, and the reason is that within the crystal the sodium and chlorine ions are organised as two interpenetrating face-centred cubic lattices, so that the nearest neighbours of each ion of one species are six ions of the other species, sitting halfway along the edges of the first species’ lattice cells. This arrangement is expected because of the strong electrostatic attraction between the ions of the two species. That electrostatic attraction arises from the respective valency of atomic sodium and chlorine, which in turn is a consequence of the electronic structure of the atoms – the number of electrons in the outer or valence energy level of the atom[28]. Far from the particles disappearing on being incorporated into a corporeal object, it is the arrangement of the particles within the corporeal object, based on their properties, which gives rise to one of its key attributes – its shape.  This arrangement can be probed and demonstrated by, for example, X-ray diffraction. Moreover, it has become possible in the last decade to image atoms in a lattice directly, including imaging the migration of individual atoms within the lattice in real time, using instruments such as the scanning tunnelling microscope (itself relying on a quantum effect), the field ion microscope, and the ptychographic electron microscope, which renders extremely high resolution images down to the sub-atomic level. So much for particles disappearing in corporeal entities.
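
As an illustration of how directly the lattice geometry fixes the crystal’s macroscopic form, here is a minimal sketch in Python (the lattice constant of roughly 5.64 Å is the commonly quoted value for halite; the script is illustrative only, not drawn from Smith’s book):

```python
# Illustrative sketch: build the rock-salt (NaCl) structure -- two interpenetrating
# face-centred cubic lattices -- and verify that each sodium ion has six chloride
# nearest neighbours halfway along the cell edges.
import itertools
import numpy as np

a = 5.64  # conventional cubic lattice constant of NaCl in angstroms (approximate, commonly quoted)

# Fractional coordinates of a face-centred cubic basis; the Cl- sublattice is the
# Na+ sublattice displaced by half a cell edge.
fcc = np.array([[0, 0, 0], [0.5, 0.5, 0], [0.5, 0, 0.5], [0, 0.5, 0.5]])
shifts = np.array(list(itertools.product((-1, 0, 1), repeat=3)))  # 3x3x3 block of unit cells

cl_fractional = np.concatenate([fcc + [0.5, 0, 0] + s for s in shifts])
cl_positions = cl_fractional * a  # chloride ion positions in angstroms

# Distances from the sodium ion at the origin to every chloride ion in the block.
d = np.sort(np.linalg.norm(cl_positions, axis=1))
print("six nearest Cl- neighbours (angstrom):", np.round(d[:6], 2))  # all at a/2 = 2.82
print("next chloride shell starts at (angstrom):", round(d[6], 2))   # a*sqrt(3)/2 = 4.88
```

Running it returns six chloride neighbours at a/2 ≈ 2.82 Å around each sodium ion – exactly the arrangement that X-ray diffraction reveals and that gives halite its cubic habit.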

As an aside, we can consider another attribute of common salt crystals, an undeniable quality: its salty taste. We perceive it when sodium cations from salt dissolved in water are detected by dedicated cells on the tongue, which make use of a cation channel, the epithelial sodium channel (encoded in humans by four genes, which are themselves specific arrangements of molecular structure in space on the sub-microscopic scale – see below for a further discussion of DNA). This is another example, to add to that of colour touched on earlier, that refutes Smith’s claim that qualitative attributes cannot be studied by physics.

While we are considering whether physical objects disappear or are subsumed into corporeal objects, let us consider the case of the structure of DNA, famously discovered by Crick, Watson, Wilkins and Franklin in 1953. It is a triumph of physics and biology that we understand the molecular basis of heredity, which is present in every cell of our bodies and in every cell of every other living creature on Earth. The last sentence of Watson and Crick’s 1953 Nature paper reads: “It has not escaped our notice that the specific pairing we have postulated immediately suggests a possible copying mechanism for the genetic material”. It has proved not only accurately prophetic, but also links the functionality of heredity with the structure (the organisation in time and space) of an entity that Smith would presumably consider to be physical (in his terms – i.e. studied by physics and not perceptible to our unaided senses). Time and again, physicists and other scientists show that the sub-microscopic structure of things is not only real, but explains and determines the attributes, the qualities, the essence, the quiddity of the so-called corporeal objects which it constitutes. It seems after all that the world is built bottom up, not top down.

Smith’s Vertical Causation

In the final chapter of The Quantum Enigma, Smith introduces what he believes to be a new idea related to causality, which he calls vertical causation (VC). He contrasts VC with horizontal causation, which is that kind of causality studied by physics (and the other natural sciences) in which events in spacetime cause other events in spacetime according to the discoverable regularity described by the laws of physics. He proposes a categorically separate and higher form of causality, which operates outside time, and which constitutes his explanation for the instantaneous collapse of the wave function. His main argument in support of VC relies on the supposed instantaneity of the collapse, his assumption being that the collapse must be caused, but that the cause does not appear within the physical theory of quantum mechanics itself, and it must lie outside time, so it must therefore proceed from a transcendent plane which lies above and beyond normal reality. He likens it to an act of creativity. According to him, vertical causation is incapable, by definition, of being recognised or studied by physics or any form of natural science. Of course, all manner of charlatans[29] seek to place their claims beyond science, hoping to smuggle them past proper scrutiny. This is not to say that Smith is a charlatan, at least not knowingly, but any claim of this sort should be treated sceptically, should be defeasible and must be justified on some warrant other than its own assertion. Smith’s vertical causation does not meet these criteria.

We have already seen that Smith bases his argument on just one interpretation of quantum mechanics, in which he exploits the philosophically problematic collapse of the wave function, but that he ignores other, equally predictive interpretations that do not require this state selection. The notion that vertical causation is a concept which proceeds naturally and necessarily from empirical and theoretical quantum mechanics is therefore at best enfeebled, at worst defeated. Vertical causation is an idea that purports to solve the philosophical problems arising from one interpretation of quantum mechanics by the rather arbitrary introduction of a form of causality that not only lies beyond perception but cannot be recognised or studied by science. In truth this is nothing more nor less than a magical “explanation” explaining nothing. One understands the belief of some theists that the temporal and spatial world is constantly brought into being and is held in being by a transcendent non-temporal (and spatially unbound) entity, by God. Nevertheless, one would argue that Smith has not made the case that this continual putative act of creation is specifically manifested in the collapse of the wave function, the transition from potency to act, as he would have it, in the quantum domain; and he has certainly not developed an argument that is cogent on purely metaphysical rather than on a priori religious grounds.

We are accustomed to think of causality in the natural world, horizontal causation in Smith’s jargon, where one event causes the next according to discernible natural laws, backwards in a great chain to the start of time and reaching forwards into the distant future. The idea that a different form of transcendent causality lies at the heart of the natural world results in a startling epistemological crisis. According to Smith, every event in which quantum potentiae are actualised is caused vertically and instantaneously by an act of creation outside time and thus ever present. It is but a small step, or no step at all, to occasionalism, the doctrine which denies horizontal causation altogether, and which holds that every event is directly caused by God, and that the appearance of natural causality in the world is a consequence of God acting according to custom, but that it is possible for him to do otherwise. Al-Ghazali, the Islamic philosopher who first stated this position, gave the example of cotton in a fire: the cotton burns, not because of the fire, or the fact that the fire is hot, but because God directly causes it to burn, and this is so for every apparent causal chain. Natural cause is an illusion. The Catholic philosopher Nicolas Malebranche independently proposed occasionalism as a response to Cartesian dualism. On this view, physics would study not the regularities of the physical world and how one event causes the next, but the habits or customs of God.

However, this epistemological crisis is avoided in the case of VC, because we see that the idea of a vertical cause of the collapse of the wave function is arbitrary and unnecessary. Decoherence, which we have discussed above, is not an instantaneous process, or a process outside time, as Smith claims for the apparently uncaused collapse of the wave function. A quantum subsystem can be completely coherent, completely decoherent, or partially coherent. For example, Haroche and collaborators, in a seminal paper, were the first to report the measurement of a quantum subsystem in transition from complete coherence to decoherence[30]. It is clearly a physical process, amenable to physical experiment, which can be described by a physical theory based on various different but equivalent mathematical formalisms. The fact that the appearance of wave function collapse can be modelled and measured in time undermines Smith’s claim that it is a discontinuity which occurs instantaneously and outside time, and is therefore explicable only as a creative, transcendental act.
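
A conventional way to quantify this continuum between full coherence and complete decoherence is the purity of the subsystem’s reduced density matrix; in the illustrative two-state sketch below, p is the diagonal weight and γ the off-diagonal coherence term:

```latex
% Illustrative two-state case: purity interpolates smoothly between a fully coherent
% pure state and a fully decohered mixture as the coherence gamma decays.
\[
\rho=\begin{pmatrix} p & \gamma \\ \gamma^{*} & 1-p \end{pmatrix},
\qquad
\mathrm{Tr}\,\rho^{2}=p^{2}+(1-p)^{2}+2\lvert\gamma\rvert^{2},
\]
```

which runs continuously from 1 for a fully coherent pure state (where |γ|² = p(1−p)) down to p² + (1−p)² for the fully decohered mixture (γ = 0). Decoherence is thus a matter of degree unfolding in measurable time, not an instantaneous, timeless discontinuity.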

Although believers are free to find their teleology wherever they will, the history of “God-of-the-gaps” arguments is not a happy one. In essence, Smith’s solution to the measurement problem and the other philosophical problems arising from quantum mechanics is, to put it grossly, “God does it” – a surrender to occasionalism, or something very like it. This is an unsatisfactory explanation for observations of the natural world, which, if accepted, would dismantle the foundation of science. His assertion that the true solution lies beyond the remit of physics should not and will not cut the mustard. Theorists will continue to develop interpretations and extensions to quantum mechanics in the reasonable expectation that a quantum mechanical field theory will be found that is consistent with relativity. Whether a naively realist interpretation of quantum mechanics will ever prevail is still an open question, but that seems unlikely – there is no guarantee that human minds, which have evolved to deal with the everyday macroscopic domain, will prove capable of visualising the quantum domain on similar terms. While Smith is right to question whether physics is a project which can ever understand the world without residue, it does not follow that the residue du jour should be identified with the supernatural. It certainly does not follow that quantum collapse itself is to be associated with a supernatural act or, as Smith would put it, with vertical causation.

In Conclusion

So, there are several flaws in The Quantum Enigma. From the outset, Smith’s campaign against Cartesian dualism is ill targeted, since the actual default stance of scientists today is not substance dualism at all, but a form of Realist Monism. His attempt to establish an ontological distinction between the “physical” and the “corporeal” fails on the terms of his own examples, and its failure undermines the entire edifice of his argument. He conflates one subset of physics, quantum mechanics, with the entire discipline, which further confounds the ontological distinction. In direct contradiction to his ideas, we are forced to conclude that the apple as perceived by the intellect, with all its rich associations, and the apple as studied by physics and biology with measurement and reason, are one and the same ontological entity. Next, his proposition that quantum objects in a state of superposition are equivalent to objects in a state of potency, and that the collapse of the wave function is an instantaneous change of ontological plane, from the physical to the corporeal, from potency to act, suffers from the weakness that it depends on one interpretation of quantum mechanics amongst many, and that it does not proceed naturally and necessarily from observation, from empirical physics. Although it is not precluded by the physics, Smith gives us no compelling reason to accept his interpretation of an interpretation over any other. Furthermore, Smith does not acknowledge the phenomenon of decoherence, which explains the appearance of wave function collapse in purely physical terms, nor the difficulties it presents for his thesis. Finally, the concept of vertical causation, which is built on the notion that the collapse of the wave function occurs outside time, also depends on one interpretation of quantum mechanics amongst many, is refuted by the phenomenon of decoherence, and is, moreover, a “God-of-the-gaps” argument.

Ultimately, The Quantum Enigma is disappointing, because Smith declines to engage in detail either with the relevant ongoing philosophical discourse or with the science[31]. He limits his discussion to those threads of philosophy and science from which he believes he can weave his cloth, but the weave turns out to be a fatally loose one. The scholarly tone of his book hides a polemical tract under a superficial gloss. For all its opaque language and dense construction, it never properly engages with potential counterarguments or with the extensive literature dealing with its various subjects. In the final analysis, it is simply lightweight. This view of Smith’s work is reinforced by his more recent publications, and his claim to present a radical, transformational thesis remains unrealised.


[1] Although Smith is a Roman Catholic, he is a member of the traditional, esoteric and perennialist metaphysical school (alongside philosophers such as René Guénon, Frithjof Schuon, Ananda Coomaraswamy, Harry Oldmeadow and Hossein Nasr). This school condemns all aspects of modernity and values pursuits such as arcane symbolism, numerology, sacred geometry, alchemy, astrology and other secret “knowledge” accessible only to initiates.

[2] The Quantum Enigma, 2005, p20

[3] Husserl, Cartesian Meditations, available on-line.

[4] Heisenberg, Physics and Philosophy, first published 1962, Penguin Classics edition 2000, p39.

[5] The Quantum Enigma, 2005, p24

[6] Smith is so opposed to Descartes’ notions that he claims, later in the book, that Descartes invented analytical geometry to destroy the idea of potency and act in mathematics by coordinatizing the continuum, and that Descartes’ primary motivation was to “extirpate” the continuum, which Smith sees as the material principle, in the sense of scholastic matter, in the quantitative domain. Whatever the merits or otherwise of Descartes’ philosophy, the idea that analytical geometry was invented primarily as an attack on traditional metaphysics is grotesque.

[7] The Quantum Enigma, 2005, p26

[8] At this point in his argument, and as an aside, Smith proposes that direct perception, without instrumentation, of corporeal objects and their essence is the foundation of traditional sciences, for example the five elements of the ancient cosmologies, or the five bhūtas of Hindu doctrine.

[9] There is an extensive science dedicated to understanding the psychophysical aspects of direct weight perception – see, for example, Jones (1986), Perception of force and weight: Theory and research, Psychol. Bull. 100, 29–42.  The integration of visual and touch perception in lift planning has been studied, for example, Jeannerod et al (1995), Grasping objects: the cortical mechanisms of visuomotor transformation, Trends Neurosci. 18, 314–320

[10] In fact, not a one-dimensional but a multi-dimensional continuum, since pure spectral colours are mixed in most naturally occurring colours.

[11] Smith, The Quantum Enigma, p14

[12] Feynman’s monologue on the subject is available in many places on the web, for example: https://www.brainpickings.org/2013/01/01/ode-to-a-flower-richard-feynman/

[13] Richard Dawkins, Unweaving the Rainbow; Science, Delusion and the Appetite for Wonder, 1998

[14] Smith, The Quantum Enigma, p34

[15] René Guénon, The Reign of Quantity and the Signs of the Times, first published in French, 1945; third edition in English, Sophia Perennis 1995.

[16] The fact that Smith declares an overwhelming preference for Guénon’s philosophy, as it pertains to modernity, over the philosophy of Jacques Maritain, will tell all educated Catholics what they need to know about Smith’s predilections – Smith, Science and Myth, 2010 revised 2012, Angelico Press/Sophia Perennis p31.

[17] For an example of Smith’s adherence to numerology and astrology, see Smith, Science and Myth, Chapter 6

[18] Heisenberg, Physics and Philosophy, first published 1962, Penguin Classics edition 2000, p22

[19] Smith, The Quantum Enigma, p74

[20] Smith refers to the collapse of the “state vector”. The state vector and the wave function are different but related mathematical concepts which are used to describe the behaviour of quantum objects, and for our purposes the phrases “collapse of the state vector” and “collapse of the wave function” are synonymous and refer to the same event – I prefer the latter phrase because it is in more common use.

[21] Smith’s description of the Young’s experiment is technically incorrect in one respect important for a correct understanding of it. As his error does not fundamentally affect his argument or my response, but only the understanding of his readers, we needn’t explore his error in detail, except to register surprise that a physicist would make such an elementary error in a publication.

[22] If one applies the Schrödinger equation, which describes the evolution of the probability distribution for the state of a quantum particle over time, to the interaction of particles with Young’s slits, one recovers a probability distribution for the location of a particle at the detection screen which matches, exactly, the wave interference intensity in the classical wave case (and this is true for any interaction of quantum particles which have a classical wave analogue described by classical diffraction and interference theory. This is a necessary condition for the Schrödinger equation to be an accurate description of the probability of particle location, since the classical case can be regarded as a very large ensemble of particles arriving at the screen at every moment). Note that there is a technical issue with naïvely using the Schrödinger equation to model the behaviour of photons and other relativistic particles, but that need not trouble us here.
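
To spell out the claim in the standard textbook expressions (where ψ₁ and ψ₂ are the amplitudes for passage through each slit, d the slit separation, λ the de Broglie wavelength and θ the angle to the detection screen):

```latex
\[
P(x) \;\propto\; \lvert\psi_{1}(x)+\psi_{2}(x)\rvert^{2}
      \;=\; \lvert\psi_{1}\rvert^{2}+\lvert\psi_{2}\rvert^{2}
      +2\,\mathrm{Re}\!\left[\psi_{1}^{*}(x)\,\psi_{2}(x)\right],
\]
% and in the far field, with equal slit amplitudes, the cross term gives the familiar fringes:
\[
P(\theta) \;\propto\; \cos^{2}\!\left(\frac{\pi d\sin\theta}{\lambda}\right),
\]
```

which, with λ = h/p, is precisely the classical two-slit interference pattern (apart from the single-slit diffraction envelope, which both treatments share).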

[23] Note that there are also different mathematical formulations of quantum mechanics. The Schrödinger formulation and the Heisenberg matrix mechanics were the first complete formulations of non-relativistic QM. Later formulations include Feynman’s path integral. These, and other, formulations can be shown to be fundamentally equivalent, but each is useful for solving different problems. Formulations are not the same as interpretations – formulations are mathematical frameworks for calculation, whereas interpretations are philosophical accounts of what the mathematics means – but different formulations emphasise different aspects of various interpretations.

[24] Smith’s lack of proper attention to facts or views that oppose his position, which I note here, extends across much of his discourse, and includes his silence on the philosophies of realism, monism and emergentism, modern neurophysiology and other aspects of the science of consciousness, interpretations of QM other than the Copenhagen interpretation, and decoherence in quantum theory. It extends to ignoring the devastating criticism of the inconsequential rabble whom he quotes in his support, which includes such luminaries as Berthault, Popov, Gentry, Humphreys, Johnson and their ilk. In a striking display of projection, in Science and Myth p194 he accuses Stephen Hawking of the same offence, listing a raggle-taggle band of pseudoscientific fellow travellers whom he thinks Hawking should have noticed. On the other hand, the science and philosophy he sweeps under the carpet are espoused by the leading scholars of the day in the appropriate disciplines.

[25] If Bohmian mechanics is deterministic and counterfactually definite, why is it not the preferred interpretation of QM? The answer is that, at least in its original form, the trajectories of the particles are distinctly non-Newtonian. Furthermore, it is non-local and therefore cannot easily be reconciled with special relativity. Versions of Bohmian mechanics which are Lorentz covariant and built on a Riemannian space-time have been and are being developed, but the success of these extensions remains in dispute.

[26] Objective Collapse theories, such as the Ghirardi-Rimini-Weber interpretation, regard quantum mechanics as an incomplete theory and, to that extent, are actual physical theories, since they hypothesise extensions to it.

[27] Smith, The Quantum Enigma, p118

[28] The outer shell of sodium contains one electron, so the sodium atom readily gives up that electron to form a positive ion. The outer shell of chlorine contains seven electrons, so the chlorine atom readily accepts one electron to complete the shell, thus forming a negative ion. Sodium chloride is an ionic crystal in which the chlorine atom accepts an electron from a sodium atom to form an electrostatically bound lattice of positive sodium and negative chlorine ions.
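
For reference, the standard ground-state electron configurations behind this are:

```latex
\[
\mathrm{Na}\ (Z=11):\ 1s^{2}\,2s^{2}\,2p^{6}\,3s^{1}
\;\longrightarrow\; \mathrm{Na^{+}} + e^{-},
\qquad
\mathrm{Cl}\ (Z=17):\ 1s^{2}\,2s^{2}\,2p^{6}\,3s^{2}\,3p^{5} + e^{-}
\;\longrightarrow\; \mathrm{Cl^{-}}.
\]
```

Both ions are left with a closed-shell (neon-like or argon-like) configuration, which is why the electron transfer is energetically favourable within the lattice.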

[29] For example, adherents of homeopathy, astrology, psychic phenomena, crystal healing, Reiki and so forth attempt to bypass scientific scrutiny by declaring that their claims work in ways that are inaccessible to scientific validation.

[30] Haroche et al, Observing the Progressive Decoherence of the “Meter” in a Quantum Measurement, Phys. Rev. Lett. 77 (24): 4887–4890 (1996)

[31] Smith provides an appendix in The Quantum Enigma in which he lays out the mathematical formalism of quantum mechanics based on the Schrödinger approach. It is not clear what he hopes to achieve by this – it offers no value to those equipped to understand it since it is entirely derivative and extremely elementary – there is nothing that cannot be found in the first few pages of any relevant undergraduate text; and clearly it offers no value to those unequipped or unwilling to engage with it. Since The Quantum Enigma depends on interpretations of quantum mechanics, Smith would have been better advised to set out a comprehensive comparison of the various interpretations, discuss their philosophical implications, and face up to those implications for his thesis, instead of pretending that only one exists. He should also have confronted the phenomenon of decoherence.

Posted in Uncategorized | Comments Off on When Is an Apple Not an Apple?

For the Fifth Time – The Roman Catechism Does Not Teach Geocentrism

In his new book Scientific Heresies Bob Sungenis has repeated his long-refuted argument that the Roman Catechism, also known as the Catechism of Trent, teaches strict Geocentrism as a doctrine of divine faith, that is, something that Catholics must believe as a matter of faith.  Sungenis goes so far as to speak of “the Roman Catechism’s dogmatic assertion of geocentrism” (p. 344.)

For this to be a “dogmatic assertion of geocentrism” at least two criteria would have to be met. First, the Catechism would have to make clear that it is presenting the physical system of strict Geocentrism as something divinely revealed and of necessary belief.  Second, it would have to do so in language that obliges the faithful to hold this view only and no other (see e.g. Catechism of the Catholic Church §88ff.)  Has Sungenis shown this to be the case?  Not by a long shot.

Sungenis claims that this Catechism “says the Earth ‘stands still’” (Scientific Heresies, p. 343).  And that would be at least a plausible argument in his favor, if only it were accurate.  The fact is that the Catechism does not use these words at all, even though Sungenis puts them in quotes.  I have challenged him publicly to show us where this Catechism says that the Earth “stands still” (see here and here; I know he’s seen these challenges because he’s responded to other parts of them.)  He can’t, because it doesn’t.  It’s unfortunate that he would for years continue to mislead readers with this long-refuted assertion.


Sungenis also cites a few passages from the Tridentine Catechism that speak of the heavenly bodies.  One passage says that the heavenly bodies “are endowed with fixed and regular motion”, another speaks of “the stars by their motion and revolutions”, while a third fuller text refers to “the celestial bodies in a certain and uniform course, that nothing varies more than their continual revolution, while nothing is more fixed than their variety”.  This language is sufficiently generic that it could be safely pronounced by subscribers to almost any cosmological system.  It could have applied, for instance, at the time it was written, to the non-geocentric cosmological systems of Bishop Nicole Oresme and Cardinal Nicholas of Cusa.  It could likewise be endorsed by any modern Catholic today who affirms the motion of the Earth around the Sun and its rotation on its axis.  [I am not claiming that this Catechism affirms or denies or accommodates any particular cosmological system.  I’m saying its language is generic enough that it can be affirmed by adherents of different cosmological views.]

There is, however, another passage from the Roman Catechism that Sungenis relies on most heavily. He claims that this passage will “expel [sic] any doubt about what objects are revolving the catechism adds that the sun, moon and stars have a ‘continual revolution’.”  Here it is:

At vero terram etiam super stabilitatem suam fundatam Deus verbo suo iussit in media mundi parte consistere, effecitque ut ascenderent montes, et descenderent campi in locum, quem fundavit eis; ac, ne aquarum vis illam inundaret, terminum posuit, quem non transgredientur, neque convertentur operire terram. Deinde non solum arboribus, omnique herbarum et florum varietate convestivit atque ornavit, sed innumerabilibus etiam animantium generibus, quemadmodum antea aquas et aëra, ita etiam terras complevit (link).

The earth [terram] also God commanded to stand in the midst of the world [mundi], rooted in its own foundation, and made the mountains ascend, and the plains descend into the place which he had founded for them. That the waters should not inundate the earth [terram], He set a bound which they shall not pass over; neither shall they return to cover the earth [terram]. He next not only clothed and adorned it [the terram] with trees and every variety of plant and flower, but filled it, as He had already filled the air and water, with innumerable kinds of living creatures.

I have explained to Sungenis at least four times now  (once in private correspondence and three times in public; see here, here, and here) why this text does not establish strict Geocentrism as a matter of faith.  Yet he has never really addressed my main argument, which is this: in the context of the passage, the Catechism is using terram = earth as it’s used in Gen 1:10, namely, to designate “dry land”, rather than the entire globe, and it can only mean this precisely because in this Catechism passage the terram is contrasted with the “air” and “water”.

While mundus can mean “universe”, it can also just mean “world”, e.g., “Euntes in mundum universum prædicate Evangelium omni creaturæ,” “Go ye into the whole world and preach the gospel to every creature” (Mark 16:15).  The immediate context shows that the Catechism is using the word “earth” (terram) here to mean “land”, as distinct from the “air” and the “water”, and is using the word “world” (mundus) to mean the whole globe.  (This echoes the wording of Gen 1:10, “And God called the dry land [aridam], Earth [terram]”.)  Thus, when the Catechism speaks of the earth as being “rooted in its own foundation”, it means that the land is fixed in place with relation to the water, not in relation to the cosmos.  I argue that this is the only reasonable exegesis of this passage because of the wording of the last phrase, “He next not only clothed and adorned [the terram] with trees and every variety of plant and flower, but filled it, as He had already filled the air and water, with innumerable kinds of living creatures” (my emphasis.)

The passage clearly distinguishes “air” = aëra and “water” = aqua from terram.  As such, I argue, the terram cannot be the entire globe because it makes no sense to say that the entire globe is something distinct from the atmosphere and the oceans.  But it makes perfect sense to say that the “dry land” = terram is something distinct from the atmosphere and oceans.  This is precisely how the word terram is used in Gen 1:9-10 – “And God said, ‘Let the waters under the heavens be gathered together into one place, and let the dry land [arida in the Vulgate] appear.’ And it was so.  God called the dry land Earth [terram in the Vulgate], and the waters that were gathered together he called Seas.”  Here it is the “dry land”, the aridam, that is then called terram, “earth”.  It cannot be the entire globe, because it is something distinct from the air and the waters.

As many times as Sungenis has interacted with me on this text, he has never once engaged this specific argument.

Sungenis’s only counter-argument in written replies to me has been to insist that because this Catechism says that the terram was placed in the “midst” of the mundus (world), this must indicate that the terram was placed in the exact center of the mundus and therefore refers to the Earth being placed in the exact center of the universe.  But this doesn’t follow of necessity.  The Catechism, in this section, is drawing from the language of Genesis 1:9-10.  The earth was entirely covered in water, and Gen 1:9 says that God gathered the waters in one place and the dry land (terram) appeared.  So, it is reasonable for the Catechism to say that the land was placed in the midst (in media) of the world.  That no more implies that the land has to be at the exact center of the world than my saying “I vacationed in the midst of the mountains” has to mean that I was at the mountains’ exact center.  Sungenis’s argument does not in any way nullify the most crucial point, which he has never addressed, namely that this Catechism clearly distinguishes the terram from the “air” and the “water”.  The only way that can be true is if terram here means “dry land”, not “entire Earth”.

This passage doesn’t represent a description of the globe’s place in the universe, and it has no application to geocentrism.  I should note that the English version of this Catechism by J. A. McHugh and C. J. Callan, which appears in many places on the Internet (e.g. here), has the heading “Formation of the Universe” over this section.  This is a mistranslation of the Latin, De terrae creatione, which is correctly translated “Creation of the earth” (as in, e.g. the translation by J. Donovan; link).  It is perhaps this mistranslation—along with an insufficient attention to context—that has misled certain modern geocentrists to read this as if it addressed the earth’s place in the universe.

Sungenis calls this passage “One of the clearest official and authoritative statements from the Catholic Church defending the doctrine of geocentrism” (ibid., p. 340.)  If this is the clearest statement he’s got, he’s in trouble, because it’s his burden to show that his interpretation is right and mine is wrong.  Otherwise, he’s effectively admitted that he’s building his case on a very flimsy foundation.  I’m certain Sungenis will deploy a lot of words trying to “answer” this.  I’m equally confident he will not be able to exclude the view presented here, and therefore his claim that this passage is a clear, “dogmatic assertion of geocentrism” is untenable.

In light of these facts, I think it is no accident that nobody during the seventeenth-century controversy over strict Geocentrism – not the extremely astute Cardinal Bellarmine, nor the theological consultants to the Congregation of the Index in 1616, nor Fr. Melchior Inchofer, the theological consultant to the Holy Office in 1633 – pointed to the Roman Catechism as an authoritative text proclaiming strict Geocentrism to be any part of the Catholic faith.  Remember that the Council of Trent and its Catechism were almost as close to them in time as Vatican II is to us.  Surely if, as Sungenis claims, this is “a dogmatic assertion of geocentrism”, this would have been the first place they would have looked and it would have been the very centerpiece in the original Galileo controversy.  And yet this source was never brought up by the Congregation of the Index or the Congregation of the Holy Office or, as far as I have seen, by any individual before, during, or immediately after the Galileo affair.  Somehow, nobody thought to deploy what Sungenis now insists is a central argument in favor of Geocentrism being a doctrine of Faith.  The silence is deafening.  That silence can be easily explained by realizing that the language of the Catechism is generic and therefore does not establish Geocentrism doctrinally.  In short, Sungenis’s reading of the text is a forced interpretation, supported neither by the text itself nor by the main players in the Galileo case.

So now that it has been shown to him yet a fifth time that the Roman Catechism does not teach strict Geocentrism, I hope that Sungenis will cease using this dead argument.  Given his history, however, I won’t hold my breath.

Posted in Credibility, Magisterium, Science | Comments Off on For the Fifth Time – The Roman Catechism Does Not Teach Geocentrism