Einstein magician

How Much of Modern Physics is a Fraud?

© Phil Holland and Raeto West 1998, 1999, 2000, 2001, 2013, 2023


    In July 2023 I added lost audios of Phil and me talking. More than ten hours.
"Casting  false pearls before real swine" - anonymous lecturer on his students     v. 22 July 2023
  1. Atomic Bomb: proof of the correctness of modern physics?   [E-mails with R Kiehn]
  2. ‘Superfluid Helium’: a Lucrative Fraud?   [2.1 Outline | 2.2 Open University Dissected]
  3. Is the Speed of Light a Limit?   May 2023: with emphasis on my main point, which people still do not understand!   [E-mails with M Leigh]
  4. Is quantum theory credible?
  5. Heisenberg's absurd probability error
  6. How much trust can be put in particle detection techniques?   [6.1 | 6.2 | 6.3 | 6.4 | 6.5]
  7. Do particle accelerators give useful results?
  8. What is wrong with relativity?   [8.1 Introduction by RW | 8.2 G B Brown's paper | Endnotes]
  9. Big Bang? [Includes link to Kurt Johmann piece]
  10. Failures in Weather Modelling
  11. Ineffectual Fightbacks: The Façade of Physics [Wallace | Hitchens | Atkins | Greenfield | Hawking | Penrose | UCL ]
  12. 'Higgs boson' - Another Time-wasting money-maker   June 2013
  13. Short Note on Dimensions


  Back to main site   Big-Lies Home Page



1. Atom Bomb: Proof of the correctness of modern physics? Or merely empiricism?

[Back to Start]

Note added April 2012: at the time I wrote this, long before 2012, I had no idea that nuclear explosions were just another fraud, and no idea the frauds were arranged by Jews and adopted by sheeple, including most 'whites'.   On all this, browse this conserved forum from March 2012:   Nuke-Lies.org and note that must of this section 1 is therefore wrong!

You may like to download my long video—more than 200 minutes—Lord of the Nukes.  Open this link in a new tab, then right-click on the three dots to download it.  I recommend it.  Three voices, including mine.
- Rae West.



In the popular mind, there is a firm link between Einstein and the mushroom cloud, encouraged by mediocre education, media, and science writers. For example, I remember being assured by a biology experimenter that ‘e=mc²’ is “common sense”. Although there are a few opposing voices, for example C P Snow’s essay on Einstein, which explicitly stated there was no connection, most people think that what they’re told is physics—vague speculation, impenetrable mathematics, paradox—led to the atom bomb.

The thesis of this piece is that, in fact, the invention of the atomic bomb was almost entirely empirical. Fairly simple new concepts of the nucleus, electrons, neutrons, atomic weights etc. sufficed. Specifically, ‘e=mc²’, quantum ideas, uncertainty in measurement and the more elaborate mathematics had no effect on the discoveries leading to the invention; these discoveries each came as a complete surprise. The link with ‘modern physics’ is a myth. If Einstein had never lived, atomic weapons could have been developed exactly as they were.

[Note on words: because scientific understanding has (so far) been incomplete, there’s no firm line between empiricism, which is something like trial-and-error, and science. Empiricism means something that works, even if it’s not understood. A lot of technology has a high empirical content. Consider metal smelting:—before the discovery of oxygen, oxides, and so on, metals were made by recipe: you mix reddish ore with charcoal and make the mixture hot, and out comes iron. Or consider electricity generation:—this looks much more scientific, far more than (say) a windmill, but arguably is just as purely empirical: Faraday found that a metal thing moved in a magnetic field gives an electric current—nobody knows why—and this is what a generator does. Technology may be scientific, or it may be trial-and-error, or a mixture: thus crystallography theory is mostly scientific, while flight is mostly experimental technology, and metallurgy and weather forecasting are a mixture. I haven’t attempted here to define these terms precisely. The point is, important discoveries can be made by pure chance.]


What follows is confined to fission, as in the atom bomb, not the ‘hydrogen bomb’. Fusion came later, and in any case depended on the chance discovery of fission. The facts about fusion (if it exists) are largely censored, as is probably reasonable in view of the dangers. I’ve listed below, in approximate sequence, most of the key discoveries which led to the atom bomb. The sources are mainly popular—Ronald Clark’s The Greatest Power on Earth (1980), George Gamow’s little book The Atom and its Nucleus (1961), Thomas Powers’ Heisenberg’s War (1994), H T Pledge's Science since 1600 (1939, 1966) and many others including Fermi’s autobiographical writings. However, some evidence is Phil Holland’s, drawn from his long experience of nuclear power.

  1. Discovery of the first particles, electrons, in 1897 by Thomson. Since these don’t travel far through air, vacuum technology of pumps and airtight vessels had to have been invented; so this discovery could not have happened before the end of the 19th century. As with ‘Becquerel rays’ (a photographic plate or plates happened to be fogged) this was pure accident. The effect of a magnet on the rays was noticed, leading to the division between positive and negative (and, later, neutral) particles. But no theory accounted for these discoveries, although a link was made with electricity and charged ions.
  2. Discovery of radium by the Curies. In 1903 radium was found to give off 100 Calories per gramme per hour. 1g was estimated to give 1M Cals before decaying. This was completely unexpected, and incidentally allowed guesses as to the age of the earth to be extended enormously back in time. It was not called ‘nuclear energy’—this was before the ‘nucleus’ had been found. Nor I think was there any link with e=mc squared, which was only popularised fifteen or so years later. Sommerfeld seems to have popularised the idea of a ‘different order of magnitude’ of energy being ‘locked up’ within ‘the atom’.
  3. Rutherford’s suggestion, about 1910, that the atom must have a nucleus, concentrated in a tiny proportion of its space, was made when it was found that only about one positively charged alpha particle in many thousands was deflected through a large angle when passing through gold foil. (This was followed by years of puzzlement as philosophers tried to grasp the idea of matter which was mostly space). Rutherford also discovered the splitting of the nitrogen nucleus with alpha particles. He was ‘.. completely astonished..’
  4. In 1913 and 1914, H G J Moseley is credited with establishing that the number of positive charges in the nucleus is the 'atomic number', which gave a firm foundation for arranging elements in the periodic table. He seems to have used X-ray spectroscopy with crystals, which has a theoretical basis (Bragg's law of diffraction) built on straightforward wave theory, for which Bragg became well-known. Nothing in Moseley's work, so far as I can tell, had any content based on 'modern physics.' He was killed in 1915.
  5. Discovery of isotopes (the word means ‘the same place’—meaning the same place in the periodic table, insofar as this existed at the time) by the mass spectrograph, mainly the work of Aston, who for example used chlorine. The technique works by separating fast-moving ionised atoms or molecules of the sample into a sort of spectrum, the heavier ones being more difficult to deflect. All this was completely empirical.
  6. Discussion well into the 1930s was of the nucleus being made of hydrogen nuclei and helium nuclei (as neutrons were not yet discovered).
  7. Joliot, with Marie Curie’s daughter, used polonium with beryllium, presumably, again, in a purely empirical way, and found the combination gave what came to be called neutrons. Chadwick in 1932 formally announced his discovery of the neutron. This was important, because, being neutral, these particles could penetrate the nucleus more easily. Possibly Chadwick expected this discovery, since the fact that isotopes exist makes things like neutrons an obvious possibility—since they allow the nucleus to be made heavier without changing its charge.
  8. Proton bombardment in 1930-2 was supposedly encouraged by Gamow’s calculations re waves [Clark]—which made the nucleus appear not quite so charged as was thought, because it might be made of waves in some sense. A famous experiment by Rutherford and others was interpreted as lithium capturing a proton and splitting. The calculations perhaps led to the experiment being tried—one of the few examples of the influence of 'modern physics'. However, this experiment seems to have had little importance, since neutron penetration of the nucleus turned out to be important.
  9. The discovery of fission in uranium was purely by accident. Fermi, working through elements methodically to see what happened when they were ‘bombarded’ with neutrons, expected to make new isotopes, but in 1934 was puzzled by his results with uranium, and probably dismissed what he found as a contaminant. Only in 1939 did Hahn & Strassmann identify barium (and krypton?). Then Lise Meitner and Frisch provided the ‘liquid drop’ model of fission of nucleus into two parts. [Powers]
  10. Szilard realised that a heavy nucleus must emit surplus neutrons when it splits; H G Wells's chain reaction idea, based on the ideas of Frederick Soddy (The Interpretation of Radium, 1907, revised later as The Interpretation of the Atom), in The World Set Free (1914), became a possibility [Clark]. Again, this was empirical—it was found that elements with high atomic numbers have proportionally more neutrons than low ones. Nobody had any idea why. But, clearly, if a heavy element split, there would be surplus neutrons.
  11. Uranium-235 isotope fission was proved to happen by experiment; it was guessed, and proved, that U235 was the portion of uranium which was most liable to fission. Nobody knew (or knows now) why it differed from U238, except perhaps in the sense that an odd number was expected to behave differently from an even number.
  12. 1939: Bohr and Wheeler at Princeton realised fast free neutrons were produced during fission. In 1939 Joliot, and Fermi, showed ‘two or more’ free neutrons came out with each fission of U235. This encouraged speculation about a possible chain reaction. But, again, this was a purely experimental result.
  13. Plutonium, a new element of mass 239, was discovered in a cyclotron; again, purely by chance. In 1940 it was suggested it might be fissionable.
  14. Fermi discovered entirely by chance that neutrons could be controlled: the difference between a marble bench and a wooden bench suggested that ‘light atoms, comparable in size to the neutron’ [sic; Gamow] were best to slow down neutrons. Hence the use of graphite. [PH. There was a similar incident in which Fermi decided for no particular reason to try a block of paraffin wax.]
  15. The critical mass (the amount varies with shape and surroundings) had to be determined. Nobody had much idea what it was. In 1940 Frisch and Peierls calculated (wrongly) the critical mass. Various other wrong values were obtained. The actual values were found empirically by many experiments over many years, ‘some of which led to unexpected criticality incidents. I know of one incident at Windscale.. some incidents in USA did a lot more radiation damage.’ [PH]. When in 1941 plutonium 239 was found to be even more fissionable, another project was started to separate this in the US. [Later, in 1946, came the well-known incident in which Slotin received a fatal dose during a criticality excursion and separated the masses with his bare hands.] Another incident (p. 167 in Clark) describes a man simply leaning over pieces of U235, causing them to approach the danger point.
  16. Fermi worked on the atomic ‘pile’ with graphite to slow neutrons down—so they didn’t just move fast out of the equipment—and cadmium as a control absorber [found empirically to absorb neutrons! Nobody knew why; possibly because there are many isotopes of cadmium]. In Chicago in December 1942, the pile was found to get hot. This was ‘the boiler’, not yet ‘the bomb’.
  17. The separation of U235, again, was an empirical engineering problem. Uranium hexafluoride, for use in gas separation, is corrosive and the problems were considerable. Even then, the theory of gases was wrong and separation occurred the other way round from what was expected with some isotopes. [PH]
  18. Before the first bomb test, in 1945, there were doubts about ‘igniting’ the atmosphere, or the hydrogen in water, suggesting, what is perhaps obvious enough, that there was considerable doubt as to the processes at work. Even some calculations as to explosive yield were wildly wrong.
Conclusions: E-mails exchanged with R. Kiehn.
This piece (above) provoked a reply by a man who spent much of his life working on atomic power and weapons, and who says e=mc² was important in developing nuclear weapons. Judge for yourself:- click here (Short - about 10K) to read the full exchange of e-mails. Watch him evade the issue. Phil Holland comments: “I enjoyed this correspondence—it is the usual support for e=mc²: (1) Don't try to prove it but suggest it must be true. (2) Don't quote experimental results but hint that some experiment must have proved it.”


[Note: I owe Theo Theocharis the impulse to burrow into this subject.]
[Back to Start]

2. ‘Superfluid Helium’: a Lucrative Fraud?

Q: Why is 'superfluid helium' an important issue?
A: Because it is one of the cornerstones of quantum theory. It is one of the few phenomena supposed to show quantum effects on the macro scale, i.e. visible in fairly ordinary circumstances. Consequently, if 'superfluid helium' turns out to have been a blunder, a large part of the rationale for quantum theory is destroyed.

        2.1 Historical Outline and Brief Explanation of Probable Truth.
        2.2 Deconstruction of the Concept: an Open University Programme Dissected

2.1 Historical Outline and Brief Explanation of Probable Truth.



Following nineteenth century investigations into the thermodynamics of gases, it became clear that compressed gases, allowed to cool, would fall in temperature when expanded into a vacuum. The technology of compression and evacuation was available towards the end of the nineteenth century. So in the twentieth century ordinary atmospheric gases were liquefied and sometimes solidified, starting with carbon dioxide and working downward through gases with ever-lower boiling-points. The domestic fridge, and the availability of liquid oxygen and nitrogen in cylinders, were two of the outcomes. This work was associated with Dewar and others: Kelvin seems to have originated the idea of absolute zero, the lowest possible temperature at which, according to the kinetic theory, all atomic movement ceased. Ever more elaborate equipment, with heat exchangers and other refinements, was devised to lower temperatures towards this limiting point.

An interesting anomaly was discovered when helium (a very light gas - the second element in the periodic table) was cooled. I haven't checked who the discoverer is generally said to be, though it's amusing to note that Kapitza was awarded a Stalin Prize for this. There seems some evidence that the discovery was treated as so baffling that the observation was suppressed for years: the Open University TV programme discussed below says ".. it [in this case, very low temperature gradient, believed to be zero] was first observed in 1908; but it was nearly 25 years before physicists dared publish an explanation based on infinite thermal conductivity."

Almost every fairly simple chemical has a melting and boiling point (complicated ones tend to fall apart), and naturally it was thought that helium, after being cooled until it liquefied, would turn into a solid on further cooling. However, what happened was that liquid helium, unmistakably a clear substance which anyone would regard as a liquid - it would swirl around its container, for example - when cooled further became very fluid, and continued to swirl. It didn't look solid - it didn't form crystals or a lump. In this way the fact, or, as we'll argue here, the legend, of 'superfluid helium' was born.

It's not surprising such a legend was invented: 'superfluid helium' looks liquid, and is much clearer than water, for example, presumably because the individual particles have a lower refractive index than the complex slightly-charged molecules of water. It was also much more obviously fluid than water in the sense of keeping its level.

The fun began when the fluid was discovered to have strange properties. For example, when put in a porous container, presumably unglazed porcelain or something similar, the substance was found to drip through. Normal liquids have far too much surface tension for this to happen; so it must be a 'superfluid'! Another property was supposed to be the ability to flow uphill: in a test-tube-like container, the stuff was seen to slowly disappear, and to drip slowly from the outside of the bottom of the tube. Another superfluid property! It's important to realise that this substance has been investigated since the 1920s; it is still an academic industry; there are still laboratories, for example at Lancaster, England, specialising in it.

There's a theory to explain this, leaning on quantum mechanics. Thus in John Gribbin's Q is for Quantum (1998) we find: 'Superfluidity: The way liquid helium flows without friction at very low temperatures. This is purely a quantum phenomenon. It happens because at very low temperatures the atoms of helium in the superfluid behave like a boson gas.. They all occupy the same energy level, and can be described in terms of a single wave function moving effortlessly as a single unit. ..' Sadly, the entry under Bose-Einstein condensate to which we're referred isn't entirely helpful: 'A group of bosons which are all at the same quantum state, and behave like a single entity. In 1995 physicists .. in Boulder, Colorado etc. ..' and a 'boson' is 'A particle which obeys Bose-Einstein statistics. ..'

'Superfluidity' is a cornerstone of quantum theory: P.W. Atkins's Physical Chemistry says for example ‘.. the Helium atoms are so light that they vibrate with a large amplitude motion even at very low temperatures, and the solid simply shakes itself apart..’ [See below].

For the first time ever, we can now present the alternative view. It is on record that Phil Holland wrote, each time the editorship of a journal changed, to the new editor, asking whether they would please print his letter on this subject. Each time he was turned down. Remember you heard what follows here first:

The point is that helium is an 'inert' gas, one in which the atoms are unreactive, like neon and argon. This is explained in current theory by the electron shells of each atom being full, so that each atom cannot achieve greater stability by sharing electrons with other atoms. Whether this theory is correct or not, helium is certainly inert in the traditional sense, a substance very rarely found in chemical combinations. Assuming that helium behaves like everything else, liquid helium, when cooled, would turn into solid helium: however, being, presumably, monatomic, the particles would all remain separate, making, not a crystal or conventional solid, but a fine powder—in fact, the finest powder in the universe. [Strictly, the isotope helium 3 presumably makes a slightly finer powder.] So, inside a porous pot, of course the individual particles can find their way down by gravity, and appear to flow through the pot. No mystery at all! And the supposed creep up the inside of a vessel [Rollin film]—which it was actually hoped to use to power little wheels, in a sort of perpetual motion waterwheel—may be just a misunderstanding of sublimation: some of the finely divided atoms sublime directly into gas, which recondenses on the sides of the vessel, and outside it, giving the impression of creep up the sides and of moving over the top rim and down the outside.

Because ‘superfluid helium’ is solid helium in the form of a very fine amorphous powder.
[Back to Start]
 

2.2 Deconstruction of 'Superfluid Helium': An Example from the Open University.



‘S272 Superflow’ Open University TV programme —dated 1986 but still being shown ten years later, as part of their stock physics course. Material below in quotes is more or less verbatim.

This film shows the low temperature lab at St Andrews University, Scotland; Jack F. Allen, a Canadian (retired from the chair of physics), demonstrated equipment, including the set-up he used to make films of 'superfluid helium'. Some introductory matter was provided by Shelagh Ross; John Walter (middle-aged, bearded) provided the official theory, introducing phase diagrams, graphics with 'fermions' and 'bosons' and so on.

Shelagh Ross, at the end, said “.. Many of the things below 4 degrees K.. seem quite contrary to our intuitions about how matter ought to behave.. really strange place.. quantum mechanics which manifests on a macro scale...”. It's amusing to contrast the woman's role with the man's, who explained, or rather quoted, the official explanatory theory.
    All this is part of the ideology of  'superfluid' helium: it's mysterious, and incomprehensible except to an elite familiar with 'Cooper pairs' of electrons and so on. It's an important part of the entire construction of quantum theory.

To present the alternative view as simply as possible, I've listed in sequence all the supposedly surprising things about 'superfluid helium' taken from this Open University programme:-
  1. “It remains liquid down to absolute zero.. This is quite unique to helium.. it needs at least 25 atmospheres before it solidifies..” Actually, the finely-divided powder, misinterpreted as a liquid, is in the same state down to absolute zero; it's already a solid! The point about pressure is that any powder under enough compression will appear to be solid: think of coffee packaged in vacuum-sealed bags, where the one-atmosphere external air pressure makes it seem 'solid'. The point about ‘at least’ 25 atmospheres pressure is that the dividing line from the supposed solid is vague, so of course there's no precise figure for the pressure needed. For years physicists have puzzled over the way inert gases behave when frozen, without realising the explanation.

  2. “There's a complete absence of bubbles of vapour [when superfluid].. evaporation is only from the surface. The reason is.. [it is] incapable of maintaining a temperature gradient.. it has more or less infinite thermal conductivity.” The substance in fact consists of tiny solid particles, like a fluidised bed—well known to be a very efficient heat exchanger. This is why any thermal gradient rapidly disappears. Evaporation takes place only from the top, unless there's forced heating, in which case the 'fountain effect' occurs (below).

  3. “.. jeweller's rouge plug.. the gaps are probably a few hundred atomic diameters. It is impermeable for liquid at room temperature.. Impervious to liquid helium I. At He2 [i.e. 'superfluidity'], immediately the plug starts to leak.. slowly at first, then faster.. the liquid just drains straight out at a constant rate.. quite different to other liquids, where the flow rate depends on the driving pressure.” Again, the explanation is simple. Powders can find their way through gaps which liquids cannot. (For example, a sieve of 50 microns will not allow water through—molecular weight 18—but will allow polythene powder (molecular weight 1000+).) The situation is similar to an egg-timer, where the sand falls through at a rate dependent on the aperture, not on the amount of sand above. Hence, the rate of flow is constant, “quite different to other liquids.” Presumably the speeding-up is due to the fine particles filling gaps and establishing routes through the 'superleak'. (A small numerical sketch of this liquid-versus-powder contrast appears after the list and figure below.)

  4. “.. the temperature of the liquid below the superleak [i.e. jewellers rouge plug] cools, while what's above it warms up.” It's difficult to comment on this, since the method of temperature measurement isn't given—though the programme otherwise assumes that pressure is exactly correlated with temperature. But no doubt some explanation in terms of greater mobility of colder particles applies.

  5. The ‘Fountain Effect’. This is credited to Prof. Allen in the film. A smallish upright glass cylinder has a small electric coil sealed into it; this is lowered into the liquid helium in its container (of Monax glass). When the electricity is turned on—in small amounts—the helium surface rises in the tube. Or, with sufficient heat, helium spurts out in a jet, its shape depending on the way the top of the tube is shaped. The correct explanation seems to be along the lines that a small amount of heat increases the fluidisation of the small particles, creating the appearance of reduced density in the 'superfluid'. More heat causes sublimation, with a great increase in volume [a gas atom occupying some 12,000 times the volume it occupies in the solid], which forces powder to be ejected.

  6. [Two items on superconductivity, rather than superfluidity, omitted here, leading to:]

  7. “Lo and behold, the magnet [placed on a disk of tin, which is regarded as a superconductor] levitates.. a little higher with each burst of pumping.. [i.e. as the temperature is slightly reduced]” The usual explanation is that tin becomes superconducting at the low temperature of 'superfluid helium'; when this happens, the metal excludes the magnetic flux, and the magnet levitates.
        The question here is whether magnetism has anything to do with the effect. Would it happen if the tin were replaced by (say) glass? Or if the magnet were replaced by a non-magnet? As late as 1998 the O.U. said they had no intention of testing any of these possibilities.
        But another property of fine powders explains this effect as well as superconductivity and magnetic fields do. The effect is segregation in powders by size (and other characteristics), rather than by density alone as in liquids. The best demonstration is a steel ball put at the bottom of a beaker of tiny polythene spheres. Astonishingly, if the beaker is tapped a few times, the ball rises to the top. You can get a similar effect with a marble in sugar, or simply by shaking an instant coffee jar to see the larger particles rise. [See the illustration below—a bit of kitchen table physics to drive the point home]. The point is that when helium turns to a finely-divided solid, the fluidised atoms can have the same effect, and 'levitate' objects within them.


[Drawing showing separation by size of particle, not by density]
Is superconductivity needed to explain the ‘levitation’ of a magnet? This simple kitchen table experiment illustrates our possible explanation for ‘levitation’, with superfluid helium as a monatomic solid.
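
As mentioned in item 3 above, here is a short Python sketch of the liquid-versus-powder contrast in outflow through a small aperture. It is an illustration under assumptions only: the liquid is modelled by Torricelli's rule (outflow speed proportional to the square root of the head), the powder by the egg-timer behaviour of a roughly constant discharge, and every dimension and rate below is invented for the example.

    import math

    # Illustration only: an ideal liquid draining under gravity (Torricelli:
    # outflow speed ~ sqrt(2*g*h), so the rate falls as the level falls) versus
    # a granular material, whose discharge through a small aperture is roughly
    # independent of the height of material above it (egg-timer behaviour).
    # All dimensions and rates are invented for this sketch.
    G = 9.81              # m/s^2
    HOLE_AREA = 1e-5      # m^2, the aperture
    POWDER_RATE = 2e-5    # m^3/s, assumed constant granular discharge

    def liquid_outflow(head_m):
        """Volume flow rate of an ideal liquid under a head of head_m metres."""
        return HOLE_AREA * math.sqrt(2 * G * head_m)

    for head in (0.4, 0.2, 0.1, 0.05):
        print(f"head {head:4.2f} m : liquid {liquid_outflow(head) * 1e3:.4f} L/s"
              f" | powder {POWDER_RATE * 1e3:.4f} L/s (constant)")

The liquid's rate falls away as the level drops, while the powder's rate stays the same whatever the depth of material above the hole; that constancy is the behaviour claimed above for the 'superleak'.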

Other phenomena supposedly specific to superfluid helium appear in the text-books. Thus P.W. Atkins's Physical Chemistry states of helium-3 that ‘.. the entropy of the liquid is lower than the solid, and melting is an exothermic process’. The correct explanation appears to be that the latent heat of sublimation of the fine solid is less than the latent heat of boiling.

Remember you heard it here first!

Sept 2000: I received some rather feeble e-mails from a person who had perhaps better remain anonymous, saying that Helium 3 (i.e. the unusual and exceptionally light isotope of helium) doesn't show superfluidity. This is what Feynman says. So I must be wrong! In fact, of course, one would expect a light isotope to have a lower freezing point than the heavier equivalent. The temperature presumably was not reduced enough to make Helium-3 freeze into the 'superfluid' form.
[Back to Start]

3. Is the Speed of Light a Limit?

Q: Why is the supposed limiting speed of light considered an important issue?
A: Because the whole theory of relativity rests on this idea; relativity was an attempt to reconcile the idea of light speed as a limit with what were regarded in the 19th century as the laws of physics. Consequently, if the idea proves false, the whole theory of relativity is put in doubt.

Everyone knows, or at least believes, that nothing can travel faster than light. How did this idea originate? The point to grasp is that ordinary physical objects are subject to air resistance even at relatively slow velocities, for example those of bullets. It was found at the end of the nineteenth century—when vacuum technology was good enough—that submicroscopic particles travelled much faster than any normal man-made objects. So experiments on very high speeds were necessarily confined to electrons and other emissions believed to be small particles.

The question then is:
how are such particles accelerated? In some way, energy has to be put into them; and in practice this is done electromagnetically, typically by electromagnets, as in a cyclotron. This is the only controlled way to make the things really move.

So we have a situation in which (say) an electron is made to accelerate by applying a field, which repels or attracts it depending on the sign of its charge. When such experiments were carried out, relying on estimates of the mass of the electron derived from Millikan's oil drop experiment, it was found that, as more energy was put in, the electron's speed increased, but not by as much as would be expected. So it must be getting heavier! And moreover the limit was the speed of light!

Sadly, there appears to be a defect in reasoning here, pointed out by Phil Holland. [Though I don't know if this argument is original with him—RW.] The point is that electromagnetic radiation itself has a velocity, namely the speed of light in the medium it's travelling in. Since energy can be transferred to an electron, presumably, only when a wave of energy catches up with it, obviously it's impossible for the electron to ever reach the speed of the wave influencing it.

If you can't see this immediately, consider these everyday models of the situation, which I've tried to make as varied as possible to get the point across.
  1. Imagine a wave-making machine in a swimming-pool, and a floating toy boat which is pushed along by the waves. However big the waves are, the boat won't go faster than these waves.
  2. Or consider a boy throwing stones, every second, at the same speed, at a floating piece of wood; however heavy the stones, the piece of wood will never travel faster than the stones. (Or at any rate once it travels faster than the stones, the stones can't catch up with it). But nobody would imagine the piece of wood must be getting heavier as it picks up speed, because it moves less when it's hit.
  3. Or imagine a children's roundabout, the sort turned by hand. If an adult regularly swings his arm to turn the roundabout, then as the roundabout approaches the rate at which the arm is swinging, it can never speed up beyond the velocity of the arm.


It seems physicists, looking at electrons and measuring their speeds as they vary with energy, ignore this simple fact. They interpret the result as the particle getting heavier, with limiting speed that of light, without realising that the limit is imposed by their equipment. They assume in one part of their minds that electromagnetism travels at infinite speed.
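
The argument can be put into numbers with a toy model. This is a sketch under stated assumptions, not a derivation: the pushes are taken to travel at a fixed speed c, and each push is taken to add velocity in proportion to the relative speed between push and particle, as with stones striking a drifting piece of wood; all the constants are invented for the illustration (Python).

    # Toy model of the 'pushing wave' argument above.  Assumptions of the
    # sketch only: pushes travel at a fixed speed C, and each push adds a
    # velocity increment proportional to the relative speed (C - v), like
    # stones striking a drifting block of wood.
    C = 1.0                 # speed of the pushing wave, in units of c
    K = 0.05                # fraction of the relative speed gained per push
    ENERGY_PER_PUSH = 1.0   # nominal energy fed in by the machine per push

    v = 0.0
    energy_in = 0.0
    for push in range(1, 201):
        v += K * (C - v)            # the gain shrinks as v approaches C
        energy_in += ENERGY_PER_PUSH
        if push % 40 == 0:
            # An experimenter who insists on E = (1/2) m v^2 infers an
            # ever-growing 'mass', though nothing about the particle changed.
            apparent_mass = 2.0 * energy_in / v ** 2
            print(f"push {push:3d}: v/c = {v:.4f}, apparent mass = {apparent_mass:.0f}")

In this toy model the speed never exceeds that of the pushes, simply because a push can no longer catch the particle; meanwhile the energy fed in keeps rising while the speed barely changes, which is exactly the observation usually read as 'the mass increases'.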

E-mails exchanged with Matthew Leigh.
Rather typical e-mails on this topic. Click here (Short - about 13K) to read the full exchange of e-mails.


[Back to Start]




4. Is quantum theory credible?

Quantum theory originated in the attempt to explain the photoelectric effect, in which some substances, e.g. selenium, gave a small electric current when exposed to light, as came to be used in a photographic exposure meter. The theory was extended to try to explain the spectra of elements—the well-defined peaks, which appear as lines in spectra. The difficulty we'd like to raise here is in connection with the spectrum of hydrogen. This has many lines; as the frequency increases, the lines become more closely spaced, until eventually they fuse into a mass of close lines. How is it possible for the single electron of hydrogen to move into so many different layers as to generate all these lines? (The purely empirical formula which fits these lines is sketched below.)
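
For reference, the positions of those lines were fitted purely empirically (Balmer 1885, Rydberg 1888), long before any quantum model existed: 1/wavelength = R(1/n1^2 - 1/n2^2). The short Python sketch below simply evaluates that empirical formula for the visible (Balmer) series and shows the lines crowding together towards a limit, as described above.

    # Empirical Rydberg formula for hydrogen lines: 1/lambda = R*(1/n1^2 - 1/n2^2).
    # Evaluated here for the visible (Balmer) series, n1 = 2.
    RYDBERG = 1.0968e7   # per metre, for hydrogen

    def balmer_wavelength_nm(n2, n1=2):
        inverse_wavelength = RYDBERG * (1 / n1 ** 2 - 1 / n2 ** 2)
        return 1e9 / inverse_wavelength

    for n2 in (3, 4, 5, 6, 10, 50, 1000):
        print(f"n2 = {n2:4d}: {balmer_wavelength_nm(n2):7.1f} nm")
    # The wavelengths converge on about 364.7 nm (the 'series limit'), which is
    # why the lines appear to fuse into a continuous band at one end.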
[Back to Start]




5. Heisenberg's absurd probability error

“There is, then, a definite probability of finding the photons either in the one or the other part of the divided psi-wave packet. Now, if an experiment finds the photon in the reflected part, say, then the probability of finding it in the other part immediately becomes zero. The experiment at the position of the reflected part thus exerts a kind of action, a ‘reduction of the wave packet’, at the distant point occupied by the transmitted part. And one sees that this action is propagated with a velocity greater than that of light.” [1933, in Chicago]
Thus Heisenberg, on light and half-silvered mirrors. He evidently had little grasp of probability, considering in effect that, after the event, the probability of the numbers on a winning lottery ticket must have been 1 before the event. He continued, in the lecture, to comment on the speed of this retrospective action being greater than that of light! Possibly also he was puzzled by the philosophical idea of determinacy. At any rate, this mistake has continued to be propagated with amazing fidelity; for example it occurs in Oppenheimer's The Flying Trapeze, and in 1996 I heard a lecture by Prof Hiley at Birkbeck, London, in the department of physics (since closed) which included references to Schrödinger's cat, a similar probabilistic error. Basil Hiley on Einstein, Relativity, and Strange Properties of Space-Time is my not very professional recording, showing the sort of thing.
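
A minimal Python illustration of the probabilistic point, under the obvious assumptions of the sketch (a 50/50 split at the mirror, photons treated simply as counters): updating a probability after an observation is a change in knowledge, not a physical action at the distant arm.

    import random

    # Sketch of the probability point above.  The 50/50 mirror and the
    # photon-as-counter treatment are assumptions of this illustration only.
    trials = 100_000
    reflected = sum(random.random() < 0.5 for _ in range(trials))

    print(f"fraction reflected (before any observation): {reflected / trials:.3f}")
    # Once a detector on the reflected arm has fired for a given photon, the
    # conditional probability of finding that same photon in the transmitted
    # arm is zero by definition, just as a lottery ticket's probability
    # 'becomes' 1 or 0 the moment the draw is known.  Nothing travels anywhere.
    print("P(transmitted | observed reflected) = 0")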
[Back to Start]




6. How much trust can be put in particle detection techniques?

[1950s bubble chamber photo with annotations; scale etc. not given]

6.1 Detection techniques

    We won't look in detail at the Geiger counter, or scintillation techniques. The techniques we'll concentrate on are those assumed to make the paths of elementary particles visible.
    Textbook methods are the cloud chamber (earliest), the bubble chamber (invented in 1950s), the spark chamber, and (newest) techniques which use computer processing to generate the images.
  1. Cloud chambers were invented by Charles Wilson, who liked hiking in Scotland and tried, at least according to the story, to make a misty atmosphere. The idea is that a supersaturated, dust-free atmosphere, like a supersaturated solution, needs only a tiny stimulus to precipitate.
        A typical demonstration was in Prof Frank Close's 1993 Royal Institution Xmas lecture. After assuring the audience that ".. radioactivity is perfectly natural.. radioactivity is all around us.. we evolved with it", his assistant Bryson Gore demonstrated a cloud chamber, on a trolley; in the centre of the clear box we see little jets around a central bit of stuff; very attractive demonstration, with several little tracks appearing about every second, travelling at a leisurely rate easily within visual capacity to follow, then dispersing away.
  2. Bubble chambers rely on local evaporation of liquids such as liquid hydrogen, which are temporarily kept at low pressure. To quote 'Encarta', because the density of the liquid is much higher than that of air, more interactions take place in a bubble chamber than in a cloud chamber.
  3. Spark chambers use a principle a bit like early radio. 'Encarta' says: 'the spark chamber, evolved in the 1950s. In this device, many parallel plates are kept at a high voltage in a suitable gas atmosphere. An ionizing particle passing between the plates breaks down the gas, forming sparks that delineate its path.'
  4. Modern methods: to quote Frank Close, "..particles shoot in at each end.. inside, matter and anti-matter emerge.. huge magnets bend the particles.. this allows the speed and charge to be known.. designing detectors is a challenge in its own right.. ephemeral particles.. information goes to a computer.. which turns them into visible trails.."

6.2 Universal Assumption that Detectors Show Exact Paths of Particles

This assumption—that the tracks indicate exactly where the particles go—certainly seems universal; I don't recall ever seeing doubt expressed about it. It's an unconscious assumption which seems very plausible; after all, you can see tracks form, and obviously something tiny must have caused them.
    To take some quotations which happen to be on my computer, J B S Haldane believed this. So did Russell, in his ABC of Atoms: ".. water vapour.. each electron [sic] collecting a little cloud which can be made visible with a powerful microscope.."
    Thomas Kuhn wrote: 'We do not see electrons, but rather their tracks or else bubbles of vapor in a cloud chamber... But the position of the man who has learned about these instruments and had much exemplary experience with them is very different,... viewing a cloud chamber he sees (here literally) not droplets but the tracks of electrons, alpha particles, and so on. Those tracks are if you will, criteria that he interprets as indices of the presence of the corresponding particles, .." Kuhn is doing his best to be sceptical, but it doesn't occur to him that the paths outlined by droplets could have anything artefactual about them.
    And the author of an Encarta article: ".. cloud chamber.. where water droplets condensed on the ions produced by the particles during their passage."

6.3 An Arithmetical Problem with this Assumption

    Let's try to quantify the situation. According to Avogadro's hypothesis, there are 6x10^23 atoms per atomic weight in grammes. To take a concrete example, consider a smoke alarm with 1 microgram of Americium—a literally microscopic amount. (The demonstrations usually involve e.g. radioactive lead in larger amounts).
    Putting the atomic weight of Americium at about 240, then 240 grammes contain 6x10^23 atoms. A microgramme therefore contains 2.5x10^15 atoms. The half-life is supposed to be about 500 years; therefore a microgram of completely new and completely un-alloyed Americium undergoes roughly 1.2x10^15 decays over its first 500 years. Averaged over one second this is of the order of 100,000 decays. Even allowing for particles which may be absorbed or go in the wrong direction to be detected, and impurities in the metal, and slowing down at later periods in the half-life, this is a large number. Is it certain that the tracks do follow the exact paths of particles?
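
A minimal Python sketch of the arithmetic, using the round figures quoted above (atomic weight 240, half-life 500 years; the accepted half-life of Americium-241 is nearer 432 years, which would only raise the rate):

    import math

    # Arithmetic sketch of the estimate above.  The 500-year half-life and the
    # atomic weight of 240 are the round figures used in the text.
    AVOGADRO = 6.022e23          # atoms per mole
    ATOMIC_WEIGHT = 240.0        # grammes per mole
    SAMPLE_GRAMMES = 1e-6        # one microgram
    HALF_LIFE_YEARS = 500.0
    SECONDS_PER_YEAR = 3.156e7

    atoms = AVOGADRO * SAMPLE_GRAMMES / ATOMIC_WEIGHT          # about 2.5e15
    decay_constant = math.log(2) / (HALF_LIFE_YEARS * SECONDS_PER_YEAR)
    activity = decay_constant * atoms                          # decays per second

    print(f"atoms in one microgram      : {atoms:.2e}")
    print(f"decays per second (fresh)   : {activity:.0f}")     # about 110,000
    print(f"decays over first half-life : {atoms / 2:.2e}")    # about 1.2e15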

6.4 Physical Problems with this Assumption

  1. It's supposed to be true that one single particle (say, an electron) can generate a track of water globules many centimetres long. Granted that the atmosphere is supersaturated, nevertheless in terms of scale this seems like a fish swimming the Atlantic and changing the state of every molecule on the way.
  2. If it is true that the apparatus is so sensitive, given that there are supposed to be fantastic numbers of free electrons in the earth, as well as ultraviolet and other radiations, how come the apparatus is so comparatively stable?
  3. If it's true that only very few molecules of water (in a cloud chamber) become ionised, how come there's a line? Wouldn't it be much more likely that there'd be a dotted line, with an enormous distance between the dots?
  4. How can a single charged particle ionise the enormous numbers of molecules in its path as it passes?
    I'd suggest these pieces of equipment work in a different way from what might be assumed at first sight: the nearest analogy I can think of in everyday life is lightning, where charges accumulate over quite long periods, after which a path of ions forms along which charges can be conducted. In the same way, bombardment of the cloud or liquid or plates generates an ionised volume which eventually flips, forming a line. The shape of the line seems much more likely to be due to the way the molecules of the substance behave when changing state under radiation than to one single particle travelling through.
    Presumably something similar may be true of thick photographic emulsions which are, or were, used to detect particles. With regard to modern computer-processed images there's the further complication that the image depends on the way the machine is programmed; if the dots are joined in the wrong way, presumably the results are worthless.

6.5 Can Inconsistent and Odd Results be Explained by this Artefactual Error?

Could this be why symmetrical particles aren't found: the properties are mainly in the detector, not in the physical objects supposedly being studied? I don't know. But here are a few quotations:
W. Heisenberg, Physics Today, 29(3), 32(1976). The nature of elementary particles : ...Before this time it was assumed that there were two fundamental kinds of particles, electrons and protons, which, unlike most other particles, were immutable. Therefore their number was fixed and they were referred to as "elementary" particles. Matter was seen as being ultimately constructed of electrons and protons. The experiments of Anderson and Blackett provided definite proof that this hypothesis was wrong. Electrons can be created and annihilated; their number is not constant; they are not "elementary" in the original meaning of the word.... A proton could be obtained from a neutron and a pion, or a hyperon and a kaon, or from two nucleons and one antinucleon, and so on. Could we therefore simply say a proton consists of continuous matter?... This development convincingly suggests the following analogy: Let us compare the so-called "elementary" particles with the stationary states of an atom or a molecule. We may think of these as various states of one single molecule or as the many different molecules of chemistry. One may therefore speak simply of the "spectrum of matter."... [Quoted, I presume correctly, by Bryan Wallace]
Nancy Cartwright's rather unreadable How the Laws of Physics Lie (1983):
'But as elementary particle physicist James Cushing remarks, When one looks at the succession of blatantly ad hoc moves made in QFT [quantum field theory] (negative-energy sea of electrons, discarding of infinite self energies and vacuum polarizations, local gauge invariance, forcing renormalization in gauge theories, spontaneous symmetry breaking, permanently confined quarks, ...) ...'
Allan Franklin's Experiment: Right or Wrong (1990) lists a number of anomalies, and is also rather unreadable—in each case, the authors assume the 'facts' fed them are correct, and unsurprisingly get into tangles. Prof. Frank Close said he's spent 20 years working on quarks; he has a love/hate relationship with them. He added, with staggering unoriginality, "It looks as if god is a mathematician."
[Back to Start]




7. Do particle accelerators give useful results?

The following quotation (I'm grateful to Ivor Catt for drawing this to my attention) was published in May 1972 by Lynn Trainor, Professor of Physics at Toronto; so far as I know he's still there, but I don't know if it still represents his views:
In many fields there are certain things in vogue at a given time. Nearly everything published in high energy physics, for example, is junk. It has nothing to do with reality - it's a whole castle of cards. Yet you are on safe ground if you publish a paper according to the currently accepted style. You will be published, especially if you make some curves and graphs that make it appear that you did some calculations. The fact that it is all a house of cards with very little reality to begin with is somehow ignored.
Something similar has been said [information from Bryan Wallace's The Farce of Physics , on Internet] by Carlo Rubbia, the Nobel prize-winning physicist who ran CERN, to the effect that accelerators generate so much in the way of artefacts that the only way to check is to compare the results of one accelerator with a different one. (I don't know whether the project of constructing a duplicate CERN was ever considered). These quotations suggest that particle accelerators are subject to artefacts in the same way that some biological techniques are, for example electron microscopy. They also suggest that, if true, establishing the truth would be considerably more difficult than for biology, given that there are far fewer particle accelerators than there are electron microscopes in biology labs.
[Back to Start]




8. What is wrong with relativity?

        8.1 Introductory Notes.
        8.2 Paper by G. Burniston Brown.

8.1 Introductory Notes.



What follows is a fairly long, and little-known, paper (about 6000 words) by G Burniston Brown, a physicist who also wrote on the history of science. It was published in 1967 and is reproduced with permission of the Institute of Physics, who were under the impression that correspondence followed until 1969. When I checked this, I found that, in fact, not one single letter had been published in reply. It may seem odd that a paper could be relevant after 30 years or so; in fact, it often happens that a book or paper which is unanswerable, or difficult to answer, never receives any answer; the same thing happened, for example, to Peter Duesberg on AIDS. So I make no apology for including it here. First a few notes:
[Back to Start]

8.2 Paper by G. Burniston Brown.

From the BULLETIN of The Institute of Physics and the Physical Society, pp. 71-77, March 1967
The Institute of Physics headquarters is 76 Portland Place, London W1. Tel 0171 470 4800.
Reproduced here by permission of The Institute of Physics.

What is wrong with relativity? 1


G. BURNISTON BROWN
 
Genuine physicists – that is to say, physicists who make observations and experiments as well as theories – have always felt uneasy about ‘relativity’. As Bridgman said, “if anything physical comes out of mathematics it must have been put in in another form”. The problem was, he said, to find out where the physics got into the theory (Bridgman 1927). This uneasiness was increased when it was clear that distinguished scientists like C. G. Darwin and Paul Langevin could be completely misled. Darwin wrote a fatherly letter to Nature (Darwin 1957) describing the simple way in which he explained ‘relativity’ to his friends: the simplicity, however, was due to the fact that, with the exception of a quoted formula, there was no relativity theory in it at all. Langevin, likewise, gave a supposedly ‘relativistic’ proof of the results of an optical experiment by Sagnac, but as his countryman André Metz said, although “assez élégant”, it was not relativity (Metz 1952). There were other disturbing features: the fact that Einstein never wrote a definitive account of his theory; that his first derivation of the Lorentz transformation equations contained velocities of light of c-v, c+v and (c²-v²)^½, quite contrary to his second postulate that the velocity of light was independent of the motion of the source; and that his first attempt to prove the formula E = m₀c², suggested by Poincaré, was fallacious because he assumed what he wanted to prove, as was shown by Ives (Ives 1952).

It is not surprising, therefore, that genuine physicists were not impressed: they tended to agree with Rutherford. After Wilhelm Wien had tried to impress him with the splendours of relativity, without success, and exclaimed in despair “No Anglo-Saxon can understand relativity!” , Rutherford guffawed and replied “No! they’ve got too much sense!” 2 Let us see how sensible they were.
First of all, a little history. There is no need to repeat the accounts, now given in many textbooks, of the unsuccessful attempts to detect the aether. The simplest hypothesis, namely that the aether did not exist and that we were thus left with action-at-a-distance or ballistic transmission, was held to be unacceptable. Instead, Poincaré preferred to raise this failure to a ‘principle’ – the principle of relativity – saying: “The laws of physical phenomena must be the same for a ‘fixed’ observer as for an observer who has a uniform motion of translation relative to him, so that we have not, and cannot possibly have, any means of discerning whether we are, or are not, carried along by such a motion.” As a result there would perhaps be “a whole new mechanics, where, the inertia increasing with the velocity, the velocity of light would become a limit that could not be exceeded” (Poincaré 1904).

In the next year, 1905, Einstein re-stated Poincaré’s principle of relativity and added the postulate that the velocity of light is independent of the velocity of its source. From the principle and the postulate he derived the Lorentz transformation equations, but in an unsatisfactory way as we have seen. Another curious feature of this now famous paper (Einstein 1905) is the absence of any reference to Poincaré or anyone else: as Max Born says, “It gives you the impression of quite a new venture. But that is, of course, as I have tried to explain, not true” (Born 1956).
In 1906 Planck worked out the ‘new mechanics’ predicted by Poincaré, obtaining the well-known formula

    m = m₀ / √(1 - v²/c²)

and the corresponding expressions for momentum and energy. In the next year he derived and used the mass-energy relation (Planck 1906, 1907).

In 1909, G. N. Lewis drew attention to the formula for the kinetic energy

    E = m₀c² / √(1 - v²/c²) - m₀c²

and suggested that the last term should be interpreted as the energy of the particle at rest (Lewis 1909). Thus gradually arose the formula E = m₀c², suggested without general proof by Poincaré in 1900.
It will be seen that, contrary to popular belief, Einstein played only a minor part in arriving at the main ideas and in the derivation of useful formulae in the restricted, or special, theory of relativity, and Whittaker called it the relativity theory of Poincaré and Lorentz, pointing out that it had its origin in the theory of aether and electrons (Whittaker 1953). A recent careful investigation by Keswani confirms this opinion; he summarizes Poincaré’s contribution as follows:

“As far back as 1895, Poincaré, the innovator, had conjectured that it is impossible to detect absolute motion. In 1900 he introduced ‘The principle of relative motion’ which he later called by the equivalent terms ‘The law of relativity’ and ‘The principle of relativity’ in his book Science and Hypothesis published in 1902. He further asserted in this book that there is no absolute time and that we have no intuition of the ‘simultaneity’ of two ‘events’ [mark the words] occurring at two different places. In a lecture given in 1904, Poincaré reiterated the principle of relativity, described the method of synchronization of clocks with light signals, urged a more satisfactory theory of the electrodynamics of moving bodies based on Lorentz’s ideas and predicted a new mechanics characterized by the rule that the velocity of light cannot be surpassed. This was followed in June 1905 by a mathematical paper entitled ‘Sur la dynamique de l’électron’, in which the connection between relativity (impossibility of detecting absolute motion) and the Lorentz transformation, given by Lorentz a year earlier, was recognized. 3
      In point of fact, therefore, Poincaré was not only the first to enunciate the principle, but he also discovered in Lorentz’s work the necessary mathematical formulation of the principle. All this happened before Einstein’s paper appeared”
(Keswani 1965).
Einstein’s attempt to derive the Lorentz transformation equations from the principle of relativity and the postulate that the velocity of light is independent of that of the source would (if it had not involved a contradiction) have made Lorentz transformations independent of any particular assumption about the construction of matter (as it had not been in Lorentz’s derivation). This feature, of course, was pleasing to the mathematically minded, and Pauli considered it an advance. Einstein said that the Lorentz transformations were “the real basis of the special relativity theory” (Einstein 1935), and this makes it clear that he had converted a theory which, in Lorentz’s hands at any rate, was a physical theory (involving, for instance, contraction of matter when moving with respect to the aether) into something that is not a physical theory in the ordinary sense, but the physical interpretation of a set of algebraic transformations derived from a principle which turns out to be a rule about laws, together with a postulate which is, or could be, just the algebraic expression of a fact – the independence of the velocity of light of that of the source (experiments already done appear to confirm it but more direct evidence is needed). We see, then, that ‘relativity’ is not an ordinary physical theory: it is what Synge calls a “cuckoo process” ; that is to say, Nature’s laws must be found first, and then they can, perhaps, be adapted to comply with the overall ‘principle’.
      “The eggs are laid, not on the bare ground to be hatched in the clear light of Greek logic, but in the nest of another bird, where they are warmed by the body of a foster mother, which, in the case of relativity, is Newton’s physics of the 19th century” (Synge 1956).

The special theory of relativity is therefore founded on two postulates
(a) a law about laws (Poincaré’s principle of relativity)
(b) an algebraic representation of what is, or could be, a fact (velocity of light constant, independent of the velocity of the source)
and its application to the physical universe is
(c) a cuckoo process.
This basis of the theory explains a great deal that has mystified many physicists and engineers. They could not understand how Einstein could sometimes speak as though the aether was superfluous (Einstein 1905) and at other times say “space without aether is unthinkable” (Einstein 1922). This was due, of course, to not starting with physical terms – matter, its motion, and its interactions (force). A physical theory which included radiation would have to start by stating whether an aether, action-at-a-distance, or ballistic transmission of force was being postulated. It explains, also, how mass and inertial force get into the special theory which is founded on a geometrization of uniform velocities, for it is well known that inertial forces do not appear when the velocities are uniform. Formulae which purport to give the relation between measurements in one state of uniform velocity and those made in another state of uniform motion cannot logically throw any light on what happens during the change from one state to the other. This is only possible by using the cuckoo process – assuming Newton’s second law and the conservation of momentum, and then modifying them. It also makes clear how Einstein could call Tolman’s account of theory (Tolman 1934) definitive, and also praise Bergmann’s treatment (Bergmann 1942), when the former author thought length contractions real and in principle observable, whereas the latter seems to have thought it only an appearance.
The fact that Einstein asserted that the Lorentz transformation equations were the basis of the special theory, and these are, of course, purely mathematical, means that, in so far as the theory is considered to have any physical implications, these implications must be the result of the interpretation of mathematical expressions in physical terms. But in this process there can be no guarantee that contradictions will not arise, and, in fact, serious contradictions have arisen which have marred the special theory. Half a century of argumentation has not removed them, and the device of calling them only apparent contradictions (paradoxes) has not succeeded in preventing the special theory of relativity from becoming untenable as a physical theory.
The most outstanding contradiction is what the relativists call the clock paradox. We have two clocks, A and B, exactly similar in every way, moving relatively to one another with uniform velocity along a line joining them. If their own interaction is ignored and they are far removed from other matter, they continue to move with uniform velocity, and so each clock can be considered as being the origin of a set of inertial axes. The Lorentz transformations show that the clock which is treated as moving goes slow. The principle of relativity, however, asserts that, as A and B both provide inertial frames, they are equivalent for the description of Nature, and all mechanical phenomena take the same course of development in each. Referred to A, B goes slow; referred to B, A goes slow. It is not possible for each of two clocks to go slower than the other. There is thus a contradiction between the Lorentz transformations and the principle.

This contradiction can be seen clearly in a diagram which prevents the confusion which arises when the expression ‘as seen from’ is allowed to enter the argument (e.g. ‘the time at B as seen from A’). In figure 1, two long lines of clocks are passing close to one another with uniform velocity V. At a given instant, two clocks opposite one another, A and B, are set to read the same time. All the A series of clocks are then synchronized with A by the method of reflected light signals, suggested by Poincaré and accepted by Einstein and other relativists. In a similar way all the B series of clocks are synchronized with B.

In the upper diagram the A clocks are taken to be at rest and the B clocks to be moving to the right. After a time interval, the clock B has travelled a distance d, say: its reading is then compared with that of the clock C momentarily opposite to it. C, however, has been synchronized with A, so that the comparison is in effect a comparison of B with A. According to the Lorentz transformations, the moving clock B goes slow, and its reading is therefore behind that of C (= A), as shown. In the lower diagram the B clocks are taken to be at rest and the A clocks to be moving to the left. When A has travelled the distance d, its reading is compared with that of clock C', momentarily opposite to it. But, as before, C' has been synchronized with B, so that we have, in effect, another comparison of B with A, and this time A’s clock goes slow, so that B’s reading is in advance of A’s as shown. The two comparisons should yield the same result according to the principle of relativity. It is obvious that they do not.
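[Added numerical sketch – not part of Burniston Brown's paper. A few lines of Python, assuming the standard time-dilation factor 1/γ, simply reproduce the two comparisons just described; the speed and distance are arbitrary illustrative figures.]

import math

c = 299_792_458.0          # velocity of light, m/s
v = 0.6 * c                # relative speed of the two rows of clocks
d = 9.0e11                 # distance travelled, metres (arbitrary)

gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)

# Upper diagram: A clocks at rest, B travels a distance d to the right.
t_C = d / v                # reading of clock C (synchronized with A)
t_B = t_C / gamma          # reading of the 'moving' clock B
print(f"Upper diagram: C reads {t_C:.1f} s, B reads {t_B:.1f} s  (B behind C)")

# Lower diagram: B clocks at rest, A travels the distance d to the left.
t_Cdash = d / v            # reading of clock C' (synchronized with B)
t_A = t_Cdash / gamma      # reading of the 'moving' clock A
print(f"Lower diagram: C' reads {t_Cdash:.1f} s, A reads {t_A:.1f} s  (A behind C')")

Each comparison makes the other row's clock lose by the same factor 1/γ, which is the symmetry discussed in the text.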
A more intriguing instance of this so-called ‘time dilation’ is the well-known ‘twin paradox’, where one of two twins goes for a journey and returns to find himself younger than his brother who remained behind. This case allows more scope for muddled thinking because acceleration can be brought into the discussion. Einstein maintained the greater youthfulness of the travelling twin, and admitted that it contradicts the principle of relativity, saying that acceleration must be the cause (Einstein 1918). In this he has been followed by relativists in a long controversy in many journals, much of which ably sustains the character of earlier speculations which Born describes as “monstrous” (Born 1956).

Surely there are three conclusive reasons why acceleration can have nothing to do with the time dilation calculated:
(i) By taking a sufficiently long journey, the effects of acceleration at the start, turn-round and end could be made negligible compared with the uniform-velocity time dilation, which is proportional to the duration of the journey (illustrated numerically in the sketch after this list).
(ii) If there is no uniform time dilation, and the effect, if any, is due to acceleration, then the use of a formula depending only on the steady velocity and its duration cannot be justified.
(iii) There is, in principle, no need for acceleration. Twin A can get his velocity V before synchronizing his clock with that of twin B as he passes. He need not turn round: he could be passed by C who has a velocity V in the opposite direction, and who adjusts his clock to that of A as he passes. When C later passes B they can compare clock readings. As far as the theoretical experiment is concerned, C's clock can be considered to be A’s clock returning without acceleration since, by hypothesis, all the clocks have the same rate when at rest together and change with motion in the same way independently of direction. 4
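[Added numerical sketch – not part of Burniston Brown's paper. This illustrates point (i) only, assuming the usual 1/γ formula for the uniform-velocity dilation and taking the acceleration phases to last a fixed thirty days each; all figures are arbitrary.]

import math

c = 299_792_458.0
v = 0.8 * c
gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)

t_accel_total = 3 * 30 * 24 * 3600.0    # start, turn-round and end: 30 days each
for years in (1, 10, 100):
    t_cruise = years * 365.25 * 24 * 3600.0
    lag = t_cruise * (1.0 - 1.0 / gamma)    # calculated uniform-velocity time dilation
    print(f"{years:>3}-year journey: uniform-velocity dilation {lag/86400:.0f} days; "
          f"acceleration phases occupy only {t_accel_total/86400:.0f} days in total")

The dilation term grows in proportion to the length of the journey, while whatever the acceleration phases contribute is bounded by their fixed duration.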
One more contradiction, this time in statics, may be mentioned: this is the lever with two equal arms at right angles and pivoted at the corner. It is kept in equilibrium by two equal forces producing equal and opposite couples. According to the Lorentz transformation equations referred to a system moving with respect to the lever system, the couples are no longer equal, so the lever should be seen to rotate, which is, of course, absurd. Tolman tried to overcome this by saying that there was a flow of energy entering one lever arm and passing out through the pivot, just stopping the rotation! Quite apart from the fact that energy is a metrical term and not anything physical (Brown 1965, 1966), there would presumably be some heating in the process, which is not considered. Statics provides insuperable difficulties for the physical interpretation of the Lorentz transformation equations, and this part of mechanics is avoided in the textbooks – in fact, Einstein omits statics in his definition: “The purpose of mechanics is to describe how bodies change their position in space with time” (Einstein 1920, p. 9).
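[Added numerical sketch – not part of Burniston Brown's paper. The arithmetic behind 'the couples are no longer equal', using the textbook transformation rules for forces and lengths under a boost along one arm; the values are illustrative only.]

import math

c = 299_792_458.0
v = 0.6 * c
gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)

L = 1.0     # length of each arm in the lever's rest frame, metres
F = 10.0    # magnitude of each applied force, newtons

# Rest frame: force F along +y at the end of the x-arm, force F along +x at the
# end of the y-arm; the two couples about the pivot are equal and opposite.
couple_1_rest = F * L
couple_2_rest = F * L

# Frame moving along x: the x-arm contracts to L/gamma, the y-arm is unchanged;
# the transverse force becomes F/gamma, the longitudinal force is unchanged.
couple_1_moving = (F / gamma) * (L / gamma)   # = F*L/gamma**2
couple_2_moving = F * L

print(couple_1_rest, couple_2_rest)       # equal
print(couple_1_moving, couple_2_moving)   # unequal; difference = F*L*(v/c)**2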

The three examples which have been dealt with above show clearly that the difficulties are not paradoxes (apparent contradictions) but genuine contradictions which follow inevitably from the principle of relativity and the physical interpretations of the Lorentz transformations. The special theory of relativity is therefore untenable as a physical theory.
Turning now to the general theory of relativity, Einstein tells us in his autobiography (Einstein 1959) how, at the age of 12, he began to doubt Bible stories. “The consequence was a positively fanatic (orgy of) free-thinking coupled with the impression that youth is intentionally being deceived by the State through lies; it was a crushing impression. Suspicion against every kind of authority grew out of this experience, a sceptical attitude towards the convictions which were alive in any specific social environment – an attitude which has never again left me.”

This sceptical attitude towards prevailing convictions possibly explains why Einstein was not satisfied with the relativity theory of Poincaré and Lorentz which stopped short of including accelerating systems, thus still leaving something apparently ‘absolute’. He still seemed to be affected by this word ‘absolute’, but it is difficult to see what it could mean except with regard either to the Sensorium of God (Newton) or an aether pervading all space. He pushed on, therefore, with an attempt to show that natural laws must be expressed by equations which are covariant under a group of continuous coordinate transformations. This group, which Einstein took as the algebraic expression of a general principle of relativity, included, as a subgroup, the Lorentz transformations which Poincaré had taken as the algebraic expression of the restricted principle.
To overcome the physical difficulty that acceleration produces forces (inertial) whereas uniform velocity does not, Einstein was led to assert that these forces cannot be distinguished from ordinary gravitational force, and are therefore not an absolute test of acceleration. This contention Einstein called the principle of equivalence. In trying to support this contention, he imagined a large closed chest which was first at rest on the surface of a large body like the Earth, and then later removed to a great distance from other matter where it was pulled by a rope until its acceleration was g. No experiment made inside could, he claimed, detect the difference in the two cases. But in this he was mistaken, as I have shown (Brown 1960). In the first case, if two simple pendulums were suspended with their threads a foot apart, the threads would not be parallel but point towards the centre of mass of the Earth (or a point somewhat nearer allowing for their mutual attraction). The angle between them would, in principle, be detectable by the Mount Palomar telescope. When accelerated by a rope, the threads would be parallel if it were not for the small mutual attraction. If, now, the threads were moved so as to be further apart, the angle between them would increase in the first case, but in the second case the threads would become more parallel so that the angle would therefore decrease. The principle of equivalence is therefore untenable. It is gratifying to find one theoretician who states that the principle is false (Synge 1960): “In Einstein’s theory there is a gravitational field or there is none, according as the Riemann tensor does or does not vanish. This is an absolute property: it has nothing to do with the observer’s world-line.” The principle of equivalence is made plausible by the use of the expression ‘gravitational field’, overlooking the fact that this is a useful conception but cannot be demonstrated. All we can do is place a test particle at the point in question and measure the force on it. This might be action-at-a-distance. As soon as the term ‘field’ is dropped and we talk about the gravitational force between bodies at rest, we realize that the force is centripetal, whereas the force of inertia is not. This is an important difference obscured by the use of the word ‘field’. Relativists now admit that the principle of equivalence only holds at a point; but then, of course, we have left physics for geometry – experiments cannot be made at a point.
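[Added arithmetic – not part of Burniston Brown's paper. The angle between two plumb-lines a foot apart, both pointing at the centre of the Earth, is simply the separation divided by the Earth's radius; round figures are used.]

import math

separation = 0.3048        # one foot, metres
earth_radius = 6.371e6     # mean radius of the Earth, metres

angle_rad = separation / earth_radius
angle_arcsec = math.degrees(angle_rad) * 3600.0
print(f"{angle_rad:.2e} rad = {angle_arcsec:.3f} arc-seconds")
# About 0.01 arc-second: the order of magnitude the text has in mind when it
# says the angle would, in principle, be detectable by a large telescope.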

This contact with the physical world having gone, we are left in the general theory only with the principle of covariance – that the laws of physics must be expressed in a form independent of the coordinate system – and the mathematical development of this condition, which Einstein carried out with Grossmann and others. Unfortunately, given sufficient ingenuity, almost any law of physics can be expressed in covariant form, so that the principle imposes no necessary restriction on the nature of these laws. The principle is therefore barren, and Einstein had to regard it as merely of heuristic significance (by considering only the simplest laws in accord with it (Einstein 1959, p. 39)). Also the number of problems which can be completely formulated, let alone solved, is extremely small. Some relativists look on it rather as an encumbrance (Fock 1959).
The three consequences stemming from Einstein’s theory of gravitation that are usually brought forward as supporting it are also not impressive. The movement of the perihelion of Mercury was known before and can be explained in various ways (Whittaker 1953). The ‘bending of light’ round the Sun had been suggested before, and the much advertised confirmation in the eclipse of 1919 involved assuming Einstein’s law of ‘bending’ to obtain the ‘scale constants’, with the help of which the results that were supposed to prove it were then derived. The deflections of stars that moved transversely or in the opposite direction to that predicted were omitted. The mean deviation and its direction varied from plate to plate during the eclipse, suggesting refraction in a turbulent diffuse ‘atmosphere’. Nevertheless a mean value was obtained “in exact accord with the requirements of the Einstein theory” (Lick Observatory Bulletin 1922, No. 346). Later attempts have given different values. This must be one of the most extraordinary self-deceptions in the whole history of science (see Poor 1930). The gravitational red shift of light now appears to be confirmed, but this follows from Mach’s hypothesis 5 that inertial forces are due to interaction with the distant bodies of the Universe 6, and does not require ‘relativity’, as the author has shown (Brown 1955).

We see, then, that the general theory is based physically on a fallacy (principle of equivalence) and on a principle that is barren (covariance) and which is also, mathematically, almost intractable. Genuine physicists may well agree with Fock that it is not a major contribution to physics.

The whole subject of ‘relativity’ is extremely interesting looked at from the point of view of scientific method. Western science long ago involved the rejection of the view that Nature’s ways can be found by just taking thought, or by the adoption of principles based on reason alone, or beauty, or simplicity. The idea of perfection in the heavens, as we know, held back astronomy with epicycles and caused sunspots to be explained away.
Newtonian method consists in first establishing the facts by careful observation and experiment, and then proceeding to attempt an explanation of them in physical terms – matter, motion and force – then from such a theory to derive, by logic and mathematics, various principles (e.g. conservation of momentum) as well as further consequences which can be put to experimental test. Natural science is concerned with causes: logic and mathematics are only tools. Newton made this clear when, after giving the first satisfactory explanation of the tides, he said: “Thus I have explained the causes of the motion of the . . . Sea. Now it is fit to subjoin something concerning the quantity of those motions.” But relativists now assert that “The dignity of pure theoretical speculation has been rehabilitated . . . based on a process of the mind with its own justification” (shades of Descartes!). Relativity “has saved science from narrow experimentalism, it has emphasized the part which beauty and simplicity must play in the formulation of theories of the physical world” (Mercier 1955).

The disadvantages of systems of theoretical speculation based on a process of the mind with its own justification – well understood by Bacon and the early founders of the Royal Society – are very evident in ‘relativity’. Uncomfortable facts have to be forced into the system by specious reasoning, as in the case of the right-angled lever mentioned above, or ignored altogether, as in the case of Römer’s one-way determination of the velocity of light. This method is not mentioned in books by relativists although it is a famous determination, being the first historically, and known to Newton in his later years. Römer’s method is worth examining in detail because it nullifies Einstein’s contention, repeated by Eddington and others, that we only know the out-and-return velocity, not the one-way velocity, so that the time of arrival of a signal at a distant point is never known from observation but can only be a convention.
Römer measured the intervals between successive eclipses of one of Jupiter’s satellites. These time periods increased when the Earth was moving away from Jupiter, and decreased again when the Earth was moving towards it. A knowledge of the size of the Earth’s orbit, and thus the distances moved during the eclipses, allowed a calculation of the velocity of light which had only travelled one way. Modern photometric observations at Harvard University yield an excellent value which remains constant with the changing direction as Jupiter moves round in its orbit.
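[Added numerical sketch – not part of Burniston Brown's paper. The figures are round, illustrative ones, not Römer's or Harvard's: a delay of roughly 1000 seconds accumulates in the eclipse timings while the Earth recedes across the diameter of its orbit, and that distance divided by that delay gives the one-way velocity.]

orbit_diameter = 2 * 1.496e11     # two astronomical units, metres
accumulated_delay = 998.0         # seconds of accumulated lateness (illustrative)
one_way_velocity = orbit_diameter / accumulated_delay
print(f"one-way velocity of light ≈ {one_way_velocity:.3e} m/s")

# Per-eclipse shift while the Earth recedes at its orbital speed:
earth_speed = 2.98e4              # m/s
eclipse_interval = 42.5 * 3600.0  # roughly 42.5 hours between eclipses of Io, seconds
shift = earth_speed * eclipse_interval / one_way_velocity
print(f"each successive eclipse arrives about {shift:.0f} s later than the last")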
      Now the timing of the eclipses on the Earth’s surface is not open to criticism, since measurement of time is defined for observers on the Earth. But relativists might say that the assumption of uniform rotation of the satellite, based on Newton’s laws, and the use of astronomical triangulation applied to moving bodies (which is necessary to determine the Earth’s orbit) both involve knowledge of the one-way velocity of light, and that this is constant – which is just what we are trying to determine.
      Although the high accuracy of astronomical observations and the general agreement with theory over long periods of time is a sufficient proof that the velocity of light does not fluctuate, the best way to avoid this criticism is to notice that the experiment could be carried out in principle (we are only concerned with the relativist assertion that it is impossible in principle) on the surface of the Earth. The periodic eclipses could be replaced by a flashing beacon B (figure 2) controlled so as to flash at whatever are defined to be equal intervals, and this equality can be judged from the distant point A with one clock. The observer is carried round on the edge of a circular rotating table (corresponding to the Earth’s motion in its orbit), and makes a mark on the stationary surrounding rim every time he sees a flash (this could be done automatically).


These marks get further apart between E1 and E2, corresponding to the increase in eclipse time periods in the Jupiter case. The clock A, at rest with respect to the beacon, the centre of the table and the stationary rim, makes marks on the table edge, the distances between which can be used as a test of uniform rotation, and also serve to convert the distances between the stationary rim marks into time intervals. The distance E1E3 is measured with the metre bar. The one-way velocity is calculated, as in the astronomical case, from the data. In this way we can avoid using the properties of light in order to determine the length E1E3, and there is only one clock. With modern techniques this method might possibly be used to test the effect of movement of the source on the velocity of light.
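[Added simulation sketch – not part of Burniston Brown's paper. The geometry is simplified to an observer receding from the beacon in a straight line, which is what the stretch E1–E2 amounts to locally; the point is only that one clock plus a measured distance recovers the one-way velocity.]

c_true = 299_792_458.0    # value used only to generate the simulated observations
u = 50.0                  # observer's speed away from the beacon, m/s
T = 1.0                   # interval between flashes, seconds
x0 = 1000.0               # initial distance from the beacon, metres
N = 100                   # number of flashes observed

# Arrival time of flash n at the receding observer: solve c*(t - n*T) = x0 + u*t.
arrivals = [(n * T * c_true + x0) / (c_true - u) for n in range(N + 1)]

distance_moved = u * (arrivals[-1] - arrivals[0])     # measured with the metre bar
extra_delay = (arrivals[-1] - arrivals[0]) - N * T    # read off the marks, one clock only
print(f"recovered one-way velocity ≈ {distance_moved / extra_delay:.3e} m/s")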
Belief in principles because of their mathematical elegance, or cogency, leads also to a distortion of physics, its purpose and its history. Most of the discussion about observers and their imagined measurements is remote from anything that physicists do. Having to call force a fiction – which it cannot be, by definition, since we have a special set of deep-seated nerves for detecting it – and asserting that it can be removed by a mere transformation of axes, are distortions of physics which are common. Even distortion of mathematics occurs in Einstein’s later attempt to derive the Lorentz transformation equations from the principle of relativity together with the algebraic expression of the constancy of the velocity of light. In this proof he is forced, as Essen has pointed out (Essen 1962), to use the same symbol for two different quantities, and later he derives a dimensionally impossible equation by putting a length equal to unity (Einstein 1920). 7 It is difficult not to repeat Keswani’s comments on Einstein’s first (1905) proof: “The steps taken have a curiously compensating effect and apparently the demonstration was driven towards the result” (Keswani 1965).

The distortion of the purpose of physics has already been exemplified by Einstein’s definition of mechanics which leaves out statics. “The object of physics is to predict the results of given experiments concerning stated events”, says McCrea (McCrea 1952), but the business of physicists is with “the causes of sensible effects”, as Newton said – causes, not just rules and predictions. The distortions of the history of physics are too common to be worth detailed mention: many papers and broadcast lectures begin with a travesty of Newton’s views.

Einstein’s own part in the development of ‘relativity’ is particularly instructive from the point of view of scientific method. The early adolescent suspicion of all authority, and consequently of anything called ‘absolute’ – resulting in the desire to prove all frames of reference equal – led to proofs having to be forced and contrary facts ignored. As so often happens in other spheres, some frames turned out to be more equal than others (inertial frames). The attempt to extend ‘equality’ to accelerated axes led to invoking a principle (equivalence) whose application gradually shrank to a mathematical point, and to a postulate (covariance) which turned out to be barren. His final years devoted to trying to obtain a unitary mathematical treatment of gravitation and electrodynamics ended in failure. It is difficult to think of a more convincing demonstration of the dire effects of abandoning Newtonian method.
What then remains of the theory? The Lorentz transformations have proved not to be the necessary formulation of the principle of relativity, as Poincaré believed, since physical interpretations of them have contradicted the principle. When applied, perspicaciously, to Newtonian physics they produce formulae which are certainly superior to the ‘classical’ ones at high speeds. But the Lorentz transformation equations were first derived and used by Voigt in 1887 in connection with elasticity, and later, again, by Lorentz in connection with the electron theory of matter, and do not depend on ‘relativity’ for their derivation. 8 The placing of the Lorentz term (1 − v²/c²)^½ under m, the mass, following Poincaré’s prediction of a velocity c that cannot be exceeded by matter, has been supported by experiments with accelerators (relative to the machine). Once again, however, interpretations of algebra are not a substitute for genuine physical theory: the interaction of a particle with distant matter (force of inertia), tending to infinity when v approaches c, is not the only physical interpretation; it may be that interaction with nearby matter (the accelerating force) may tend to zero when v approaches c. This hypothesis, for example, avoids the supposition of an enormous amount of matter in the Universe for which there is no evidence (Brown 1955, 1957, 1958, 1963). The general theory has been well summed up by Fock: “It is . . . incorrect to call Einstein’s theory of gravitation a ‘General theory of relativity’ all the more since ‘The general principle of relativity’ is impossible under any physical condition.”

“The general covariance of equations has quite a different meaning from the physical principle of relativity; it is merely a formal property of the equations which allows one to write them down without prejudging the question of what coordinate system to use. The solution of equations written in generally covariant form involves four arbitrary functions; but the indeterminacy arising from this has no fundamental importance and does not express any kind of ‘general relativity’. From a practical point of view such an indeterminacy even represents something of a disadvantage” (Fock 1959).

It is still too soon to attempt a final judgment on ‘relativity’, but certainly we can say that ‘relativity’ has not provided convincing justification for adopting a new scientific method which involves “processes of the mind which are their own justification”, and rejecting Newton’s continual plea for more experiments as “narrow experimentalism”. Nor does it justify substituting the derivation of physical theory, by interpretation of an algebraic representation of a postulated general principle, for the derivation of general principles from the algebraic representation of a physical theory.



References

BERGMANN, P. G., 1942, Introduction to the Theory of Relativity (New York: Prentice-Hall), preface.
BORN, M., 1956, Physics in My Generation (London: Pergamon Press), p. 193.
BRIDGMAN, P. W., 1927, The Logic of Modern Physics (New York: Macmillan), p. 169.
BROWN, G. B., 1943, Nature, Lond., 151, 85-6.
– 1955, Proc. Phys. Soc. B, 68, 672-8.
– 1956, Sci. Progr., 44, 619-34.
– 1958, Sci. Progr., 46, 15-29.
– 1960, Amer. J. Phys., 28, 475-83.
– 1963, Contemporary Physics, 5.
– 1965, I.P.P.S. Bulletin, 16, 319.
– 1966, I.P.P.S. Bulletin, 17, 22.
CAPILDEO, R., 1961, Proc. Camb. Phil. Soc., 57, 321-9.
DARWIN, C. G., 1957, Nature, Lond., 180, 976.
EINSTEIN, A., 1905, Ann. Phys., Lpz., 17, 891 (English translation in The Principle of Relativity (New York: Dover, 1922)).
– 1918, Naturwissenschaften, 48, 697-703.
– 1920, Relativity: The Special and the General Theory (London: Methuen), appendix I.
– 1922, Sidelights on Relativity (London: Methuen), p. 23.
– 1935, Bull. Amer. Math. Soc., 41, 223-30.
– 1959, Albert Einstein: Philosopher-Scientist (New York: Harper & Brothers).
ESSEN, L., 1962, Proc. Roy. Soc. A, 270, 312-4.
FOCK, V., 1959, Theory of Space, Time and Gravitation (London: Pergamon Press), p. 401.
IVES, H. E., 1952, J. Opt. Soc. Amer., 42, 540-3.
KESWANI, G. H., 1965, Brit. J. Phil. Sci., 15, 286-306; 16, 19-32.
– 1966, Brit. J. Phil. Sci., 17, 234-6.
LEWIS, G. N., 1909, Phil. Mag., 28, 517-27.
MCCREA, W., 1952, Nature, Lond., 179, 909.
MERCIER, A., 1955, Nature, Lond., 175, 919.
METZ, A., 1952, J. Phys. Radium, 13, 232.
PLANCK, M., 1906, Verh. dtsch. phys. Ges., 8, 136-41.
– 1907, S.B. preuss. Akad. Wiss., 13, 542-70.
POINCARÉ, H., 1904, Bull. Sci. Math., 28, 302 (English translation: Monist, 1905, 15, 1).
POOR, C. L., 1930, J. Opt. Soc. Amer., 20, 173-211.
SYNGE, J. L., 1956, Relativity: The Special Theory (Amsterdam: North-Holland), p. 2.
– 1960, Relativity: The General Theory (Amsterdam: North-Holland), p. 2.
TOLMAN, R. C., 1934, Relativity, Thermodynamics and Cosmology (Oxford: Oxford University Press), preface.
WHITTAKER, SIR E. T., 1953, History of the Theories of Aether and Electricity, Vol. II (Glasgow, London: Nelson).
ENDNOTES

1. The substance of lectures given to the Royal Institute of Philosophy, University College Chemical and Physical Society, The Institute of Science Technicians, etc. [Back]
2. Quoted from the Rutherford Memorial Lecture to the Physical Society 1954 by P. M. S. Blackett (Year Book of the Physical Society 1955). [Back]
3. Gravitational waves with velocity c and the velocity addition formula should be included (Keswani 1966). [Back]
4. I am indebted to Lord Halsbury for pointing this out to me. [Back]
5. Einstein and others call it Mach’s principle, but it is not a principle – it is a physical hypothesis. [Back]
6. Newton considered this possibility (see Brown 1943). [Back]
7. Relativists seem to be rather shaky on dimensions: has not Eddington told us that the mass of the Sun is 1.47 km, and have we not been favoured with a revelation from Ireland that 1° centigrade = 3.804 × 10^-76 seconds (Synge 1960)? [Back]
8. They can be derived without the principle (see Capildeo 1961). [Back]
The following item (April 1967) is the only reference to G Burniston Brown's article printed in the BULLETIN, at least until the end of 1969, when I stopped checking. So, as far as I know, the article by Hermann Bondi, author of Relativity and Common Sense, 1964, never appeared, nor was any correspondence ever printed by this learned society – RW.
[Back to Start]

Letters to the Editor
WHAT IS WRONG WITH RELATIVITY?
Following on the article ‘What is wrong with relativity?’ which was published in the March issue of the Bulletin we have received a large number of letters expressing views often differing from those of Dr. Burniston Brown and in many cases expounding them clearly and in some detail. It is regretted that there are too many letters to publish, particularly as by the very nature of the subject there is a great deal of overlap among them.
      It is hoped, however, to publish at a later date an article by Professor [Hermann] Bondi.
          ED.

[Back to Start]




9. Big Bang?

[Back to Start]




10. Failures in Weather Modelling



Atmospheric and cloud physics provides an interesting example of the way in which failure to grasp simple principles has corrupted an entire subject. My piece How the Global Warming Scare was Generated gives an inside account of computer modelling carried out with both inadequate models and inadequate computers.
      Part of the problem lies with the difficulty of making observations. As examples: (i) temperature measurement is not as simple as it sounds; (ii) tornadoes are difficult to observe—I'm told, for example, that a Japanese researcher spent a decade or so in the US studying tornadoes but never actually saw one, because they disappear by the time the researcher has driven the many miles to wherever one is reported; (iii) assumptions tend to be built into the instruments used, for example that radar measures wind speed, when in fact the wind may move differently from the water droplets that are actually being measured.
      We're in a position in which it is seriously suggested that a butterfly's wings may lead to a hurricane, and in which weather forecasting is believed to be scientific even though important events—flooding, exceptionally high winds—have shown the methods to be an embarrassing failure.

      But the most important error has been the failure to understand clouds. Unfortunately I'm not at present in a position where I'm willing to present the truth. I have however contacted a few interested parties (e.g. the British Met Office) and if anything happens may say so here. But don't hold your breath. Read on:-

Putting Weather Forecasting on a Scientific Basis...

... for the first time.

Fri 17 November, 2000: My email to them, opening the issue.

Mon 27 November 2000: Email reply received from Alan Thorpe, the Director of Climate Research, Bracknell, saying that all research is published openly. (In fact, the Met Office is a branch of the Ministry of Defence.)
      (Of course, it's usually a political mistake for officials to reply in this way; their usual strategy is to keep quiet. Whether Thorpe is aware of this, I don't know).

Thur 30 Nov 2000: My E-mail reply making the Met Office a modest offer (including future percentages) with proviso that, if it turns out they've investigated my ideas already, then nothing is payable.

No reply as yet: next phase probably to repeat the offer in writing, the point being that, if ultimately it's sold to (say) Germany, they can't claim they were never approached. (contd. below)
Our new logo is ambitious and energetic. It emphasises movement and energy, implying we are entering a period of change. Dark blue is a colour which has been found to inspire trust, but it is also generally associated with the weather, sky and the sea. Green is generally associated with the natural environment. The waves are inspired by the multi-layered 'geological' formation of the Earth's surface; they could also represent wind, hills, valleys, or mathematical functions. The waves could also be likened to the rivers, or the sea, representing our heritage - the Met Office was formed in 1854 by Admiral FitzRoy to provide sea-current forecasts to mariners.
    Different people will see different things in the design of our new logo, but we think it sums up the main goals we're working towards in our vision for the future of the Met Office
- Quoted in Private Eye's Pseuds Corner #1021
Cautionary e-mail: My experience with academia -- I got a PhD -- taught me a few things. For example, my one paper, about the main result of my dissertation research (on the obscure subject of precise interprocedural dataflow analysis), spent two years in the pipeline at a good journal, but was ultimately rejected in highly abusive terms by a new assistant editor, who, as my co-author professor eventually admitted to me, had it out for him, because of an incident roughly 10 years previous in which my professor had written a letter of complaint against that guy. So, in other words, as I know from my own experience, politics rules in academia, and the actual situation is very far from the ideal of a noble place of learning where any good idea can get a fair hearing. ...
      In other words, in my opinion, and based on my own limited experience and understanding, you are wasting your time, because it does not matter whether your ideas are correct or not; it does not matter whether your ideas represent an advance in weather knowledge or not; all that matters, is that you are an outsider to the weather establishment, and anything that could possibly go to you, the outsider, would come from them, the insiders, which means that whatever you have will be dismissed a priori, out of hand.
      I don't mean to be a downer, but I consider any attempt to play with the establishment a complete waste of time.
...
    The approach you seem to be trying to take, which is to try to get payment from the establishment for an idea, is, based on everything I know, impossible. Note that even if you had a patent on it, and the idea actually had some commercial value, it would still be very difficult to have it pay off (many people get patents, very few make money off their patents). ...
Kurt Johmann
Click if you'd like to see the Met Office Website; their last published figures give annual turnover of about £150M, about the same as their expenditure; it's unclear to me whether the revenues received by the Met Office are voluntary, or a transfer payment from other government departments, or an unavoidable governmental imposition, like a tax. Their assets are valued at £150M. Information on amounts spent on computing, on what they call research, and on information collection by satellites, ships etc, is hard to find. From the point of view of this article, the important thing is that they are willing to spend £150M on their feeble systems, but unprepared to spend anything investigating new ideas. In view of their failures to predict extreme (and expensive) conditions such as floods, high winds, and droughts, there's a case for considering their work to be negligent and/or fraudulent.

A few years ago, a hurricane in North America, plus the long-tail risks of an asbestos action, nearly bankrupted Lloyd's of London, the insurers. Better weather forecasting could be extremely valuable.
30 Nov 2000 (copy to Ivor Catt, independent monitor)
Dear Alan Thorpe,
Thanks for your e-mail. I've given some thought to this problem and, since it's up to me to take action, I make the following proposal:
[1] I have an approach to mathematical modelling of weather which in my judgment will put weather forecasting on a proper scientific basis for the first time. In principal [sic; oops!] this is of enormous scientific and commercial importance.
[2] This initial approach is being made to the British Met Office; this offer to the Met Office stands for three months.
[3] I will not divulge any details to any party except on terms similar to the following:
[4] [**Omitted bit - RW **] Experience since the time of Harrison suggests a proactive approach to government organisations is essential. Accordingly I propose:]
[5] [**Omitted bit-RW**] ... legally become mine, UNLESS the Met Office can demonstrate that they have considered the ideas before, and found one or more flaws in them.
[6] Investigation and implementation of the ideas will be the responsibility of the Met Office and its numerous experts, though I'm happy and willing to provide input.
[7] The Met Office will undertake to take out patents or such other measures as will keep control of the methodology to the Met Office and its agents.
[8] There is, I suppose inevitably, some legal input into the above. The Met Office will pay for legal advice which I have to take.
Reply from Professor Paul J Mason FRS, Chief Scientist [I presume of the Meteorological Office]. Dated 9 Jan 2001.

Dear Rae [sic]
You wrote to one of my directors, Alan Thorpe, concerning a research proposal and seeking a financial return for that support. Alan replied to you explaining our open basic science policy. I am now writing to confirm that we are not willing to enter any financial arrangement with you.
Yours sincerely
Paul Mason
Chief Scientist


Dear Paul Mason,
Thanks for your letter dated 9 January 2001. I'm disappointed that you haven't bothered to read my proposal, since I'm not seeking backing for a research proposal; the ideas already exist. So I enclose a repeat of the communication I sent to Thorpe. I would suggest that, in view of your responsibility for hundreds of millions of pounds of taxpayers' money, and also in view of the not very impressive record of weather forecasting, you consider the proposal seriously. This is quite apart from the fact that presumably you regard yourself as having some obligation to forward scientific progress.
Yours sincerely
Rae West


[Back to Start]



11. Ineffectual Fightbacks: The Façade of Physics

The subtitle was suggested by Bryan G Wallace's short online 1993 book, The Farce Of Physics, which is interesting but not very satisfactory, since it avoids tackling the supposedly deepest technical issues. (I don't know if the piece has been updated or modified, or indeed whether the author is alive 20 years later). Another group of dissidents is, or was, the Natural Philosophy Alliance, NPA. This group, and, as far as I know, every other dissident group, has failed to address any of the issues brought up in nuke-lies.org or the general issue of weapons and their use, so they must count as lightweight, fake opposition. I would not recommend wasting time on them except perhaps to simulate contact with critical thought.

What is the real point of physics? In 2000, and for many years, the main point has been weaponry. This of course is a largely censored subject (some of the biggest frauds must have taken place in this field, though of course serious investigation is almost impossible). To take a typical example of what goes on, we might look at the V-22, a marginal military thing supposedly due to cost $20 billion. (By comparison, the $10 bn, 87 km 'Superconducting Super Collider' ring in Texas, which the particle physics lobby wanted to be funded, was cancelled in the early 1990s).
      Now look at the façade:—
The Façade
[Book covers: Atkins, 1992, The Origin of Space, Time and the Universe; Hawking, 1988, intro by Carl Sagan.]
Atkins (Creation Revisited: The Origin of Space, Time and the Universe) has a rehash of the unproven speculations of other people. He is a chemist, with a commendable interest and excitement in his subject, but seduced into promoting rubbish. As a double whammy, he is married to (or whatever) Susan Greenfield, who promotes the most vacuous biological equivalent. Hawking is well-known (the TV film Hitchens refers to shows rather laughable scenes of his Church of England first wife). Hawking repeats the usual stuff, e.g. the surface of the earth as a supposed analogy to curved space. I have it on the authority of Steve Jones that Hawking wanted to remove the final sentence about 'knowing the mind of God', for which the naive might imagine Hawking had provided evidence, but his publishers insisted on retaining it, correctly, of course, from the point of view of sales. (A subsequent book by Jones also had the word 'God' in the title. Jones, an atheist, said: "God may not do much, but he sells books!")


Here we have Christopher Hitchens. [Broadcast BBC Radio 6th Nov 1999 as a 'Sound of the Century' lecture]. Hitchens teaches at, or at least is paid by, the 'New School for Social Research' in New York—the title alone allows one to guess that it's a late 19th century establishment. He describes it as a 'good school' for graduates, despite the fact that he concedes they don't know much. These are his words:-
      "We live in a time when physics is far more awe-inspiring than any religion, and far more likely to disclose to us..
      ... 'shimmering DNA'.. 'our own constitutive identity', 'if the proper study of man is man'
      ... it's a cliché to say nuclear physics still threatens ourselves with annihilation .. a process of innovation and experiment that was inaugurated largely by humanistic Jewish refugees.
      How does physics pick up the tab?
      .. I'm sure a lot of people have seen that beautiful film made about the life of Stephen Hawking. It has him investigating, in his marvellous way, the possible origin of the universe. The EVENT HORIZON. It must be that if, if you could work back to the origins of the black hole [he means the 'Big Bang'] you'd get to a point which would so to speak be a, well, they call it the event horizon, a lip, over which you'd fall and in you'd go. And you wouldn't have time, alas. But if you did have time, you'd be able to see the past and the future. And Hawking has a colleague who he says if he knew he had a terminal illness, that's the way he'd want to commit suicide. It would be in the attempt to try and get to the event horizon. This is—now compare that to the tripe like the burning bush! [nervous laughter]. The event horizon is a genuinely awe-inspiring thing. And it's obviously not within our compass, we don't have to say that we are masters of the universe, we bloody well know we're not, it's only religion that ever claimed we were ..."

There are many things one could say about Hitchens, who is an entirely amiable English writer, happier with others' words than his own ideas, and with undeniably accurate views on the British 'old Labour' party. Why does he discuss these matters, of which he clearly knows nothing? His absurd Jewish references reveal that he knows which side butters his bread; he hasn't heard of Dictamnus albus; the threat of annihilation is a cliché—but that's all right, we have an 'event horizon'. Hitchens is best known for writing for Vanity Fair, and I notice that Hawking's book has a review from the same magazine; perhaps they have a sideline in reviewing books they don't understand.
      These people are part of the veneer of the façade.

The 1992 TV film about Hawking (with family, friends etc), amid interminable anecdotes about motor neurone disease, included the following, pretty much verbatim, from the computer monotone:-
".. the very small, and cosmology, the very big.. elementary particles there is no theory; all they can do is classify them like in botany.. in cosmology, there is an accepted theory.. Einstein's theory of relativity.. Einstein proved the universe is expanding..
      "What distinguishes past from future is the increase in entropy or disorder in the universe.."
      "Collapse into a singularity.. but in a singularity the laws of physics no longer apply"
      "When the universe started to contract again, would we see the cup gather together and leap back on the table? Would we be able to make a fortune by remembering prices on the stock exchange?"
      "The universe has only two possible destinies; it may continue to expand or it may go into reverse.. into a big crunch.."
      "Einstein said God does not play dice with the universe [sic]. It would seem Einstein was doubly wrong. Not only does he play dice but he throws them where they can't be seen."
      In 1967 an American coined the expression 'black hole' to replace 'gravitationally completely collapsed object'. Hawking thinks if time runs backwards, a singularity will expand into the universe! Hence the 'big bang'.
[Roger Penrose, the mathematician who devised a new form of tiling, is shown wondering about consciousness—"the future influencing the past, only in a small period of time mind you!, but perhaps a fraction of a second, so after death people may become someone else, in the past". Penrose's work unfortunately includes quantum speculations which he thinks may take place in structures of the brain (microtubules) which he believes are shown by electron microscopy. He was challenged on this point, but declined to debate].

[My notes include a 1992 TV programme on 'Stuart'—a young person 'obsessed by the universe'. We see him 'explain' about small black holes, the universe being either infinite or closed but also infinite, gravity travels at the speed of light as gravitons, nothing can travel faster than light or time would go backwards. It's painfully clear he was parroting things.]

University College, London: part of a flyer for Friday evening lectures, 2001, aimed at young people deciding about university courses, and teachers.
    There is some material in the vein of Faraday. But the bulk of the material is of dubious value. Unfortunately young people are trained to be docile; I've never heard any of them question any of this material.





12. Higgs Boson

E-mail communication 27 June 2013:–
... Higgs boson. Briefly it is the caesium nucleus, but it only has a half life of 10 to the minus 22 of a second, so I wonder why anybody should get excited about such a particle with such a short half life.
... as the atom is more complex than the nucleus, the proton is simpler than the nucleus of an atom so all the claims about there being lots of elementary particles being found at CERN is not in accordance with the rest of the matter on Earth.

[Caesium/cesium is a soft metal which melts just above room temperature, analogous to sodium and potassium. Its nucleus is the tiny, very dense, positively-charged centre, with 55 protons and a larger number of neutrons; without electrons, it is unstable, and can exist only for a tiny fraction of a second].




13. Dimensions

Just a short note. A lot of confusion was (and is) introduced by imprecise use of 'dimensions'. The general idea is to fix a location, or object, in a standardised way. If you have a cube, a single point inside it can be specified unambiguously with three measurements. If you're only interested in one point, measurements aren't needed; it's just there. Complications arise in many circumstances. Suppose you have a sphere inside your cube: this can be described with four quantities (the three coordinates of the centre, and the radius), yet the thing itself sits within the 3-D space. If you want to distinguish the inside of the sphere from the outside, a fifth quantity is needed, though this one is discontinuous. 'Space-time' can be regarded as 'four dimensional', or as an ordinary 3-dimensional system taken over successive times; and, as the 'dimension' of 'time' is a completely different kind of measure, it strains the meaning of the construction to call it 'four dimensional'. It's just a pun, or category mistake, confusing dimensions with the number of variables you find convenient to pin something down. For example, a word in the Bible can be fixed by book, chapter, verse and position: 'four dimensional', but not in the ordinary sense.
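As a small illustration of the point about counting variables rather than 'dimensions' (a sketch of my own; the names are merely illustrative), the objects above can be pinned down like this:

from dataclasses import dataclass

@dataclass
class Point:        # a point in the cube: three numbers
    x: float
    y: float
    z: float

@dataclass
class Sphere:       # a sphere in the same space: four numbers
    cx: float
    cy: float
    cz: float
    r: float

def region(p: Point, s: Sphere) -> str:
    """The inside/outside label is a fifth, discontinuous variable, not a spatial dimension."""
    d2 = (p.x - s.cx) ** 2 + (p.y - s.cy) ** 2 + (p.z - s.cz) ** 2
    return "inside" if d2 < s.r ** 2 else "outside"

@dataclass
class Event:        # 'space-time': three lengths and one time -
    x: float        # four variables, but not four of the same kind
    y: float
    z: float
    t: float

print(region(Point(0.1, 0.2, 0.3), Sphere(0.0, 0.0, 0.0, 1.0)))   # prints 'inside'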

[Back to Start]


Click here to e-mail Phil Holland or Rae West

Big-Lies Home Page



Another little joke: I see 'to cast pearls before swine' is an anagram of 'one's labor is perfect waste' (US spelling needed). Perhaps someone can supply an appropriate anagram of my longer version?

OLD NEWS! Physics hoax! There was a spurious physics paper on https://compbio.caltech.edu/~sjs/tew.html. It seems to have been removed, or moved. Don't get excited—it was dull and lacked the flair which a good hoax should, presumably, have. (But then so did Sokal's feeble and overhyped piece).—RW.
[Back to Start]
HTML by Rae West. This revision 99-11-26 (plus comments on the nuclear hoax which I didn't appreciate then). First uploaded 98-08-28. Slight corrections 99-04-18. Browser compatibility improved 99-04-28. Format changes 2000-05-25. Section 5 on particle detection 99-02-01. Atom bomb 99-06-16 (Media links 2000-07-10). Webring expt 2000-07-09. Big Bang, Façade 2000-08-04, 2000-09-20. Light e-mails 2000-10-27. Weather 2000-11-20, 2000-12-10, 2001-02-14. Higgs boson 2013-06-27. A few formatting changes for mobile phones 2016-10-122. A few small changes 20 Feb 2023, inserted because I was reviewing the film about Hawking, with Eddie Redmayne.