Wednesday, October 15, 2014
One of the most severe fallacies used by advocates of pre-publication peer review is the claim that it is wrong to brainstorm theories just because each new theory, taken by itself, has a lower chance of being correct than established theories. The fallacy, of course, is to judge methods by the chance of any single theory being correct in the first place. The entire concept of status and prestige must be totally abandoned and replaced by a pursuit of truth in which it does not matter at all who came up with which theory. Questioning must be completely free, as must access to observations and experimental results. Consider, for instance, that it was good for science that Johannes Kepler stole Tycho Brahe's observation notes upon Tycho's death instead of letting Tycho's relatives lock them away. This does not mean that lying about the content of observations and experiments is somehow good. Real science must judge actions by their consequences for science, not by associations of "honesty" and "dishonesty" created in an unscientific (statusian) context. The truth is that observations and experimental results are far more timeless than theories, and collections of observational/experimental data free of theoretical interpretation are therefore very valuable for the progress of science (in sharp contrast to what peer review journals assume when they demand that every paper contain a theoretical interpretation). The assumption that pointing out observational/experimental results must be based on a "hidden agenda" just because those results happen to contradict a certain theory must be abandoned as the fallacy it is.
Assuming "hidden agendas" behind all observations/experiments that contradict your theory has, for all purposes relevant to scientific testability, exactly the same effects as a conspiracy theory. Even when such assumptions are not technically conspiracy theories, the distinctions invariably have zero relevance to scientific testability and therefore do not matter in any objective measure of scientificness.
Saturday, October 11, 2014
How can we know which experiments have the potential to find absolute limits to the modifiability of theories? Let's start by analyzing cases where experiments did find such absolute limits, and contrast them with cases where apparent anomalies were not absolute falsifications. One of the paradigmists' favourite examples is the solar neutrino problem: only about a third as many neutrinos from the Sun as predicted were observed. Some suggested that the first law of thermodynamics was wrong, while others suggested that the Sun was cooler inside than thought; it was later found that the Sun produced only electron neutrinos, but that many of them transformed into muon or tau neutrinos on their way to Earth (implying that neutrinos have mass). The paradigmists who try to generalize that to all shifts of theories deny that the Michelson-Morley experiment falsified the aether, and try to explain it away by claiming that aether theory was never useful while relativity is very useful. That ignores the fact that aether theory did have some use: it was the only theory known at the time to explain the wave properties of light, with some practical applications. While those applications pale compared to those of relativity, the applications of relativity would pale just as much if a theory of everything appeared. The key difference between the neutrino transformation example and the aether falsification is that in the neutrino case, the deficit could be accounted for by a predictable mathematical percentage of neutrino transformation. The Michelson-Morley experiment, by contrast, found that the speed of light is the same in all directions, which, given the movements of (especially) the Earth, could not be explained by any increase or decrease of the speed of light at a fixed ratio.
So in order to find the absolute limits to the modifiability of theories and get science going again, experimentation must be focused on phenomena that cannot be explained by generalizable ratios, but that instead should be expected to vary, or do vary, in many ways across different measurements.
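The neutrino side of this contrast can be illustrated with a toy calculation. The following is a minimal two-flavor vacuum-oscillation sketch: the real solar deficit also involves matter effects inside the Sun, and the mixing angle and flux numbers used here are illustrative assumptions, not fitted values. The only point is that oscillation predicts a single fixed survival ratio, the same for any predicted flux and in every direction, which is what made the anomaly modifiable rather than fatal.

```python
import math

def averaged_survival(theta_rad):
    # Two-flavor vacuum oscillation, averaged over long baselines:
    # P(nu_e -> nu_e) tends to 1 - 0.5 * sin^2(2*theta).
    return 1.0 - 0.5 * math.sin(2.0 * theta_rad) ** 2

# Illustrative mixing angle (roughly the measured solar angle of ~33
# degrees; an assumption for this sketch, not a fitted value).
theta = math.radians(33.0)
fraction = averaged_survival(theta)

# Whatever the predicted flux, the observed flux is the same fixed
# fraction of it; it does not depend on which way the detector points.
for predicted in (1.0, 2.5, 100.0):   # arbitrary flux units
    observed = predicted * fraction
    assert abs(observed / predicted - fraction) < 1e-12

print(f"averaged survival fraction: {fraction:.3f}")
```

Contrast this with the Michelson-Morley result: no single constant factor applied to the speed of light could reproduce a null result in all directions at once, which is why that anomaly was an absolute limit rather than a modifiable one.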
Friday, October 10, 2014
Test runs comparing the chances of peer review publication show no significant difference between serious papers and deliberate hoaxes. Examples of such hoaxes include a peer-review-published article claiming that Christian prayer reduced the risk of death in childbirth regardless not only of the women's religion but even of whether they knew that someone was praying for them, and an article approved by many peer review boards claiming that a substance found in common lichens was a cure for cancer. Rather, it is part formal writing style and part random chance that determines which papers are peer review published and which are not. This means that, contrary to what pre-publication peer review advocates claim, pre-publication peer review is not a nonsense filter at all. It just slows down the flow of publications. No wonder the belief that such systems are somehow "necessary" has caused, and still keeps causing, a stagnation in the generation of useful theoretical change! And the claim that there would "not be enough time" to debunk all nonsense without pre-publication peer review both commits a logical fallacy and is empirically wrong. The logical fallacy is to assume that the publication of nonsense must be some sort of constant. Ever considered that using "you did not publish it in a peer review journal" instead of valid, to-the-point criticism, or assuming that people who make a particular claim "must" have a particular agenda, may be what creates antiscientific attitudes? The empirical error is to ignore the fact that just about every website with content considered "crackpot" that is open for comments gets spammed down with "sceptic" comments calling the content "crackpot". This clearly shows, empirically, that there is more than enough time.
Just use that time more wisely: point out reasons why an idea is wrong instead of just calling the author an idiot, and be prepared for discussions that may end up showing that some of the ideas considered "crackpot" are actually true (not all, but some, though you cannot know which of them a priori).
Thursday, October 9, 2014
Examples where theories have been modified when new observations were made are often (ab)used in false generalizations claiming that any theory could be modified to explain any observation, which is not the case. For example, the "paradigmists" talk a lot about how the geocentric model was modified with epicycles to explain the retrograde planet movements, and claim that this "proves" that their "Occam's razor" (preferring the "simplest" explanation, whatever the definitions of "simple" and "complex" happen to be) should be the only reason to reject geocentrism. They totally ignore the fact that it would have been impossible to explain the observations made by space probes geocentrically, no matter what epicycles were made up. Another example is the discovery that the speed of light is the same in all directions despite the movement of the Earth (the Michelson-Morley experiments), which the luminiferous aether theory could not explain. So although theories can sometimes be modified, there are absolute limits to the modifiability of theories. Of course ultraparadigmists can make their false generalizations, claim theories to be infinitely modifiable, and say something along the lines of "the theory gives many useful predictions, so we ignore what it cannot explain". They can even cite fallacies "disproving" things and generalize them into the false claim that "everything can be proven wrong by some observation or logic", simply by conflating the fallacies with genuine evidence (one example is the claim that "you can disprove thermodynamics by pointing out that there are no closed systems", which is a fallacious statement). But that ultraparadigmist rhetoric is full of fallacies. Absolute limits to the modifiability of theories refers to limits that cannot be exceeded without fallacies. "Occam's razor", or any other "prefer this-or-that a priori" rule, is a recipe for stagnation of science.
To get science going again, the top priority must be to create new kinds of observations and experiments with the potential to find absolute limits to the modifiability of theories. That can generate true progress in theoretical physics, i.e. the kind of change that generates new predictions usable for purposes the old predictions could not serve. One example would be a theory that explains why gravity is much weaker than the other forces of nature, so that the weakness could be addressed (useful for modifying space-time). That is opposed to merely modifying the formulas in ways that make no novel useful predictions, which is just about all that has been done since the tradition of considering pre-publication peer review to be the definition of scientific evidence began.
Tuesday, October 7, 2014
There are many historical examples of astronomers doing things that, according to mainstream psychology, should have been impossible for humans. The most famous are Annie Jump Cannon, with her instant classification of stellar spectra, and Clyde Tombaugh, with his analysis of tiny movements and identification of the correct object (i.e. Pluto) among many, many thousands of others. However, they were far from unique: that kind of manual analysis of celestial objects was the standard procedure of astronomy at the time, practised by many thousands of other astronomers. Considering the smaller world population, the poorer global communications, and the weaker social integration with opportunities for academic careers at the time, there is no way such numbers could possibly be explained by "constant population-'percentage' fraction of extraordinary geniuses" biologism (I put "percentage" in quotes to avoid straw men assuming that I am ignorant of psychology's claim that such genius is much rarer than one percent; I am not ignorant of that at all, and in fact it is the basis for this entire blog post). What this means is that the purported "limitations" of human cognition that (mainstream) psychology "observes" are the result of some form of cultural pressure to be stupid, most likely related to the effects of pre-publication peer review, and not a fixed "human nature". In other words, it is possible to get really smart science going again by getting rid of the cultural pressure to be stupid. This gives hope for the future, but something must be done to make it happen; it will not happen by itself through passive waiting. Various ideas of what to do will appear in future blog posts.
Monday, October 6, 2014
What does "getting science going again" mean? In what sense has science stagnated? After all, lots of discoveries are announced, and lots of technological progress is made. However, all dramatic technological progress is made in areas where improved technology, for purely practical reasons, allows the construction of even better technology, without any new theoretical physics. Computers getting faster at an accelerating rate is the best example of this. The scientific discoveries made today are very limited in practical use, hardly ever producing practically useful predictions distinct from those of older theories. The theoretical physics used in modern technology (such as relativity and quantum mechanics) was originally discovered back in the days when pre-publication peer review was seen as optional (not as the official definition of scientific evidence) and the peer review journals that did exist lacked policies against redundant publication. While computers make massive progress due to purely practical improvement, medicine (as measured by average lifespan) progresses at a much more modest rate. Medicines are approved through trial-and-error experiments under the control of authorities distinct from peer review, slowed down by patents but with no equivalent of the permanent snubbing caused by peer review redundant publication policies. But even that is much more progressive than the directly theory-dependent field of spaceflight. Spaceflight technology has hardly progressed at all since shortly after World War II, that is, since the tradition of viewing pre-publication peer review as the definition of science began and peer review journals instated the first policies against redundant publication. So far from supporting the assertion that peer review is the cause of modern progress, the profile of which fields progress the most and which progress the least directly contradicts it.
So the truth is: peer review as we know it is bad for science.