Thursday, December 18, 2014
"Sceptics" often claim to fight pseudoscience when they list a lack of "peer review" publications as a reason to reject it. But since "peer review" is an institution, that kind of argument is a logical fallacy known as argument from authority. When such an "argument" is regurgitated as if it were the main or even sole reason to reject an idea, it is fully understandable that genuinely critical people are misled into believing that there are no real reasons to reject the pseudoscience in question. Lumping evidence-based falsification that happens to lack peer review together with actual pseudoscience, which merely uses scientific-sounding words, is an authoritarian association fallacy that only the latter, actual pseudoscience benefits from. Truth is the loser in that game. To avoid this, pseudoscience should be refuted by valid arguments such as empirical evidence, or by exposing flaws such as logical self-contradictions or lack of falsifiability. Not by arguments from any kind of authority whatsoever.
Wednesday, December 17, 2014
One main cause of scientific stagnation is the redundant-publication policy of pre-publication "peer review", which effectively means that discoveries and theories are snubbed by the journals simply because of where they were originally published. But it also offers a possibility to turn the system against itself. I do not know exactly how, but it may be possible to generate so many articles on the Internet that the digital redundant-publication detectors flag most, if not all, submitted articles as redundant. If successful, that would smash the myth of the "fantastic" pre-publication "peer review" and allow us to do something better for science.
Friday, November 28, 2014
It is often claimed that the number of other "peer review" articles citing an article is a measure of that article's quality, but that is circular reasoning. That others cite an article only proves that those who cite it think it is good; it does not prove that it is good. What causes others to cite an article may be an irrational dogma passed on in the education required to become eligible for reviewer status in that journal. That education was originally created by people who had not themselves gone through it (not before they created it, anyway). "Peer review" advocates often say that "crackpots" cite other "crackpots". But logically, that implies that "peer review" is just another "crackpot" organization, separated from other "crackpot" organizations by politicization alone.
Thursday, November 27, 2014
Believers in "peer review" claim that it stops nonsense from being published, but the truth is that it simply stops whatever one or more of the reviewers disagree with from being published, and that varies depending on which people happen to be the reviewers. If a theory is never published in any "peer review" journal, that does not mean it is nonsense; it just means that opinions disagreeing with that theory are very widespread in the institutions from which the reviewers are picked. The claim that "peer review" is "immune to corruption" because multiple reviewers must approve an article before publication ignores negative corruption. Preventing a correct theory from being published is no better for science than publishing a false one. "Peer review" makes it easy for anyone to dishonestly block publication of theories, preserving old, incorrect ones. "Peer review" is thus just a dictatorship without a dictator, in which the dictator was removed in such a way that his absence did not reduce the oppression. No wonder the generation of new useful theories has stagnated!
Tuesday, November 18, 2014
Another fatal flaw in the so-called "argument" that "there is too much nonsense around to debunk it all" (used by pre-publication "peer review" advocates) is the fact that most nonsense consists of mere repetitions of a few themes. For instance, denial of anthropogenic global warming relies on claiming that there is "no evidence for CO2 being a greenhouse gas", despite spectroscopic analysis showing that CO2 does trap infrared heat radiation. So that denial is easily debunkable by anyone who has read an elementary spectroscopy book. By the way, there are many environmentalists who are very concerned about chemical and radiological hazards, and critical of big business because of them, yet who deny anthropogenic global warming (probably due to simple ignorance of the evidence), while some high-emission companies promote extreme alarmism of the "it's too late to do anything" type in order to paralyze emission reductions (and it is very bizarre that "sceptics", otherwise so very sceptical of anything conspiratorial, assume political motives instead of ignorance).

The various claims that are somewhat arbitrarily lumped under the umbrella term "holocaust denial" all have their fatal flaws (with the exception of applied effects of philosophical disputes over whether humans can be considered capable of truly having any plans at all): the millions of Jews who vanished cannot be accounted for by the much smaller influx of people elsewhere; chemical analysis shows gas doses lethal to humans in the gas chambers; and documents show that various officers and administrators did order total eradication and that Hitler was aware of it. (By the way, why would antisemites want to keep denying the holocaust even after such denial came to be associated with antisemitism? More likely the deniers are simply ignorant.) And so-called "irreducible complexity", usually invoked in creationism/"intelligent design", uses examples that are easily debunked.
For instance, simply channeling ions out of a bacterium provides some propulsion, and additions that make it more efficient can produce a bacterial flagellum. A few light-sensitive cells are better than no vision at all; a curvature gives some image (as opposed to just telling light from dark), and the curvature can develop into a ball; a membrane at the front of the ball protects the eye; an ability to "press" the eye gives some ability to focus, which can be improved by thickening and reshaping the membrane into a lens. The only example of "irreducible complexity" that is valid, the one about social behavior, can be explained by the intelligence of the organisms themselves and is thus not evidence for an "intelligent designer" in the sense that made "intelligent design" a cover for religious nonsense. Well, there are a few more basic nonsense themes than these three, but not many more. All claims that there is "too much nonsense to debunk it all" rest on a distortion: counting every application of a nonsense theme in a slightly different context as if it were a separate basic nonsense concept (which it is not). Most if not all proper nonsense boils down to a few easily debunked concepts that require neither "peer review" nor government regulations to debunk.
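The CO2 point above can even be put in numbers. As a hedged illustration (the formula and figures below are standard textbook values, not anything from this post), the widely used simplified logarithmic fit for CO2 radiative forcing from Myhre et al. (1998) turns the spectroscopic absorption the post alludes to into a concrete quantity:

```python
import math

# Widely used simplified fit for CO2 radiative forcing (Myhre et al., 1998):
#   delta_F ≈ 5.35 * ln(C / C0)  in W/m²
# This is an established approximation quoted for illustration only; it is
# not a formula taken from the blog post itself.
def co2_forcing(C_ppm, C0_ppm=280.0):
    """Forcing relative to a pre-industrial baseline of ~280 ppm."""
    return 5.35 * math.log(C_ppm / C0_ppm)

doubling_forcing = co2_forcing(560.0)  # doubling pre-industrial CO2, ≈ 3.7 W/m²
modern_forcing = co2_forcing(400.0)    # roughly the 2014 concentration
```

The logarithmic shape reflects the band-saturation behaviour of CO2's infrared absorption: each doubling of concentration adds roughly the same ~3.7 W/m² of forcing.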
Wednesday, October 15, 2014
One of the most severe fallacies used by pre-publication peer review advocates is to claim that it would be wrong to brainstorm theories just because each single new theory would, in itself, have a lower chance of being correct than established theories. The fallacy, of course, is to judge methods by the chance of any one theory being correct in the first place. The entire concept of status and prestige must be totally abandoned and replaced by a pursuit of truth in which it does not matter at all who came up with which theory. Questioning must be totally free, as must access to observations and experiment results. Consider, for instance, the fact that it was good for science that Johannes Kepler stole Tycho Brahe's observation notes upon Tycho's death instead of letting Tycho's relatives lock them away. This does not mean that lying about the content of observations and experiments is somehow good. Real science must judge actions by their consequences for science, and not follow associations of "honesty" and "dishonesty" created in an unscientific ("statusian") context. The truth is that observations and experiment results are far more timeless than theories, and therefore collections of observational/experimental data free of theoretical interpretation are very valuable for the progress of science (in sharp contrast to what peer review journals assume when they demand that all papers contain theoretical interpretation). Any assumption that pointing out observational/experimental results must be based on a "hidden agenda" just because those results happen to contradict a certain theory must be abandoned as the fallacy it is.
Assuming "hidden agendas" behind all observations/experiments that contradict your theory has, for all purposes relevant to scientific testability, exactly the same effects as a conspiracy theory. Even when such assumptions are not technically conspiracy theories, the distinctions invariably have zero relevance to scientific testability and therefore do not matter in any objective measure of scientificness.
Saturday, October 11, 2014
How can we know which experiments have the potential to find absolute limits to the modifiability of theories? Let us start by analyzing cases where experiments did find such absolute limits, and contrast them with cases where apparent anomalies were not absolute falsifications. One of the paradigmists' favourite examples is the solar neutrino problem: when only a third as many neutrinos from the Sun as predicted were observed, some suggested that the first law of thermodynamics was wrong, while others suggested that the Sun was cooler inside than thought. It was later found that the Sun produced only electron neutrinos, but that many of them transformed into muon or tau neutrinos on their way to Earth (implying that neutrinos have mass). The paradigmists who try to generalize that to all shifts of theories deny that the Michelson-Morley experiment falsified the aether, and try to explain it away by claiming that aether theory was never useful while relativity is very useful. That ignores the fact that aether theory did have some use: it was the only theory known at the time that explained the wave properties of light, with some practical applications. While those applications pale in comparison with those of relativity, the applications of relativity would pale just as much if a theory of everything appeared. The key difference between the neutrino example and the aether falsification is that in the neutrino case, the deficit could be accounted for by a predictable mathematical percentage of neutrino transformation. The Michelson-Morley experiment, by contrast, showed that the speed of light is the same in all directions, which, given the movement of the Earth in particular, could not be explained by increasing or decreasing the speed of light by any fixed ratio.
So in order to find the absolute limits to the modifiability of theories and get science going again, experimentation must be focused on things that cannot be explained away by generalizable ratios, but that instead vary, or should be expected to vary, in many different ways across different measurements.
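The "predictable percentage" in the neutrino case can be sketched quantitatively. Below is a minimal two-flavor vacuum-oscillation model, using the standard survival-probability formula with roughly the measured solar mixing parameters; this is a simplification for illustration (the real solar case involves matter effects, which are not modeled here):

```python
import math

# Two-flavor vacuum oscillation: survival probability of an electron neutrino.
# Standard form: P(nu_e -> nu_e) = 1 - sin^2(2θ) * sin^2(1.27 * Δm² * L / E),
# with Δm² in eV², baseline L in km, energy E in GeV.
def survival_probability(sin2_2theta, dm2_ev2, L_km, E_GeV):
    return 1.0 - sin2_2theta * math.sin(1.27 * dm2_ev2 * L_km / E_GeV) ** 2

sin2_2theta = 0.85    # roughly the measured solar mixing value sin^2(2θ12)
dm2 = 7.5e-5          # roughly the solar mass splitting, in eV²
energy = 0.005        # a 5 MeV neutrino, expressed in GeV

# Sampled over many baselines, the sin² term averages to 1/2, so the detected
# fraction settles at a *predictable* value: 1 - sin^2(2θ)/2.
samples = [survival_probability(sin2_2theta, dm2, L, energy)
           for L in range(100_000, 200_000, 37)]
mean_P = sum(samples) / len(samples)
```

This is exactly the sense in which the deficit is a fixed, calculable ratio: the mixing angle pins the averaged survival fraction down to one number, regardless of the details of any single measurement.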
Friday, October 10, 2014
Test runs comparing the chances of peer review publication show no significant difference between serious papers and deliberate hoaxes. Examples of such hoaxes include the peer-review publication of an article claiming that Christian prayer reduced the risk of death in childbirth among women regardless not only of their own religion but even of whether they knew anyone was praying for them, and an article approved by many peer review boards claiming that a substance found in common lichens was a cure for cancer. Rather, it is partly formal writing style and partly random chance that determines which papers get peer-review published and which do not. This means that, contrary to what pre-publication peer review advocates claim, pre-publication peer review is not a nonsense filter at all. It just slows down the flow of publications. No wonder the belief that such systems are somehow "necessary" has caused, and keeps causing, a stagnation in the generation of useful theoretical change! And the claim that there would "not be enough time" to debunk all nonsense without pre-publication peer review both commits a logical fallacy and is empirically wrong. The logical fallacy is to assume that the publication of nonsense must be some sort of constant. Ever considered that using "you did not publish it in a peer review journal" instead of valid, to-the-point criticism, and/or assuming that people who make a particular claim "must" have a particular agenda, may be what creates antiscientific attitudes? The empirical error is to ignore the fact that just about every website with content considered "crackpot" that is open for comments gets spammed down with "sceptic" comments calling the content "crackpot". This clearly shows, empirically, that there is more than enough time.
Just use that time more wisely: point out reasons why an idea is wrong instead of just calling the author an idiot, and be prepared for discussions that may end up showing that some of the ideas considered "crackpot" are actually true (not all, but some, though you cannot know which of them a priori).
Thursday, October 9, 2014
Examples of theories being modified when new observations were made are often (ab)used in false generalizations claiming that any theory could be modified to explain any observation, which is not the case. For example, the "paradigmists" talk a lot about how the geocentric model was modified with epicycles to explain the retrograde planet movements, and claim that this "proves" that their "Occam's razor" (preferring the "simplest" explanation, whatever the definitions of "simple" and "complex" happen to be) should be the only reason to reject geocentrism. They totally ignore the fact that it would have been totally impossible to explain the observations made by space probes geocentrically, no matter what epicycles were made up. Another example is the discovery that the speed of light is the same in all directions despite the movement of the Earth (the Michelson-Morley experiments), which the luminiferous aether theory could not explain. So although theories can sometimes be modified, there are absolute limits to the modifiability of theories. Of course ultraparadigmists can make their false generalizations, claim theories to be infinitely modifiable, and say something along the lines of "the theory gives many useful predictions, so we ignore what it cannot explain". They may even cite fallacies that "disprove" things and generalize them into the false claim that "everything can be proven wrong by some observation or logic", simply by conflating the fallacies with genuine evidence (one example is the claim that "you can disprove thermodynamics by pointing out that there are no closed systems", which is fallacious). But that ultraparadigmist rhetoric is full of fallacies. Absolute limits to the modifiability of theories are limits that cannot be exceeded without fallacies. "Occam's razor", or any other "prefer this-or-that a priori" rule, is a recipe for stagnation of science.
To get science going again, the top priority must be to create new kinds of observations and experiments with the potential to find absolute limits to the modifiability of theories. That can generate true progress in theoretical physics, i.e. the kind of change that generates new predictions usable for purposes the old predictions could not serve. One example would be a theory that explains why gravity is so much weaker than the other forces of nature, so that the weakness could be addressed (useful for modifying space-time). That is, as opposed to merely modifying the formulas in ways that make no novel useful predictions, which is just about all that has been done since the tradition of treating pre-publication peer review as the definition of scientific evidence began.
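The Michelson-Morley case above can be made concrete with a back-of-the-envelope calculation. The numbers below are the commonly quoted figures for the 1887 apparatus (not from the original post): under a static aether, rotating the interferometer should have shifted the interference fringes by about 0.4 of a fringe width, while the apparatus could resolve shifts far smaller than that and saw essentially none:

```python
# Classical aether prediction for the 1887 Michelson-Morley apparatus:
# rotating the interferometer 90° should shift the fringes by roughly
#   delta_n ≈ 2 * L * v^2 / (lambda * c^2)
# All numbers below are commonly quoted historical figures, used here
# only as an illustrative sketch.
L = 11.0             # effective arm length in metres (folded light path)
v = 3.0e4            # Earth's orbital speed in m/s
c = 3.0e8            # speed of light in m/s
wavelength = 5.5e-7  # visible light, in metres

expected_shift = 2 * L * v**2 / (wavelength * c**2)   # ≈ 0.4 fringes
observed_upper_bound = 0.01  # the apparatus could resolve far smaller shifts
```

The point is that no fixed-ratio rescaling of the speed of light can reconcile a predicted shift of ~0.4 fringes with an observed shift of essentially zero, in every direction, at every time of year: exactly the kind of result that sets an absolute limit on how far a theory can be modified.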
Tuesday, October 7, 2014
There are many historical examples of astronomers doing things that, according to mainstream psychology, should have been impossible for humans. The most famous are Annie Jump Cannon, with her instant classification of stellar spectra, and Clyde Tombaugh, with his analysis of tiny movements and identification of the correct object (i.e. Pluto) among many, many thousands of others. However, they were far from unique: that kind of manual analysis of celestial objects was the standard procedure of astronomy at the time, practised by many thousands of other astronomers. Considering the smaller world population, poorer global communications and weaker social integration with the opportunities for academic careers at the time, there is no way such numbers could be explained by the biologism of a "constant 'percentage' of extraordinary geniuses in the population" (I put "percentage" in quotes to head off straw men assuming that I am ignorant of psychology's claim that such genius is much rarer than one percent; I am not ignorant of that at all, and in fact it is the basis for this entire blog post). What this means is that the purported "limitations" of human cognition that (mainstream) psychology "observes" are the result of some form of cultural pressure to be stupid, most likely related to the effects of pre-publication peer review, and not a fixed "human nature". In other words, it is possible to get really smart science going again by getting rid of the cultural pressure to be stupid. This gives hope for the future, but something must be done to get there. It will not happen by itself just by passively waiting for it. Various ideas about what to do will appear in future blog posts.
Monday, October 6, 2014
What does "getting science going again" mean? In what sense has science stagnated? After all, lots of discoveries are announced, and lots of technological progress is made. However, all dramatic technological progress is made in areas where improved technology, for purely practical reasons, allows the construction of even better technology, without any new theoretical physics. Computers getting faster at an accelerating rate is the best example of this. The scientific discoveries made today are very limited in practical use, hardly ever producing any new practically useful predictions distinct from those of older theories. The theoretical physics used in modern technology (such as relativity and quantum mechanics) was originally discovered back in the days when pre-publication peer review was seen as optional (not as the official definition of scientific evidence) and the peer review journals that did exist lacked policies against redundant publication. While computers make massive progress due to purely practical improvement, medicine (as measured by average lifespan) progresses at a much more modest rate. Medicines are approved through trial-and-error experiments under the control of authorities distinct from peer review: slowed down by patents, but with no equivalent of the permanent snubbing caused by peer review redundant-publication policies. Yet even that is much more progressive than the directly theory-dependent field of spaceflight. Spaceflight technology has hardly progressed at all since shortly after World War 2, that is, since the tradition of viewing pre-publication peer review as the definition of science began and peer review journals instated the first policies against redundant publication. So far from supporting the assertion that peer review is the cause of modern progress, the profile of which fields progress the most and which the least directly contradicts it.
So the truth is: peer review as we know it is bad for science.