How scientists cheat

Models, Hypotheses and Logic in Science

[...]"the logic of science," said John Stuart Mill, "is also that of business and life," and science, said T. H. Huxley, is "organised common sense." Indeed, scientific philosophy does produce much the same conclusions as common sense.

Faced with unexplained observations, a scientist is advised to devise a model. In some fields this model can be a physical object but, on many other occasions, the word model is interchangeable with hypothesis. In the philosophy of science the two words have somewhat different meanings but here the distinction is unimportant. If a hypothesis successfully predicts the outcome of many critical experiments, then it is proved beyond reasonable doubt and has become a theory.

The term beyond reasonable doubt again brings out the analogy between scientific and legal investigation. Scientific logic is the logic of investigation and decision making everywhere. No theory is ever actually proved; it is only not disproved. Similarly, lawyers use the phrase "beyond reasonable doubt" to recognise that the guilt of a defendant is never proved with absolute certainty. Guilt is proved only beyond reasonable doubt.

Models

A model is a set of axioms or postulates which, it is thought, might fairly describe the nature of the phenomenon being studied. Model building is like using an intellectual version of a child's construction kit; scientists gather a set of axioms and concepts (the component parts of a hypothesis), assemble them into a model and compare its behaviour with that of nature. Models are valuable because they can be used to predict the outcome of experiments, and scientists compare these predictions with observation. They may discard a new model immediately if it fails to predict existing results. More usefully, the predictions of a model will guide the experimenter's hand, enabling him to design investigations to differentiate two or more opposing ideas. The models failing to predict the outcome of the test are discarded in favour of those that do.

Philosophers of science also point to overarching or general models, called paradigms: ideas that are very wide-ranging and provide the framework for the formation of many more specific models. An example is Newtonian mechanics, a paradigm whose ideas are contained in many narrower models from fields as diverse as atomic theory and cosmology.

Classic Scientific Logic

There is no more to science than its method, and there is no more to its method than Popper has said. Hermann Bondi (quoted by Magee, 1973)

Model building is the classic description of scientific method expounded at length by Karl Popper in his famous books The Logic of Scientific Discovery (1968) and Conjectures and Refutations (1972). His approach, often called the hypothetico-deductive method, is accepted as a major feature of scientific logic. Popper is often thought to have regarded falsification as the centre of scientific logic but this is an error. To him falsification was extremely important and the elaboration of this principle was his own major contribution. However, he also held that all ideas, even his own, could and should be subject to reasoned, rational criticism. This principle of critical rationalism originated in ancient Greece, not with Popper, but to him it, not falsification, was the central scientific principle. Thus, it is necessary to be clear about the meaning of these two words, rationality and criticism.

The philosophy of rationality is the philosophy of the Enlightenment. It originated much earlier but was elaborated in the 17th and 18th centuries by Descartes, Spinoza, Leibniz and others in response to the growing success of science. Rationalism incorporates the principles of logic and certain ideas about the universe. It holds, for example, that there is only one single reality, hence that a person cannot simultaneously hold two contradictory beliefs about the world. It follows that to assert one theory is simultaneously to reject all competing theories. To assert otherwise is, in the strict meaning of the word, irrational. Further, a rational belief must be based on sufficient reason, and a rational believer should proffer reasons that are sufficient to justify holding his view. Rationality asserts that, to hold any belief, one must equally accept all the logical deductions that flow from it. The process of testing ideas by experiment depends on this principle; it leads to the conclusion that inconsistent experimental results undermine a theory.

Rationalism does contain different streams of thought, one split being into subjective and objective rationality. The latter is exemplified by Popper and asserts that the external world is real and that science seeks that reality. Objective rationality is the traditional system and remains the foundation of science; it rejects all authorities other than observation and reason but does accept that no certain conclusions can ever be drawn. Subjective rationalists include pragmatists and naturalists, who note that lack of certainty and conclude that ultimate reality must reside in humans themselves - their motives, objectives and beliefs. The subjective/objective distinction was made by Horkheimer, in The Eclipse of Reason (1947), who attacked subjective philosophies, noting how they can rationalise any act, for example, "I have to consider my own best interests," or "I was just following orders". Thus subjective rationality can maintain bizarre social practices, such as witchcraft, or become the tool of authoritarian social attitudes. Such social impacts led Horkheimer to reject all subjective rationality, adding that the "denunciation of what is currently called reason is the greatest service reason can render." Both in science and elsewhere, people who use the word rationality normally mean objective rationality.

Coming now to the meaning of criticise - to find fault with. This is a word that has quite negative overtones, but finding fault is exactly what scientists are asked to do with theories - hypothesis testing is a negative logic. However, they are not asked to give just any criticism; it should be rational, reasoned criticism. The three practical characteristics of such criticism were summed up by Bertrand Russell (1935, p66) in his description of reason: "in the first place it relies upon persuasion rather than force; in the second place it seeks to persuade by arguments which the man using them believes to be completely valid; and in the third place it uses observation ... as much as possible and intuition as little as possible." The first of these rules out the use of inquisitorial methods, the second rules out the use of propaganda and the third rules out appeals to the emotions or self-interest of the audience.

The implication of this is that critically rationalist debate requires certain behaviours from participants, generally that they be seriously seeking the truth. Thus, they must present all arguments they believe to be valid and may only present arguments they believe to be valid; both facts and opinions must be reported honestly. To enable criticism, such presentations must be open and available to all. A further facet of critical rationalism is "the principle of sufficient reason": decisions are not made arbitrarily but must be founded on reasons that are stated and adequate to justify the verdict.

Critically rational debate in science involves relevant experiment, and the last idea surviving after a period of such debate becomes knowledge. We can never be sure that a piece of knowledge is true, because a better idea or contrary observation may come along later. Nevertheless such knowledge is the closest we can come to knowing external reality. Because doubt can always be expressed, it is often useful to think of knowledge as a contrast concept to a guess (Harré, 1972). Knowledge is the product of a rationally considered choice between alternative hypotheses, rather than a choice between them by guesswork. Thus, one may not randomly choose two alternatives from three, then conduct a rational debate to decide which of these two is correct. Such a mixing of rationality with irrationality is simply irrational.

These principles of critical rationalism generate the ethical imperatives of science. Popper suggested that they separate random ideas from knowledge, pseudoscience from science; modern scientists agree. It is evident that many human dialogues are not critically rationalist. In many situations the aim of participants in dialogue is to "win," whatever that may mean in their circumstances. Accordingly, in Popper's hands, critical rationalism became more than a scientific principle, he saw it as the alternative to all authoritarianism and it guided his political thinking. To him these principles underlay the freedom of speech and democracy upon which western society prides itself. Science is often held up as a bastion against authoritarianism because of this.

Today Popper's ideas are widely accepted. So much so that they are offered as advice to prospective research students. For example, Phillips & Pugh (1987) begin their advice to students by demolishing an older scientific philosophy, the idea that science starts with the gathering of disparate facts by entirely objective and dispassionate researchers:-

The myth of scientific method is that it is inductive: that the formulation of scientific theory starts with the basic raw evidence of the senses - simple unbiased unprejudiced observation. Out of these sensory data, commonly referred to as "facts" - generalizations will form. The myth is that from a disorderly array of factual information an orderly, relevant theory will somehow emerge. However the starting point of induction is an impossible one.

They point out that even scientists are human and begin with their own prejudices:-

There is no such thing as an unbiased observation. Every act of observation is a function of what we have seen or otherwise experienced in the past. All scientific work of an experimental or exploratory nature starts with some expectation about the outcome. This expectation is an hypothesis. They provide the initiative and incentive for the enquiry and influence the method. It is in the light of an expectation that some observations are held to be relevant and some irrelevant, that one methodology is chosen and others discarded, that some experiments are conducted and others are not. Where is your naive pure and objective researcher now?

Then, crucially, they go on - all scientists start with a hypothesis, a model, but they must never think they have proved it - they must try to disprove it:-

Hypotheses arise by guesswork, or by inspiration, but having been formulated they can and must be tested rigorously, using the appropriate methodology. If the predictions you make as a result of deducing certain consequences from your hypothesis are not shown to be correct then you must discard or modify your hypothesis. If the predictions turn out to be correct then your hypothesis has been supported and may be retained until such time as some further test shows it not to be correct. Once you have arrived at your hypothesis, which is a product of your imagination, you then proceed to a strictly logical and rigorous process, based upon deductive argument - hence the term "hypothetico-deductive".

Prejudices may govern how a hypothesis is created but it is illegitimate to display the same prejudice when comparing its predictions with data. A scientist should permit criticism of his ideas and accept disproofs, even of his own models, when they are there.

Probable and Improbable Hypotheses

Not all models are equal. Apart from well-thought-out concepts, a whole range of improbable or downright silly notions could be created to account for a set of observed results - Heath Robinson could have worked on scientific theories had he so chosen. How one model is chosen for test, and another deemed silly, is for the judgement of scientists, but the verdict should not be random. Intuition, guesswork, prejudice, analogy or any other thought process may help conceive a model but, once devised, there is little reason for the judgement of its reasonableness to be personal and absolutely none for the interpretation to be inexplicable or secret. Scientists can articulate the reasons to consider one model while dismissing another. There are analogous situations.

[...]Great scientists may be distinguished by their insight into how to eliminate unworkable models. This is scientific strategy, but it is a phase of reasoning almost never recorded. During their training, scientists do not read books explaining the principles used to reduce the number of hypotheses to be considered. Even so, practising scientists must surely use such principles, possibly subconsciously. Analysis of this thinking is quite disparate. Most of it has been due to philosophers of science, with their demarcation criteria, and to sociologists of science, who simply ask the workers concerned. In both cases their studies are little read by practising scientists; some will be reviewed later. It is strange that this stage of reasoning is so little recorded. Not only is it perfectly possible to make a record but, at times, scientists have an evident duty to do so.

[...]Three Stages of Scientific Method

The hypothetico-deductive method can be seen as requiring three phases of scientific thought. These phases are -

1. The laying down, or brainstorming, of all possible explanations of an observation. As many hypotheses as possible should be created here, as this gives the best chance of the "correct" model being among those considered. The inclusion of incorrect models should be unimportant.

2. A judgement- or strategy-based screening of the various models to decide between those worthy of being tested and those that can be discarded on some general principle - some demarcation criterion. For this stage to work, it should be regarded as permissible to criticise the ideas put forward in stage 1. The models surviving this stage are likely to be those for which a reasonable a priori (or prima facie) case can be made.

3. Test of surviving models against empirical observation, either by reference to available data, or by designing new and critical experiments.

The three stages need not be executed consecutively. A new hypothesis may be advanced at any time, even after attempts have been made to test other hypotheses. No theory is ever proved. All theories are open to challenge and criticism may be advanced at any time.
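Taking a deliberately toy reading of these stages, the elimination loop can be sketched in a few lines of code. The candidate models, the observations and the trivial screening step below are all invented for illustration and carry no scientific weight.

```python
# A toy sketch of the three stages; every model and observation here is
# invented for illustration.

# Invented (input, measured output) pairs standing in for experimental data.
observations = [(0, 0), (1, 2), (2, 4)]

# Stage 1: brainstorm candidate models - here, simple numeric rules.
models = {
    "identity": lambda x: x,
    "double": lambda x: 2 * x,
    "square": lambda x: x * x,
}

# Stage 2: screen on a demarcation criterion. In this toy case every model
# makes definite, checkable predictions, so all pass the falsifiability test.
candidates = models

# Stage 3: test survivors against observation, discarding any model whose
# prediction contradicts a measurement.
surviving = {
    name: f for name, f in candidates.items()
    if all(f(x) == y for x, y in observations)
}
print(sorted(surviving))  # only "double" predicts every observation
```

Note that the logic is purely negative, as the text demands: models are never proved, merely not yet eliminated, and a new candidate could be added to the dictionary at any time.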

Moreover, there is no reason why a new hypothesis should not be proposed by anybody, including people not deemed to be "expert". Non-experts, people without considerable training, would find it difficult to produce a theoretical novelty that could not be dismissed by reference to established experimental data or a demarcation criterion. Even so, there is no logical barrier to them doing so. The task of criticising theories seems easier than that of devising them and may well be within the capabilities of non-experts but, in practice, the difficulty of the task is not the only fence an amateur would have to jump. Even if his new theory, or his criticisms, met all scientific criteria, the non-scientist may not be listened to by professionals. Even well-established scientists find it difficult to get new theories heard against earlier alternatives.

Of the three stages, generally only the third is found in the scientific literature. The processes going on during the first two stages are rarely recorded. This is unfortunate as the agenda of science, its operational timetable, is laid down during those earlier periods. The exclusion of a concept from that agenda is just as important as the inclusion of another, and more capable of invalidating scientific conclusions. Exclusion, at any stage, is equivalent to saying a theory is wrong. No experiment can ever be done without some form of screening process having been performed but the scientific literature explains these stages only after the event or, more probably, does not explain them at all. When it does, the presentation is a sanitised representation of what may have been a messy process.

To put it another way, and more baldly, it is during those first two stages of a scientific programme that decisions are made as to how research funds will be allocated. In the real world, those decisions largely prejudge the outcome of scientific inquiry, yet there is little study of their formation and only the most opaque of records.

Gatekeepers and the Management of Science

Whatever system of philosophy is adopted, science poses certain unavoidable management problems. Its fields are highly specialised and proper, effective decisions depend upon access to technical know-how. Such expertise is normally available only from the scientists themselves. To ensure such knowledge is available during administrative decisions, certain scientists are appointed to decision-making positions involving, for example, deciding what projects should receive research funds, which individuals will be appointed or promoted, or what papers will be published. The scientists chosen for these roles have often distinguished themselves in some way and are the elite of science. These gatekeepers play a key role in scientific management.

Scientific gatekeepers decide what is, or is not, science. Their corporate decisions define science in an administrative and practical way, marking out the area of human endeavour called science. Something in the nature of gatekeeping exists for all subcultures, and the role is a key and often very powerful one. Most professional subcultures try to select gatekeepers so as to avoid their having any personal vested interest in the decisions they will take. However, science is different in this regard. Because of its highly technical nature, science selects its gatekeepers solely from the field being gatekept. As a result, virtually every gatekeeping decision in science is taken by an individual with a very definite self-interest in its outcome. Also, there is almost no definition of gatekeeping responsibilities and virtually no public accountability for the way gatekeepers discharge their roles. Scientific gatekeeping decisions are taken anonymously; even those affected are kept ignorant of the identity of the person who made them and the rationale he used.[...]

It is most disturbing. The gatekeepers of a field are its existing experts. They can exclude views, not merely because those views lack sense, but simply because they "disagree" with them, and in this context "disagree" can have a range of meanings running from "disagree," through "can't reply," to "I'm jealous." In "disagreeing," gatekeepers can and do turn their backs on reasoned explanations. This administrative state of affairs flies in the face of Popperian logic, the principles of critical rationalism, openness and freedom of speech. In effect, science is subject to authoritarian government by gatekeepers.

Chronological Order Dictates Merit

It seems that what matters about a theory is not whether it is right or wrong but whether it was proposed first, second or third. (Who proposed it also matters; if the innovator is himself already a gatekeeper, things are different.) The first theory in a field is advocated by its first workers. Those workers are taken to be experts. New hypotheses are assessed, anonymously and without accountability, by the same men who, now acting in the role of gatekeeper, have a vested interest - an interest in thwarting any ideas that threaten to replace those from which their own influence flows. Those "experts" have complete freedom to reply to the alternative in a rationalist way, to simply ignore or patronise the upstart idea, or perhaps even to steal it. If a good argument is available to rebut an alternative theory, they will no doubt present it in their reply. But even if the newly developed theory is plainly superior, the "expert" gatekeeper is in no way obliged to accept or even consider it. New theories can simply be stifled by gatekeeper disinterest.[...]

Weakness of the Hypothetico-deductive Method

Popper's basic idea, of model (or hypothesis) falsification based on critical rationalism and its concomitant antiauthoritarianism, is the accepted base of scientific logic. It is a testing protocol linking scientific ideas to experimental reality. This link, connecting theory, through experiment, to reality, is the reason for the great success of science as a philosophy, but it is not a perfect link - it has weaknesses. The main problem lies in the early phases of the process. Firstly, science makes almost no record of how it decides which models or theories it should test. Secondly, and compounding the first problem, in the real world scientific judgement is clouded by the personal subjectivities and deviations of scientists themselves. Thus it is the initial development and selection of models to be tested, a process not necessarily linked to experiment at all, that remains the major logical difficulty inherent in the paradigm of falsification.

Robert K. Merton enunciated principles of scientific ethics which included Universalism, the belief that ideas must be considered without regard for their origins or who proposed them; this is implicit in Popper's logic. However, that cannot mean all theories must be translated into experiment; that would be impractical. To put it baldly, again, the problem is how to decide which research projects to fund, especially when sociological observation indicates that the advice given by scientists themselves is hampered by personal subjectivities and deviations from logic. It is necessary to have some ground, some demarcation criterion, to decide, before experiment, which theories are most likely to be correct.

In law, similar problems can arise. On the basis of the law and the evidence before him, a judge must often try a case while unsure of the right decision. In a criminal case, the benefit of this doubt will go to the defendant. In a civil action a judge may be forced to take some kind of practical line. He does not have the luxury of scratching his head for ever; he must decide on the balance of probability. He will need to find a rationale, even if it is not perfectly logical. This may lead the judge to error, but it is unlikely to lead him to fraud - he must give an open account of his judgement and explain the case and how it relates to the law. If he gets these things wrong his judgement is subject to appeal. What is more, a judge should never try a case in which there is any hint of a personal interest.

In one role, a scientist can scratch his head and vacillate between two theories for ever, or stick to a wrong theory purely to save face. There will always be some argument to put. Set against a great mass of often conflicting experimental data, no opposing scientific theory will ever be completely perfect. But gatekeepers are the judges of science, and for them things are different; at the end of the day they must decide. When they go home at night, they must have made funding decisions, or job appointment decisions, or publication decisions. They must decide - whether or not they are sure. A rationale must be found even if it is not perfectly logical. However, although he is forming a judgement, the gatekeeper is not in nearly the same position as a judge. He is not subject to the discipline of explaining his decisions or of citing any scientific law or principle. What is more, he would not be deciding the issue at all unless he had a vested interest in its outcome. For the gatekeeper the temptation to follow the easy route of his interests or relativism must be very real.

In these circumstances problems arise, more for everyone else than for the gatekeeper. There are logical approaches, demarcation criteria, for selecting without experiment those theories most likely to be valid and therefore to reward funding. But how can anyone be sure the gatekeeper follows them? The observer is in a predicament. Strictly, the problem should be addressed by the administrators of science but, [...] they are content. That is not surprising - they are the gatekeepers.

Reducing the number of models - Demarcation Criteria

We will now turn to the question of which hypotheses are scientific: how to choose, from a range of possibilities, those hypotheses that are worthy of attention and deserve to be pursued. Philosophers of science address this problem by laying down demarcation criteria. A new theory should then be tested against the chosen criterion. Those ideas which satisfy the demarcation criteria would be most likely to be productive, and most attention would be paid to them. The following sections present a series of demarcation criteria, though the list may not be complete.

Popper

The main demarcation criterion associated with Popper is falsifiability - in order to be scientific, a hypothesis should be falsifiable - it should make predictions that can be tested by observation or experiment. By tested, Popper meant some of its predictions must be such that, at least in principle, the contrary could be observed. This was his primary demarcation criterion and was seen by him as very important. On this basis, for example, he criticised the various schools of psychiatric thought because each could accommodate all observations. As a result the ideas did not compete with one another and attempts to distinguish them could not be informative. This test separates the hypotheses inherent in an act of faith - religion for example - from a scientific hypothesis. The statement, "God created the heavens and the earth," cannot be contradicted by observation. Therefore, Popper would not see it as a scientific hypothesis, whether or not it is believed true.

The idea is that only models which can, in principle, be falsified are scientific - others need not be considered. It is useful to view this assertion from a different perspective. Popper is saying that, to be meaningful, a scientific theory must deny something. The idea must prohibit some observations from being made. This is extremely important because Popper's logic is purely negative; it asserts that the actual meaning of a theory lies not in what it asserts about the universe but in what it denies. Some philosophers go further, arguing that any statement has meaning only in what it denies. Thus, even a sentence as simple as "this paper is white" actually means "this paper is not, not white" - that is, it is not green, not blue and so on.

Falsifiability is the first example of a strategy, or general principle, for reducing the number of models. It is probably the most widely discussed demarcation criterion and shows at once that asserting a scientific theory is equivalent to denying alternatives.

Popper listed two other criteria besides falsifiability. Firstly, a good, new theory should "proceed from some simple, new, and powerful unifying idea" (Conjectures and Refutations). It should, in principle, be able to unify a body of knowledge that would otherwise be a set of disparate facts. Secondly, Popper held that it should pass some tests. A good new theory should make at least one successful prediction not apparent from existing theory. This seems rather restrictive but is not as bad as it sounds. Popper would not have demanded that a theoretical astronomer build a radio telescope before publishing a new theory. Predictions explaining data within existing knowledge do meet this criterion.[...]

Metaphysical Logic and Scientific Logic

It is undesirable to believe a proposition when there is no ground whatever for supposing it true. (Bertrand Russell, Sceptical Essays)

The distinction between science and metaphysics matters because there seems to be a significant difference between the logics of metaphysics and science. Science seeks to disprove a hypothesis, and a persistent failure to do so leads to its acceptance. This is the negative logic of falsification. Metaphysics is not quite like this; before the existence of a postulated entity should be accepted, there needs to be positive reason to require the existence in question. For example, the postulate of life on Mars is a postulate of existence. It may be believed or not, but well-justified belief would require positive supportive evidence, such as Martian roses.

In laying down theories, scientists do not normally distinguish science from metaphysics. That may be unfortunate; much of the philosophical disputation between confirmation and elimination of theories might be removed if this were done. Metaphysical logic seems to be largely the positive logic of confirmation, while scientific logic seems largely the negative logic of falsification.

Popper's hypothetico-deductive model applies to the scientific parts of theories but not so obviously to their metaphysical elements. It is generally a very difficult, or even impossible, task to disprove a metaphysical postulate. Even though it seems very unlikely, it would be difficult actually to prove that there is no life on the moon.

However, it is only when a metaphysical idea has supportive evidence that it becomes important. As an example, consider the atomic theory of matter. As every schoolboy knows, the idea of atoms was originally advanced by the Greeks but in this form the idea was metaphysical speculation unsupported by evidence. The idea of atoms was merely a conjecture, unproven, unlinked to any body of experimental evidence, and irrelevant to any possible course of action. Agnosticism was a rational view of the debate about atoms until Dalton's chemical laws, based as they were on observation, began to require them for chemical interpretations. The observations that positively required atoms also made them relevant, and they began to influence men's actions. In the twentieth century, photographs of atoms have been obtained, and disbelief has become irrational.

In logic, then, you just cannot win. Theories need positive evidence for the entities whose existence they postulate. Then they need negative disproof of competing theories.[...]

Occam's Razor - the Coherence Criterion

A principle stated in correspondence by Dr. John Maddox, as Editor-in-Chief of Nature, is that a hypothesis should be "grounded on previous understanding or observation." To take his example, in the nineteenth century there might have been competing hypotheses about the make-up of the moon: one school of thought arguing that the moon was made of rock, another advancing the view that it was green cheese. As he says, even without experiment intelligent scientists would not have considered the green cheese hypothesis, because it was founded upon no existing knowledge or observation. There are other, rather trite, reasons to reject the green cheese model. Cheese is a dairy product made by men from milk, in turn produced by lactating mammals. The green cheese hypothesis thus implies that men and other mammals are at large within the solar system, giving it some very complex, improbable and unsupported implications.

The existence of such complex ramifications is a general reason for rejecting, or at least downgrading, a hypothesis without experiment. All this boils down to Occam's razor - hypotheses involving the least possible departure from the existing body of knowledge are most likely to be correct. Hypotheses that pick up well-established ideas from related areas inherit much of their supportive evidence, much as an organism inherits many characteristics from its evolutionary forebears.

Occam's razor is related to the idea of coherence with existing knowledge. To understand coherence, one may think of all knowledge as being cut into a large number of small pieces, much like a jigsaw puzzle. To reassemble the picture we must examine a piece to see if the pattern on it fits in with, or coheres with, the pattern on those pieces we have already assembled in that area. For a new piece of knowledge to fit comfortably in place, the pattern painted onto it should be continuous with, and cohere with, that on the surrounding pieces.

A new claim to knowledge or a hypothesis which fails to cohere with surrounding knowledge is an extraordinary claim. Its acceptance would demand the revision of knowledge within those surrounding areas and, consequently, its acceptance demands extraordinary evidence.

Coherence, or Occam's razor, is a well known and important principle, but two important caveats should be stated. Firstly, the coherence criterion must be used with care and moderation; applied rigidly, it produces closed systems of thought. The pieces of the jigsaw already assembled may actually be in the wrong places. Secondly, the existing body of knowledge means exactly what it says, and knowledge is well-founded belief (Popper). The existing body of knowledge does not mean the existing body of hypotheses. To be of any real value, a new idea must compete with existing suppositions used to explain the same data set. It is diametrically wrong to demand of a new hypothesis that it be consistent with the ideas it sets out to replace.

Hypothesis Testing and Probability

Many years before Popper, Bayes investigated the branch of mathematics applied to formal hypothesis evaluation and now known as Bayesian statistics. A scientific investigation links experimental results with the probability assignments attached to particular hypotheses. Before any experimental test is performed, initial probabilities (known as antecedent probabilities) must be assigned to the various hypotheses. As experimental data become available, these antecedent probabilities are adjusted up or down depending on whether the observations support or do not support the corresponding hypothesis. The theorem used to adjust the probabilities is known as Bayes' theorem. Some fields can use the procedure quite formally; in medical diagnostics, for example, antecedent probabilities reflect the incidence of a disease in the population. In practical science Bayes' theorem has little formal use because of the general difficulty of giving objective numerical values to the antecedent probabilities. Accordingly, the theorem is neither stated nor used here. Even so, scientists must intuitively use Bayes' theorem, assigning antecedent probabilities by judgement.
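The updating procedure described above can be sketched numerically. The following Python fragment is a minimal illustration of a Bayesian update in the diagnostic setting mentioned; the disease names, incidence figures and test accuracies are invented purely for illustration.

```python
# A minimal sketch of Bayes' theorem applied to hypothesis evaluation.
# All names and numbers below are hypothetical, chosen for illustration.

def bayes_update(priors, likelihoods):
    """Adjust antecedent (prior) probabilities in the light of one observation.

    priors      -- hypothesis -> antecedent probability
    likelihoods -- hypothesis -> P(observation | hypothesis)
    """
    unnormalised = {h: priors[h] * likelihoods[h] for h in priors}
    total = sum(unnormalised.values())
    return {h: p / total for h, p in unnormalised.items()}

# Antecedent probabilities reflecting (invented) incidence in the population:
priors = {"disease A": 0.01, "disease B": 0.04, "healthy": 0.95}

# P(positive test result | hypothesis), again invented:
likelihoods = {"disease A": 0.90, "disease B": 0.50, "healthy": 0.05}

posterior = bayes_update(priors, likelihoods)
```

A positive result shifts probability away from "healthy" and towards the two diseases; repeated application of the same update to successive observations is the formal counterpart of the scientist's intuitive adjustment.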

Mathematicians have investigated the fallacies arising in Bayesian statistics, some of which help to clarify points made earlier. A hypothesis is meaningful only if it partitions the possibility space; for example, the hypotheses that a die will fall as a five or as an odd number are both meaningful in that they can both be wrong - it may fall as a four. On the other hand, the hypothesis that the die will fall with a number uppermost is not meaningful because all possible outcomes are numbers - the hypothesis cannot be falsified because it fails to partition the possibility space. This failure is what philosophers of science mean when a hypothesis is described as vacuous.

A hypothesis may be "academic" (in a pejorative sense); whether it is true or not will make no difference to actions or beliefs flowing from the statistical analysis. The distinction is important for doctors making a diagnosis - only if two diseases require different treatment is the physician concerned to know which his patient suffers from. Returning to the example of the die, whether it falls as a five or not will affect my actions only if I am playing snakes and ladders or have some other link to this test. For most people, the outcome of throwing dice is academic and uninteresting. In science, this pejorative form of the word academic means that whether a hypothesis is true will have no effect on perceptions of the world or how people act.

Finally, note again that a hypothesis set should be well chosen and, without overlap, cover all possible explanations. It is hard, in science, to prove that a hypothesis set does entirely cover the possibility space. The proper response to this problem is to contemplate the possibility that all the considered hypotheses are wrong. It remains very wrong to use a hypothesis set that is known not to cover the possibility space.

Assessment of Antecedent Probabilities

Much of the intuitive Bayesian statistics used by practising scientists consists of the assignment of antecedent probabilities to any suggested hypotheses. This is the statistical equivalent of initial hypothesis screening [...]. If a hypothesis fails to cohere with existing knowledge, it is right to assign it a low antecedent probability. Only very clear evidence supporting it, and contradicting more cohering hypotheses, will bring its probability assignment up to a point where it would be accepted.

Invalid criteria such as relativism and self-interest will intrude on the intuitive assignment of antecedent probabilities. They will lead to the assignment of a low antecedent probability to a correct hypothesis and vice versa. However, unless the correct theory is actually assigned an antecedent probability of zero, this should only slow things down. The objective application of Bayes' theorem would steadily improve the probability assigned to the correct hypothesis as experimental data became available. (In Bayesian statistics, antecedent probabilities can, in principle, be assigned randomly but still ultimately produce good knowledge. This may be how some sciences arose from areas we would today classify as mythology. Alchemy for example led to chemistry and astrology to astronomy.) Only if the correct hypothesis is dismissed entirely will Bayesian statistics fail. If the antecedent probability assigned to a correct hypothesis is zero, Bayes' theorem will keep the probability at zero no matter what the outcome of experiment and the remaining ideas will become a closed system of thought. This seems to be true of the intuitive Bayesian scientist, just as it is of the formal statistical process.
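The closed-system effect described here can be demonstrated directly. In the hypothetical sketch below, a hypothesis dismissed with an antecedent probability of zero remains at zero no matter how strongly the simulated evidence favours it; all numbers are invented.

```python
# Sketch of the closed-system effect: a hypothesis assigned an antecedent
# probability of zero can never recover, whatever the evidence.

def bayes_update(priors, likelihoods):
    unnormalised = {h: priors[h] * likelihoods[h] for h in priors}
    total = sum(unnormalised.values())
    return {h: p / total for h, p in unnormalised.items()}

# The dissenting hypothesis is dismissed outright:
probabilities = {"orthodox": 1.0, "dissenting": 0.0}

# Ten experiments whose outcomes strongly favour the dissenting view:
for _ in range(10):
    probabilities = bayes_update(
        probabilities, {"orthodox": 0.1, "dissenting": 0.9})

# Zero times 0.9 is still zero: the surviving ideas form a closed system.
```

By contrast, any non-zero antecedent probability, however small, would be driven steadily upwards by the same sequence of updates.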

Intuitive Bayesian statistics are applied both by individuals and by the community of scientists. Both levels will assign intuitive antecedent probabilities to hypotheses and both, being human, will err. [...] In general, the scientific community is too willing both to assign a probability of zero to dissenting ideas and to assign a probability of one to its own beliefs.

The Origins of Uncertainty

It is universally accepted, and implied by the use of probability theory in hypothesis testing, that no scientific theory can be known, with total certainty, to be true. Scientific certainty is lost in two general ways - uncertainty in the outcome of experiments and uncertainty in their interpretation. Our certainty in the outcome of experiments is greatly increased by care in their execution and by repetition by other groups or on analogous systems. Unfortunately, these hardly improve our confidence in the interpretation of the results.

Clearly, repeat experiments and studies on related systems have a role in ensuring the validity of results, but there are also structural and social reasons for such studies. If an experiment is cheap, quick and already within the laboratory's range, it is quite easy to perform a series of studies around a theme. Moreover, results that accord with earlier data are theoretically uncontroversial and, if the field already understands a technique, other workers are less likely to obstruct publication by raising queries about the validity of the observations. Thus, a large body of publication can quickly accumulate that hinges on one basic experiment.

For purposes of interpretation it is important to realise that, for all its size, that body of papers amounts to only one experiment. Failing to recognise this is to act like the man Wittgenstein mentions in Philosophical Investigations, who purchases several copies of the morning paper to reassure himself that what he reads there is true. This fallacy is both a common individual fault and one structurally embedded in modern scientific administration. Of course, scientists do not buy many copies of their morning paper, but they do publish many copies of the same, or very similar, experiment; then they point to the "mountain of evidence" supporting their ideas.

Experiments report reality much as newspapers report news. The hypothesis used to explain their outcome is the impression of reality they give. Like a newspaper article, the scientific observations may be clear and accurate, or misleading and inaccurate. Because observations may be inaccurate, they need to be reported in a way that enables other workers to replicate them. Because the observation may be misleading, even though accurate, the generated hypothesis should be confirmed by data which is as unrelated to the original observations as possible. Reverting to Wittgenstein's analogy, his man would have been well advised to read another newspaper, one which employed a different reporter who, himself, employed different sources for the news he reported.

This point has been made by many philosophers of science; for example, in the nineteenth century, Whewell adopted it as a criterion of induction, referring to it as the consilience of inductions. Although we no longer think there is a logic of induction, his point remains valid as a means of increasing our confidence in a theory. On the same lines, Popper asserted that a hypothesis supported by data of two or more distinctly different types should be preferred to an alternative able to explain only a narrow domain of data.

In summary, repetition offers confidence that the published data are accurate, but those scientists who believe that repetition of data can support ideas are buying too many copies of the morning paper. No matter how many times an experiment, or its close siblings, is repeated - one hundred times or one thousand papers - repetition adds no assurance that any particular interpretation of that result into a hypothesis is valid. If another idea will explain the data from one such experiment, then it will equally apply to any number of repetitions. Assurance of interpretation can come only by comparing the success of competing hypotheses in interpreting data from disparate areas. The more dissimilar the sources of data used, the better, provided only that they fall within the range of application of the hypotheses in question. Modern scientific administrations fail to recognise this fallacy, a failure linked closely to the procedures they use for quality assessment.

Quality Assessment - Peer Review and Citation Analysis

Science managers and gatekeepers base many policies and decisions on quality assessments. Consequently, how quality is defined, maintained and assessed, is a pivotal issue for modern science - it is also one of the few areas in which scientific practice overlaps with scientific philosophy. In principle assessment of quality in research programmes should include a rational assignment of the antecedent probability of the underlying ideas. In practice, however, the methods adopted simply abandon rationality and one of them jumps head first into Wittgenstein's fallacy, buying as many copies of the morning paper as leaders in a field might find convenient. Assessments are made at several levels, for example of :-

* Research projects before they are funded.

* The value of work before it is published in the scientific literature.

* The worth of researchers before they are appointed to posts.

These prospective evaluations are usually made by peer review. Referees, anonymous experts in the field, are selected by scientific authorities. The expert will then write a report, which is taken to be an objective evaluation of the work in question, but that report is unlikely to make any attempt at explanation and it may not be seen by the scientist concerned, who will have little or no opportunity to reply if he does see it. Besides these initial screening steps, post hoc assessments are also made of :-

* The "success" of published articles in terms of their scientific impact when set against competing articles.

* The "success" of published scientists in terms of their scientific impact when set against competing workers.

* The "status" of institutions and journals.

Sometimes such assessments are made by committees of experts but one of the most important tools used for the appraisal is citation analysis, a tool developed over the past twenty to thirty years.

A scientific paper does not stand alone; it builds on what has gone before, using other workers' ideas, techniques and results. To place the work in context, the scientific article ends with a list of relevant publications showing where the ideas it used came from[...]. These are citations and they interested an American named Eugene Garfield. His Institute of Scientific Information (ISI) notes every scientific paper published and, from their citation lists, constructs a computer database, called the Science Citation Index (SCI). Scientists can use the SCI to find all papers citing any earlier article. It has proved to be a very valuable research tool, enabling workers to research a topic forward through the literature, whereas traditional abstracting media permitted only a backwards search.

The SCI is also used in quality assessments. Using it, one can easily determine how often, or whether, a paper is cited by subsequent publications, a process called citation analysis. The argument is that rarely-cited studies cannot have been very important. In making this count, the ISI itself carefully avoids the term "quality", preferring to call the resulting measure the "impact" of a paper, but scientific institutions do take this impact as a measure of quality. Journals and institutions can also be ranked according to the impact of articles published during a given period. Journals even tout their impact rating when advertising to libraries for sales or soliciting the scientific community for new papers.
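The mechanics of citation counting are simple to sketch. The toy example below inverts a set of hypothetical reference lists to obtain the per-paper citation count that institutions read as "impact"; the paper names and lists are of course invented.

```python
# Toy citation analysis: invert hypothetical reference lists into
# citation counts of the kind institutions read as "impact".

from collections import Counter

# Each paper mapped to the (invented) list of earlier papers it cites:
reference_lists = {
    "paper A": ["paper C", "paper D"],
    "paper B": ["paper C"],
    "paper C": [],
    "paper D": ["paper B", "paper C"],
}

# A paper's "impact" is simply the number of later papers citing it.
impact = Counter(cited for refs in reference_lists.values() for cited in refs)
# Counter reports 0 for any paper never cited at all.
```

Note that the count says nothing about why a paper was cited: a theory cited only in order to be rebutted scores as highly as one cited with approval, which is exactly the weakness discussed below.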

This way of assessing quality means that the citation practices of authors influence the assessment of the work done by their contemporaries and colleagues. If a scientific theory is not mentioned by establishment figures, and the articles which propose it are not cited by them, the theory is automatically assessed as of low quality, even if no reason for disregarding it has been given. By contrast, if scientists go to great lengths to rebut an incorrect theory, that theory will be assessed as being of high quality, even if most observers regarded the theory as absurd from the outset.

Whatever its value as a management tool, quality assessment by citation analysis is clearly prone to Wittgenstein's fallacy. Moreover, its practical implications for the assessment of theories are clear. Under that process, ignoring, or not citing, a theory is the same as rejecting it. For their part, scientists are well aware of the quality assessment procedures used and the implications of their actions. When a scientist disregards a theory, he knows the result this will have for its assessment and presumably intends that outcome. In short, a scientist who chooses to ignore a theory, is broadcasting a message about that theory - namely that he rejects the theory as of low quality. The message thus broadcast may be implicit but the scientist knows it is sent, he knows who receives it and he knows how they will interpret and act upon it.

Both citation analysis and peer review are highly questionable as methods of quality assessment and amount to little more than statements of establishment opinion[...].

© Copyright John A Hewitt.

[Source: John A Hewitt - A Habit of Lies - How Scientists Cheat : http://freespace.virgin.net/john.hewitt1/pg_ch02.htm]

Jesus is Buddha

Recent epoch-making discoveries of old Sanskrit manuscripts in Central Asia and Kashmir provide decisive proof that the four Greek Gospels have been translated directly from the Sanskrit.

A careful comparison, word by word, sentence by sentence shows that the Christian Gospels are Pirate-copies of the Buddhist Gospels. God's word, therefore, is originally Buddha's word.

[...] The main Buddhist sources are Mûlasarvâstivâdavinaya (MSV) and the Saddharmapundarîka (SDP). The Sukhâvatîvyûha is the source of Luke 10:17. The first words of Jesus are from the Prajnâpâramitâ. There are a few other Buddhist sources, and of course the numerous quotations from the Old Testament, but the main sources are, without any shadow of doubt, the MSV (parts of which, again, prove more important than others), and the SDP.

[...] The Sanskrit TRi-RaTNaS becomes the Latin TRi-NiTaS.

[...] 1. Matthew 1:1 runs:

biblos geneseôs, ´Iêsou Khristou, huiou Daueid, huiou ´Abraam.

Book of descent, of Jesus Christ, of son David, of son Abraham.

Commentary:
One person cannot possibly be the son of two different fathers belonging to two widely different periods of time. The son of David, the son of Abraham, not only has two fathers; he is also the son of Man, of Mary, of Joseph etc.
The original source solves the intentional paradoxes.
The source is the introduction to the MSV.
Ma-hâ-Maud-gal-yâ-ya-nam, becoming the Math-thai-on le-go-me-non, Matthew 9:9, introduces the MSV by relating the legend of the vamsas = biblos of the kula, genitive, kulasya = geneseôs of the Sâkyas in Kapila-vastu, alias Ka-phar-naoum.

The genitive form of ksatriyas, son of a king, is ksa-tri-yas-ya. These four syllables in Greek become ´Iê-sou Khris-tou. As will be seen, when comparing the Greek and the Sanskrit, all the syllables and consonants of the original Sanskrit have been preserved. This means, in this case, that the -sou of ´Iêsou represents the genitive ending of ksatriyasya, namely -sya. Moreover, the ´I represents the y.
There are, to be sure, several Sanskrit originals behind Jesus. More about this later on. Normally Sanskrit ksatriyas becomes ho Khristos in the Greek. The article ho is there in order to imitate the three syllables of the original. So, as a rule, Sanskrit ksa-tri-yas is translated as ho khris-tos. Such a ksatriyas is also anointed. Thus the Greek represents not only the sound but also the sense of the Sanskrit perfectly. The sense is, of course, at the same time assimilated to that of the Messiah.

The ksatriyas is, in Q, the son, Sanskrit putras, of the king, called deva. He is, therefore, a deva-putras, a son of the king. Sanskrit devas also means god. He is, therefore, also the son of god. So the deva-, god and king, is nicely assimilated to the king David.
Note also that the Greek has no word for “of”. It says “son David”. The reason is clear. It has to have four syllables only, as does the Sanskrit.

Finally, he is the son (of) Abraham. The Sanskrit original is Brahmâ. The ksatriyas descends from the world of Brahmâ. He is, as such, one of the numerous sons of Brahmâ. Thus it is easy to see that the son of Abraham - a chronological absurdity - was originally the son of Brahmâ.

[...]So, to sum up: the Sanskrit original of the initial eight words of Matthew runs, in simplified Romanization:
kulasya vamsas ksatriyasya deva-putrasya brâhmanasya .

The total number of syllables is, of course, the same in both sources.
The reader who consults the first few pages of the MSV (being here SBV I) will easily be able to make further identifications.
Let me only add, that the ksatriyas was supposed to be the next king of Kapilavastu. He was the son of a king. But things turned out otherwise.
So, we have the son of a king who never became a normal king. He did, however, become a king of Dharma.
Just like o khristos.
This is, in brief, the secret of Christ.

[Source: www.jesusisbuddha.com]

The Economics of War

It has often been said that war is the health of the State – but the argument could also be made that the reverse is more true: that the State is the health of war. In other words, that war – the greatest of all human evils – is impossible without the State.

The great Austrian economist Ludwig von Mises was once asked what the central defining characteristic of the free market was – i.e. since every economy is more or less a mixture of freedom and State compulsion, what institution truly separated a free market from a controlled economy – and he replied that it was the existence of a stock market. Through a stock market, entrepreneurs can achieve the externalization of risk, or the partial transfer of potential losses from themselves to investors. In the absence of this capacity, business growth is almost impossible.

In other words, when risk is reduced, demand increases. The stagnation of economies in the absence of a stock market is testament to the unwillingness of individuals to take on all the risks of an economic endeavour themselves, even if this were possible. When risk becomes sharable, new possibilities emerge that were not present before – the Industrial Revolution being perhaps the most dramatic example.

Sadly, one of those possibilities – in all its horror, corruption, brutality and genocide – is war. [...] in its capacity to reduce the costs and risks of violence, the State is, in effect, the stock market of war.

[...] We can imagine an unethical window repairman who smashes windows in order to raise demand for his business. This would certainly help his income – and yet we see that this course is almost never pursued in real life in the free market. Why not?

[...]If you want to hire an arsonist to torch the factory of your competitor, you have to become an expert in underworld negotiations. You might pay an arsonist and watch him take off to Hawaii instead of setting the fire. You also face the risk that your arsonist will take your offer to your competitor and ask for more money to not set the fire – or, worse, return the favor and torch your factory! It will certainly cost money to start down the road of vandalism, and there is no guarantee that your investment will pay off in the way you want.

[...]How does this relate to war and the State? Very closely, in fact – but with very opposite effects.

The economics of war are, at bottom, very simple, and contain three major players: those who decide on war, those who profit from war, and those who pay for war. Those who decide on war are the politicians, those who profit from it are those who supply military materials or are paid for military skills, and those who pay for war are the taxpayers. (The first and second groups, of course, overlap.)

In other words, a corporation which profits from supplying arms to the military is paid through a predation on citizens through State taxation – and under no other circumstances could the transaction exist, since the risks associated with destruction outlined above are equal to or greater than any profits that could be made.

Certainly if those who decided on war also paid for it, there would be no such thing as war[...] However, those who decide on war do not pay for it – that unpleasant task is relegated to the taxpayers (both current, in the form of direct taxes and inflation, and future, in the form of national debts).

[...]Those who decide on war and those who profit from war only start wars when there is no real risk of personal destruction. This is a simple historical fact, which can be gleaned from the reality that no nuclear power has ever declared war on another nuclear power. The US gave the USSR money and wheat, and yet invaded Grenada, Haiti and Iraq. (In fact, one of the central reasons it was possible to know in advance that Iraq had no weapons of mass destruction capable of hitting the US was that US leaders were willing to invade it.)

[...]The “risk of retaliation” in economic calculations regarding war should not be taken as a general risk, but rather a specific one – i.e. specific to those who either decide on war or profit from it. For example, Roosevelt knew that blockading Japan in the early 1940s carried a grave risk of retaliation – but only against distant and unknown US personnel in the Pacific, not against his friends and family in Washington. (In fact, the blockading was specifically escalated with the aim of provoking retaliation, in order to bring the US into WWII.)

The power of the State to so fundamentally shift the costs and benefits of violence is one of the most central facts of warfare – and the core reason for its continued existence. [...] if the person who decides to profit through destruction faces the consequences himself, he has almost no economic incentive to do so. However, if he can shift the risks and losses to others – but retain the benefit himself – the economic landscape changes completely! Sadly, it then becomes profitable, say, to tax citizens to pay for 800 US military bases around the world, as long as strangers in New York bear the brunt of the inevitable retaliation. It also becomes profitable to send uneducated youngsters to Iraq to bear the brunt of the insurgency.

[...]Thus the fact that the State externalizes almost all the risks and costs of destruction is a further positive motivation to those who would use the power of State violence for their own ends. Once you throw in endless pro-war propaganda (also called “war-nography”), the emotional benefits of starting and leading wars funded by others can become a definitive positive – which ensures that wars will continue until the State collapses, or the world dies.

[...]If the above is understood, then the hostility of anarchists towards the State should now be at least a little clearer. In the anarchist view, the State is a fundamental moral evil not only because it uses violence to achieve its ends, but also because it is the only social agency capable of making war economically advantageous to those with the power to declare it and profit from it. In other words, it is only through the governmental power of taxation that war can be subsidized to the point where it becomes profitable to certain sections of society. Destruction can only ever be profitable because the costs and risks of violence are shifted to the taxpayers, while the benefits accrue to the few who directly control or influence the State.

This violent distortion of costs, incentives and rewards cannot be controlled or alleviated, since an artificial imbalance of economic incentives will always self-perpetuate and escalate (at least, until the inevitable bankruptcy of the public purse). Or, to put it another way, as long as the State exists, we shall always live with the terror of war. To oppose war is to oppose the State. They can neither be examined in isolation nor opposed separately, since – much more than metaphorically – the State and war are two sides of the same bloody coin.

[Source: http://freedomainradio.com/BOARD/blogs/freedomain/]

The Slave's Revenge

If the slave cannot escape, and is beaten if he does not work hard, then his vengeance will always take on a more subtle form. The slave will perform his work slightly more slowly – not enough to be punished, but enough to irritate his master. The slave will pretend to be less intelligent than he really is, so that when he loses or breaks things, he will be more likely to escape punishment, since he is pretending in effect to be a child.

[...] the slave will also do what he can to promote any negative habits his master may have. If his master likes to drink, the slave will always be on hand to refill his cup. If his master has a tendency towards jealousy, the slave will innocently “mention” that he saw his master’s wife chatting with another man.

If the slave is particularly cunning, he will also do everything that he can to inflate his master’s ego. He will sing his master’s praises, claim joy in “knowing his place,” thank the master for everything he does, and remain fanatically “loyal.” This hyperinflation of the master’s ego inevitably creates pettiness, vanity, hyper-irritability, and unbearable pomposity.

In other words, the slave will always turn his master into an unhappy man – who is constantly annoyed, who cannot experience love, and who engenders no respect from those around him – particularly his children. (One of the worst aspects of being a slave-owner is that it turns you into a terrible and abusive father.) As a result of the slave’s passive-aggressive manipulations, the master becomes prone to violence – verbal and physical – self-abusive habits, crippling self-blindness, and sinks into a bottomless pit of discontent and misery.

This is the vengeance of the slave. All slaves are Iago. And, for the most part, all children are slaves. As you were.

[...] the great danger for the slave is his capacity to become addicted to the dark “satisfactions” of passive-aggressive vengeance. By enslaving his master, the slave gains a sense of control – and also re-creates in his master his own experience of enslavement. It is a subtle cry of hatred – and plea for empathy. [...]A slave can only hope for freedom by making owning slaves unbearable for his master. Not only might the slave’s endless passive-aggressive noncompliance and provocation provoke suicide on the part of his master – but his master’s miserable existence might also serve as a warning for others who might wish to own slaves.

[...]However, as mentioned above, the greatest danger for the slave is that he becomes addicted to the sense of control that comes from manipulating his master. In other words, the great danger for the slave is that he becomes addicted to his slavery. If a slave begins to believe his own master-destroying propaganda, then in the absence of masters, he will create them.

[...]Most of us are raised as slaves. Our opinions are rarely sought, rules are rarely explained – and moral rules never are – we are shipped off to schools where we are treated disrespectfully; our subservience is bought with rewards, and our independence is punished with detentions. Scepticism and curiosity are scorned and belittled, while empty abilities like throwing balls, learning dates, sitting still and “being pretty” are praised and elevated.

Lies about our history become cages for our futures. Lies about our own intelligence and originality lead us to the petty enslavement of “good citizenship” – and horrifying fairy tales about life in the absence of coercive or religious control scare us back into our slave pens the moment we even think of glancing outside to the green and beautiful hills beyond our bars.

Collective punishments turn us against each other; the “kibbles and whips” of the classroom reward us for laughing at each other to gain the favor of the teacher; terrifying and brutal “morality” is inflicted upon us. We are punished for not treating those in authority with “respect” (do they treat us with respect?) – and we are bred for a life of subservience, fear, productivity and dependence as surely as fattened calves are bred for veal.

Where in the past we were not taught to fear the priests, but rather the imaginary devils the priests warned us of, now we are not taught to fear our politicians, who can debase our currency, throw us in prison and send us to war – but rather we are taught to fear each other. We are taught to imagine that the real predators in this world are not those who control prison cells, national debts and nuclear weapons, but rather our fellow citizens, who in the absence of brutal control would surely tear us apart!

The entire purpose of state education is to make sure that we never truly “leave” our childhoods: that we spend our lives trembling in fear of imaginary predators, begging for “protection” from those who threaten us with the most harm.

[Source: http://freedomainradio.com/BOARD/blogs/freedomain/]

Necessity of the State

Logically, there are four possibilities as to the mixture of good and evil people in the world:

1. All men are moral.
2. All men are immoral.
3. The majority of men are immoral, and a minority moral.
4. The majority of men are moral, and a minority immoral.

(A perfect balance of good and evil is practically impossible.)

In the first case (all men are moral), the government is obviously not needed, since evil cannot exist.

In the second case (all men are immoral), the government cannot be permitted to exist for one simple reason. The government, it is generally argued, must exist because there are evil people in the world who desire to inflict harm, and who can only be restrained through fear of government retribution (police, prisons and so on). A corollary of this argument is that the less retribution these people fear, the more evil they will do. However, the government itself is not subject to any force or retribution, but is a law unto itself. Even in Western democracies, how many policemen and politicians go to jail? Thus if evil people wish to do harm, but are only restrained by force, then society can never permit a government to exist, because evil people will work feverishly to grab control of that government, in order to do evil and avoid retribution. In a society of pure evil, then, the only hope for stability would be a state of nature, where a general arming and fear of retribution would blunt the evil intents of disparate groups. As is the case between nuclear-armed nations, a “balance of power” breeds peace.

The third possibility is that most people are evil, and only a few are good. If that is the case, then the government also cannot be permitted to exist, since the majority of those in control of the government will be evil, and will rule despotically over the good minority. Democracy in particular cannot be permitted, since the minority of good people would be subjugated to the democratic control of the evil majority. Evil people, who wish to do harm without fear of retribution, would inevitably control the government, and use its power to do evil free of the fear of consequences. Good people do not act morally because they fear retribution, but because they love virtue and peace of mind – and thus, unlike evil people, they have little to gain by controlling the government. In this scenario, then, the government will inevitably be controlled by a majority of evil people who will rule over all, to the detriment of all moral people.

The fourth option is that most people are good, and only a few are evil. This possibility is subject to the same problems outlined above, notably that evil people will always want to gain control over the government, in order to shield themselves from just retaliation for their crimes. This option only changes the appearance of democracy: because the majority of people are good, evil power-seekers must lie to them in order to gain power, and then, after achieving public office, will immediately break faith and pursue their own corrupt agendas, enforcing their wills through the police and the military. (This is the current situation in democracies, of course.) Thus the government remains the greatest prize to the most evil men, who will quickly gain control over its awesome power – to the detriment of all good souls – and so the government cannot be permitted to exist in this scenario either.

It is clear, then, that there is no situation under which a government can logically or morally be allowed to exist. The only possible justification for the existence of a government would be if the majority of men are evil, but all the power of the government is always controlled by a minority of good men (see Plato’s Republic). This situation, while interesting theoretically, breaks down logically because:

1. The evil majority would quickly outvote the minority or overpower them through a coup;
2. There is no way to ensure that only good people would always run the government; and,
3. There is absolutely no example of this having ever occurred in any of the brutal annals of state history.

The logical error always made in the defense of the government is to imagine that collective moral judgments applied to a group of people do not also apply to the group which rules over them. If 50% of people are evil, then at least 50% of the people ruling over them are also evil (and probably more, since evil people are always drawn to power). Thus the existence of evil can never justify the existence of a government.

If there is no evil, governments are unnecessary. If evil exists, governments are far too dangerous to be allowed to exist.

Why is this error so prevalent?

There are a number of reasons, which can only be touched on here. The first is that the government introduces itself to children in the form of public school teachers who are considered moral authorities. Thus are morality and authority first associated with the government – an association that is then reinforced through years of grinding repetition.

The second is that the government never teaches children about the root of its power – violence – but instead pretends that it is just another social institution, like a business or a church or a charity, but more moral.

The third is that the prevalence of religion and propaganda has always blinded men to the evils of the government – which is why rulers have always been so interested in furthering the interests of churches and state “education.” In the religious world-view, absolute power is synonymous with perfect virtue, in the form of a deity. In the real political world of men, however, increasing power always means increasing evil. With religion, also, all that happens must be for the good – thus, fighting encroaching political power is fighting the will of the deity.

[...]people generally make two errors when confronted with the idea of dissolving the government. The first is the belief that governments are necessary because evil people exist. The second is the belief that, in the absence of governments, any social institutions that arise will inevitably take the place of governments. Thus, Dispute Resolution Organizations (DROs), insurance companies and private security forces are all considered potential cancers that will swell and overwhelm the body politic.

This view arises from the same error outlined above. If all social institutions are constantly trying to grow in power and enforce their wills on others, then by that very argument a centralized government cannot be allowed to exist. If it is an iron law that groups always try to gain power over other groups and individuals, then that power-lust will not end if one of them wins, but will continue to spread across society virtually unopposed until slavery is the norm.

The only way that social institutions can grow into violent monopolies is to offload the costs of enforcement onto their victims. Governments grow endlessly because they can pay tax collectors with a portion of the taxes they collect. The slaves are thus forced to pay for the costs of their enslavement.

[...]It is very hard to understand the logic and intelligence of the argument that, in order to protect us from a group that might overpower us, we should support a group that already has overpowered us. It is similar to the statist argument about private monopolies – that citizens should create a governmental monopoly because they are afraid of private monopolies. It does not take keen vision to see through such nonsense.

[Source: http://freedomainradio.com/BOARD/blogs/freedomain/]

The Lesser Evil

An objective review of human history would seem to point to the grim reality that by far the most dangerous things in the world are false ethical systems.

If we look at an ethical system like communism, which was responsible for the murders of 170 million people, we can clearly see that the real danger to individuals was not random criminals, but false moral theories. Similarly, the Spanish Inquisition relied not on thieves and pickpockets, but rather on priests and torturers filled with the desire to save the souls of others. Nazism also relied on particular ethical theories regarding the relationship between the individual and the collective, and the moral imperative to serve those in power, as well as theories “proving” the innate virtues of the Aryan race.

Over and over again, throughout human history, we see that the most dangerous instruments in the hands of men are not guns, or bombs, or knives, or poisons, but rather moral theories. From the “divine right of kings” to the endlessly legitimized mob rule of modern democracies, from the ancestor worship of certain Oriental cultures to the modern deference to the nation-state as personified by a political leader, to those who pledge their children to the service of particular religious ideologies, it is clear that by far the most dangerous tool that men possess is morality. Unlike science, which merely describes what is, and what is to be, moral theories exert a near-bottomless influence over the hearts and minds of men by telling them what ought to be.

When our leaders ask for our obedience, it is never to themselves as individuals, they claim, but rather to “the good” in the abstract. JFK did not say: “Ask not what I can do for you, but rather what you can do for me...” Instead, he substituted the words “your country” for himself. Service to “the country” is considered a virtue – although the net beneficiaries of that service are always those who rule citizens by force. In the past (and sometimes even into the present), leaders identified themselves with God, rather than with geography, but the principle remains the same. For Communists, the abstract mechanism that justifies the power of the leaders is class; for fascists it is the nation; for Nazis it is the race; for democrats it is “the will of the people”; for priests it is “the will of God” and so on.

Ruling classes inevitably use ethical theories to justify their power for the simple reason that human beings have an implacable desire to act in accordance with what they believe to be “the good.” If service to the Fatherland can be defined as “the good,” then such service will inevitably be provided. If obedience to military superiors can be defined as “virtue” and “courage,” then such violent slavery will be endlessly praised and performed.

The more false the moral theory is, the earlier that it must be inflicted upon children. We do not see the children of scientifically minded people being sent to “logic school” from the tender age of three or four onwards. We do not see the children of free market advocates being sent to “Capitalism Camp” when they are five years old. We do not see the children of philosophers being sent to a Rational Empiricism Theme Park in order to be indoctrinated into the value of trusting their own senses and using their own minds.

No, wherever ethical theories are corrupt, self-contradictory and destructive, they must be inflicted upon the helpless minds of dependent children. The Jesuits are credited with the proverb: “Give me a child until he is nine and he will be mine for life,” but that is only because the Jesuits were teaching superstitious and destructive lies. You could never imagine a modern scientist hungering to imprint his falsehoods on a newborn consciousness. Picture somebody like Richard Dawkins saying the above, just to see how ridiculous it would be.

Any ethicist, then, who focuses on mere criminality, rather than the institutional crimes supported by ethical theories, is missing the picture almost entirely, and serving mankind up to the slaughterhouse. A doctor who, in the middle of a universal and deadly plague, focused his entire efforts on communicating about the possible health consequences of being slightly overweight, would be considered rather deranged, and scarcely a reliable guide in medical matters. If your house is on fire, mulling over the colors you might want to paint your walls might well be considered a sub-optimal prioritization.

Private criminals exist, of course, but their impact on our lives is negligible compared to that of those who rule us on the basis of false moral theories.

Once, when I was 11, another boy stole a few dollars from me. Another time, when I was 26, I left my ATM card in a bank machine, and someone stole a few hundred dollars from my account.

On the other hand, I have had hundreds of thousands of dollars taken from me by force through the moral theory of “taxation is good.” I was forced to sit in the grim and brain-destroying mental gulags of public schools for 14 years, based on the moral theory that “state education is a virtue.” (Or, rather: “forced education is a virtue” – my parents were compelled to pay through taxes, and I was compelled to attend.)

The boy (and the man) who stole my money doubtless used it for some personal pleasure or need. The government that steals my money, on the other hand, uses it to oppress the poor, to fund wars, to pay the rich, to borrow money and so impoverish my children – and to pay the salaries of those who steal from me.

If I were a doctor in the middle of a great city struck down by a terrible plague, and I discovered that that plague was being transmitted through the water pipes, what should my rational response be – if I claimed to truly care about the health of my fellow citizens?

Surely I should cry from the very rooftops that their drinking water was causing the plague. Surely I should take every measure possible to get people to understand the true source of the illness that struck them down.

Surely, in the knowledge of such universal and preventable poisoning, I should not waste my time arguing that the true danger you faced was the tiny possibility that some random individual might decide to poison you at some point in the future.

[...]The violations that I experienced at the hands of private criminals fade to insignificance relative to even one day under the tender mercies of my “virtuous and good masters.”

[Source: http://freedomainradio.com/BOARD/blogs/freedomain/]

Human Farming

The Matrix is one of the greatest metaphors ever. Machines invented to make human life easier end up enslaving humanity - this is the most common theme in dystopian science fiction. Why is this fear so universal - so compelling? Is it because we really believe that our toaster and our notebook will end up as our mechanical overlords? Of course not. This is not a future that we fear, but a past that we are already living.

Supposedly, governments were invented to make human life easier and safer, but governments always end up enslaving humanity. That which we create to "serve" us ends up ruling us.

The US government "by and for the people" now imprisons millions, takes half the national income by force, over-regulates, punishes, tortures, slaughters foreigners, invades countries, overthrows governments, maintains 700 imperialistic bases overseas, inflates the currency, and crushes future generations with massive debts. That which we create to "serve" us ends up ruling us.

The problem with the "state as servant" thesis is that it is historically completely false, both empirically and logically. The idea that states were voluntarily invented by citizens to enhance their own security is utterly untrue.

Before governments, in tribal times, human beings could only produce what they consumed -- there was no excess production of food or other resources. Thus, there was no point owning slaves, because the slave could not produce any excess that could be stolen by the master. If a horse pulling a plow can only produce enough additional food to feed the horse, there is no point hunting, capturing and breaking in a horse.

However, when agricultural improvements allowed for the creation of excess crops, suddenly it became highly advantageous to own human beings. When cows began to provide excess milk and meat, owning cows became worthwhile.

The earliest governments and empires were in fact a ruling class of slave hunters, who understood that because human beings could produce more than they consumed, they were worth hunting, capturing, breaking in - and owning.

The earliest Egyptian and Chinese empires were in reality human farms, where people were hunted, captured, domesticated and owned like any other form of livestock. Due to technological and methodological improvements, the slaves produced enough excess that the labor involved in capturing and keeping them represented only a small fraction of their total productivity. The ruling class - the farmers - kept a large portion of that excess, while handing out gifts and payments to the brutalizing class - the police, slave hunters, and general sadists - and the propagandizing class - the priests, intellectuals, and artists.

This situation continued for thousands of years, until the 16th and 17th centuries, when massive improvements in agricultural organization and technology again created a second wave of excess productivity. The enclosure movement re-organized and consolidated farmland, resulting in 5-10 times more crops, creating a new class of industrial workers, displaced from the country and huddling in the new cities. This enormous agricultural excess was the basis of the capital that drove the industrial revolution. The Industrial Revolution did not arise because the ruling class wanted to free their serfs, but rather because they realized how additional "liberties" could make their livestock astoundingly more productive. When cows are placed in very confining stalls, they beat their heads against the walls, resulting in injuries and infections. Thus farmers now give them more room -- not because they want to set their cows free, but rather because they want greater productivity and lower costs.

The next stop after "free range" is not "freedom." The rise of state capitalism in the 19th century was actually the rise of "free range serfdom." Additional liberties were granted to the human livestock not with the goal of setting them free, but rather with the goal of increasing their productivity.

Of course, intellectuals, artists and priests were - and are - well paid to conceal this reality. The great problem of modern human livestock ownership is the challenge of "enthusiasm." State capitalism only works when the entrepreneurial spirit drives creativity and productivity in the economy.

However, excess productivity always creates a larger state, and swells the ruling classes and their dependents, which eats into the motivation for additional productivity. Taxes and regulations rise, state debt (future farming) increases, and living standards stagnate and decay. Depression and despair begin to spread, as the reality of being owned sets in for the general population. The solution to this is additional propaganda, antidepressant medications, superstition, wars, moral campaigns of every kind, the creation of "enemies," the inculcation of patriotism, collective fears, paranoia about "outsiders" and "immigrants," and so on.

It is essential to understand the reality of the world. When you look at a map of the world, you are not looking at countries, but farms.

You are allowed certain liberties - limited property ownership, movement rights, freedom of association and occupation - not because your government approves of these rights in principle - since it constantly violates them - but rather because "free range livestock" is so much cheaper to own and so much more productive.

It is important to understand the reality of ideologies. State capitalism, socialism, communism, fascism, democracy - these are all livestock management approaches. Some work well for long periods - state capitalism - and some work very badly - communism.

[...]The recent growth of "freedom" in China, India and Asia is occurring because the local state farmers have upgraded their livestock management practices. They have recognized that putting the cows in a larger stall provides the rulers more milk and meat.

Rulers have also recognized that if they prevent you from fleeing the farm, you will become depressed, inert and unproductive. A serf is the most productive when he imagines he is free. Thus your rulers must provide you the illusion of freedom in order to harvest you most effectively. Thus you are "allowed" to leave - but never to real freedom, only to another farm, because the whole world is a farm. They will prevent you from taking a lot of money, they will bury you in endless paperwork, they will restrict your right to work -- but you are "free" to leave. Due to these difficulties, very few people do leave, but the illusion of mobility is maintained. If only 1 out of 1,000 cows escapes, but the illusion of escaping significantly raises the productivity of the remaining 999, it remains a net gain for the farmer.

You are also kept on the farm through licensing. The most productive livestock are the professionals, so the rulers fit them with an electronic dog collar called a "license," which only allows them to practice their trade on their own farm.

To further create the illusion of freedom, in certain farms, the livestock are allowed to choose between a few farmers that the investors present. At best, they are given minor choices in how they are managed. They are never given the choice to shut down the farm, and be truly free.

Government schools are indoctrination pens for livestock. They train children to "love" the farm, and to fear true freedom and independence, and to attack anyone who questions the brutal reality of human ownership. Furthermore, they create jobs for the intellectuals that state propaganda so relies on.

The ridiculous contradictions of statism -- like religion -- can only be sustained through endless propaganda inflicted upon helpless children. The idea that democracy and some sort of "social contract" justifies the brutal exercise of violent power over billions is patently ridiculous. If you say to a slave that his ancestors "chose" slavery, and therefore he is bound by their decisions, he will simply say: "If slavery is a choice, then I choose not to be a slave." This is the most frightening statement for the ruling classes, which is why they train their slaves to attack anyone who dares speak it.

Statism is not a philosophy. Statism does not originate from historical evidence or rational principles. Statism is an ex post facto justification for human ownership. Statism is an excuse for violence. Statism is an ideology, and all ideologies are variations on human livestock management practices. Religion is pimped-out superstition, designed to drug children with fears that they will endlessly pay to have "alleviated." Nationalism is pimped-out bigotry, designed to provoke a Stockholm Syndrome in the livestock.

----

[...]Like all animals, human beings want to dominate and exploit the resources around them. At first, we mostly hunted and fished and ate off the land - but then something magical and terrible happened to our minds. We became, alone among the animals, afraid of death, and of future loss. And this was the start of a great tragedy, and an even greater possibility...

You see, when we became afraid of death, of injury, and imprisonment, we became controllable -- and so valuable -- in a way that no other resource could ever be. The greatest resource for any human being to control is not natural resources, or tools, or animals or land -- but other human beings.

You can frighten an animal, because animals are afraid of pain in the moment, but you cannot frighten an animal with a loss of liberty, or with torture or imprisonment in the future, because animals have very little sense of tomorrow. You cannot threaten a cow with torture, or a sheep with death. You cannot swing a sword at a tree and scream at it to produce more fruit, or hold a burning torch to a field and demand more wheat. You cannot get more eggs by threatening a hen - but you can get a man to give you his eggs by threatening him.

Human farming has been the most profitable -- and destructive -- occupation throughout history, and it is now reaching its destructive climax. Human society cannot be rationally understood until it is seen for what it is: a series of farms where human farmers own human livestock.

Some people get confused because governments provide healthcare and water and education and roads, and thus imagine that there is some benevolence at work. Nothing could be further from reality. Farmers provide healthcare and irrigation and training to their livestock.

Some people get confused because we are allowed certain liberties, and thus imagine that our government protects our freedoms. But farmers plant their crops a certain distance apart to increase their yields -- and will allow certain animals larger stalls or fields if it means they will produce more meat and milk. In your country, your tax farm, your farmer grants you certain freedoms not because he cares about your liberties, but because he wants to increase his profits. Are you beginning to see the nature of the cage you were born into?

There have been four major phases of human farming.

The first phase, in ancient Egypt, was direct and brutal human compulsion. Human bodies were controlled, but the creative productivity of the human mind remained outside the reach of the whip and the brand and the shackles. Slaves remained woefully underproductive, and required enormous resources to control.

The second phase was the Roman model, wherein slaves were granted some capacity for freedom, ingenuity and creativity, which raised their productivity. This increased the wealth of Rome, and thus the tax income of the Roman government - and with this additional wealth, Rome became an empire, destroyed the economic freedoms that fed its power, and collapsed. I'm sure that this does not seem entirely unfamiliar.

After the collapse of Rome, the feudal model introduced the concept of livestock ownership and taxation. Instead of being directly owned, peasants farmed land that they could retain as long as they paid off the local warlords. This model broke down due to the continual subdivision of productive land, and was destroyed during the Enclosure movement, when land was consolidated, and hundreds of thousands of peasants were kicked off their ancestral lands, because new farming techniques made larger farms more productive with fewer people.

The increased productivity of the late Middle Ages created the excess food required for the expansion of towns and cities, which in turn gave rise to the modern Democratic model of human ownership. As displaced peasants flooded into the cities, a huge stock of cheap human capital became available to the rising industrialists - and the ruling class of human farmers quickly realized that they could make more money by letting their livestock choose their own occupations. Under the Democratic model, direct slave ownership has been replaced by the Mafia model. The Mafia rarely owns businesses directly, but rather sends thugs around once a month to steal from the business "owners." You are now allowed to choose your own occupation, which raises your productivity - and thus the taxes you can pay to your masters. Your few freedoms are preserved because they are profitable to your owners.

The great challenge of the Democratic model is that increases in wealth and freedom threaten the farmers. The ruling classes initially profit from a relatively free market in capital and labor, but as their livestock become more used to their freedoms and growing wealth, they begin to question why they need rulers at all.

Ah well. Nobody ever said that human farming was easy.

Keeping the tax livestock securely in the compounds of the ruling classes is a three-phase process.

The first is to indoctrinate the young through government "education." As the wealth of democratic countries grew, government schools were universally inflicted in order to control the thoughts and souls of the livestock.

The second is to turn citizens against each other through the creation of dependent livestock. It is very difficult to rule human beings directly through force -- and where it can be achieved, it remains cripplingly underproductive, as can be seen in North Korea. Human beings do not breed well or produce efficiently in direct captivity. If human beings believe that they are free, then they will produce much more for their farmers. The best way to maintain this illusion of freedom is to put some of the livestock on the payroll of the farmer. Those cows that become dependent on the existing hierarchy will then attack any other cows who point out the violence, hypocrisy and immorality of human ownership. Freedom is slavery, and slavery is freedom. If you can get the cows to attack each other whenever anybody brings up the reality of their situation, then you don't have to spend nearly as much effort controlling them directly. Those cows who become dependent upon the stolen largess of the farmer will violently oppose any questioning of the virtue of human ownership -- and the intellectual and artistic classes, always and forever dependent upon the farmers, will say, to anyone who demands freedom from ownership: "You will harm your fellow cows." The livestock are kept enclosed by shifting the moral responsibility for the destructiveness of a violent system to those who demand real freedom.

The third phase is to invent continual external threats, so that the frightened livestock cling to the "protection" of the farmers. [...]

[Source: http://freedomainradio.com/BOARD/blogs/freedomain/]

Love

Mythological Love

Our whole lives, we are surrounded by people who claim to love us. Our parents perpetually claim to be motivated by what is best for us. Our teachers eternally proclaim that their sole motivation is to help us learn. Our priests voice concern for our eternal souls, and extended family members endlessly announce their devotion to the clan.

When people claim to love us, it is not unreasonable to expect that they know us. If you tell me that you love Thailand, but it turns out that you have never been there, and know very little about it, then it is hard for me to believe that you really love it. If I say that I love opera, but I never listen to opera – well, you get the general idea!

If I say that I love you, but I know little about your real thoughts and feelings, and have no idea what your true values are – or perhaps even what your favourite books, authors or movies are – then it should logically be very hard for you to believe me.

This is certainly the case in my family. My mother, brother and father made extravagant claims about their love for me. However, when I finally sat down and asked each of them to recount a few facts about me – some of my preferences and values – I got a perfect tripod of “thousand yard stares.”

So, I thought, if people who know almost nothing about me claim to love me, then either they are lying, or I do not understand love at all.

I will not go into details about my theories of love here, other than to say that, in my view, love is our involuntary response to virtue, just as well-being is our involuntary response to a healthy lifestyle. (Our affection for our babies is more attachment than mature love, since it is shared throughout the animal kingdom.)

Virtue is a complicated subject, but I am sure we can agree that virtue must involve some basics that are commonly understood, such as courage, integrity, benevolence, empathy, wisdom and so on.

If this is the case, it cannot be possible to love people that we know very little about. If love requires virtue, then we cannot love perfect strangers, because we know nothing about their virtues. Love depends both on another person’s virtue, and our knowledge of it – and it grows in proportion to that virtue and knowledge, if we are virtuous ourselves.

Throughout my childhood, whenever I expressed a personal thought, desire, wish, preference or feeling, I was generally met with eye rolling, incomprehension, avoidance or, all too often, outright scorn. These various “rejection tactics” were completely conjoined with expressions of love and devotion. When I started getting into philosophy – through the works of Ayn Rand originally – my growing love of wisdom was dismissed out of hand as some sort of psychological dysfunction.

Since my family knew precious little about my virtues – and what they did know they disliked – then we could not all be virtuous. If they were virtuous, and disliked my values, then my values could not be virtuous. If I was virtuous, and they disliked my values, then they could not be virtuous.

And so I set about trying to create an “ethical map” of my family.

It was the most frightening thing I have ever done. The amount of emotional resistance I felt towards the idea of trying to rationally and morally understand my family was staggering – it felt as if I were sprinting directly off a cliff.

Why was it so terrifying?

Well, because I knew that they were lying. I knew that they were lying about loving me, and I knew that, by claiming to be confused about whether they loved me, I was lying as well – and to myself, which is the worst of all falsehoods.

Love: The Word versus the Deed

Saying the word “success” is far easier than actually achieving success. Mouthing the word “love” is far easier than actually loving someone for the right reasons – and being loved for the right reasons.

If we do not have any standards for being loved, then laziness and indifference will inevitably result. If I have a job where I work from home, and no one ever checks up on me, and I never have to produce anything, and I get paid no matter what, and I cannot get fired, how long will it be before my work ethic decays? Days? Weeks? Certainly not months.

One of the most important questions to ask in any examination of the truth is “compared to what?” For instance, if I say I love you, implicit in that statement is a preference for you over others. In other words, compared to others, I prefer you. We prefer honesty compared to falsehood, satiation to hunger, warmth to cold and so on.

It is not logically valid to equate the word “love” with “family.” The word “family” is a mere description of a biological commonality – it makes no more sense to equate “love” with “family” than it does to equate “love” with “mammal.” Thus the word “love” must mean a preference compared to – what?

It is impossible to have any standards for love if we do not have any standards for truth. Since being honest is better than lying, and courage is better than cowardice, and truth is better than falsehood, we cannot have honesty and courage unless we are standing for something that is true. Thus when we say that we “love” someone, what we really mean is that his actions are consistent, compared to a rational standard of virtue. In the same way, when I say that somebody is “healthy,” what I really mean is that his organs are functioning consistently, relative to a rational standard of well-being.

Thus love is not a subjective preference, or a biological commonality, but our involuntary response to virtuous actions on the part of another.

If we truly understand this definition, then it is easy for us to see that a society that does not know truth cannot ever know love.

If nothing is true, virtue is impossible.

If virtue is impossible, then we are forced to pretend to be virtuous, through patriotism, clan loyalties, cultural pride, superstitious conformities and other such amoral counterfeits.

If virtue is impossible, then love is impossible, because actions cannot be compared to any objective standard of goodness. If love is impossible, we are forced to resort to sentimentality, or the shallow show and outward appearance of love.

Thus it can be seen that any set of principles that interferes with our ability to know and understand the truth hollows us out, undermining and destroying our capacity for love. False principles, illusions, fantasies and mythologies separate us from each other, from virtue, from love, from the true connections that we can achieve only through reality.

In fantasy, there is only isolation and pretence. Mythology is, fundamentally, loneliness and emptiness.


Imagination versus Fantasy

At this point, I think it would be well worth highlighting the differences between imagination and fantasy, because many people, on hearing my criticisms of mythology, think that they are now not supposed to enjoy Star Wars.

Imagination is a creative faculty that is deeply rooted in reality. Fantasy, on the other hand, is a mere species of intangible wish fulfillment. It took Tolkien decades of study and writing to produce “The Lord of the Rings” – and each part of that novel was rationally consistent with the whole. That is an example of imagination. If I laze about daydreaming that one day I will make a fortune by writing a better novel than “The Lord of the Rings” – but never actually set pen to paper – that is an example of fantasy. Imagination produced the theory of relativity, not fantasizing about someday winning a Nobel Prize.

Daydreams that are never converted into action are the ultimate procrastination. Imagining a wonderful future that you never have to act to achieve prevents you from achieving a wonderful future.

In the same way, imagining that you know the truth when you do not prevents you from ever learning the truth. Nothing is more dangerous than the illusion of knowledge. If you are going the wrong way, but do not doubt your direction, you will never turn around.

As Socrates noted more than 2,000 years ago, doubt is the midwife of curiosity, and curiosity breeds wisdom.

Fantasy is the opposite of doubt. Mythology provides instant answers when people do not even know what the questions are. In the Middle Ages, when someone asked “Where did the world come from?” he was told: “God made it.” This effectively precluded the necessity of asking the more relevant question: “What is the world?”

Because religious people believed they knew where the world came from, there was little point asking what the world was. Because there was little point asking what the world was, they never learned where the world came from.

Fantasy is a circle of nothingness, forever eating its own tail.

Defining Love

If people fantasize that they know what is true, then they inevitably stop searching for the truth. If I am driving home, I stop driving when I get there. If people fantasize that they know what goodness is, they inevitably stop trying to understand goodness.

And, most importantly, if people fantasize that they already are good, they stop trying to become good. If you want a baby, and you believe that you are pregnant, you stop trying to get pregnant.

The question – which we already know the answer to – thus remains: why do people who claim to love us never tell us what love is?

If I am an accomplished mathematician, and my child comes to me and asks me about the times tables, it would be rude and churlish of me to dismiss his questions. If I go to my mother, who for 30 years has claimed to love me, and ask her what love is, why is it that she refuses to answer my question? Why does my brother roll his eyes and change the subject whenever I ask him what it is that he loves about me? Why does my father claim to love me, while continually rejecting everything that I hold precious?

Why does everyone around me perpetually use words that they refuse to define? Are they full of a knowledge that they cannot express? That is not a good reason for refusing to discuss the topics. A novelist who writes instinctually would not logically be hostile if asked about the source of his inspiration. He may not come up with a perfect answer, but there would be no reason to perpetually avoid the subject.

Unless…

Unless, of course, he is a plagiarist.


What We Know

This is the knowledge that we have, but hate and fear.

We know that the people who claim to love us know precious little about us, and nothing at all about love.

We know that the people who claim to love us make this claim in order to create obligations within us.

We know that the people who claim to love us make this claim in order to control us.

And they know it too.

It is completely obvious that they know this, because they know exactly which topics to avoid. A counterfeiter will not mind if you ask him what the capital of Madagascar is. A counterfeiter will mind, however, if you ask him whether you can check the authenticity of his money. Why is this the one topic that he will try to avoid at all costs?

Because he knows that his currency is fake.

And he also knows that if you find that out, he can no longer use it to rob you blind.


Obligations

If I own a store, and take counterfeit money from a con man, but do not know that it is counterfeit, then I am obligated to hand over what he has “bought.”

In the same way, if I believe that I am loved – even when I am not loved – I am to a degree honour-bound to return that love. If my mother says that she loves me, and she is virtuous, then she must love me because I am virtuous. Since she is herself virtuous, then I “owe” her love as a matter of justice, just as I owe trust to someone who consistently behaves in a trustworthy manner.

Thus when somebody tries to convince you that they love you, they’re actually attempting to create an obligation in you. If I try to convince you that I am a trustworthy person, it is because I want all the benefits of being treated as if I were a trustworthy person. If I am in fact a trustworthy person, then I must understand the nature of trust – at least at some level – and thus I must know that it cannot be demanded, but must be earned. Since earning trust is harder than just demanding trust, I must know the real value of trust, otherwise I would not have taken the trouble to earn it through consistent behaviour – I would have just demanded it and skipped all the hard stuff!

If you demand trust, you are demanding the unearned, which indicates that you do not believe you can earn it. Thus anyone who demands trust is automatically untrustworthy.

Why do people demand trust?

To rob others.

If I want to borrow money from you, and I demand that you trust me, it’s because I am not trustworthy, and will be unlikely to pay you back.

In other words, I want to steal your money, and put you in my power.

It’s the same with love.


Love and Virtue

If I am virtuous, then virtuous people will regard me with at least respect, if not love. Corrupt or evil people may regard me with a certain respect, but they will certainly not love me.

Thus being virtuous and refusing to demand love from anyone is the best way to find other virtuous people. If you are virtuous and undemanding, then other virtuous people will naturally gravitate towards you. Virtue that does not impose itself on others is like a magnet for goodness, and repels corruption.

The practical result of true virtue is fundamental self-protection.

If my stockbroker consistently gets me a 30% return on my investments, is there any amount of money that I will not give him, other than what I need to live? Of course not! Because I know I will always get back more than I give.

It’s the same with real love.

If I am virtuous, then I will inevitably feel positively inclined towards other virtuous people – and the more virtuous they are, the more I will love them. My energy, time and resources will be at their disposal, because I know that I will not be exploited, and that they will reciprocate my generosity.

If you and I have lent money to each other over the years, and have always paid each other back, then the next time you come to me for a loan, it would be unjust for me to tell you that I will not lend you anything because I do not think you will pay me back. Your continued honesty towards me in financial matters has created an obligation in me towards you. This does not mean that I must lend you money whenever you ask for it, but I cannot justly claim as my reason for not lending you money a belief that you will not pay me back.

In the same way, if you have been my wife for 20 years, and I have never been unfaithful, if a woman calls and then hangs up, it would be unjust for you to immediately accuse me of infidelity.

A central tactic for creating artificial and unjust obligations in others is to demand their positive opinion, without being willing to earn it. The most effective way to do this is to offer a positive opinion, which has not been earned – to claim to love others.

If, over the past 20 years, I have rarely paid back any money I have borrowed from you, it is perfectly reasonable to refuse me an additional loan. I may then get angry, and call you unfair, and demand that you treat me as if I were trustworthy, but it would scarcely be virtuous for you to comply with my wishes. Indeed, it would be dishonest and unjust for you to ignore my untrustworthiness, because you would be acting as if there was no difference between someone who pays back loans, and someone who does not.

When we act in a virtuous manner towards others, we are creating a reservoir of goodwill that we can draw upon, just as when we put our savings into a bank. A man can act imperfectly and still be loved, just as a man can eat an occasional candy bar and still be healthy, but there is a general requirement for consistency in any discipline. I could probably hit a home run in a major-league ballpark once every thousand pitches, but that would scarcely make me a professional baseball player!

If I act in a trustworthy manner, I do not have to ask you to trust me – and in fact, I would be very unwise to do so. Either you will trust me voluntarily, which means that you respect honourable and consistent behaviour, and justly respond to those who do good, or you will not trust me voluntarily, which means that you do not respond in a just manner to trustworthy behaviour, and thus cannot be trusted yourself.

If, on the other hand, I come up to you and demand that you trust me, I am engaged in a complex calculation of counterfeiting and plunder.

The first thing I am trying to do is establish whether or not you know anything about trust. The second is to figure out your level of confidence and self-esteem. The third is to figure out whether you know anything about integrity.

An attacker will always try to find the weakest chink in your armour. If I demand trust from you, and you agree to provide it – without any prior evidence – then I know that you do not know anything about trust. Similarly, if you do not require that your trust be earned, then I know that you lack confidence and self-esteem. If you are willing to treat me as if I were trustworthy when I am not, then it is clear to me that you know very little about integrity.

This tells me all I need to know about your history. This tells me that you were never treated with respect as a child, and that you were never taught to judge people according to independent standards, and that every time you tried to stand up for yourself, your family attacked you.

In other words, I will know that you are easy prey.

I cannot create an obligation in you unless you accept that I have treated you justly in the past. As in all things, it is far easier to convince a weak person that you have treated him justly, than it is to actually treat people in a just and consistent manner. If I can convince you that I have treated you justly in the past, then you “owe” me trust and respect in the present.


“Love” as Predation

Imagine that we are brothers, and one day you awake from a coma to see me sitting by your bed. After some small talk, I tell you that you owe me $1,000, which you borrowed from me the day of your accident. I tell you that because I am a kind brother, and you are in the hospital, you do not have to pay me back the thousand dollars – I would just like you to remember it, so that the next time I need to borrow $1,000, you will lend it to me.

You might look in the pockets of the jeans you wore the day of your accident, and you might look around your apartment to see if there was $1,000 lying around, but there would be no real way to prove that I had not lent you the money. You would either have to call me a liar – an accusation for which you have no certain proof – or you would feel substantially more obligated to lend me money in the future.

If you call me a liar, I will get angry. If you accept the obligation without ever finding the $1,000, you will feel resentful. Either way, our relationship is harmed – and by telling you about the $1,000, I have voluntarily introduced a complication and a suspicion into our relationship, which is scarcely loving, just or benevolent.

This is the kind of brinksmanship and deception that goes on all the time in relationships – particularly in families.

When our parents tell us that they love us, they are in fact demanding that we provide for them. They are basically telling us that they have lent us $1,000 – even if we cannot remember it – and thus we owe them trust in the future, if not $1,000 in the present!

In other words, our parents spend an enormous amount of energy convincing us that they “love” us in order to create artificial obligations within us. In doing so, they take a terrible risk – and force us to make an even more terrible choice.


Brinksmanship

When somebody tells you that they love you, it is either a statement of genuine regard, based on mutual virtue, or it is an exploitive and unjust demand for your money, time, resources, or approval.

There is very little in between.

Either love is real, and a true joy, or love is false, and the most corrupt and cowardly form of theft that can be imagined.

If love is real, then it inflicts no unjust obligations. If love is real, then it is freely given without demands. If a good man gives you his love, and you do not reciprocate it, then he just realizes that he was mistaken, learns a little, and moves on. If a woman tells you that she loves you, and then resents any hesitation or lack of reciprocation you display, then she does not love you, but is using the word “love” as a kind of hook, to entrap you into doing what she wants, to your own detriment.

How can you possibly know whether the love that somebody expresses towards you is genuine or not?

It’s very, very simple.

When it is genuine, you feel it.

What happens, though, when a parent demands love from us?

Well, we must either submit to this demand, and pretend to respond in kind, or we must confront the parent on the manipulation – thus threatening the entire basis of the relationship.

Would someone who truly loves us ever put us in this terrible position?


Society and Religion

The principle of inflicting a good opinion in order to create an unjust obligation occurs at a social level, as well as at a personal level. Soldiers are supposed to have died “protecting us,” which creates an obligation for us to support the troops. The mere act of being born in a country creates a lifelong obligation to pay taxes at the point of a gun, in order to receive services that we never directly asked for. John F. Kennedy’s famous exhortation, “Ask not what your country can do for you – ask what you can do for your country,” is another way of saying, “One of us is going to get screwed in this interaction, and it ain’t gonna be me!”

The same thing occurs in the realm of religion, of course, as well. Jesus died for your sins, God loves you, you will be punished if you do not obey, Hell is the destination of unbelievers etc. etc. etc.

All of these emotional tricks are designed to create an obligation in you that would not exist in any reasonable universe.

“Sacrifice,” in other words, is merely demand in disguise.

[Source: http://freedomainradio.com/BOARD/blogs/freedomain/archive/2008/09/11/book-on-truth-the-tyranny-of-illusion.aspx]
 