“Come brave Diderot, intrepid d’Alembert, ally yourselves; … overwhelm the fanatics and the knaves, destroy the insipid declamations, the miserable sophistries, the lying history … the absurdities without number; do not let those who have sense to be subjected to those who have none; and the generation which is being born will owe to us its reason and liberty.”— Voltaire
“Every civilization comes at last to the point where the individual, made by speculation conscious of himself as an end per se, demands of the state, as the price of its continuance, that it shall henceforth enhance rather than exploit his capacities. Philosophers sympathize with this demand, the state almost always rejects it: therefore civilizations come and civilizations go. The history of philosophy is essentially an account of the efforts great men have made to avert social disintegration by building up natural moral sanctions to take the place of the supernatural sanctions which they themselves have destroyed. To find – without resorting to celestial machinery – some way of winning for their people social coherence and permanence without sacrificing plasticity and individual uniqueness to regimentation, – that has been the task of philosophers, that is the task of philosophers.”— Will Durant
“The most formidable weapon against errors of every kind is reason. I have never used any other, and I trust I never shall.”— Thomas Paine
“No problem can withstand the assault of sustained thinking.”— Voltaire
Although the human species is the most cognitively gifted, our fledgling intellect is still prone to barbarism. The evidence of this abounds. We as a species have engaged and continue to engage in shockingly horrendous actions, while, evidently, believing these actions to be justified. Related to this is just how easy it is to lock a child into a lifetime of false beliefs. It is as if we are not quite finished evolving yet, with one foot in a rational realm, and one foot in animalistic hell.
Philosophy, properly conceived, aims to be a civilizing influence. Good philosophy helps to guard against our deficits, to tame our barbarism, and thereby to minimize the damage these cause, both to ourselves, and to humanity in general. It does this by providing a healthier way of finding our beliefs than we, bereft of a proper philosophy, would use.
But good philosophy requires good philosophers. Who is he who thinks he has overcome his own frailties enough to help rather than harm? Whence springs the audacity of a philosopher? It is easy for a philosopher to reply: no one is forcing you to listen, and you can make up your own mind. But indeed, many “philosophers”, being more representative of the barbarism philosophy must fight than of the humanity a proper philosophy fights for, have harmed and continue to harm. Men like Adolf Hitler and Joseph Stalin didn’t have supernatural powers to hypnotize; on the contrary, philosophers laid a barbaric cultural foundation for them, long ahead of time.
In reaction to the horrific consequences of bad philosophy, some philosophers have put forth as a “remedy” the simplistic notion that “one must never be certain.” But philosophy demands a certain certainty: it demands rigid and unswerving devotion to reason, for if we fail to trust what is our only means of knowing, then we shut ourselves out of knowledge, including the “knowledge” that reason is intrinsically too weak to find truth. If we do not extend full confidence and trust to reason in its basic sense, then inevitably and in an essential sense, we can only be hypocrites, as in claiming to know with certainty that “one must never be certain.” Such hypocrites, when faced with a firm confidence in reason, inexorably castigate what they claim to be arrogance – when actually there is only a devotion to the truth. They therefore further injustice under the pretense that they aim to mitigate it. But proper philosophy demands loyalty to reason: a commitment to find beliefs through natural experience and to reconcile contrary beliefs. Such is the cause and essence of civilization in the true and best sense. The essence of barbarism is precisely the opposite: to permit a throng of contradictory beliefs to exist in one’s mind, which inexorably leads to disaster in reality. Precisely because “one must never be certain” is itself contradictory, it is barbarism, and must be shunned by actually civilized people.
The pragmatist might respond: “A pox on all of your houses; one should shun philosophy!” But to flee from philosophy is as much a folly as to flee from one’s own body, since to refrain from philosophizing is merely to refrain from governing one’s implicit and explicit philosophical beliefs, and, as weeds naturally grow in an untended garden, ungoverned beliefs are contradictory beliefs, and inevitably produce a clashing mayhem. So if you scratch below the surface of someone who professes not to have a philosophy, you will find someone who has uncritically accepted various and sundry philosophical assumptions from the surrounding culture, and who dogmatically applies them, inconsistently, to suit his whims. Contrary to what such fools assume, a stunning lack of self-awareness does not lead to world peace.
One cannot escape philosophy without embracing absurdity. Regardless of the hazards, then, we have no good alternative but to proceed. We must, as a matter of principle, be confident in the power of human reason; likewise, we must take due care to accept only justified beliefs as knowledge. Once we begin to think and communicate, we are in a real sense forced into philosophy. To use a metaphor: we are in a jungle of problems, and we wish to find a proper way out. To not proceed with a firm confidence in reason is to have a dogmatic and hypocritical philosophy; to proceed is to be fraught with difficulties in every direction.
In order for success to be open to frail creatures, we must not demotivate ourselves at the outset. We must begin with the assumption that nature is not out to get us, and that we can indeed resolve our intellectual problems, if we try hard enough. As a corollary, we must dismiss any philosophy whose veracity, in basic terms, a sincere layman of normal intelligence cannot ascertain. A system of philosophy that is open to understanding only by those who were granted many years of resources (usually by government agencies) is by necessity a system of authoritarianism and control. No one can escape from philosophy, so if one cannot justify one’s philosophy to oneself, if the only true philosophy is that which specialists justify and understand, then under that philosophy every non-specialist is a slave, who pays for his master’s arrogant stipulations with the sweat of his brow, and who, by his master’s design, cannot question that which his work pays for. A nature that would put the vast majority of human beings into such subjection is not benevolent, and a human being who would proffer such a self-serving view of nature is not to be trusted. There indeed are areas of human knowledge that only specialists can understand; a philosophy proper to human beings is not one of them. If a specialist tells you that your philosophy is fine for “the average man”, but is not the one he uses, then he is pompous and untrustworthy and his government funding should be cut off.
“Is there any knowledge in the world which is so certain that no reasonable man could doubt it? This question, which at first sight might not seem difficult, is really one of the most difficult that can be asked. When we have realized the obstacles in the way of a straightforward and confident answer, we shall be well launched on the study of philosophy – for philosophy is merely the attempt to answer such ultimate questions, not carelessly and dogmatically, as we do in ordinary life and even in the sciences, but critically, after exploring all that makes such questions puzzling, and after realizing all the vagueness and confusion that underlie our ordinary ideas.”— Bertrand Russell, The Problems of Philosophy (1912), Ch. I: Appearance and Reality
“The endeavor to understand is the first and only basis of virtue.”— Spinoza
Your basic choice as a human being is whether or not to choose to “make sense” of your experience. What does it mean, to “make sense”? The clue is given out of the mouths of babes, in their question: “Why?” Once they learn that things have explanations, their desire to know is so strong that the constant questioning they produce can overwhelm adults. This natural yearning is at the foundation of philosophy, or as Aristotle wrote in Metaphysics (which in his understanding of the term meant “first philosophy”): “All men by nature desire to know.”
A classic definition, going back to Plato, holds that knowledge is “justified true belief.” To believe is trivial; to know requires an often very difficult process: thinking. Thinking is not always productive, but it always has the same aim: to know. What thinking produces is a link between a belief in question and a chain of reasoning that supports it. This is the belief’s justification.
The choice to think or not is a gift that no other creature on earth but human beings have. Authentic philosophy is rooted in the choice to think – to produce reasons why. Pseudo-philosophy is rooted in the opposite choice: to believe without reasons.
To “justify” an idea is to present a reason why the idea is true, a means by which, if we follow a rational process, we can in some sense rightfully proceed from one idea to another. Since reasons are themselves ideas, we can apply this concept to the proffered reason why as well. This leads to the idea of “infinite regress” – an unending series of “whys”. But if there is an infinite regress, then because we are finite beings, there can never be a rational explanation for any idea, and the idea that there could be knowledge would fall. Even the idea that there is no justified knowledge would itself be unjustified. Therefore, if there is knowledge, there must be a justified end to the series – these terminals in the series are “axioms.” But by definition, axioms can’t be justified in the sense of providing reasons why. Can they be otherwise justified or are they merely arbitrary postulates?
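The regress structure described above can be pictured, by way of an illustrative analogy (mine, not the author’s), as recursion: a chain of “why?”s with no base case never terminates and so never yields an answer, while an axiom plays the role of the base case that lets justification halt.

```python
# Illustrative analogy only: a chain of justification as a recursive function.
# An axiom plays the role of a base case; without one, the regress of "why?"s
# never terminates and no justification is ever produced.

def justify(depth, axiom_at=None):
    """Follow a chain of reasons downward. Returns True if the chain
    terminates in an axiom; raises RecursionError if it never does."""
    if axiom_at is not None and depth >= axiom_at:
        return True                      # base case: an axiom ends the regress
    return justify(depth + 1, axiom_at)  # otherwise ask "why?" once more

print(justify(0, axiom_at=5))            # a finite chain ending in an axiom

try:
    justify(0)                           # no axiom anywhere in the chain
except RecursionError:
    print("infinite regress: no justification produced")
```

The analogy is imperfect, as analogies are, but it captures the essential point: for a finite reasoner, an unterminated series of “whys” delivers nothing at all.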
A person who rejects axioms as justifiable thereby rejects knowledge. His rejection of axioms constitutes a belief that they are unjustifiable, and while he may offer “reasons” for this belief, ultimately these are, by his own stipulations, necessarily without foundation. Usually, he admits as much. For him, philosophy is an idle entertainment or a game, but it has nothing to do with truth, and he will often cynically (and usually with an air of self-righteous superiority) view those who think otherwise as deluded. But why indeed is he justified in making all of these assessments? In the end, he must admit that everything he believes or assesses is merely accident or whimsical preference, for on his view, there really is no such thing as philosophy qua “love of wisdom.”
Just as one can choose to live or die, one can choose to accept the preconditions of knowledge or not. And one of those preconditions is: there must be justified axioms. How are they justified? In fact, we just justified one, the “Axiom axiom”: Knowledge is rooted in axioms. We determined that rejecting this axiom repudiates all knowledge, including the “knowledge” that knowledge has been repudiated. This is the hallmark of a justified axiom – that rejecting it is self-defeating. You may indeed decide that you don’t care if you have defeated yourself, but this decision evicts you from the realm of authentic philosophy. Just as truth is of no interest to you, you are of no interest to those who love truth.
Axioms are interrelated and interdependent; for example, the foregoing axiom presupposes that we can know and mean (these axioms will be elaborated on later). We depend on axioms at every step, and thus while we can analyze them separately, they are simultaneously and at least tacitly assumed at every point in the analysis.
People who claim to know anything whatsoever believe in axioms regardless of whether they admit them or not; the question is whether those axioms are justifiable (i.e. rejecting them constitutes a self-defeating position) or not. If you ask the reason why they believe anything, and then ask why they believe that, and so on, you will find that they either land at a proposition they have not yet explained, meaning they believe in at least this one axiom, or they will go in a circle, which means they believe in many. The choice, then, is not whether we shall have axioms, but which axioms we shall have.
In rational philosophy we seek axioms that are as simple and irreducible as possible, and that are implied in all discourse whatsoever. Axioms aren’t arbitrary, nor are they “self-evident” in the sense of being obvious, but they are self-evident in the sense that you must become aware of them through your own first-hand appraisal. They are not empirical in the sense of being externally measurable, but they are empirical in the sense that to notice them is to observe, and the unit of observing is an observation, a moment in time in which you are paying attention to something. What is the object of observation? Your own thought.
The alternative to explicitly recognizing axioms is not that they aren’t there, but that one takes an unanalyzed complexity of judgment calls about what is and isn’t true as axiomatically reliable and valid. Thus, a denial of axioms constitutes the implicit and extraordinarily arrogant subjectivism of asserting that one’s own judgment is the ultimate axiom (this move is not substantially altered when one alludes to external authority, since implicitly, the ultimate authority is the one who decrees who the authorities are).
“[A] Philosopher who affects to doubt of the Maxims of common Reason, and even of his Senses, declares sufficiently that he is not in earnest, and that he intends not to advance an Opinion which he would recommend as Standards of Judgment and Action.”— David Hume
In the foregoing is a recurring theme in rational philosophy: there are beliefs that when analyzed, permit one to remain in the realm of rational philosophy, and beliefs which if decisively accepted, mean that one has evicted oneself. Most of the unnecessary tragedy in our past was caused by heeding the words of charlatans, and could have been avoided if society had accepted this principle. If we wish to avoid needless tragedy in the future, we must choose to accept it.
A common philosophic question is: What is the meaning of life? Scrutinizing this question yields an even more fundamental question: What is the meaning of meaning?
All thought presupposes meaning. Statements mean something to the person uttering the statement; the terms statements are composed of mean something to the person using them. The question naturally arises: What is the source of meaning?
What does “meaning” mean? As I use the term, meaning is an aspect of consciousness. It may or may not be associated with a word; someone can soundlessly point at an object, and know, for example, whether they mean to refer to the object itself or only to some aspect of it, and what the significance of pointing to it is. Meaning is axiomatic: it can’t be explained or defined except in terms that mean something. So you begin to understand meaning by observing what you mean by various things, and all anyone else can do is direct your attention to this. If you lack the capacity to reflect on your experience in this way, then communication with you regarding the nature of meaning is hopeless. Such is the nature of philosophic axioms.
Just as you can choose to point to this object or that one, you know what you mean, and you can change your mind about whether you should mean this, or that. Such choices are indeed at the very foundation of your ethical character. Meaning is the function and prerogative of the individual. To speak of a purely collective meaning is meaningless, for there is no meaning without an individual mind that means something. If the human population were to suddenly disappear, then all meaning would be lost; the only things remaining would be arrangements of atoms in various objects formerly considered by humans to have meaning and significance. If even one individual remains, then so too does meaning. (None of this is to deny that people can and do mutually consent to a broad array of meanings; of course they can, and this is critical to civilization.)
That meaning is individual is presupposed by all philosophic discourse: either the capacity of meaning is in each of us as individuals, or not; and if not, meaningful discourse is impossible. We cannot have a rational discussion without recognizing the prerogative of all parties to decide on what they mean, and we cannot have a mutual understanding without individual assent to common meanings.
To recognize that individuals choose their own meaning is not to approve of their meaning. They might be choosing cumbersome or misleading or delusional meanings, and we can and should argue against their errors, and we should be open to revising our own meanings when we have erred.
Although we often forget specifically where we initially derived any particular meaning from, it is readily observed that we derive new meanings from experience, and therefore we can infer that experience is the source of meaning. Experience comes from many sources, not the least of which is introspective experience, i.e. reflecting on prior experiences, thoughts, thought processes, emotions, etc.
The idea that meaning is definite, that you mean what you mean, is confusing or even alarming to some people, but this stems more from a confusion about what meaning is, or from fear of being wrong, or from holding conflicting meanings one refuses to reconcile, than from a violation of the law of identity concerning one’s own thoughts. If meaning were not definite, then it would be impossible to make the case that it weren’t, since the veracity of any argument depends on a definite meaning. A person who insists that meaning is indefinite is insisting that he himself makes no sense, which is to say, it is as if he’s said nothing at all.
It is self-evident that we are sometimes confused, that one of our meanings can contradict another one, and that we can be slow to recognize it, or worse, can refuse to recognize it. We have now entered the domain of logic, which is the art of reconciling contradictions in meaning. I say that this is “self-evident”, not because it is obvious, but because you must make yourself aware of the fact that to be aware of a subject is to have an integrated awareness, which is to have beliefs about a given subject that follow from various experiences and that do not contradict one another. It’s up to you to decide to be coherent, and if you decide to be incoherent, no amount of explanation of logic will help; the problem is in your decision to be irrational, not in your understanding.
All formal fallacies are violations of the axiom of definite meaning – deductive logic is no more and no less than intellectual continence, i.e., meaning what you mean. (The law of identity as it pertains to logic is a different expression of the same axiom.)
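To make the point about formal fallacies concrete, here is a small sketch (the example fallacy is my choice, not the author’s): a brute-force truth-table check shows that modus ponens preserves truth on every assignment, while affirming the consequent, a classic formal fallacy, does not.

```python
# Sketch: check propositional argument forms by brute force over all truth
# assignments. A form is valid iff no assignment makes every premise true
# while making the conclusion false.
from itertools import product

def valid(premises, conclusion):
    return all(conclusion(p, q)
               for p, q in product([True, False], repeat=2)
               if all(prem(p, q) for prem in premises))

implies = lambda p, q: (not p) or q

# Modus ponens: P -> Q, P, therefore Q
print(valid([implies, lambda p, q: p], lambda p, q: q))  # True (valid)

# Affirming the consequent: P -> Q, Q, therefore P
print(valid([implies, lambda p, q: q], lambda p, q: p))  # False (a fallacy)
```

The invalid form fails precisely because it trades on an ambiguity: it treats “P implies Q” as if it also meant “Q implies P” – it does not mean what it means.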
The foregoing can be summarized as: “I can mean.” A closely related and implied axiom is: “I can know”, since to meaningfully say “I can mean” is to imply that one knows one can mean.
To doubt this is meaningless: if you can’t know anything, then even the notion “I can’t know anything” is beyond your grasp. To declare that the only thing you know is that you cannot know anything is philosophical trash: there would be no way to justify that premise, without knowing that the method of justification is valid; the latter knowledge would contradict the trash answer.
Not everyone can meaningfully join a philosophical discussion. In particular, to one who does not know in a substantial sense what “know” means, the preceding reasoning is gibberish. Suffice it to say that a capacity to know is required in order to know that you have an answer to the question “What is knowledge?” That is, the idea that “one can know” is axiomatic in the same sense that “one can mean” is.
Meaning and knowledge share the same root: awareness. It is you, qua conscious being, that knows and means. Meaning and knowledge are intimately related: to explicitly know is to mean, since a statement of knowledge means something; to mean is to know what one means; if one does not know what one means, then there is at best only the illusion of meaning.
(We have just touched upon a large subject. Knowledge is usually expressed as propositions, which are comprised of words denoting concepts, which embody meaning. Theories of concepts and propositions have a long and rich history. Some theorists say that propositions and concepts reflect two distinct functions of awareness; some say they are really the same function. It is beyond the scope of this essay to explore these matters: it is not necessary to know everything in order to know something, and you can know that you know and mean, without knowing every implication and relation of these ideas.)
The premise that we can mean and know is the premise that we are aware, which presupposes that there is something to be aware of and which we refer to by the term existence. In Ayn Rand’s formulation of this axiom: Existence exists.
For most people, it is easy to establish the axiom “I can know” or “Existence exists”. And many if not most people, due to their natural inclination to give proper respect to their experience, easily progress from “I can know” to the conclusion “We can know.” But there is a certain very stubborn class of skeptic that doubts the existence of others, or at least, believes there is no reason to believe there are others besides himself, as he is skeptical of the existence of the “external world” generally. I will pause here briefly for his benefit, but I do not wish to clutter the discourse by dealing with a barrage of deranged objections.
This breed of skeptic claims certainty of his own self-awareness while casting doubt on every other kind of awareness – not just his own awareness of things other than himself, but others’ awareness as well. He claims he has no reason to believe that anything outside himself exists; to him, there is only his mind and “phenomena,” which, as far as he claims to be able to discern, inflict themselves upon him without rationally discernible cause. Therefore, any arguments or claims he makes about these things are, by his own words, rationally unfounded; they are utterly baseless. And this definitely includes any arguments he might be making about you and your beliefs, since these exist outside of the realm the skeptic knows anything about. Thus, if we are to take the skeptic’s position seriously, we must remove the skeptic from any discussion of the nature of things outside the realm of his own consciousness (which is to say, we should remove him from virtually all discussion) – we can only hope that this will motivate him to make better use of the faculties that Nature has so generously endowed him with.
The ultimate root of philosophy is trust in one’s own faculty of awareness. If one cannot trust, then one can neither testify to meaning nor to knowledge. This trust is not a “blind faith”, but a simple recognition that one in fact sees, hears, smells, values, intends, decides, acts. Those who cannot decide to trust in their basic faculty of awareness are shut out of philosophic discourse, whether by foolish choice, or by tragic constitution.
Now, let us continue.
The premise “We can know” entails: 1) We each can, as individuals, know (established in the foregoing); 2) Implied in “We can know” is that there is a “how” to knowing; 3) We can communicate our knowledge, which means both that we can express our knowledge and that we can comprehend such expressions, i.e. we can know what these expressions mean.
Just as we can look at a horizon and know that there must be something specific beyond it (in spite of the fact that we do not know precisely what is there) we can also know that there must be some specific how to knowledge, even if we are not acquainted with all or even any of its specific qualities. And just as we can name and refer to what lies beyond the horizon, we can name and refer to the how of knowledge. The name usually chosen is: reason. Regardless of whether you are just barely acquainted with the how of knowledge such that the term is not much more than a mere placeholder for what you may learn in the future, or have explored its meaning for a lifetime, the term still designates the same thing.
Reason is universal: it must be a how that we can all employ. To claim otherwise is to claim that we cannot communicate our knowledge, which would mean that you could not coherently advocate “reason is not universal” to anyone, which thereby excludes it from rational discourse (skeptical positions are productive in the sense that they exclude those without good sense from the discussion). If we can communicate knowledge to one another, then we must have a common means of transmitting knowledge, i.e. a way of recreating how we know and in common terms, which is to say: reason is universal.
Reason is something general and shared in common, but any instance of communication is real, it exists. To communicate is to express one’s knowledge, to cast it into a specific form, i.e., to give it a real existence outside the realm of consciousness.
To say “existence is complex” is a gross understatement. At every instant in time, the state of things in any given part of the universe is different from what it was before, and from what it will ever be again. As Heraclitus wrote: “You could not step twice into the same river.” Not only do we have no hope of knowing everything, we also have no hope of understanding any thing (i.e., any real existent) completely, since to know everything about any particular is to know all of its effects on and relations to other things, which is to know everything. Even considering a single type of effect, there is no hope of complete understanding. Consider the gravitational pull a given speck of dust has on all of existence. What changes would be produced by moving this speck from one part of existence to another? Considering chaos theory, we know that in the long run the changes would be dramatic, thoroughly unpredictable, and would manifest even in distant solar systems.
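The chaos-theory point can be illustrated with a standard toy model (the logistic map, my choice of example, standing in for the speck of dust): two initial states differing by one part in a billion soon diverge to a macroscopic gap.

```python
# Sketch of sensitive dependence on initial conditions, using the logistic
# map at r = 4 (a standard chaotic system). A one-part-in-a-billion
# perturbation -- the "speck of dust" -- grows to a macroscopic difference.

def logistic(x, r=4.0):
    return r * x * (1.0 - x)

a, b = 0.4, 0.4 + 1e-9       # identical states but for a tiny perturbation
max_gap = 0.0
for step in range(100):
    a, b = logistic(a), logistic(b)
    max_gap = max(max_gap, abs(a - b))

print(max_gap)               # the gap becomes macroscopic within a few dozen steps
```

The perturbation roughly doubles each iteration, so in a few dozen steps it saturates at the scale of the system itself – precisely the unpredictability the text describes.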
On the other hand, your consciousness, which makes up but a tiny portion of the universe, is relatively simple. How is it that a relatively simple consciousness can in any terms have knowledge about an unfathomably complex existence? The answer is evident: there is a “principle of uniformity” to Nature, and this is what makes meaningful knowledge possible for us. There is no way to explain this principle without using it, since all explanation utterly depends on it. It is an axiom.
To recognize that we can communicate is to recognize that we can know what this manifested communication is, i.e. that we can unambiguously and properly discern truths about existence. To deny this is to deny that we communicate, and thereby to evict oneself from all rational discourse. Thus, one cannot remain coherent while also denying the principle of uniformity; i.e. there is no further justification of the principle of uniformity required than that you have chosen to be rational.
(In the foregoing argument I am deriving the principle of uniformity from the axiom that reason is universal; however, I do not propose that this is the only way of deriving this axiom or that the former axiom depends on the latter; in fact I regard them as two perspectives on the same truth. Furthermore, to “properly discern truths about existence” is otherwise known as induction, which is an axiom we will discuss later on.)
We have now made explicit some crucial assumptions implicit to philosophy:
Meaning and knowledge are the products of reason and existence; to deny any of these is to deny all of them, and thus to evict oneself from the realm of philosophic discourse.
To deny that meaning is definite is to deny that reason is universal because if meaning were indefinite then communication would be meaningless, which would contradict the axiom that reason is universal. In other words, the law of identity applies just as much to thought as to anything else.
Any philosophy that denies these axioms is one that may well be something we can appreciate in a limited sense, but it is a false philosophy.
“It is therefore important to discover whether there is any answer to Hume [on the problem of induction] within the framework of a philosophy that is wholly or mainly empirical. If not, there is no intellectual difference between sanity and insanity. The lunatic who believes that he is a poached egg is to be condemned solely on the ground that he is in a minority, or rather – since we must not assume democracy – on the ground that the government does not agree with him. This is a desperate point of view, and it must be hoped that there is some way of escaping from it. … What [Hume’s] arguments prove – and I do not think the proof can be controverted – is that induction is an independent logical principle, incapable of being inferred either from experience or from other logical principles, and that without this principle science is impossible.”— Bertrand Russell
We have an integrated awareness, founded upon fragmentary experience. This fact is what makes error possible, but it is also what makes knowledge possible.
Any being in this universe necessarily possesses a fragmentary experience: the senses must necessarily gather information across time and space through some physical means, and any physical means necessarily takes time and space to gather and process this information, in order to create the possibility of an integrated awareness of the world. A notion of “direct” access to knowledge is absurd: it would be knowledge acquisition without any means of acquisition. When there is no means of knowledge, there can be no knowledge.
This is not a mere technical detail; it is a founding principle of knowledge: we experience fragments of information (David Hume refers to these as “impressions”), which we integrate in some manner in order to produce truthful awareness – knowledge.
On a rudimentary level, Nature does this for us (and other higher animals) automatically: we see one side of a rock, but automatically experience it as an object (sense perception) and not as a mere patch of color. We reach for our coffee cup, automatically supposing that because it was an object filled with pleasant substance a few moments ago, it will be now. The only guarantee of the correctness of our assumption is the nature of Nature. This is, of course, the epitome of a guarantee; nothing could be more guaranteed than the nature of Nature. The point is that our ability to know is conditioned on there being a uniformity or stability regarding what we know, and that Nature supplies this and has a primacy or supremacy relative to us; we are utterly dependent and subordinate, and must obey in some sense if we want a chance at possessing reliable knowledge.
Reliable knowledge does not simply drop from the trees like apples in the Garden of Eden. We must put the fragments together, and we must do it properly, or our “awareness” will be mere illusion. And indeed, the way many people fall into delusion is by attending to too few facets of information: they integrate the fragments they wish to integrate, and ignore those that, if integrated, would bother them. Their resulting “awareness” is therefore a fiction. If you inquire as to why they have integrated this information but not that, they will often become angry, treating you and the information you bring as threats. But those who regard reality as the enemy have become their own insidious enemy – they are attacking the wrong person.
Knowledge exists only when there is a proper link between belief and existence; this link is provided by the faculty of reason and in particular by 1) drawing beliefs from experience and 2) reconciling contradictions in meaning. The reliability of our knowledge depends upon the degree to which we follow reason; there are no guarantees of certainty that come from any higher plane than this – any certainty we can hope to attain is a function of our own integrity. The claim that certainty is impossible is beside the point: what is possible is for us to embrace intellectual integrity as our standard. To the extent that we fail to have integrity, we will, through natural consequence, be uncertain to that extent.
“But we have now posited that it is impossible for anything at the same time to be and not to be, and by this means have shown that this is the most indisputable of all principles. Some indeed demand that even this shall be demonstrated, but this they do through want of education, for not to know of what things one should demand demonstration, and of what one should not, argues want of education. For it is impossible that there should be demonstration of absolutely everything (there would be an infinite regress, so that there would still be no demonstration); but if there are things of which one should not demand demonstration, these persons could not say what principle they maintain to be more self-evident than the present one.”— Aristotle
The process of drawing knowledge from experience is called induction.
It is commonly alleged either that induction is per se a logical fallacy (which of course directly implies that one can never know anything about reality), or that there is a so-called “problem of induction”: that it is not enough to identify how we use experience to arrive at knowledge, but that something other than induction is needed in order to provide the ultimate justification of induction.
This “problem” tacitly assumes (without justification) that it is itself a legitimate and worthy problem. But it is fallacious, for it inexorably implies an infinite regress (a true logical fallacy): any justification given for induction would, given the premise of the “problem”, itself be open to the same question of how to justify that justification. For example, if one claims that “faith” justifies induction, then the next natural question is: “What justifies faith?” This infinite regress is obvious, but usually goes unnoticed, probably because one’s attention is directed more to the “problem” than to the problem that would result from having “solved” it. I.e., “the problem of induction” is based on the same thing any magic trick or con is: distraction.
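The regress can be put schematically (the notation here is mine, offered only as a sketch, not the author’s own symbolism): if induction $I$ is held to require a justification $J_1$, the demand that generates the “problem” applies with equal force to $J_1$, and so on without end:

```latex
I \;\leftarrow\; J_1 \;\leftarrow\; J_2 \;\leftarrow\; \cdots \;\leftarrow\; J_n \;\leftarrow\; \cdots
```

Since the demand for justification is applied uniformly, no $J_n$ can terminate the chain; the “problem” thus presupposes a standard of justification that nothing could ever satisfy.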
So the “problem of induction” is not a profound philosophic question; it is merely a logical fallacy. We directly experience the fact that knowledge derives from logic and experience, and that’s that. The validity of induction is axiomatic. We can and should ask “How do you know?” for most things, but to ask how one’s basic means of knowledge is “justified” is to invalidate one’s own question.
The “problem of induction” tells you, in effect, that it is not good enough that you experience and reason, that you’re in need of some additional justification outside of your own mind and your own experience. But to require some extra justification beyond logic and experience is to seek for permission and authority outside of your own organic faculties – you are not allowed to know, unless you can know by unknowable means. So the “problem of induction” is not only a fallacy, it is the most deep-rooted and pernicious form of authoritarianism possible.
It is evident that we can believe that which is not true, and that providing a means of distinguishing true from false belief is a fundamental task of philosophy. But philosophy cannot guarantee that one will find truth; it can only identify beliefs and behaviors which either further or undermine truth.
Intellectual integrity does not come naturally: barbarism is the default, natural state; it takes effort and discipline to become civilized. This is a congenital condition for human beings: we can only explicitly hold in our conscious awareness a few ideas at a time, whereas contradictions in meaning often lie outside this narrow range of immediate awareness. It requires a force of will to reveal these tacit contradictions. The twin tools of reason, presupposition and implication, are indispensable to reconciling these contradictions and therefore to honing our intellectual integrity.
Presupposition and implication start with a given idea, and work in opposite directions from each other: with presupposition, we discover ideas a given idea depends on; with implication, we find the ideas that follow from or depend on it. Through the use of these tools, we can bring hidden, implicit assumptions into the light of day, and by ruthlessly applying the law of non-contradiction, cleanse our minds of contradiction. Perfect consistency is impossible, but it is possible to be perfectly committed to the practice of reconciling contradiction.
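The two tools can be given a rough formal shape (notation mine, a sketch rather than the author’s own formulation): given a belief $B$, presupposition seeks the conditions $P$ that $B$ depends on, implication the consequences $C$ that follow from it, and each yields a modus tollens test:

```latex
\text{presupposition: } B \implies P,\ \text{hence } \lnot P \implies \lnot B;
\qquad
\text{implication: } B \implies C,\ \text{hence } \lnot C \implies \lnot B.
```

A contradiction discovered anywhere along either chain thus refutes the belief by reductio.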
Perceptiveness, which is a honed skill, consists in the capacity for the first-handed discernment of relatively distant presuppositions or implications.
Whether one is committed to intellectual integrity is the most fundamental difference between people. The civilized man, while being imperfect, has embraced the ideal of perfect consistency and consciously strives for it. The barbarian has no clear conception of the virtue in reconciling contradiction. The evil man understands and yet explicitly rejects the practice of reconciling contradictions.
“But God has not been so sparing to men to make them barely two-legged creatures, and left it to Aristotle to make them rational, i.e. those few of them that he could get so to examine the grounds of syllogisms, as to see that, in above three score ways that three propositions may be laid together, there are but about fourteen wherein one may be sure that the conclusion is right; and upon what grounds it is, that, in these few, the conclusion is certain, and in the other not. God has been more bountiful to mankind than so. He has given them a mind that can reason, without being instructed in methods of syllogizing: the understanding is not taught to reason by these rules; it has a native faculty to perceive the coherence or incoherence of its ideas, and can range them right, without any such perplexing repetitions.”— John Locke
Being logical is a precondition of finding truth, but it cannot be the case that explicit knowledge of the laws of logic is required in order to identify whether something is true, since these laws are themselves a species of truth. It is of course true that one must follow the laws of logic in order to find truth, regardless of whether one explicitly knows them – but what causes a person to do this?
A liar is purposefully illogical: he means one thing to himself and feigns another meaning to you; he acts one way in one context, and in another context his actions hypocritically contradict the first. He is insincere. Indeed, we naturally wonder whether a person who constantly contradicts himself is incontinent, or insincere, or, since insincerity breeds incontinence, both.
An insidious form of insincerity is in those who put on an appearance of basing beliefs on experience and logic, but who really “cherry pick” what experience they will take account of. They recognize experience that leads to beliefs they prefer, and ignore experience that leads to beliefs they don’t prefer. But personal preferences are not universal, therefore this behavior flouts the axiom that “reason is universal.” Even more insidious is when they shape and appeal to tribal preferences, since having a large number of people possessed with similar preferences gives the false impression of universality and is thereby an even higher quality counterfeit of the truth.
We are logical primarily because we mean to be. This is the essence of sincerity. It is the essence and function of a healthy mind: a properly functioning awareness depends on a meaning that corresponds to the truth, and no contradictory meaning can be true. The cause and essence of intellectual integrity qua virtue is sincerity practiced over the long-term; it is a skill earned by sincerity. Since finding truth depends on sincerity, it is an ethical axiom of philosophy that one must be sincere.
The point here is not to moralize. For a marathon runner, the faster the overall time, the better. In life, the more integrity you have, the better. If a professional marathon runner is slower than he might otherwise be, the remedy is to train better, not to feel guilty. On the other hand, if the marathon runner denies that it is better to run faster, then he should feel guilty – he is not really a marathon runner, he is a fraud. Likewise, if your integrity needs work then the thing to do is work on it; we should reserve moralizing for those whose lack of integrity stems from a conscious defiance of rational standards.
Why does a person choose to be insincere? An innocent reason for being “insincere” is that one is acting to protect a value from unearned appropriation by evildoers, as when a Nazi asks “Do you have Jews in your basement?” But this is not the type of insincerity I am referring to here. I am referring to a person who commits treason to the truth and to others who are themselves committed to the truth. One who “lies” to the Nazi is acting for truth, not against it. But because lying is a valid form of self-defense, and because “We are what we repeatedly do” (Aristotle), someone in abusive circumstances, especially an intellectually weak person, such as a child, might easily come to habituate lying, doing it not only when appropriate but when inappropriate. In any case, while this offers a partial explanation and may well be helpful for habitual liars to be aware of, they should not use it as an excuse, lest the maladaptive behavior continue.
Suffice it to say that a person with a normal and organically healthy mind is sincere because he chooses to be, and ultimately, there is no deeper cause than his own choice. A person who denies that he can choose to be logical is a person who denies that he himself makes sense and is therefore not a party to philosophic discourse. We can only hope that he has the good sense to return once he has corrected his illogical nature.
There is no guarantee that one will reliably find truth, but truth-seeking is guaranteed to be unreliable when one does not wish to find it. It is evident then, that sincerity is as fundamental a requirement as logic is when seeking the truth, and that a sign of its lack is when one is persistently illogical.
The soul of sincerity is the soul of logic, which is saying what you mean and meaning what you say.
The ideal of civilization is a harmony of actions of its constituent individuals.
A harmony of actions does not mean that everyone acts in the way others want them to; it means that individual action is not actively blocked by another, i.e. that human interaction is governed by consent instead of coercion. The former is an impossibility: even just two people never act in complete harmony with one another’s desires. The latter is nothing more nor less than the ideal of eliminating criminal action.
Put differently, the ideal of a proper civilization is to be free from the criminal interference of others, which is to say that those actions of others which interact with your actions must not violate your consent. Assuming we define “criminal” correctly, this is the heart and soul of liberty. To strive for an ideal civilization is to strive for a harmony of action, which is to strive for the obliteration of criminal action, which is to strive for liberty. These all denote the same thing.
Liberty cannot exist where universal reason is not recognized, for without reason there is no agreement, and without agreement there is no consent, and where there is no consent there is tyranny. Those who defy reason, and yet engage politically, necessarily attempt to assert their personal preferences as a substitute for reason, and such a methodology can only lead to the tyranny of being subjugated to arbitrary preferences.
From the foregoing follows the maxim: The linchpin of liberty is the universality of reason.
“I do not feel obliged to believe that the same God who has endowed us with senses, reason, and intellect has intended us to forgo their use and by some other means to give us knowledge which we can attain by them.”— Galileo
“I believe in God, only I spell it Nature.”— Frank Lloyd Wright
“Nothing is more dangerous to reason than the flights of the imagination, and nothing has been the occasion of more mistakes among philosophers.”— David Hume
“A man may imagine things that are false, but he can only understand things that are true.”— Isaac Newton
Metaphysics identifies on the deepest possible level the nature of our perceptions. Is what we perceive an actual comprehension of some aspect of Nature – does our perception correspond to reality – or is our perception merely a delusion? Alternatively and equivalently, if we take any proposition and ask why it is so, and then ask why this proposition is so, and so on, we must ultimately land at propositions that we take to be a direct consequence and representation of the facts that are in some manner directly evident to us. What is the general nature of these first facts, that holds true across all possible domains of human knowledge?
The primitive man has no need to ask whether a deer is really a deer or merely appears to be, nor whether a tree truly is a tree, nor whether a dangerous bear is truly dangerous and a bear. However, man in a more advanced state of mind can become very confused about the nature of the things he deals with: the very small, the very remote in time or place, the very fast or massive, and the very abstract. Or in other words, he can easily become confused regarding the not easily perceived, or regarding that which does not lead to immediate pain or death when misperceived. Nature has given us the ability to easily and reliably perceive a certain range of things, and when we endeavor to take this perception further, then because of error or vice, we easily falter, and not merely on integrating enough information to form an accurate perception (which can be painstaking and error prone), but even on trusting in our own mind’s capacity to perform such a feat.
A rational metaphysics identifies the roots of what leads to knowledge about reality, explicitly identifying the fundamental nature and relationship of mind and reality. At the root of this relation is the relation of sensations to perceptions.
By “perception” I am not merely referring to sensory perception, but to a far wider concept: a mental object normally corresponding to something that exists; the knowledge of some part of reality, through various means, but that is ultimately reducible to what I will call “sensation.” (I say “normally” because sometimes a perception is an illusion – there is of course no guarantee that because you perceive something that therefore it is real.)
The concept “sensation” as I use it here is a wider concept than the typical five senses, for it includes every starting point for reason: not merely those starting points that correspond to something external to our body, but also those that give us experience regarding our own internal states. (My concept of “sensation” is similar to or perhaps precisely what David Hume meant by “impression.”) These states are also part of reality, since we are part of reality, and they include our memories, whether we feel ill or healthy, our particular mood, our emotions toward a particular object, and even the fact that we believe something (but clearly, being aware that we believe something does not ipso facto justify that belief) – all that Nature gives us as starting points to reason from, but only the starting points and not the inferences we make (except in the aforementioned respect that we have made such an inference). Note that this wider concept of “sensation” includes the sorts of mental objects required in order to engage in mathematical reasoning. For example, the branch of mathematics dealing with quantity (arithmetic, algebra, calculus, and so on) is rooted in our ability to regard things as a unit – as a member of a group of similar members. It is this regarding of things, this selective focus, upon which we build our mathematical abstractions; we do not build them directly upon externally existing objects. If we were to restrict the foundation of metaphysics to sensation in the narrow sense, then it would become impossible to philosophically validate mathematics, for it would disassociate mathematical reasoning from its roots in mental objects.
I distinguish “perception” from “sensation” by virtue of the fact that perception involves the integration of sensations. Whether the integration is automatic (akin to an emotional reaction) or not is irrelevant; what is relevant is that we are or can make ourselves aware of the discrete sensations, if we choose to analyze the perception and thereby isolate these sensations. Nor is the scale of the integration relevant to perception. For example, a sighted person can instantly sense and integrate a perception of a car in a very direct way, whereas a blind person must sense different areas over time and perform a chain of integrations relatively distantly removed from the sense of touch. But both the blind and the sighted person perceive the same thing; they both know that it is a car. Continuing in this vein, the most abstract of thoughts, which correspond to a higher-level perception of an object, can be reduced to sensations: we perceive a tree, but we can reflect on the fact that each part of the tree has a given shade of color that we sense, and, potentially using tools, analyze precisely how we come to perceive a tree, and thereby develop a theory that enables us to recreate a rendering of the tree in a painting, such as Leonardo da Vinci developed. Or, if we endeavor to comprehend for ourselves the scientific fact that the Earth revolves around the Sun, we will, through a very long chain of abstract reasoning rooted in sensations provided via instrumentation (which we also must understand well in order to rationally trust), perceive this scientific fact – just as Galileo perceived and reportedly said, after being convicted of heresy: “But it does move.” One cannot know this kind of thing merely by looking through a telescope; reasoning is required in order to observe an object through a telescope and know that it is Venus, a planet orbiting the Sun.
This kind of perception, rooted in sensation and a careful chain of reasoning, stands in contrast to mere imagination, or mere belief on the authority of others, and the entire point of philosophy is to identify just how to arrive at rational perception – to not merely believe, but to know. Implied by all this is that we can and must take full responsibility for the integration of individual sensations into a percept; we must not take any kind of perception as a not-to-be-questioned starting point.
It has taken mankind a long time to reach the point where at least some humans accept the responsibility of exercising systematic due diligence concerning what they believe, which means taking nothing for granted but that which must necessarily be taken for granted: our sensations.
Before continuing beyond this initial starting point, let us consider the opposite starting point, the prevailing irrational metaphysics.
As a rule, most people still use a primitive approach to perception on at least some issues: they take at least some of their beliefs as unquestioned, not willing to trace them back to their origins in sensation and to validate them. (The last vestige of this primitivism is to take sensory perception as the unquestioned and allegedly irreducible starting point.) I refer to all irrational metaphysics as a “dichotomous metaphysics” because all of them share in common the same characteristic of inserting a not-to-be-questioned set of perceptions (or means of perception) that to some extent severs mind from reality. Instead of calling it a “dichotomous metaphysics”, it could also be called an authoritarian metaphysics, since its precepts are not grounded in rational perception, but rather, dogmatically insisted upon based on a given person’s “intuition” or “feeling”.
It is all but impossible for a bad metaphysical view to sever mind from reality completely: everyone, even the insane, must use proper methods to at least some extent. They must recognize their environment in order to move within it; they must heed their pleasure/pain responses to a substantial degree; they must have some sensible way of thinking about what others are saying to them in order to communicate; etc. To this extent, (a true) metaphysics merely describes how we do function, it does not prescribe how to function. Only a person whose insanity is so extreme that it has driven them to completely suicidal behavior could be said to truly have escaped from a good metaphysics. For the rest of us, we have no choice: to stay in reality is to respect it to some degree. In other words, unless you choose death, a bad metaphysics cannot replace good metaphysics, it can only exist parasitically, side-by-side with an at least implicit good metaphysics. Thus, a bad metaphysics must always imply a dichotomous metaphysics, one part good and the other part bad, unless the victim of it has been driven to suicidal insanity.
Religion and its secular equivalents (e.g. Plato or Kant) divide reality into two parts: Heaven (or the equivalent), which is beyond sensuous perception, and Earth (otherwise termed tangible reality), which can be sensuously perceived. They likewise divide consciousness into the “lower” form, which relies on the senses, and the “higher” form, which is “intuitive” and can in some manner perceive the so-called “higher” reality. This basic dichotomy in the dichotomous metaphysics leads to related dichotomies in modes of understanding and behaving, which, as a natural consequence, lead to hypocrisy – you can’t serve two masters.
On matters where a person has convinced himself that he in some manner “hears” other-worldly thoughts, there is no mode of reaching him through communication to convince him otherwise, whether through writing or speaking, because all of these are sensuous and thus irrelevant to the mode by which he pretends to perceive. The motive for a dichotomous metaphysics seems obvious: its purveyor wishes to create a pretended authority over a domain to which he alleges a special conduit, in order either to rationalize his own actions or to govern or judge the actions of others in a domain over which he otherwise has no authority. Thus, while this sort of metaphysics has been deemed by some to be a school of philosophy, it is actually a sophisticated form of delusion or fraud.
David Hume plunged deep into the mechanics of perception, examining in fine detail the relation between reason and sensory information in his epochal work, An Enquiry Concerning Human Understanding. (Hume had a minor bias toward being unreasonably skeptical, but he also had an important point. Unfortunately, many of his interpreters have exaggerated his bias and ignored his point.) What Hume observed is that there is nothing in sensory data that provides us with a deductive justification to “leap” from sensory and memory-driven association to causal connection. All we actually receive from Nature is a “constant conjunction” of this with that, and nothing more. (Hume identified three principles of association to which this general observation applies: resemblance, contiguity, and cause and effect.) For example, we sense a blue ball as a patch of blue, and regardless of its motion or our motion, we experience that each portion of this blue patch constantly conjoins with the other patches, and from this and only this kind of experience we conclude that it is an object. Further, we sense that when this “patch of blue” is dropped, it always bounces afterwards; in other words, the dropping and the bouncing afterwards are constantly conjoined. We could imagine otherwise: that on a different occasion, the ball does not bounce, even though all known factors are identical (if the ball had been dropped differently, such as on mud, then that is a different situation and does not violate the “constant conjunction”; we merely need to increase the precision of our remarks about the conjunction, to something like “when we drop the ball on a hard surface, it bounces”). Hume scoured the depths of human reason, looking for a different answer, for he was evidently uncomfortable with the idea that all of our knowledge is based on “mere” constant conjunctions coming from the senses. Had he a reason to be uncomfortable?
Many people are uncomfortable with Hume’s report, feeling that it undermines something in their grasp of reality (for those with a dichotomous metaphysics, Hume certainly undermines their presumed grasp). But there can be no alternative explanation that is also rational and scientific, for clearly, the data of the sense is nothing more than the data of the sense. For example, everything we know about biology substantiates Hume’s view: when you see the bouncing ball, each part of your retina separately receives light from different parts of the ball, which is distributed through separate nerve cells, which is then reintegrated independently and only by the mind, to form the mental grasp of the bouncing ball. There is simply nothing else going into the cognitive system than separate elements of sense data. To reach for another mode of receiving data that goes beyond this is to reach for divine knowledge, which is to create a dichotomous metaphysics. Once one has decided to reach further than what is rationally possible, the only basic choice left is whether to honestly call it a dichotomous metaphysics or not, to explicitly reject the idea of a comprehensively rational and scientific understanding, and to live with the consequences. Some try to escape this by claiming that there are certain structures in the mind (such as Plato’s Forms or Kant’s Categories) that are determined not by prior familiarity with bouncing balls but as some sort of in-built “intuition.” But this trick resolves nothing: whenever one experiences particular and real balls, one cannot avoid the fact that one is merely conjoining this “higher” and “intuitive” realm with what one actually senses. In other words, in order to apply one’s “intuition” to reality, one makes the same kind of “leap” one was trying to avoid making in the first place.
Why do people try to short-circuit their connection to reality, to pretend that how things seem to them to be is actually what they are, to try to cheat reality and thereby cheat themselves out of actual knowledge? Using Nature as your authority can be demanding. It is in fact too demanding for children and those unfortunate enough to have mental disability, but there are a great many perfectly healthy adults who pretend that they live in an intellectual Garden of Eden, where reliable knowledge just falls from the lips of their professors or preachers as apples fall from trees, handed to them with no substantial effort required on their part.
The deference to the authority of others rather than Nature has two integral aspects. The first is the one already described: an adult who chooses to be as a little child, a perpetual intellectual dependent. The second is deference to one’s own whims, using these as the authority rather than deferring to Nature. Both usually occur together, applied to different parts of the person’s system of rationalization. The root of these, however, is deference to one’s own whims, for deference to an outside authority must in the final analysis be based on one’s own whims – it is the person himself who has chosen to anoint the authority as an authority. So, any defiance of Nature, on any pretext, is ultimately for the sake of the individual’s own whims; it is not for alleged reasons such as allegiance to God, or State, or any other figurehead.
Why would an individual wish to defy Nature?
It is natural for humans to not like being at the mercy of others. When we are dependent, we may come to feel resentful of our dependence, and therefore, to irrationally associate feelings of resentment with our benefactor. Adolescents may go so far as to destructively rebel against good parents, both hating their own dependence and being unable to actually free themselves of it. This is not a rational response. The adolescent would be better served by recognizing the reality of his situation: he is dependent on beings that love him and want the best for him, and that is the condition of his continued relatively easy existence. His resentful and rebellious feelings are actually deranged and should be rectified.
Nature itself is, in an important sense, our first parent, and we are utterly dependent on her. We should not resent the fact that she has made us in such a way that we can perceive her, but only through certain limited means, nor that we can’t sense things “as they really are” but only as Nature presents them to us. We should not resent the perfectly consistent Nature (others have termed this “the uniformity of Nature” or “the principle of uniformity”), which allows our method of discerning “constant conjunctions” to work and for human beings to thrive. Yes, you can choose to resent what Nature has given, and you can concoct a fantasy to console yourself in your adolescent rebellion – and you can suffer the consequences. Or, you can recognize that you are what you are and do the best with what you are actually given – that your mode of knowledge is rooted in the empirical, in sensation.
The power of our reason flows not merely or even primarily from our own nature, but from Nature’s nature: she is perfectly consistent, and because of that, our method of perceiving by the method of discerning constant conjunction works. How do we know that she is perfectly consistent? Because we have never known an exception, but more fundamentally, because we can’t know otherwise: for us, to “know” is to be logical. Yes, we have perceived a contradiction between what we thought was the case and what actually was, but in that perceiving, we begin to perceive an even deeper truth about Nature. It is only on this mode of total trust of Nature that we can actually reach these deeper truths. To act like an adolescent and rebel against the terms which Nature has created for you is to engage in self-destruction and condemn yourself to ignorance.
And so we have arrived at the crux of the matter, the important metaphysical question: how can we know that our perceptions correspond to reality? The answer is that we can not automatically know, but if we adhere to the rules that Nature suggests (and the task of philosophy is to unravel these), then because she is consistent, we can know. If she were not consistent, then we could not know; we operate at Nature’s “pleasure”, so to speak. If she had whims, we would be at their mercy; it is fortunate for us that she does not. The real power of reason lies not primarily in the human mind, but rather in the nature of Nature, which has such a perfect consistency that we can actually comprehend it via what is, at bottom, inference rooted in simple and consistent association, and we can only rely on these associations because Nature is preeminently trustworthy.
The irrational response to this is to demand, without comprehending the meaning of the demand, a “proof” that Nature is consistent, perhaps while referring to some pseudo-scientific claim that seems to undercut the consistent observation that Nature is consistent. Such a person has never troubled to ask what exactly he means by “proof” in the first place, for if Nature really were inconsistent, then no concept of proof about the nature of things would be possible: proof requires not merely a consistent argument, but also a consistent reality that it corresponds to. If he were somehow able to eviscerate Nature of its perfect consistency, all that would remain for philosophy is merely what you choose to believe and what I choose to believe – a total subjectivism and irrationalism that denies that knowledge is possible (without, of course, being able to prove such), and which inexorably leads to an authoritarianism that attempts to create order from the inevitable chaos that results. His use of the word “proof” is as meaningless and dishonest as it is insolent to Nature. Our task is to work out the implications of Nature’s perfection; to question this perfection is idle entertainment at best and the worst kind of psychotic destructiveness at worst, for there can be no answer to a question that makes all answers meaningless.
When a person claims to speak for science, telling you, for example, that quantum mechanics “proves” that reality is contradictory, recognize this nonsense for what it is: an attempt to hack into the deepest level of your mind, a hack which would substitute their authority for your healthy respect for the authority of Nature. This hack would utterly disable your ability to discern true from false, leaving you helpless – and conveniently for them, for they are the ones who want to “help” you. It is true that we do not yet understand everything; we do not know why the various and conflicting interpretations of quantum mechanics are wrong, but because of the contradictions we do know they are wrong in some respect, regardless of how well their mathematics works. When you come upon someone who, when prompted with a discussion of basic philosophical principles, only refers to the alleged implications of hotly contested discoveries at the very frontiers of science, who assiduously avoids a discussion of basic truths available to anyone, then you can be assured that you have a person who wishes to confound, not clarify. You have come across a charlatan, not an intellectual.
Like the flies and mosquitoes pestering explorers of the vast wilderness, peddlers of irrationality and mysticism have been an ever-present nuisance for those who explore the frontiers of human knowledge about nature. In our era, that frontier happens to be in the realm of quantum mechanics and particle physics. What is evident to every rational and sincere thinker is that our present lack of definite interpretations in these areas is not a sign of a failure of Aristotelian logic; on the contrary, it is only logic that makes it possible to ultimately find correct interpretations in any sphere.
The mental stance that recognizes reason’s subordination to a perfectly consistent Nature – its total dependence on Nature’s perfect empirical guidance – is self-evident, it is axiomatic. We can either choose to recognize it or not, but there is no possibility of an argument that deductively refutes or proves it, since all proof or argument depends on the consistency of and our subordination to Nature, that it is what it is, and that there is no alternate realm from which we can gather evidence of anything whatsoever. This axiom is at once both a stance about the nature of Nature and about the nature of reason, and is the foundation of a rational metaphysics.
Just as we can examine a human being and isolate its separate parts, we can isolate separate aspects of this axiom (following Ayn Rand): we can identify that Nature exists, that it has identity, that it has primacy, that we are conscious of it, that reason is our means of knowledge. But, just as only knowing the parts of a human as separate things – as a disembodied head or limb – does not properly convey what it means to be a living, breathing human being, to deal only with disembodied aspects of this axiom is to fail to comprehend its full meaning. To perceive that Nature is consistent is to perceive that it is knowable through reason by us, and to perceive that it is knowable is to perceive that it is consistent; these are two perspectives on one and the same fundamental truth.
Various virtuous thinkers have identified the implications of Nature’s perfect consistency, but two stand out. Aristotle identified the laws of logic and the modes of their defiance in the various logical fallacies. Newton identified the proper metaphysical stance – the way in which he regarded Nature’s gentle hints, and which enabled him to create the calculus and modern physics – in his Principia, as the Rules of Reasoning in Philosophy. Of most direct relevance here is his Rule IV:
“In experimental philosophy we are to look upon propositions collected by general induction from phenomena as accurately or very nearly true, notwithstanding any contrary hypotheses that may be imagined, till such time as other phenomena occur, by which they may either be made more accurate, or liable to exceptions. This rule we must follow: that the argument of induction may not be evaded by hypotheses.”
Induction, as I take Newton to use it, and as I use the term, is precisely an implied recognition of the axiom I have been discussing. I shall therefore designate it as the induction axiom or, simply, induction, since the process of induction is precisely to rigorously follow Nature’s empirical guidance while reaching our conclusions.
The word induction is in fact used by contemporary philosophers to describe this issue in the so-called “problem” of induction. The “problem” they imagine is that there is no authority or deductive logic telling them that such and such an empirical observation (such as that one part of a ball remains constantly conjoined with the other part while the whole ball moves) is “necessary.” They feel that because they can imagine one part of the ball randomly taking off in a different direction for no reason, they may reasonably conclude that there is in fact no legitimate reason to expect the ball not to fly apart, that its remaining whole is “contingent,” along with the whole of Nature’s behavior being “contingent” (behavior which, according to them, merely seems to us to be perfectly consistent, wherein a ball would only fly apart if acted upon by a violent force). They would claim, for example, that the ball will only probably not fly apart without natural cause – ignoring the fact that a probability statement is a ratio of how many times something happened to how many times it could have happened, and in this case no such kind of action has ever happened, so the probability is zero (but evidently they are counting their juvenile imaginings as evidence to the contrary).
These philosophers are right about several things. For one, they are right that you can’t make a deductive argument to justify induction – induction must be accepted as a valid method in order to make any true statement. Accepting its validity is therefore a key requirement of rationality. What’s more, the validity and process of deduction are only learned inductively – you can’t even learn what deduction is or how to apply it without first relying on induction. They are right that no authority can speak in Nature’s place here and in any terms whatsoever affirm for you that an inductive truth is correct – there is only you and the empirical evidence and the integrity with which you piece it together. This last point highlights an important issue: how many people actually have the integrity to piece together the evidence for themselves? And how many believe what authorities tell them instead? The answer to this question explains why, for so many, there is a “problem” with induction. The problem is, they don’t practice it, are therefore quite incompetent at it, and therefore really wish someone could step in and do it for them. Borrowing a metaphor from Ayn Rand: Nature has not created a way to transplant knowledge from one mind to another – just as food is only digested individually, truth is only discerned individually. They are violating the very ground rules that Nature has laid out, and deep down they are probably very aware of this, and therefore experience feelings of uncertainty, guilt, shame, and fear, and ironically enough, want some authority to step in and save them from their own irresponsible behavior and to force-feed them predigested knowledge. So yes, they do have a “problem” with induction. So too do those with illegitimate power and control over others: if their willful subordinates ever start consulting Nature instead of those who blaspheme Nature, then those blasphemers would lose everything.
Thus one can perceive the true aim of mythology and understand its association with authoritarianism: it is a direct assault on the inductive powers of a free mind.
“Reflex action is a local response to a local stimulus; instinctive action is a partial response to part of a situation; reason is a total response to the whole situation.”— Will Durant
Now, Nature exists as a whole, whereas we perceive only parts – a god-like perception of all that exists is impossible, not just to a human mind, but to any real mind. Furthermore, our sensations are only in response to aspects of these parts. At any given instant, you do not receive the sensations of “the blue ball” as a whole, but only of one side of it, in a given light, subject to limitations of what the light can convey; you do not sense in every possible manner or direction simultaneously, but rather, only a single perspective (you may sense in a few aspects simultaneously, such as by looking at it, touching it, and hearing the sound as it bounces). In other words, your experience is constituted from sensory fragments, which come from a particular perspective, and which you must somehow assemble in order to comprehend the whole, which is itself merely part of the whole of Nature. The principal task of a rational philosophy is precisely to explain just how to do that, such that what you perceive actually corresponds to Nature, rather than being mere delusion. Nothing is of more fundamental importance in philosophy than this.
To restate: We perceive nature by integrating sensory fragments, which themselves come from a certain perspective, which is a relation of the perceived object to the perceiver of the object. Our only task in knowing anything whatsoever is to assemble those fragments into a non-contradictory whole. So, the primary purpose of philosophy is and can only be to guide us in this process. I shall therefore call this the founding principle of philosophy.
The founding principle of philosophy stands in contrast to a religious view, namely, that one can perceive the whole truth instantly and intuitively without any rational means of integrating sensory data, or in the secular version of the religious view, that in some terms perception is a self-evident given and the starting point in philosophy. Both of these rejections of the founding principle have rationalizations for their rejection, such as referring to the rapidity at which we can reliably identify the objects we see, which to us might make it seem as if we can instantly know. But the fact is that such rapid identifications depend on lengthy past training, much of it reaching back into early infancy. The fact that we can become good at quickly assembling sensory fragments is not an argument against the idea that we do assemble sensory fragments. Further, the fact that we have rapidly come to a correct conclusion does not argue that we cannot rationally dissect the operation and discern precisely the flow of empirical data and its integration in the mind. On the contrary, if we could not do so, then we would be hopelessly at the mercy of various illusions of perception.
To follow reason, then, is to do three basic things: 1) to inductively draw ideas from sensation and not from mere imagination; 2) to be logical, to refrain from advocating ideas that contradict one another; 3) to examine issues from a variety of perspectives, looking at all relevant sides of the issue, and to reconcile these perspectives.
A purely deductive “reason” that is only concerned with internal consistency is not reason at all, it’s an abuse of the machinery of reason that misses the whole point. Such a notion is really a counterfeit reason – to cleave reason from its inductive, factual, empirical base is to destroy reason. True reason is about pursuit of truth, and pursuit of truth depends on paying extreme attention to the empirical facts we get from reality. Reason is primarily about drawing ideas from experience, it is not primarily a robotic, deductive machine.
A proper scientific education would include a number of first-handed explorations of basic scientific truths in this manner. It would not merely consist of the authoritarian conveying of facts to be repeated back by the students; rather, the students would learn these things entirely for themselves.
“Ageōmétrētos mēdeìs eisítō.” (Let no one untrained in geometry enter.)— Motto over the entrance to Plato’s Academy
“To love truth for truth’s sake is the principal part of human perfection in this world, and the seed plot of all the other virtues.”— John Locke
“Listen, what’s the most horrible experience you can imagine? To me – it’s being left, unarmed, in a sealed cell with a drooling beast of prey or a maniac who’s had some disease that’s eaten his brain out. You’d have nothing then but your voice – your voice and your thought. You’d scream to that creature why it should not touch you, you’d have the most eloquent words, the unanswerable words, you’d become the vessel of the absolute truth. And you’d see living eyes watching you and you’d know that the thing can’t hear you, that it can’t be reached, not reached, not in any way, yet it’s breathing and moving there before you with a purpose of its own. That’s horror. Well, that’s what’s hanging over the world, prowling somewhere through mankind, that same thing, something closed, mindless, utterly wanton, but something with an aim and a cunning of its own.”— Steven Mallory, The Fountainhead, by Ayn Rand
“I recall an incident involving the late George Stigler at a conference in Spain in the 1980s. Hearing that I had written a book on reason and natural law, Stigler started to ridicule reason, going so far as to say that there is as much reason in a monkey’s antics as in any human act. At that point I asked him whether he was trying to tell me something about how he wrote his books; he gave me a blank stare and stormed out of the room.”— Frank Van Dun
Unlike metaphysics, which demands a good deal of thinking in order to explicitly comprehend, and unlike ethics proper which is a far larger topic than metaphysics, metaethics is a trivial subject. (Just as any work of fiction can be made arbitrarily elaborate, false metaethical theories can be arbitrarily complex – there is no limit to the complexity of untruth. But truth is what it is, which is to say that it is neither more nor less complex than it is.)
We cannot find truth through whim. In order to identify a rational ethic, we must choose to follow reason. Since choosing to follow reason is a requirement of finding all knowledge, including the knowledge of whether a given action or class of action is ethical or not, then rationality is the primary virtue and irrationality is the primary vice. If one would know and adhere to a rational ethic, then one must make a commitment to follow reason; if one does not so commit, then all knowledge, including ethical knowledge, is beyond one’s reach. A person can choose to not follow reason, or he could perhaps be incapable of choosing to follow reason, but then that person is not relevant to a rational discussion of what constitutes an ethical choice. To be ethical is to be rational; to be irrational is to be unethical. Once one chooses to follow reason, then remaining true to that choice entails never choosing to undercut reason. Recognizing these facts gains one access to the field of ethics.
That you have a choice to be rational is implied in every argument you might make for or against this premise. If you choose to defy this axiom and begin to argue that you have no capacity to choose to follow reason, then by your own choice, whatever you have to say is as relevant and welcome to a rational discussion as is the buzzing of a mosquito.
This is the entirety of metaethics; the rest of what follows merely elaborates and deals with a few common objections.
“Even if you persuade me, you won’t persuade me.”— Aristophanes
“Attempting to debate with a person who has abandoned reason is like giving medicine to the dead.”— Thomas Paine
“Anyone who denies the law of non-contradiction should be beaten and burned until he admits that to be beaten is not the same as not to be beaten, and to be burned is not the same as not to be burned.”— Avicenna
Since the aim of ethics is to know what is good and right, and since knowing entails a rational process, then our first ethical imperative is to follow reason. If our thoughts concerning the subject of ethics are to have meaning, then we must strictly maintain a rational philosophical posture, which is to say, we must possess a solemn respect for reason. If we do not, then we may indeed have a lot to say, but what we have to say is mere self-indulgent babbling, it is not truth. Rationality is the primary virtue of an ethical system. An ethical code based on irrational premises could not be a system: it would not fit together into a consistent, comprehensible, and practicable whole. Thus, it is not possible for an irrational person to be ethical, not even on terms they themselves define, for, due to their irrationality, they cannot escape from being hypocrites. Such is the nature of irrationality: to be irrational is to be contradictory, to fail to know truth, including ethical truths and truths about how these apply in action, which is to lack integrity.
To choose to be irrational is to convert oneself from a possible participant in philosophic discussion into one of the many objects to be studied by philosophy. Let us examine what some of these objects have emitted.
The banality of a denial of free will is evident: the person who denies free will implicitly admits to not choosing to follow reason and therefore their remarks are undercut by themselves at the outset.
Another common objection to a rational ethic is the denial of its possibility in what is called the “is-ought problem.” The supposed problem is that although there is a way of arguing that if one desires to achieve a certain end, then one ought to take such and such steps, there is no argument that justifies the original desire (or if there is, then it ultimately rests on a prior desire that itself cannot be rationally justified). David Hume is usually cited on this point:
“In every system of morality, which I have hitherto met with, I have always remarked, that the author proceeds for some time in the ordinary ways of reasoning, and establishes the being of a God, or makes observations concerning human affairs; when all of a sudden I am surprised to find, that instead of the usual copulations of propositions, is, and is not, I meet with no proposition that is not connected with an ought, or an ought not. This change is imperceptible; but is however, of the last consequence. For as this ought, or ought not, expresses some new relation or affirmation, ‘tis necessary that it should be observed and explained; and at the same time that a reason should be given; for what seems altogether inconceivable, how this new relation can be a deduction from others, which are entirely different from it. But as authors do not commonly use this precaution, I shall presume to recommend it to the readers; and am persuaded, that this small attention would subvert all the vulgar systems of morality, and let us see, that the distinction of vice and virtue is not founded merely on the relations of objects, nor is perceived by reason.”
For the most part, none of Hume’s thoughts here argue that one cannot derive an ought; they merely state that most systems of morality leap from is to ought without making a rational connection. He is simply pointing out a common fallacy, he is not making a sweeping assertion about whether rational ethical arguments are possible. Indeed, as Hume himself wrote:
“I must confess that a man is guilty of unpardonable arrogance who concludes, because an argument has escaped his own investigation, that therefore it does not really exist.”
It is difficult to believe that a man with his philosophic diligence would brazenly and dogmatically assert, without proof, that one cannot rationally derive an “ought.” But there is the very last remark he made, “nor is [the distinction of vice and virtue] perceived by reason.” This would seem to indict him, but if you read Hume elsewhere, it is clear that he wrongly construed reason as merely the logical deductive faculty, i.e. as the faculty that deduces whether a given premise follows from others, not as the faculty that perceives truth. Indeed, a merely deductive faculty would not only fail to perceive ethical truth, but it would also not be able to perceive any truth whatsoever; it would, like a computer, merely be able to deduce propositions from other propositions. It would only be able to robotically pronounce “if the given propositions are true, then that proposition is also true”, which is merely a generalization that subsumes “if you want to achieve this end, then you ought to take these steps.” In other words, taking Hume’s actual meaning of reason into account, his observation was obviously true: one cannot arrive at truth by merely using deduction.
If Hume is to be indicted for anything, then, it should be for constraining reason to an arbitrarily narrow sphere in the first place. The conclusion that you cannot form an ethical science based on this arbitrarily narrow conception of reason is true; however, it is also trivially obvious. Without a faculty of discerning truth, you cannot arrive at any species of truth. As Newton observed, induction is required in order to find truth.
One should not underestimate the potential evil that can result from believing that there is an insoluble “is-ought problem”. From the premise that reason can merely help one identify the best means of pursuing a given goal, that it cannot itself be used to discern the proper goals, one can logically deduce the bizarre proposition that reason is useful for deciding how to most efficiently execute innocent citizens, but it is of no use in deciding on whether or not one ought to execute them. (Of course it is not really true that we can’t make a rational case that one ought not commit mass murder, but it is commonly believed that one cannot, so one can naturally have at least some sympathy for those who have concluded that philosophy is hopeless.)
Let us consider, for the sake of argument, what it would mean for (rational) philosophy to approve of your committing mass murder.
By its nature, if philosophy approves of some particular person committing a given kind of act, then it would have to approve of every person committing it, for there is no fundamental difference between human beings that could be used to argue that what is proper action for the one is improper for the other. (This is the same type of observation as Newton’s Rule III, viz., “The qualities of bodies, which admit neither intensification nor remission of degrees, and which are found to belong to all bodies within the reach of our experiments, are to be esteemed the universal qualities of all bodies whatsoever.”)
In other words, if it were possible to make a rational case for your commission of heinous acts, then that case would necessarily have to include yourself as a victim of these very acts. You can prove this to yourself using induction: if you are following Newton’s Rule IV (see Induction), then you will have to conclude that if a type of act is appropriate for one human being then it must be appropriate for all; if you are diligent, you will find that any argument you attempt to create for the contrary result will collapse in contradiction, that it will necessarily contradict rational philosophic principles, the root of which are Newton’s Rule IV. (I submit that every bogus philosophical perspective is an attempt to flout Newton’s Rule IV. David Hume put it like this: “Nothing is more dangerous to reason than the flights of the imagination, and nothing has been the occasion of more mistakes among philosophers.”)
Clearly, no one would desire heinous acts to be committed upon themselves personally, not even if they are the worst kind of criminal that desires it for others, but that is not the point. If it were the point then this line of thinking would be pointless: the fact that one does not desire a given result does not argue that one ought not pursue it. The point here is that philosophy, as a rational discipline, would have to universalize heinous crime if it were to approve of it for even a single individual; it cannot approve it only for certain select human beings.
The climax of the preceding line of thought is this: how can someone who is actually following reason propose as allegedly rational an act that, in the end, subverts reason? The answer is that he cannot at once follow reason and also tacitly sanction that someone prevent him from following reason. Once a person has decided to follow reason, then it is inexorable: he cannot advocate interference with his own reason, nor therefore can he advocate interference with another’s reason. Just as an irrational person is locked out of philosophy – it is a domain he may not participate in except as a pretense – a rational person is locked into upholding natural rights, and not merely his own but those of others, as a universal principle. Thus we see that the alleged “is-ought gap” is exploded in a reductio ad absurdum.
The primary “ought” – that one ought to follow reason – is implicitly embraced by anyone who chooses to argue for anything whatsoever. A man cannot both be rational and also tacitly invite discussion about how perhaps reason ought to be subverted. If he is not rational, then his words are meaningless – he has lost permission to philosophize. This principle is a philosophic razor that separates those who can legitimately philosophize from those who cannot.
So, the “is-ought gap” is closed by the very decision to philosophize, i.e., to endeavor to find truth through rational means. It is true that a rational ethic has nothing to say to a man who firmly chooses to be irrational. Philosophy cannot convince him he ought to do anything. But it does have something to say about how rational men should interact with such men. In other words, the “is-ought gap” is closed by learning that the rational meaning of “ought” is, like all other legitimate concepts, rooted in the decision to follow reason. For those who have so decided, the term “ought” becomes filled with a rich array of consequent rational ethical decisions. For those who have not, the term “ought” is and can only be authoritarian arbitrariness – ergo the rampant confusion that the “is-ought gap” is unbridgeable.
Is it the case that the very decision to follow reason is arbitrary, that there is no reason to follow reason?
It is true that we have a choice to follow reason or not. We can choose to follow reason at some point in our lives even while not choosing to do so at other points. Why choose one way or another? How can it be rationally demonstrated that one ought to follow reason? Isn’t there still an “is-ought gap” here? We could list all the benefits of rationality, and these are unlimited – they are indeed the glory of mankind. But that would be pointless for answering the question, because iterating over benefits (as does utilitarianism) does not itself rationally demonstrate that one ought to choose to claim these benefits.
As Aristotle observed, there can be no infinite regress in explanation. There is no answer to the question “Why follow reason?” but to point out that the question begs the question: The word “Why” demands that we adhere to rational standards, while the question itself implies that these very standards are arbitrary and unjustified. The question is, ironically enough, itself arbitrary, for if one admitted that there was an answer to that question, that there was a deeper “reason” to follow reason, then that answer would itself either be subject to the same kind of question about it, leading to an infinite regress, or it would have to arbitrarily demand that we stop asking “why” at that point. One can make various true statements about following reason – that to disregard reason is to disregard one’s humanity, that following reason yields everything that is good about mankind, that it is the deepest form of betrayal to one’s own mind to refuse to follow reason – but one cannot isolate anything more fundamental than reason, something standing behind it as a “reason” why we ought to follow it.
The very question entails the choice to submit oneself to reason. Philosophy can identify implications and consequences of making the choice to either follow or to not follow reason; it cannot offer anything more fundamental. If you are philosophizing, then you have already committed yourself to follow reason; if you have not so committed, then nothing you have to say, including any questions you might ask or demands you might make, is relevant, nor is there any argument that can be offered to you.
In other words, the question “why follow reason” demonstrates either naivety or bad faith on the part of the questioner. What do we mean by the “ought” in any statement that one ought to do something? We mean that to not do that something would contradict or undercut reason in some manner, and to maintain the case that one ought not would, by virtue of its contradiction to reason, eject one from the realm of rational discourse.
Asking whether the decision to follow reason is arbitrary is similar to asking “Is the science of nutrition arbitrary?” If you want, you can choose to eat only dirt. You can even choose to believe that your choice will have consequences contrary to the ones that science predicts. What you cannot do is escape from the law of cause and effect: regardless of your poor choice in diet, you will suffer and die. The science of nutrition is only possible because it takes as its starting points reason and the goal of optimizing human life with respect to one’s diet. Likewise, ethics is only possible by taking reason as a starting point, since that is the only means by which we can philosophize. You can certainly choose to deny that following reason is necessary for achieving your overall goals, but you can’t escape the law of cause and effect: to the extent that you evade the fact that reason is your means of knowing what you should do, you will suffer from self-inflicted ignorance and from its consequences. Ethics can explain why you will suffer. It cannot force you to prefer not to suffer, but it can judge as deranged anyone who prefers suffering. (Arguably, no one chooses suffering; what they do is focus only on narrow aspects of an issue and foolishly ignore other crucial aspects, and it is this ignorance that leads them to make bad decisions that result in suffering; they do not do this as a conscious choice to suffer, but through intellectual neglect. Whatever sins anyone commits in the world, then, begin as the sin of evasion of pertinent aspects of an issue in the mind.)
Leaving the realm of rationality is like becoming as a little child, or as an animal. If you insist on incoherent babbling, childishly covering your ears and not listening to reason, or howling at the moon, then there’s really nothing that can be said to you – one might as well try to convince a dog as convince a hopelessly irrational man. Again, there are things that rational men ought to choose to do with respect to you, to encourage you to reconsider your choice. But there’s no rational hope for you so long as you choose irrationality. All rational thoughts, including a rational ethic, become superfluous to you once you renounce reason. So what would be the point of a rational argument aimed at someone who renounces reason? If you have chosen to follow reason, then no argument that you should follow reason is necessary; if you have abandoned reason, then no meaningful argument with you is possible. If a wild animal interferes with civilized life, we fence it off or put it down. We don’t worry about an “is-ought gap” that the wild animal can’t cross. If a man chooses to be irrational, we shame or shun or cage him (depending on the nature of his behavior). There is no cause for further concern about the gap between him and us, and nothing he says or thinks has any bearing on philosophy whatsoever; he is shut out by his own choice. On the contrary, to act as if he is raising legitimate issues only encourages further irrationality on his part, and what’s even worse, it muddles philosophy with irrational nonsense.
The rational man is bound to a rational ethic by choice and implication. The irrational man is irrelevant to philosophy by his own choice. So there is no “is-ought problem” in philosophy; it is already a precondition and ethical kernel of philosophy that one ought to choose to be rational, and there is no possibility of rationally putting forth a case for the contrary. There is only the problem of irrational men and the trouble they cause.
To be ethical one must first make the primary choice to follow reason, and in making that choice one gains entry to the field of philosophy and acquires the chosen obligation to adhere to a rational ethic, which by implication morally condemns all thought or action that would subvert reason.
“That reason should prevail” could be the motto of a rational ethic, and we can proceed from metaethics to ethics from this key starting point. It is implied that a primary goal of a rational ethic must be to unleash the individual’s ability to follow reason in both thought and action: the purpose of philosophy in general is to unleash one’s ability to think and act rationally; natural rights aims to unleash rational thought and action specifically from the interference of irrational men; a personal ethic (i.e. virtue) unleashes rational thought and action from the interference of bad habits.
The foregoing demonstrates that “one ought to follow reason” is the moral axiom and kernel of a rational ethic. Just as the legitimacy of reason is axiomatic for any valid theory of knowledge (one cannot make a rational argument that reason is invalid), so too is the choice to follow reason axiomatic to a rational ethic.
It may be non-obvious how my argument here entails all natural rights and not merely freedom of thought and expression, but the purpose of metaethics is not to fully explicate natural rights; they are a derivative issue. For now, suffice it to say that if you systematically interfere with some species of action of a rational man (i.e., if you create a rule or law that says he should be interfered with, for some category of his living action), then he will tend not to take that action in the future, nor will he undertake the thought required in order to perform it, because it would be irrational to create a plan that he knows could not be followed. Thus, to interfere with a rational man’s action is to interfere with his reason. Liberty will elaborate on the relation of reason and liberty.
There have been several attempts to ground individual liberty in reason. Ayn Rand’s metaethic attempts to root ethics in the nature of man’s life “qua man”. Hans-Hermann Hoppe’s “Argumentation Ethic” attempts to demonstrate that to engage in argument presupposes liberty (for example, by presupposing the “right” to use one’s body, in order to argue). Hoppe’s approach bears some similarity to my approach; however, I do not try to argue that arguing presupposes liberty, but rather, that reason necessarily endorses liberty. Further, Hoppe’s Kantian, aprioristic methodology is antithetical to my inductive, empirical approach. While I agree that certain things are logically presupposed by engaging in argument, fewer of them actually are than Hoppe would like.
“It is impossible in a discussion to bring in the actual things discussed: we use their names as symbols instead of them; and we suppose that what follows in the names, follows in the things as well, just as people who calculate suppose in regard to their counters. But the two cases are not alike. For names are finite and so is the sum-total of accounts, while things are infinite in number.”— Aristotle
“But we have now posited that it is impossible for anything at the same time to be and not to be, and by this means have shown that this is the most indisputable of all principles. Some indeed demand that even this shall be demonstrated, but this they do through want of education, for not to know of what things one should demand demonstration, and of what one should not, argues want of education. For it is impossible that there should be demonstration of absolutely everything (there would be an infinite regress, so that there would still be no demonstration); but if there are things of which one should not demand demonstration, these persons could not say what principle they maintain to be more self-evident than the present one.”— Aristotle
“Nothing is more free than the imagination of man; and though it cannot exceed that original stock of ideas, furnished by the internal and external senses, it has unlimited power of mixing, compounding, separating, and dividing these ideas, in all the varieties of fiction and vision.”— David Hume
“Men fear thought as they fear nothing else on earth – more than ruin, more even than death. Thought is subversive and revolutionary, destructive and terrible; thought is merciless to privilege, established institutions, and comfortable habits; thought is anarchic and lawless, indifferent to authority, careless of the well-tried wisdom of the ages. Thought looks into the pit of hell and is not afraid. It sees man, a feeble speck, surrounded by unfathomable depths of silence; yet it bears itself proudly, as unmoved as if it were lord of the universe. Thought is great and swift and free, the light of the world, and the chief glory of man.”— Bertrand Russell
An idea is exemplified in the simplest form when a child points and knows what she is pointing at: she knows what she means. As adults, we come to use words as well as pointing, but the same process is in operation in either case.
We each have an innate capacity to create our own meanings. If this were not true, then nothing of any kind could ever be explained to anyone, for an explanation does not in and of itself synthesize meaning in another’s mind; it only attempts to indicate where the meaning lies – it is completely up to the listener to create the meaning from the indication. There is obviously no way to confer the capacity for creating meaning through attempted communication, for this capacity is itself an essential requirement of communication: a successfully communicated idea is nothing more nor less than the recreation of the idea in another person’s mind, and they are the ones that must engage in the creative process. This is what it means to listen. It doesn’t mean that you are “installing” ideas from another person; it means you are recreating those ideas. At best, you can try to motivate someone to actually listen, but it is they who must choose to do so.
While the concept of meaning can be indicated, it can’t be defined in other than ostensive terms; it is a concept introspectively grasped by a thinker. You can’t explicitly understand what meaning is, without first implicitly understanding it. You are either by nature equipped with the faculty of meaning, or all thought and communication with you is hopeless.
Meaning is a primitive function of our minds that comes before words do – and meaning is indeed a function. Importantly, this function has no predetermined specification. The simplest and most natural sort of function is mere pointing or direct reference, such as when you point to an object. But even a dog or chimpanzee can do that. We on the other hand have the power to expand the range of this function without limit, such as when we refer to “all blue objects” or “all mammals” or “all types of motion matching this mathematical formula.” Our power to change the nature of the function of meaning is a key to what makes us human beings and largely explains our distinctiveness.
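The idea of meaning as a function with an expandable range can be made concrete with a small sketch. The following Python fragment is purely my own illustration (the names and objects are invented stand-ins, not a formalism from this text): direct reference, like pointing, is a function true of exactly one object, while a concept such as “all blue objects” is a predicate whose range is open-ended.

```python
# Two invented stand-in objects for the illustration.
spot = {"name": "Spot", "color": "white", "kind": "dog"}
sky = {"name": "sky", "color": "blue", "kind": "expanse"}

def points_at_spot(x):
    # Mere pointing (direct reference): true of exactly one object.
    return x is spot

def blue(x):
    # An open-ended concept: true of every blue object,
    # past, present, future, or imagined.
    return x["color"] == "blue"

assert points_at_spot(spot) and not points_at_spot(sky)
assert blue(sky) and not blue(spot)
```

The dog or chimpanzee, in this analogy, is limited to functions like `points_at_spot`; the distinctively human power is the ability to coin new predicates like `blue` at will.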
In spite of our clear need to have common meanings in order to communicate, it is self-evident that all thought begins in the individual mind, and that meanings are created and assented to individually. Fundamentally, then, meaning is a very personal, individual thing. What you mean is ultimately up to you, and how you choose to interpret my meaning is also up to you.
Because of the foundational function of meaning in a person’s mind, if you can devise a way to control meaning, then you can control society. This gambit has the fundamental weakness that individuals must in some sense assent to such control, but in spite of this weakness one can observe that it works very well, and the obvious reason why it works so well is that the control is usually initiated when the mind is very young, weak, and therefore particularly susceptible. Furthermore, since the control undermines and weakens the mind, it is also self-sustaining. Various rationalizations for authoritarian meaning, something I will here call conceptual fascism, are easily sneaked in once the individual’s integrity of meaning is breached. For example, they may tell you that if you do not submit to the authoritative meanings, then you will not be able to communicate (thereby causing social ostracism and isolation). Or they may attempt direct intimidation in order to cause you to submit, decreeing that “you don’t know what you’re talking about” unless you rigorously adhere to their specified meanings. Or they may attempt an altruistic tack, claiming that society needs you to submit to status-quo meanings in order for it to function and progress properly. Any of these rationalizations, once accepted, are enough to cement a mind into an authoritarian conceptual regime and, since thought determines action, to govern behavior to a very significant degree.
This is not to say that there are no rational standards or common meanings; on the contrary, I think we should strive for universal meaning and efficient communication. But such striving must be governed by universal informed consent, not by submission, compliance, and obedience to authority. The precondition of any rational system of thought is a recognition of the inviolable sovereignty of the individual mind to either embrace such a system or to reject it.
The history of philosophy is filled with conceptual fascism, which dictates to you or predetermines for you what you must mean when you use a given term. Plato asserted that meaning is not ultimately your choice, but rather determined by a previously existing “Form” that exists neither in the object of meaning nor in the mind, but in some sort of “heaven” that contains the Form. Aristotle moved the meaning one step closer to the individual, that it exists not as a Form in heaven but as an “essence” in the thing. Ayn Rand followed in their footsteps, except that instead of saying that our meanings are determined by a thing outside of ourselves, she said that meaning was determined by a fixed function she prescribed, namely: “A concept is a mental integration of two or more units which are isolated according to a specific characteristic(s) and united by a specific definition.” The vulgar form of conceptual fascism consists in claiming that you have to choose among the meanings that someone else defined already, that you can’t think of any original meanings yourself, but must use a term already defined for you, as in a common dictionary.
Certainly, all of these philosophers were aware that not everyone had in mind actually identical “Forms”, or “essences”, or “concepts”, or “official meanings”; the implication is that these philosophers had to think that some people had failed in some manner to understand. But the interesting question is: if the philosophers had the right “Form”, “essence”, or whatever in mind, then what exactly is it that the allegedly “lesser” or “mistaken” humans have in mind? And since to be human is to make mistakes, what does such a philosopher have in mind when he thinks he has a “concept” in his “official” sense, but it turns out not to be correct? Restrictions of conception that go beyond the law of non-contradiction usurp the meaning of meaning and can only be unstable; they can never really explain the true nature of our concepts, since we mean what we choose to mean; we do not have to mean what or how others tell us to mean.
It is presumptuous of philosophers to prescribe the range of the function of meaning in some final sense: the people we are prescribing for might be more sophisticated than we are, they might have new and subtle forms of meaning that we are unaware of. Indeed, as we trace our way to the roots of every discipline (mathematics, physics, software, psychology, and so on), we often find forms of meaning unique to each discipline. Hence there have been philosophers who have revolted against conceptual fascism, the so-called “nominalists.” This school recognizes that meaning is individual. But at least some in this school lapse into vulgar conceptual fascism, claiming that because meanings are individual, then we must use a community standard (which could be a dictionary or other cultural artifact). Further, the nominalists commonly conclude that because you can mean what you want, then there is no such thing as objectivity, that since meaning is personal it is also arbitrary, that there are no possible rational standards. (Not all nominalists claim that meaning is arbitrary; on the contrary, my own position is arguably a type of nominalism.)
Most philosophers have given you a false alternative: you can choose your meaning, but you won’t make any sense; or you can make sense, but you can’t choose your meaning. In effect, the false choice they give you is: you can either be free or be rational, but not both. This is closely related to the false choice we have been given in other realms; e.g., political philosophers have said that you can have a rational, civil society but without liberty, or a free society but without it being rational and civil.
All awareness is necessarily limited and selective. There is no such thing as a “full and complete awareness” of any object that exists, nor could there be, for every object has such a deep complexity (which includes its relationship to other objects) that no consciousness could conceivably be aware of all of it. Consider the fact that every object that exists is made of atoms, that the properties of these atoms are so rich and complex that they are able to produce everything in the known universe, and that the laws that govern these atoms result in all the physical laws we have yet discovered, while at the same time these very objects confound the particle physicists who dissect them. Or, consider your awareness of your own thoughts: suppose you claim that “thought is simple”: what precisely do you mean by “thought”? Philosophers spend their whole lives untangling such questions.
The most primitive mode of selectivity happens via our senses, and all the animals share in this. When you observe an object, you only see it to some level of detail and from a given vantage point, you cannot see it in every detail or at every angle (nor is this exhaustive of how your perspective on this object is limited). This selectivity, enforced by the nature of the universe, is the primal form of abstraction, of limiting conscious awareness to something relatively narrow and specific.
Distinctive to humans is the ability to engage in deliberate selective attention, to create a perspective on an object of awareness that Nature does not itself provide. This perspective, once created and practiced, becomes a habitual form of awareness that permits us to rapidly see things from this perspective, should the proper context present itself.
Also distinctive to us is our (optionally exercised) ability to name, describe, and share these perspectives with others. The key building block of these synthesized perspectives is what can be called an “abstraction” or a “concept”, which, to put it to the most efficient use, we assign a word to, one that stands for the concept when used in the right context. We combine these concepts into propositions, which are the units that can stand for any thought.
Our capacity to forge novel perspectives is our greatest gift from Nature, and perspectives can be magnificently and powerfully life-affirming, but as the history of the world attests, evil perspectives can be just as powerfully destructive, potentially resulting in the most tragic of consequences. It is therefore of utmost importance that we explicitly come to understand how to form perspectives properly – this is the proper purpose of philosophy.
We have extreme liberty to create perspectives. Any work of art exemplifies this liberty, but a novel exemplifies it to perhaps the highest degree. We can even create a contradictory perspective (we often inadvertently do), but if we want to know truth, we must adhere to the Laws of Thought – we must actively find and fix our contradictions. Further, given our very limited cognitive power, efficiency is essential if we wish to maximize the range of our understanding, to discover deep truths, so even if a given perspective is true, we may need to dispense with it in favor of a more economical one.
Only a tiny fraction of mankind chooses to exercise their natural capacity to forge new perspectives to its utmost; such people are the visionaries. The next more prevalent but still rare type of person is one who can truly entertain new perspectives, meaning that they forge perspectives in their own mind by following the lead of someone else. These two types of people, when rational, move humanity forward; they are the source of all progress (the irrational variants of these types move humanity backwards). Last and least is the dogmatist, who, sadly, is not interested in exercising their natural capacity for forging perspectives. This type of person is cultural dead weight, slowing down the progress of humanity, but having no long-range significance.
The extreme liberty of our thoughts is juxtaposed with an absolute confinement: a concept means what it means (to you), neither more nor less; a thought means what it means, neither more nor less. There is no such thing as a “vague” or “approximate” or “rough” thought; you either think something or you do not. This is a simple application of The Law of Identity, or “A is A”, to thought – a thought is a thought. It is true that you can be in a state of forging new thoughts, or you can revise old thoughts, but this does not change the fact that what you think at a given point in time is what you think, and that’s that. Why might this most fundamental fact of thought strike some as paradoxical? This is a psychological, not a philosophical, question, but I would suppose that they really want to believe two or more contradictory thoughts at the same time, or claim to have believed the same thing all along when really their views changed, and so use this alleged “vagueness” of their thoughts to exempt themselves from the responsibility of deciding which thoughts are true and which are false.
What is the first implication of this most fundamental law of thought? Your philosophic outlook is determined, at root, by your stance regarding the identity of your own thoughts. When one becomes practiced at identifying which thoughts contradict which other thoughts, and also resolves the contradictions by forging new thoughts that fit together well with each other and with reality, then we call that person wise. When a person refuses to engage in this process, then we call that person a fool. And when a person recognizes their contradictions, but believes it a folly to correct them (on the grounds that it will allegedly only yield more contradictions), we call that person a cynic. And when the cynic consciously manipulates the fools at their weakest point – their obliviousness to their own contradictions – we call him a charlatan.
It so happens that the same activity that helps us to isolate and correct contradictions also unleashes our efficiency, for several reasons: 1) Contradictions cause psychological distress, which interferes with thought. 2) Contradictions in principle result in a sprawling complexity of potentially unlimited extent, even in a narrow domain of thought, for not only can a single explanation for a given phenomenon be given, but a multiplicity of contradictory explanations can be given as well. This sprawling complexity is inefficient. 3) Contradictions encourage rationalization, which is the creation of illusory “reasons”, which are only fallacies that make it appear as if the contradictory ideas are true. This is wasteful activity that also creates its own sprawling complexity. 4) Consistent thought is consistent with our experience, and thus is reinforced as time goes by, reminding us again and again of its truth and freeing us from the demands of memorization.
The more elements there are in a given understanding of something, the more likely our limited consciousness will overlook a problem and therefore make an error, so, all things being equal, we should actively seek to reduce our beliefs to the fewest number that actually results in a correct understanding. This principle is known as “Occam’s Razor”, and has been formulated in various ways, including by Isaac Newton in Rule I: “We are to admit no more causes of natural things than such as are both true and sufficient to explain their appearances. Therefore, to the same natural effects we must, so far as possible, assign the same causes.”
One of the most important errors in the history of philosophy was caused by multiplying explanations beyond what is necessary.
It is evident that philosophy requires a sort of dualism between mind and reality. This dualism does not imply that the mind is not part of reality, but that the primary function of the mind is to become aware; there is a subject that becomes aware, and an object of awareness. But many philosophies have gone one step too far in this, in dividing our awareness into awareness of particulars and of abstractions.
It is easy to see where this particular/abstraction dualism came from, because it has a certain plausibility. For example, suppose that standing in front of you is your dog. That is said to be a particular. But the idea “dog” refers to an unlimited number of past, present, future, and even potential or imaginary dogs – an abstraction. The idea behind the particular/abstraction dichotomy is that the type of meaning involved in either category is of a different order. But that dichotomy is an illusion, a confusion.
The root of the confusion stems from the fact that when a dog is standing right in front of you, it is a real thing that is, in substantial part, causing the meaning as it stands in your mind. But if we examine the meaning itself, as separate from the real creature, we see that it is no different in kind than the most sweeping generalization. We see that the illusion has been caused by an ambiguity between whether the word “particular” is supposed to refer to the object of awareness, or to the awareness of the object.
When you look at a dog, you do a very quick induction based on perceived attributes, which is a generalization process that results in relating it to a prior conceptualization, resulting in your calling it a dog. That’s what it means to say “that is a dog” – that thought is a generalization. Even when looking at your dog, both the recognition of it as your dog and even the idea “my dog” are generalizations. There is no truly particular “my dog” in your mind – at each moment the dog is aging, the atoms it is made of are whirring and changing place, the “dog” is constantly changing over time. To be truly “particular”, it would have to have an unchanging specific identity, but its identity is constantly changing. What you are actually doing is deciding what kinds of changes you care about and what kinds you don’t; you selectively focus, you generalize, you abstract, even to consider a “particular” dog.
Any time you look at an object and claim it has some kind of fixed and fully-specific (i.e. particular) identity, you’re generalizing. If you say “here is my car”, you’re saying that in spite of the fact that many things have changed about it since the last time you used it – it has less gas, it’s older, it has a new dent, the tires are more worn, or are new, etc. etc. etc. – it’s still essentially and in the respect that concerns you the same object. This is a generalization. All thoughts are generalizations.
As a matter of principle, there is no way to refer to anything real without generalizing, since all real objects constantly change in some respect or other. Further, there’s no way to relate two different objects (via a concept) without implicitly generalizing. So, if you are talking about reality, you are generalizing. There is literally no way to get an actual “particular”, in the sense of a fully specified non-general object of thought, in your mind.
There is no dichotomy between “particulars” and abstractions – everything we deal with mentally is an abstraction; there are no mental referents to particulars qua particulars, and therefore there are no thoughts about particulars qua particulars either. Therefore, Plato’s “problem of universals” is a non-problem. I.e., “the problem of universals” is really just the problem of thought: how can any thought correspond to reality, given that all thought is abstraction? That is the subject at hand, and at root the answer is to refer to the axioms of induction and meaning.
Some conclude that the alternative to conceptual fascism is conceptual anarchism, that there are no sensible rules of how to form concepts, but just freewheeling individual thought. Now, it is true that in an important respect thought must be freewheeling – we should never bind ourselves to arbitrary authority. But the aim of this freewheeling thought must be to create an integrated whole, an understanding that fits together and identifies reality, which means, to identify the non-contradictory whole that is reality. This project, which is always and only a product of each individual, is meaning on a grand scale.
The law of non-contradiction is our guide here, but it is not our only guide. Since we are very limited beings, to strive for the most meaning is to strive for the utmost in efficiency. The scientists who probe Nature’s complexity know this well. As Newton’s formulation of Occam’s Razor states in his Rule I: “We are to admit no more causes of natural things than such as are both true and sufficient to explain their appearances. Therefore, to the same natural effects we must, so far as possible, assign the same causes.” Or, as a quote often attributed to Einstein puts it: “Everything should be made as simple as possible, but not simpler.”
I cannot specify all the modes and methods of maximizing efficiency (I do not presume to preclude that of which I am unaware), but one that must not be neglected in a theory of meaning is our capacity to create a hierarchy of concepts. For example, every word in the following sequence refers to the same thing, but viewed at a different level of a particular conceptual hierarchy: object, organism, animal, mammal, dog, poodle, Spot.
In fact, there is not only an efficiency basis in the aforementioned hierarchy, but a physical basis as well. An “object” in the sense used is something that exists separately, having its own physical boundaries; an “organism” is an object that engages in self-sustaining and self-generated action (this is Ayn Rand’s definition of “life”); an animal is a self-moving organism; a mammal is a warm-blooded animal; etc. Note the pattern where each concept is defined in terms of similarity with wider concepts (the genus of the definition), and difference relative to the other objects it is similar to (the differentia).
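The genus-and-differentia pattern described above can be sketched in code. The following Python fragment is purely illustrative (the class names and attributes are my own stand-ins for the text’s examples): each level inherits its similarity to the wider concept, and adds one differentiating characteristic.

```python
# Each class's parent is its genus; the attribute it adds is its differentia.
class Object:                # exists separately, with its own boundaries
    has_boundaries = True

class Organism(Object):      # an object engaging in self-sustaining action
    self_sustaining = True

class Animal(Organism):      # a self-moving organism
    self_moving = True

class Mammal(Animal):        # a warm-blooded animal
    warm_blooded = True

class Dog(Mammal):           # a particular kind of mammal
    kind = "dog"

class Poodle(Dog):           # a particular kind of dog
    breed = "poodle"

spot = Poodle()              # "Spot": one referent, nameable at every level

# Every wider concept in the hierarchy still refers to the same thing,
# and every inherited characteristic still holds of it.
assert isinstance(spot, Animal) and isinstance(spot, Object)
assert spot.self_sustaining and spot.warm_blooded
```

The point of the sketch is only that a single referent can be viewed through every level of the hierarchy at once; inheritance is one mechanical analogue of the similarity-plus-difference pattern, not a claim about how minds actually work.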
The hierarchical organization involved in these concepts represents a systematization of knowledge, a culmination of scientific activity that gives us an efficient, powerful, and sweeping grasp of some major area of knowledge. However, it is critical to understand that this organization is a final step of a process of forging meaning. If you want to be an independent, critical, first-handed thinker, you must be able to see how such hierarchies arise from a more chaotic, personal, individualized kind of meaning. Otherwise, on some level, you are just a parrot.
There are two key elements involved in the “Spot” example: 1) there is a substance-oriented aspect to the ordering of referents; 2) there is a process involving similarity and difference.
Regarding the first element, the concepts drawn in that example result from a broad scientific process of identifying “what is it?” in the most fundamental, substance/material-oriented manner possible. But the concepts we form are not always for this purpose (a purpose which might be termed “primary identification”). For example, the concept “fly” refers to a mode of movement through air. Many different kinds of “primary substances” might fly: a bat, an airplane, a bird, an insect, a Frisbee, etc. Here, the concept “fly” is created in order to refer to what something does, not to what it is. (I am not here going to explore the full range of fundamental kinds of meaning. Aristotle endeavored to do that in his “Categories”.)
Even though the fundamental kind of meaning involved is different in the Spot example vs. the flying example (“is” vs. “does”), the process of similarity and difference involved is the same. If anything other than the fact that we can synthesize our own perspectives can be said to be distinctively human, it is our ability to integrate our knowledge in terms of similarity and difference. It is this ability that permits us to compress a vast wealth of knowledge into something a limited human mind can actually deal with. But it is absolutely critical to use this faculty in a proper manner, for a sloppy use of it converts it from an instrument of precise knowledge into an instrument of self-deception. Intrinsic to a proper use is proper maintenance: we are not omniscient, sometimes we claim something is similar and it’s not, and we need to revise what we thought we knew.
The cash value of the non-contradictory integration of knowledge by similarity and difference is an unparalleled expressive power, which can itself be converted into almost magical human actions such as landing a man on the moon, the price of which is an ongoing struggle aiming at an absolute standard: All S is P. It is this absolute standard held as an ideal that separates the civilized man from the barbarian.
So what is meant by “All S is P” and what is its significance here? This statement is a logical formulation that states that for everything that is an S, the proposition P holds true for it. For example, when we say “all dogs (S) are mammals (P)”, or “all mammals (S) breathe air (P)”, these are statements that match the “All S is P” form. All knowledge, to be knowledge, matches this form. Even probabilistic statements (“Some S is P”) imply the “All S is P” form, e.g.: saying that it sometimes rains outside also says that there is a potential for rain, i.e., it is always the case that outside (S) it can possibly rain (P).
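For readers who want the logical form spelled out, the claims above can be rendered in standard predicate and modal notation; this is a routine formalization, not symbolism taken from this text:

```latex
% "All S is P": whatever is an S, P holds of it.
\forall x \,\bigl( S(x) \rightarrow P(x) \bigr)

% "Some S is P": there is at least one S of which P holds.
\exists x \,\bigl( S(x) \land P(x) \bigr)

% The rain example: at every time t, outside, rain is possible
% (\Diamond is the modal operator "possibly").
\forall t \,\bigl( \mathit{Outside}(t) \rightarrow \Diamond\, \mathit{Rain}(t) \bigr)
```

The third formula shows how a merely “sometimes” claim about rain still carries a universal commitment: the possibility of rain holds at all times.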
Nature is what we aim to know. It is the ultimate standard by which we measure our knowledge. Its uniformity and perfection is what makes knowledge possible in the first place. We may err, but if we are diligent and honest, we will eventually discern our errors and correct them. Reason is self-correcting because Nature is an absolute and perfect standard by which we can measure our knowledge. The “All S is P” formulation is an expression of both of these facts, and manifests this truth: that in the field of knowledge, fortune favors the bold.
“There’ll be no peace in the world until every man is free, because to every man, he is the world.”— unattributed
“The true character of liberty is independence, maintained by force.”— Voltaire
“The last end of the state is not to dominate men, nor to restrain them by fear; rather it is so to free each man from fear that he may live and act with full security and without injury to himself or his neighbor. The end of the state, I repeat, is not to make rational beings into brute beasts and machines. It is to enable their bodies and their minds to function safely. It is to lead men to live by, and to exercise, a free reason; that they may not waste their strength in hatred, anger and guile, nor act unfairly toward one another. Thus the end of the state is really liberty.”— Spinoza
In my 2010 book, For Individual Rights, I presented a new theory of liberty that defined natural rights in empirical terms and gave a common-sense moral argument for why they should be respected. In some sense this book is an expansion on my previous argument that makes it more rigorous. In this chapter, I will provide a complete definition of natural rights, together with a rational moral argument for them that is complementary to the argument I gave in Metaethics.
Many people believe that natural rights are arbitrary, that they are just a human construct with no objective, rational basis. Part of the reason why is that the definitions of natural rights usually offered are unclear. I’m going to rejuvenate the idea of natural rights in this chapter, creating a new perspective on them that makes the idea fall perfectly into place, thereby providing a solid foundation for political philosophy.
Natural rights philosophy is a science. And every science must start with some basic and incontrovertible empirical observations. What are these basic observations?
“Life is a process of self-sustaining and self-generated action.”— Ayn Rand
The empirical substance of life is action – to live is to act. This is our incontrovertible starting point.
The term “action” has a broad range of specificity – we can talk about human action, thereby referring to all human action that has, does, or will ever exist; we can talk about your action, referring to the actions of your own life; we can talk about building a house, painting a room, hammering a nail. We can talk about your arms moving, your heart beating, your cells metabolizing, and so on. The concept “action” telescopes from the very broad to the very specific.
Two broad types of action are of importance here: interfering action and non-interfering action. By “interference” I mean interference in a scientific, biological, objective sense: either an action prevents or blocks another action or it does not.
There is a difference between an action and a wish: if someone refuses to marry you, that does not mean they interfered with any actual action of yours, even though in some vague sense of the word, they “interfered” with a wish, hope, or desire. On the other hand, interference in my sense here is objective: We don’t think that when a cat eats a mouse, that the mouse interfered with the cat. It is either-or: either an action is biologically, objectively interference with another action or it isn’t.
I name the distinction between non-interfering action and interfering action natural right vs. natural crime, or simply, right vs. crime. This is the first principle of natural rights: Every human action is either a right or it is a crime.
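The first principle can be restated compactly in predicate-logic notation (a shorthand restatement of the exhaustive, exclusive dichotomy just described; the symbols and predicate names are editorial, chosen here for illustration):

```latex
% Every human action is either a right or a crime, and never both:
% for every action a in the set A of human actions, exactly one of
% Right(a), Crime(a) holds.
\forall a \in A :\;
  \bigl(\mathit{Right}(a) \lor \mathit{Crime}(a)\bigr)
  \land
  \lnot\bigl(\mathit{Right}(a) \land \mathit{Crime}(a)\bigr)

% where Crime(a) abbreviates "a interferes with another's
% non-interfering action", and Right(a) is its negation.
```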
There are two paths a person can take upon understanding this principle: 1) They can treat it as a fundamental principle to be applied consistently to the human actions they consider. For many this will lead to a radical shift in perspective over time. Whether, in the final analysis, they agree with the perspective or not is not the issue; the issue is whether they have authentically entertained the idea, and that means holding it in mind as if it were true in order to reflect on how it would modify their other conceptions and principles. 2) They can nihilistically fabricate exceptions and alleged gray areas, arbitrarily incorporating definitions that come from a different theoretical framework into this one.
Whether the principle is a crystal-clear identification of how human beings can interact or a self-contradictory delusion depends entirely on how a person chooses to interpret it. The second response is entirely the wrong approach for judging the logical coherence of the principle. The principle is a standard by which one can tell whether a definition has been formed properly or not, according to the theory. If you think the principle is breaking down, what is really happening is that a definition needs to be changed; only then can you be said to have truly understood the actual theory and not a straw-man version of it. In other words, part of what my theory calls for is revising definitions and interpretations so that they are consistent with the principle “every human action is either a right or it is a crime”.
Note that a major objection to natural rights – that they’re not “real”, that they are a myth – has been eradicated by creating this principle. Natural rights exist because non-interfering human action exists. Natural rights are as real as the beating of your heart.
“Rightful liberty is unobstructed action according to our will within limits drawn around us by the equal rights of others. I do not add ‘within the limits of the law’ because law is often but the tyrant’s will, and always so when it violates the rights of the individual.”— Thomas Jefferson
Your rights are nothing more nor less than your life, if you are peaceful. Your life is nothing more nor less than the actions that make it up, and if you are peaceful then these actions are nothing more nor less than your rights. Liberty is the state in which rights – which are nothing more nor less than your life as you choose to live it – are not infringed by crimes.
Some people claim that interference is relative. E.g., if you are fired from your job, in some sense that interferes with your life, but the fact is that no one physically accosted you; they just told you to get off of their property and stopped purchasing your productivity. The fallacy here is equivocation: a confusion between what someone merely hopes, wishes, and desires, and what their actual non-interfering actions are. But mere desire is just in one’s own head; it can’t be interfered with by others. In general, we need a method for figuring out whether an alleged interference is really the objective kind I’m talking about rather than the whimsical kind.
If we want to prove that something is or is not interference, we must start with things as they first appear to be. This concept of “at first appearance” is called “prima facie” by lawyers. For example, all appearances tell us that the cat interfered with the mouse when he ate the mouse. If someone wants to argue otherwise, it’s they who must provide the proof. Critical to this proof is identifying the locus of interference. You’ve got to specifically pin down where the interference actually happened.
While the principle of natural rights is clear, the application to the vast range of human action is not trivial. For a great many things, it is not obvious which human actions are rights and which are crimes, consider: abortion, the Lockean Proviso, patents, contracts, legitimate land ownership and abandonment.
On the other hand, it is obvious that if someone claims that something is a right and someone else claims it’s a crime, at least one of them is advocating crime. So it’s very important to carefully scrutinize things – a sloppy analysis leads inexorably to advocating or committing crime. Confusion about rights breeds crime. So this is serious.
Furthermore, if you are brought before trial and are innocent, you want to be judged objectively. You want a judge to use a scientific method for determining whether you committed a crime. You don’t want him using a woozy “intuitive” methodology.
So we need a careful analysis, not just a principle. How should we begin?
Human action has a vast range and complexity. To understand it, we need to group it into categories, we need a “taxonomy of rights”, just as biology has its taxonomy of life.
John Locke created the categories: life, liberty, and property. Why not use these?
We should not sunder life from liberty. To violate your right to life in any degree is to deprive you of liberty to that degree, and vice versa. Consider a man who through great effort becomes a surgeon, and consider this man’s ambition cut short through 30 years of false imprisonment. After 30 years of imprisonment, he is certainly not the same man he would have been; to a very great extent his life is not his, it’s become what someone else had chosen for him. To fully and completely deprive someone of their liberty is to deprive them of every living action whatsoever, which is to deprive them of life. So, “life” and “liberty” denote the same thing. Thus, this distinction is fine poetry, but isn’t very useful in deriving a taxonomy of rights.
Finally, the concept “property” is too narrow; there are some things we have a right to do that don’t fit under the idea of property (see below).
Consider four key methodological standards: 1) Basic categories should be fundamental, each encompassing a wide range of human action. 2) The full set of categories should be comprehensive of rightful human action. 3) Each category should distinguish actions that are significantly different from those in other categories. 4) Since this theory of rights is built upon the concept of human action, it is crucial to keep that human action focus. Either something one does is or is not a right. For example, property rights cannot be viewed merely as an object sitting out there with your name tag on it; rather, property rights concern actions one takes with respect to owned objects, such as exclusive use.
The set of rights which I claim meet the foregoing methodological standards are the rights of Self, Property, Medium, Land, Consent, and Justice. I will address each in turn. (What appears here is a cursory analysis; for a more in-depth analysis see For Individual Rights.)
The right of self is the right to move one’s body as one desires, to reside unmolested by others in a given place, and to move from one unoccupied place to another.
A noteworthy example of the right of self is freedom of speech: when one is speaking, one is exercising one’s natural abilities; so long as such exercise is not violating someone else’s right, then interference with freedom of speech is a violation of the right of self.
The three aspects of this right, which can be viewed as separate rights to one’s body, one’s place, and one’s freedom of movement, are very closely related, as one can easily infer from reflection. It should already be clear to the reader how essential they are to human life and how those who interfere with such rights are indeed committing criminal acts.
This right is violated when we have done nothing to infringe anyone’s rights, but are, for example, threatened, harassed, pushed around, physically attacked, detained, arrested, imprisoned, or kidnapped.
In ordinary day-to-day life, most of us do not encounter any of these actions (although there are certainly exceptions), but everyone, whether they are conscious of it or not, is always experiencing the threat of such action from governments acting upon criminally inappropriate laws.
It is important to recognize that when a man threatens you with a gun and tells you to do his bidding, that there is no difference in principle between whether he’s physically shoving the gun into your side, whether he’s showing it to you, whether it’s hidden in his pocket, whether he’s following you from ten or a hundred yards, whether he tells you over the phone that he’ll break into your house at four in the morning if he learns that you defied him, or whether ten or ten million of his kind voted on something that orders you around, takes your property, and employs similar threats to control your behavior. I call this the “principle of the hidden gun.” When it comes to deciding whether something infringes a right or not, one can easily infer that there is no difference between a credible threat that it will be violated and the physical violation itself.
Physically hiding the gun, bringing it out only when you don’t obey, is a trick to make the oppressor appear peaceful. That this trick works is somewhat surprising – you’d think that the “out of sight, out of mind” principle wouldn’t work on adults, since they have developed object permanence, but it does.
Consider a country full of non-rebelling slaves. Without the foregoing principle you’d declare that it was a peaceful society, even though there’s a constant threat of attack if the slaves rebel.
The right of property is the right for human beings to pursue, acquire, and keep particular well-demarcated physical objects, and to prevent others from using or taking those particular material objects.
The right to specific property, i.e. ownership, is an action, and as such, it must exist as some form of action – otherwise there is nothing to be interfered with and thus no property right. A mere intention is not an action. The required action can be goal-oriented and abstract, but it must exist. For example, the action of building or living in a home does not require that one be on site constantly in order to claim ownership.
When one ceases to take actions of ownership, one has abandoned one’s property, leaving it for someone else to claim, precisely because the actions one was formerly taking do not exist anymore and thus there is no possible interference (that it may be difficult to adjudicate this point in some circumstances does not refute the principle).
Exercise of property rights goes through five phases of action: 1) pursuit; 2) claim; 3) acquisition; 4) ownership; 5) abandonment. For example, suppose you see a wild apple tree on a hill, you decide you want one of those apples and start up the hill (pursuit); then as you arrive at the tree, you reach for a particular apple, which signals to anyone nearby that you want it (claim); then, you grab the apple (acquisition); carry it with you, and then eat it (ownership); finally you finish eating it and throw the core over your shoulder, leaving it to the squirrels (abandonment).
Obvious crimes against your right of property are use without permission, theft, and vandalism. All of these interfere with your actions concerning your property. A few less obvious crimes include patents and “frivolous pursuit” (unfortunately crimes against property abound – see For Individual Rights for more examples).
In the case of patents, by all appearances (recall the prima facie standard), the initiator of interference is the patent holder and the government, such as when they drag the alleged patent violator to court and extract his money or force him to stop selling something. The burden of proof is on them to show that patents are a legitimate natural right or are based on legitimate rights. The simple fact is that no one has ever done that. There is no principled argument that demonstrates patents are legitimate property rights. On the contrary, there have been many arguments demonstrating the reverse. But because of the prima facie standard, the burden of proof does not lie with those who are opposed to patents; it lies fully with the pro-patent mob. No ethical person who lacks a proof can stand on the pro-patent side. They can’t sit on the fence. They must recognize the prima facie facts and stand against patents, for the same reason that someone should be regarded by the law as innocent until proven guilty.
Sometimes conflicting claims to property arise for honest reasons, and we need to sort these out as rational adults. Frivolous pursuit is where someone dishonestly claims something as if he were independently pursuing it, when his real aim is to interfere with your honest claim. E.g.: 1) someone snatches the apple after you claimed it; 2) an apple seller destroys all the wild apples so that the price of apples rises. (Frivolous pursuit is related to the Lockean Proviso, but is not the same thing.)
The right of medium (aka “natural resources”) is the right to use portions of the natural flow of sunlight, water, air, and other natural mediums, as well as the right to defend one’s continued use of such mediums after having incorporated them into one’s pattern of activities.
The physical nature of mediums is the opposite of that of property in an important respect: rather than having well-defined physical boundaries, they are physically indistinct and unconnected to any particular property or person. The air flows freely across the Earth; water is lifted from the ocean by sunlight and deposited to flow where Nature wills; the medium of the Earth itself is filled with metals and minerals we mine; the electromagnetic spectrum that supports radio waves pervades every property without any clear boundary. With each breath we naturally acquire and then release air from and into the wild. Air is in some sense unowned and not able to be owned, but in another sense, we would resist to the death any threat to our right to breathe. The right to travel is the right to the use of a particular kind of medium according to the mode of travel. Other natural resources differ in how tightly they are coupled to our life, but do not differ in principle regarding our right to come to depend on them and to defend our right to continue to use them. No one has the right to pour poison into “his part” of the river, precisely because there are those downstream who are using this medium.
The right of land is the property right to the naturally-fixed portions of a given geographic area, as well as the medium right to the surrounding mediums that one’s land use depends upon. The right of land includes all relevant property and medium rights.
Right of land is derived from the right of self, the right of property, and the right of medium. The right of self means you have a right to reside on an unoccupied part of the Earth (since that is an action that does not interfere with anyone else’s right and all actions that are not interference are rights). Furthermore, because of right of self, you have a right to create property on and of the land, to reshape the land into a new form, to build structures, to farm crops, and so on. Since these are your property and you have a right to them, then you also have a right to the previously unoccupied land of which they are an inherent part, as well as to the medium that you depend on.
The right of land is the ultimate expression of individual sovereignty and human rights. To respect a man’s right to land is to respect his other rights; to violate it is to usurp all of his rights.
The most striking thing about land rights is the phenomenon of government, which organically emerges out of the natural exercise of land rights. The most simple case of this is the idea that “a man’s home is his castle” or “my house, my rules.” Any land owner has a right to create rules, within certain limits, governing the jurisdiction of his own property.
“The fact therefore must be that the individuals themselves, each in his own personal and sovereign right, entered into a contract with each other to produce a government: and this is the only mode in which governments have a right to arise, and the only principle on which they have a right to exist.”— Thomas Paine, Rights of Man
Since “house rules” can be arbitrary to a large degree, I call a jurisdiction rooted in land ownership a jurisdiction of “man-made law.” We are free to make up the rules because we own the land. We also have a prerogative to band together with others who own adjoining land and make rules in common, which can ultimately manifest in the “city-state”. This sort of arrangement was also envisioned by John Locke:
“MEN being, as has been said, by nature, all free, equal, and independent, no one can be put out of this estate, and subjected to the political power of another, without his own consent. The only way whereby any one divests himself of his natural liberty, and puts on the bonds of civil society, is by agreeing with other men to join and unite into a community for their comfortable, safe, and peaceable living one amongst another, in a secure enjoyment of their properties, and a greater security against any, that are not of it. This any number of men may do, because it injures not the freedom of the rest; they are left as they were in the liberty of the state of nature. When any number of men have so consented to make one community or government, they are thereby presently incorporated, and make one body politic, wherein the majority have a right to act and conclude the rest.”— John Locke
This government of man-made laws is distinguished from the other fundamental type: the natural law government. The natural law government enforces natural rights in jurisdictions of arbitrary extent, i.e., the jurisdiction goes beyond the bounds of one’s own land. This type of government naturally emerges when city-states desire to secure rights in the surrounding areas, for example to make long-distance travel and trade possible, to work out conflicts between city-states, or to deal with criminals who may have fled from one jurisdiction into another. Another name for a natural rights jurisdiction is a federation; the proper role of a federal government is to secure natural rights in a broad geographic area, but never should its actions be allowed to exceed this purpose, otherwise it is tyranny.
The distinction between natural law jurisdiction and man-made law jurisdiction is at once the most important legal distinction that can be made and the most confused and suppressed of distinctions.
There are many complexities and qualifications regarding both types of government, see For Individual Rights and Against Anarchism: The Case Against Individualist Anarchism for further details.
Consent is the mental action of reassigning rights, with a corresponding physical action that conveys this mental action to others. It is at the heart of the term “voluntary.”
The right of consent is a recognition of the primacy of individual reason in determining what counts as interference. For example, if someone cuts you with a knife, it’s not assault if that person is your surgeon. Consent is the crown of all rights, explicitly recognizing that the root of your rights is your reasoning, volitional consciousness. It recognizes that you are the King, Lord, and Master that governs all your other rights. You get to decide whether to appropriate from Nature. You get to decide whether to trade what you’ve appropriated with someone else.
The concept of rights is intimately bound up with the concepts of consciousness, of the mind that decides whether it wants this or it wants that, so any attack on any rights is necessarily a violation of consent.
A special type of attack on rights is fraud, which by deception produces counterfeit consent. For example, suppose a dentist tells you your tooth has a cavity when you don’t actually have one, then drills your tooth, fills it, and charges you. He just violently interfered with your right of self, your right to the integrity of your body, and he stole your money as well. You didn’t consent to what he actually did; the only difference between an overt attack and this one is that you weren’t aware of what was actually going on. Fraud is thus a special type of violence.
“I have no right to force anyone to be religious, charitable, well educated, or industrious; but I have the right to force him to be just: this is a case of legitimate self-defense.”— Frederic Bastiat, Economic Harmonies
Justice is the right to halt ongoing interference with your rights, so long as you go no further than that. Justice is not interference with non-interfering action, it is the action that puts a stop to interfering action.
Self defense is the most obvious example of justice – if someone is attacking you, then when you take steps to put that to an end, you are taking the actions of justice. Retrieving your property or compensation that is owed to you puts an end to ongoing interference with the enjoyment of your property.
“An avidity to punish is always dangerous to liberty. It leads men to stretch, to misinterpret, and to misapply even the best of laws. He that would make his own liberty secure, must guard even his enemy from oppression; for if he violates this duty, he establishes a precedent that will reach to himself.”— Thomas Paine
Justice has many qualifications and complexities. Two key qualifications: 1) when the actions of justice go further than necessary, then they themselves become rights violating – the opposite of justice. 2) Natural rights philosophy cannot pre-adjudicate all possible cases, it can only specify the basic kinds of rights and crimes. It is up to a rational, moral people to apply the philosophy to specific cases.
Because rights are human action and a human being is an integrated being, rights are a unity – to attack one right is to attack them all. For example, to attack a person’s right of self is to attack their means of enjoying their property, to violate their consent, and quell their just resistance. To attack the right of property is to interfere with the right of self, since actions you would have taken in regards to your property are destroyed. To violate the right of consent or justice is likewise to violate the right of self.
“And who can doubt that it will lead to the worst disorders when minds created free by God are compelled to submit slavishly to an outside will?”— Galileo
What the foregoing demonstrates is that human action can be divided into two categories, natural rights and natural crimes. Understood in this way, natural rights exist, they are objective, biological and scientific facts. But what I haven’t done so far in this chapter is argue that it is right to respect natural rights.
Knowing clearly why natural rights should be respected is self-evidently a good thing, and this alone is reason enough to offer an argument in defense of natural rights. But even more important is the fact that unanswered corrupt philosophy can create doubt and destroy moral courage and action in reality. A confused and morally uncertain person is passive, apathetic, and compliant, and tends to put up with a lot of abuse, thereby inviting even more abuse.
Again, my definition of natural rights is descriptive. It just says what natural rights are. So long as a person is reasonable, they can’t really disagree that natural rights exist. But that doesn’t mean that they agree that natural rights should be respected. A person can agree that rights as I have defined them are objective, but still think that whether he respects natural rights is just a subjective personal preference.
The key proposition I still need to validate, then, is that one ought to respect natural rights. There are two steps in my argument: 1) the argument that one ought to follow reason – see Metaethics; 2) the argument that reason necessarily endorses natural rights and necessarily condemns natural crimes (I gave a brief argument for this in Metaethics, but I will give another, complementary argument here).
To summarize my argument: The fact that you have the capacity for freedom of thought means that you must endorse freedom of action.
Freedom of thought is following the evidence where it leads you, instead of being bound to the dictates of emotion or tradition or arbitrary authority. It is the freedom of looking at things from all perspectives, instead of being constrained by limited perspectives.
Someone who believes something because they feel like believing it is not free in their thought. Emotions are not freely chosen by us; they bubble up automatically based on circumstances and prior assumptions. They are not a valid source of truth, so a person who believes something because they want to believe it is a prisoner to their own past. They are not free.
A rational mind freely follows the evidence where it leads. It is not bound by any outside influences or internal biases. A rational mind refuses to subordinate itself to any person, whether directly through obedience or indirectly through vague emotions, but will only follow the evidence where it leads.
A rational mind recognizes that one must be free to conclude for oneself. It claims the prerogative of authority, for itself, and in total. A rational mind accepts no arbitrary limits or constraints, but only allows constraints that are dictated by reason.
The foregoing truth endorses a radical anti-authoritarianism of mind, claiming the total prerogative to think and conclude for oneself. But this means that one is free to conclude about what one ought to be free to do. One cannot rationally conclude both that one should be free to do something, and that one ought to be blocked from doing that same thing.
A truly rational person believes that reason should be given the widest possible berth, that it should be restricted by nothing but its own refusal to be absurd and illogical, and as a natural consequence of this, that human action must also be given the widest possible berth, since human action flows from human conclusions about how to act.
What is this widest possible berth in the realm of human action? Precisely the limit drawn by natural rights. Natural rights specify that you should be free to take any action whatsoever, so long as that action does not, by interfering with others, undercut the original premise that one should be free from interference. Your rights end where another person’s rights begin. This is the idea named by Herbert Spencer (1820-1903) as The Law of Equal Liberty. One who rejects natural rights cannot be rational, because to reject natural rights is to advocate that reason be arbitrarily constrained.
In the history of debates about natural rights, the placement of burden of proof has been perverse. Concerning the proper limits of human action, a rational person is not going to conclude that he shouldn’t be free to do something unless he can prove he should. He will conclude just the opposite: that absolutely everything should be permitted but that which can be conclusively proven to be logically disallowed. There is simply no evidence for any contrary conclusion. All of the evidence points toward liberty, and there is no evidence whatsoever that points away from it. The burden of proof lies squarely on the person who wishes to violate natural rights, not on the person who would respect them.
The roots of liberty consist in a proper conception of both reason and natural rights. As I pointed out in Induction, to cleave induction from reason is to destroy a proper concept of reason and create a counterfeit. Likewise, to cleave reason from liberty is to destroy the concept of liberty.
It is no coincidence that our modern notions of liberty emerged from the Enlightenment, otherwise known as the Age of Reason. This was a time when freedom of thought flourished, and it is no coincidence that this freedom of thought led to the idea of freedom of action. It is no coincidence that the work of Isaac Newton was followed by the work of John Locke, leading to the creation of the best (though far from perfect) lasting government we’ve known.
It should now be clear to the reader why tyranny is the natural consequence of irrationality, and why a rising respect for reason leads to a rising respect for individual liberty.
“Nothing is stronger than habit.”— Ovid
“Physical loneliness is a real terror to the gregarious animal, and that association with the herd causes a feeling of security. In man this fear of loneliness creates a desire for identification with the herd in matters of opinion.”— Bernays
“Once a paradigm is well-ensconced it becomes a power in itself, a set of reflexes to sort the true and false. Any exception spoils the web of interpretation through which art seeks to make human experience intelligible. Only the young, the brave, the energetic, the sincere and the skeptical can break off such fetters.”— Mason Gaffney
“I see men assassinated around me every day. I walk through rooms of the dead, streets of the dead, cities of the dead; men without eyes, men without voices; men with manufactured feelings and standard reactions; men with newspaper brains, television souls and high school ideas.”— Charles Bukowski
“Civilization can only revive when there shall come into being in a number of individuals a new tone of mind, independent of the prevalent one among the crowds, and in opposition to it – a tone of mind which will gradually win influence over the collective one, and in the end determine its character. Only an ethical movement can rescue us from barbarism, and the ethical comes into existence only in individuals.”— Albert Schweitzer
“But the secret of intellectual excellence is the spirit of criticism; it is intellectual independence. And this leads to difficulties which must prove insurmountable for any kind of authoritarianism. The authoritarian will in general select those who obey, who believe, who respond to his influence. But in doing so, he is bound to select mediocrities. For he excludes those who revolt, who doubt, who dare to resist his influence. Never can an authority admit that the intellectually courageous, i.e. those who dare to defy his authority, may be the most valuable type. Of course, the authorities will always remain convinced of their ability to detect initiative. But what they mean by this is only a quick grasp of their intentions, and they will remain forever incapable of seeing the difference.”— Karl Popper
“… habit makes everything seem reasonable.”— Will Durant
Institutions both shape our society and are our society’s shape.
Institutions are the essence and lifeblood of civilization. We depend on institutions to pass down knowledge, to secure new knowledge, to secure our property, to instill and shape moral values, to facilitate reliable sources of food, and so on. It is by and through institutions that the full potential of the individual is unleashed; without them, human life would be not far above that of the rest of the animal kingdom, and certainly, we would not be able to sustain our current number without them.
But institutions can be perverted and deranged and misdirected from their proper purpose of furthering civilization, and twisted into tools of regression and barbarism. There is little need to point this fact out; all one need do is study a little history: the destruction of Ancient Greek culture, the imprisonment of Galileo, the Spanish Inquisition, the genocide of the American Indian, the Nazis – the list goes on and on.
Institutions can embody the glory of mankind’s capacity for good – but also can embody the capacity for horrific evil. And so, institutions must be protected from perversion and reformed or abolished when they become perverted, and this implies that we must have a standard by which institutions are measured and a method by which they are formed and reformed. Institutions must not be permitted to evolve without a moral check on their power, for they are the aggregation of the united power of many individuals and when morally unchecked will cause massive mayhem.
The genesis of institutions is rooted in the individual, and the most elemental institution is the institution of your own habits and beliefs; these habits and beliefs are an institution in microcosm. All greater institutions are an organic synthesis rooted in the basic institutional unit that is the individual. Thus, it is impossible to speak of forming or reforming institutions without forming or reforming the habits and beliefs of individuals.
The fundamental choice an individual faces is whether to engineer his beliefs or to be engineered by them, which in a social setting becomes the choice of whether he will join with others in engineering institutions that govern him, or will be content to be engineered by these institutions.
No one is exempt from the influence of institutions, for whether they choose to admit it or not, their actions are to a significant extent governed and determined by society’s rules. But however much an individual is influenced by institutions, he has a choice to either be an unthinking slave to history, or to express his own humanity and to some extent alter the future shape of the institutions that are shaping him.
Before an individual can rightly do this, he must have a proper idea of true and false, right and wrong. He must reform himself before he tries to reform institutions, otherwise any influence he exerts on institutions is destructive. No irrational individual should participate in the forming or reforming of institutions. Rationality is a basic requirement of participation in debate and in forging institutions, and a lack of that virtue bars the individual from active participation in forging civilized society.
Obviously, this standard has not been rigorously used in modern society. It is used to at least some extent – no overtly insane babbling crazy person is allowed to participate in the activity of forging and maintaining institutions – however, that is a fairly low bar and the consequences of such a low bar surround us. Advocates of reason and liberty must not rest until the philosophy of reason and liberty permeates our social institutions and uproots the disease of irrationality and tyranny that poisons humanity’s future potential.
“It is no measure of health to be well adjusted to a profoundly sick society.”— Jiddu Krishnamurti
If one adopts a proper frame of mind, with the expectation that reason ought generally to prevail, our modern institutions often appear utterly bizarre, foreign, and insane. Imagine being taken back in time to an even more barbaric phase of human history, say during the Spanish Inquisition, and being immersed in the horrible and tragic acts of those people. Imagine how shocking it would be to experience firsthand the perverse spectacle of human beings trying to thwart their own flourishing to such an extent. There is a difference of course between that time and ours, but it is only one of degree and not kind.
For example (ca. 2012):
Outrageous examples such as these go on, and on, and on – clearly, for all our progress, barbarism is running amok in our institutions.
Now, many of these problems are being addressed, singly, by tiny institutions created in order to counter particular social evils. To an extent, this approach is perfectly good, but those who do not fixate on any one evil observe the sheer volume of social evils that prevail, and wonder whether this issue-by-issue approach isn’t like trying to cut one of the heads off a Lernaean Hydra only to watch two more regrow.
Those who think in terms of fundamentals will naturally be led to the idea of a wholesale reimagining of our institutions from the ground up. There are two basic schools of thought about how this should work: there are those who regard ideas as being independently created, understood, chosen, and held by individuals, as the basic cause of social change; and there are those who regard institutions that manipulate individuals as the basic cause of social change (for a detailed example of the latter, see The Masks of Communism, by Dan N. Jacobs). Implicit in this latter view is, of course, the idea that an elite class of humans direct the institutions that manipulate individuals, but this premise is not always explicitly recognized, even while it is openly practiced.
It is true that to the extent that individuals choose to yield their power to choose their own beliefs and favor tribalist impulses instead, the latter mode will prevail. To that extent, instead of seeking to energize individual thought in open, rational, and thus potentially contentious discourse, there will be a tendency to ignore individual choice and blame institutions as such, to use tactics of manipulation rather than appealing to fundamental principles and facts, and to seek out and create organizations that shut down dissent and contention and instead establish a “party line.”
And now we have touched upon the most difficult problem institutions face: maintaining integrity to what is true and right, while simultaneously consolidating assent. The most common road taken is to use the tactics just mentioned, but they do not actually lead to integrity, they lead to the social mayhem we observe all around us.
“It is error only, and not truth, that shrinks from inquiry.”— Thomas Paine
What is “the road not taken”? The road not taken is to follow reason on an organizational scale. Reason does not obey, it does not submit to dogma, to party lines, to contradiction, to lack of transparency, nor any manner of nonsense. The legitimacy of a rational and therefore open institution is a function of the number and scale of the challenges to its own actions that have been posed and rationally answered. It is up to the individual supporters to judge, and then vote with their feet. There is no other means of guarding the institution’s integrity than that. An institution that does not answer rational challenges is institutionalized barbarism. An institution that slinks away from openness and debate, that has an inner circle which conceals “dirty laundry” from everyone else, and especially one that uses force and fraud and secrecy, is a corrupt institution in need of reform or abolition.
The technology by which challenges are submitted and answered, the particular organization of the institution, the scale and purpose of the institution – these are all matters of the optional. But within whatever structure that is created, no institution that does not rationally answer challenges is legitimate. And just what does it mean to rationally answer challenges? The answer to that question is only one that philosophy can provide. Since only philosophy can provide the standard by which challenges are rationally answered, the ultimate means of propping up irrational institutions is to create a cacophonous and bewildering disarray in the field of philosophy, for if there is no such thing as rational philosophy, then there are no rational standards – and then violence, fraud, manipulation, authoritarianism, and irrationality continue to hold their established positions in society.
And so the premier institution in society is in that Ancient Greek tradition – it is The School of Athens, where philosophy serves as the central core and standard of knowledge. All other areas of human knowledge branch from this central core, and every able-minded citizen should have a basic understanding of philosophy.
Where is our modern School of Athens?
Our modern universities are funded by a system that is directly opposed to reason and its natural corollary, liberty. It is inconceivable that a modern university curriculum would sanction any individual or group that was consistently pro-reason and pro-liberty, nor does any such association exist today. The philosophy profession is decrepit, out of touch, and in ill repute among the general population. This is a recipe for social decay and tyranny.
The reason why we have no modern School of Athens is probably lurking as an almost unquestioned backdrop in most everyone’s minds: there is a deep-seated cultural hostility toward institutional philosophic consistency. If and when an institution emerges that claims to be seeking a universal truth, such an institution will instantaneously be branded as a “cult” (and many of our modern “philosophers” would encourage such mudslinging). The cult of disintegration and irrationality that has captured almost our entire culture is, of course, never recognized as such. Hence, the first barrier that must be overcome by individuals who want a better direction for humanity is the psychological barrier preventing them from embracing, in the face of the prevailing tribal hysteria and pseudo-intellectualism set against it, the goal of achieving widespread agreement on philosophical fundamentals.
“You should fight on the merits of the cause, not play some Machiavellian game where you agree to support things that are bad in order to get some things that are good passed.”— Elon Musk on politics
The battle for civilization is a contest of two principles: unity through strength versus strength through unity. What I mean by “unity through strength” is the tired old Machiavellian method of using coercion, intimidation, manipulation, insincerity, tribalism, authoritarianism, and so on, to achieve a more or less unified (even if grumbling, incompetent, paranoid, backstabbing, and unstable) institutional body. What I mean by “strength through unity” is sincere striving for a true, deep, and systematic assent to the ideas and consent to the activities of the institution, throughout the institution, through learning about what each other thinks and then debating when we disagree.
Assent is unity, and unity is power. The “unity through strength” method is about counterfeit assent, and it does lead to a kind of power, albeit nefarious and unstable. People claim that “power corrupts”, but really, a corrupt power follows from a corrupt method of obtaining it. The method of authentic assent also leads to power, but it is no more corrupting than is the power of our technological institutions to build dazzlingly powerful computer chips. Imagine if you tried to build computer chips the way most governments are instituted – there would be no such thing as a computer. Now imagine that governments were carefully, rationally, openly, and systematically engineered based on true philosophic principles. We can do this for one kind of institution, we should strive to do it for the other.
It is true that we will never fully realize universal assent – it is impossible for two human beings to agree on everything, let alone a large number of them. However, it is also impossible for the “unity through strength” method to fully achieve its goals either: there will always be those who take actions that subvert the actions of others. But it is far better for humanity to strive to openly contradict one another in the realm of ideas than in the realm of clashing actions. This is particularly so concerning the institutions of force, where a clash in actions can and has led to the deaths of millions upon millions. And even though we will never realize universal assent on everything, I believe that sincere and rational human beings will tend to evolve toward broader and deeper agreement over time, and leaving aside criminal mentalities, will eventually universally assent to basic principles, if they embrace the vision that such a thing is possible. We have seen this happen concerning some principles already; for example, no serious person defends slavery or racism as virtuous anymore. We should seek to further enlarge the body of universal principles of proper institutional behavior.
Conflicting opinions are unavoidable, and there are only two ways to resolve them: intellectually, which means rationally and openly, or through Machiavellian subversion. But I think most people do not really comprehend this alternative explicitly, and thus sincere people become the tools of Machiavellians.
The greatest enemy of true unity and true progress is silence in the face of disagreement. Such silence is worse than mere tacit consent, it is compliant and obedient submission, it is yielding your humanity to a Machiavellian regime. An evil institution propagates the idea that “contention is of the devil”, but the truth is that sincere, rational, and honest contention, particularly in the face of widespread opposition, is the most virtuous and courageous thing you can do. It is this sort of contention that lifts humanity up – consider the example of Galileo teaching mankind about the nature of the solar system. Rational and open contention should be celebrated and encouraged, not suppressed and denigrated, and yet most institutions thwart this healthy activity.
It doesn’t matter what kind of institution you participate in – rational contention should be considered a top virtue. And for political institutions in particular it is a top virtue. Since every human action is either a right or a crime, any disagreement in a political institution usually means that everyone’s natural rights are at stake. Consider the meaning of keeping silent in the face of such stakes.
“The dreamer is the designer of tomorrow. The practical man […] can laugh at the dreamer; they do not know that he, the dreamer, is the true dynamic force that pushes the world forward. Suppress the dreamer, and the world will deteriorate towards barbarism.”— Ricardo Flores Magon
It is easy to blame the various institutions for their strong Machiavellian tendencies, but at root the problem lies not primarily with the leaders of these organizations, but with the expectations and lack of courage in their followers, most of whom seem to be under the spell: “don’t rock the boat.” It is true that if you rock the boat, you will likely be expelled, but you should at least try. And if you are expelled, why not try to form more healthy institutions? Indeed, even if you choose to participate in established institutions, why not also try to form more fundamentally sound institutions as well?
Imagine that you could bring any person now alive to the distant past, say the Dark Ages. They could not bring any special technical knowledge or skills, but only the knowledge of the basic moral values of their society. What chance would they have of convincing the Dark Age mentality to improve how their institutions function, to embrace our modern values of relative liberty? They’d not only have zero chance, they’d probably be jailed and executed. But do you imagine that this person would learn to embrace Dark Ages values? That’s not very likely. He’d have a clear vision in his mind about how good things can be when men are more rational and moral.
Now imagine you brought someone from the Dark Ages into our time. They would probably eagerly adapt to our institutions, because our institutions and values are clearly better, and they are no longer being intimidated by the corrupt institutions around them. This is the problem we face: compared to the way institutions should be, our institutions are like those Dark Ages institutions. It is true that a person from the future would probably not be jailed or executed for advocating radical institutional improvements – and this is a testament to the achievements of our most noble ancestors. But how much improvement would he be able to cause? We need to figure that out. We need to be the visionaries who foresee what is possible, and then work to make it actual.
“As long as people believe in absurdities they will continue to commit atrocities.”— Voltaire
Mankind has gone through various eras of progress and regress. It reflects a gross defect of vision and ethical standards to think that our era is the final era: consider the unnecessary death, violence, imprisonment, poverty, disease, social and political chaos, and environmental problems. But more importantly, there are those of us who, intellectually at least, have arrived in a new and better era – we have come to understand that there are radically different and better ways for individuals and society to operate. How do we get there? What obstructions are in our way?
First we must understand in at least some small measure the eras that have come before us. We can divide up history in various ways, depending on what we are trying to explain. For my purpose here I would divide it into five eras: 1) prehistory; 2) pre-Academy; 3) Academy; 4) The Dark Ages (pseudo-Academy); and 5) Science (semi-Academy) – our present era (ca. 2017). The next, or sixth, era I would call The Age of Reason (Academy Restoration).
Prehistory is so called because mankind had not yet developed the means and desire to systematically create and carry forward descriptions of phenomena of any kind. Progress in this era was slow, precisely because not only was knowledge not much pursued but it was also easily lost. The pre-Academy era typifies the opposite: mankind begins to understand the value of knowledge, including recorded knowledge; thus, recorded history begins. Such recording gives an enormous advantage to future generations, assuming that they bother to study it and that they know how to pick out which small portion to study out of the mass of irrelevance – and that brings us to the idea of Academy.
The Academy era was the ultimate expression of the love of knowledge: it was recorded knowledge, but rationally evaluated, pruned, systematized, and disseminated; in other words, it pursued The True and The Good. In a word, The Academy was most centrally about Wisdom. This is where we get the distinctively Ancient Greek idea of philosophy, which means “the love of wisdom.”
“The period which intervened between the birth of Pericles and the death of Aristotle is undoubtedly, whether considered in itself or with reference to the effect which it has produced upon the subsequent destinies of civilized man, the most memorable in the history of the world. What was the combination of moral and political circumstances which produced so unparalleled a progress during that period in literature and the arts; – why that progress, so rapid and so sustained, so soon received a check, and became retrograde, – are problems left to the wonder and conjecture of posterity. The wrecks and fragments of those subtle and profound minds, like the ruins of a fine statue, obscurely suggest to us the grandeur and perfection of the whole.”— Percy Bysshe Shelley
During the pre-Academy era, enough material was created that later thinkers could reflect on the value and potentialities of human thought, and thus proceed to its ultimate expression: self-reflection, the rational analysis of thought itself. Here knowledge becomes not merely rationally evaluated and systematized, but also underpinned by conscious philosophic principle. This is the ultimate in rational systematization. And so Plato founded his Academy; and Aristotle, Plato’s student at Academy, founded his Lyceum. These were the first forerunners of our contemporary decrepit University system.
Not long after being handed this historic gift, the gift to outmatch any other possible gift (for in the spirit of Academy lies our destiny as a species), mankind quickly set about to destroy it. The fact was and is that a pursuit of what is True and Good is a threat to what is established, false, and evil. The crimes of barbarians against Reason are legion and varied and continue to this day. The murder of the philosopher and mathematician Hypatia of Alexandria at the hands of Christians serves both as a symbol and a historical marker of the end of the Academy era and the beginning of The Dark Ages:
“Some historians intimate that the monks asked [Hypatia] to kiss the cross, to become a Christian and join the nunnery, if she wished her life spared. At any rate, these monks, under the leadership of St. Cyril’s right-hand man, Peter the Reader, shamefully stripped her naked, and there, close to the altar and the cross, scraped her quivering flesh from her bones with oystershells. The marble floor of the church was sprinkled with her warm blood. The altar, the cross, too, were bespattered, owing to the violence with which her limbs were torn, while the hands of the monks presented a sight too revolting to describe.”— Mangasarian
And so the bright clean health of Academy was replaced by the dark filthy dank of Scholasticism, an era where instead of The True and The Good being the primary aim, the aim becomes putting on the show of pseudo-arguments for the sake of buttressing bogus religious dogma. It is apparently not good enough for evil to be evil for its own sake; it also dons a superficially civilized appearance, putting on not only the robes of the Priest but also the ill-fitting skin of the Reason it has so heinously murdered.
Then comes Francis Bacon, who in Novum Organum laid out a plan for reviving The Academy; and Galileo Galilei, who both set the example for the era that was to come and paid the price for it at the hands of religious dogma. The Era of Science had begun.
“[Some may raise the question] whether we talk of perfecting natural philosophy [i.e. those fields we now call “science”] alone according to our method, or the other sciences also, such as logic, ethics, politics. We certainly intend to comprehend them all.”— Francis Bacon
In spite of Bacon’s clear intent, our Science era was neutered and constrained from the start. This era was regrettably but inevitably not to be about a pursuit of Wisdom, but rather about a pursuit of that subset of “scientific truth” which is “politically-correct”; i.e. palatable to the dominant dogmas of the era. For the most part, pursuit of reason-based ethical truth was banned, when not explicitly then implicitly, as the precondition of being taken seriously by the prevailing academic institutions. But rational understanding of The Good is intrinsic to authentic Wisdom – the Sage was replaced by the mad scientist and by that obedient and faithful servant, the morally-neutered science nerd.
“The ghosts of scholasticism – of a pursuit of knowledge divorced from its social end – hover about the microscopes and test-tubes of the scientific world; … The blunt truth is that unless a scientist is also a philosopher, with some capacity to see things sub specie totius [a complete perspective on the whole], – unless he can come out of his hole into the open, – he is not fit to direct his own research. … without philosophy as its eye piece, science is but the traditional child who has taken apart the traditional watch, with none but the traditional results.”— Will Durant
“Science tells us how to heal and how to kill; it reduces the death rate in retail and then kills us wholesale in war; but only wisdom – desire coordinated in the light of all experience – can tell us when to heal and when to kill. To observe processes and to construct means is science; to criticize and coordinate ends is philosophy: and because these days our means and instruments have multiplied beyond our interpretation and synthesis of ideals and ends, our life is full of sound and fury, signifying nothing. For a fact is nothing except in relation to desire; it is not complete except in relation to a purpose and a whole. Science without philosophy, facts without perspective and valuation, cannot save us from havoc and despair. Science gives us knowledge, but only philosophy gives us wisdom.”— Will Durant
In the early phase of our Science era, the dominance of the Church and its hostility to the rational Good gave authentic thinkers the choice of either yielding significant ground, or death. The Church murdered the philosopher-mathematician-poet Giordano Bruno. Galileo got off easy – imprisonment for life. His and others’ efforts rewarded later thinkers with relatively greater intellectual liberty, with their efforts ultimately culminating in The United States of America’s Bill of Rights and a posterity that has often tended to be less drawn to authoritarianism than most. But The Bill of Rights has always been harassed and oppressed, from above by tyrants, and from below by clamoring ignoramuses.
That we (in America at least) maintain our freedom of expression demonstrates how far we have come, but that murderers of reason (dogmatic orthodoxy of all kinds) still rule in our institutions is a measure of how far we yet have to go. Since the time of Galileo dogma has been on the defense, but there it still firmly stands, and on the same battle lines drawn long ago, the very battle lines that define the boundary between this era and the promised era, long since due.
Bacon had heralded and inspired a successful movement to reclaim what we now call “science” from the stubborn arrogance of dogma. But the field of Ethics, and that which follows from it, Politics, still remain in its clutches to this day. Where this backwardness isn’t overt, it’s insidiously cloaked in the guise of a “tolerance” that means, not that we should be open to new evidence coming from different perspectives because they may prove that we are wrong, but rather that we should fund every kind of vileness, degrading and destroying our children’s minds and, with them, our civilization. And here we have arrived at one of the most pernicious myths of our own era.
“There have been four sorts of ages in the world’s history. There have been ages when everybody thought they knew everything, ages when nobody thought they knew anything, ages when clever people thought they knew much and stupid people thought they knew little, and ages when stupid people thought they knew much and clever people thought they knew little. The first sort of age is one of stability, the second of slow decay, the third of progress, and the fourth of disaster.”— Bertrand Russell “On modern uncertainty” (20 July 1932), p. 103-104
In every successful myth is an element of truth. It is, of course, evil to physically attack (including through force of law) others for a mere difference of opinion; we must support freedom of expression, and especially for ideas with which we disagree. It is, also, foolish to be so socially rigid and petty that we stifle and discourage the free expression of sincere philosophers and artists. A civilized society needs a broad-minded liberality, not just to maximize individual happiness, but to find the best solutions to the problems of how to evolve in the future. And, of course, individuals need room to think and grow on their own terms, without a tyrannical culture of paranoia breathing down their necks. Indeed, everyone deserves equal rights and protections under the law, without regard to race, creed, gender, or sexual preference.
But rational toleration has its limits. Our society doesn’t tolerate rapists or murderers. But that is no major achievement; few ignorant and backward societies today fail to understand they shouldn’t tolerate such. Again, society ought to respect everyone’s freedom of speech, but that does not mean society should grant someone who purveys abysmally false doctrines a professorship. So yes, we should be tolerant of (say) Marxism – in the sense of not attacking or jailing Marxists for voicing their opinions. But as a condition of being granted a professorship, we should verify that the professor is actually committed to rational ideas. There is no right of cowards, idiots, and lunatics to, at our expense, brainwash the next generation with nonsense.
But, as those brainwashed would reflexively and rhetorically ask: Who determines what constitutes a “rational” idea?
The very point of University is that we can, through a rational process, eventually find the truth. Those who don’t buy into this idea should leave University and go join or found a religion. Or, if they insist on staying, let their presence serve as evidence for the case that the University system has in fact devolved into a religion, a devolution to the barbaric Scholasticism of the past.
What this religion preaches, so very conveniently, is precisely that which permits religion to flourish while retaining its veneer of respectability: that no system of morals is better or worse than any other, that everyone has a right to opine whatever they feel like without being criticized or judged incompetent, that “moral truth” is a contradiction in terms. Just like any common criminal, the status-quo does not want the light of reason to pass judgment upon it.
Who indeed should judge what constitutes a “rational idea”? This question goes to the very beginnings of philosophy (Plato’s Republic) and to the heart of institutional legitimacy. In our era we’ve been told that peer review by unaccountable and opaque government-funded institutions (often using “proprietary data” the public cannot view) virtually guarantees scientific integrity and legitimacy. But we should reflect on who it was that told us this.
The disdain of Wisdom in our era has serious destructive consequences to society – observe the moral and political chaos of our day – all rooted in not knowing basic (and what should be obvious) differences between right and wrong. Is it right (to pick one of countless examples) to break into someone’s house in the middle of the night, shoot their dogs, and sometimes accidentally their children, all because they allegedly smoke pot? What is the University system’s position on this glaring instance of state-sponsored terrorism? It is true that many university professors oppose the War on Drugs, but then they endorse our might-makes-right legal system (legal positivism) or similar nonsense, which leaves them no real grounds for their opposition. But anyone with a proper education should have left University with a basic understanding of civics, which would include knowing that this act was a heinous political wrong with no sound legal basis whatsoever. A legitimate University system would have wiped out the War on Drugs – a crime against humanity – long ago.
The hallmark of our era, and the clue to what the next one must be, is that it’s the one where Bacon’s original vision is only half achieved. Just as we now know (in contrast to our slightly more dogmatic Dark Ages ancestors) that following reason leads to scientific truth, following reason also leads us to moral truth.
“If you want to tell people the truth, make them laugh, otherwise they’ll kill you.”— Oscar Wilde
“In [corporate] religions as in others, the heretic must be cast out not because of the probability that he is wrong but because of the possibility that he is right.”— Antony Jay
The System is terrifying;
I don’t want it to be terrifying;
Therefore, it is not terrifying.
Everyone likes being coddled by their mother as a child and by their social institutions as an adult (yes, even including rugged individualists). To an extent this is all fine, but we also tend to become blind to our benefactor’s defects; we don’t want to “bite the hand that feeds us.” In this way, institutional impunity is enforced – institutions become immune to reform, because their many blind supporters viciously attack all critics.
This tribalism manifests throughout society, whether it be concerning criticism of government, police, doctors, authors, cell phone brands, political philosophies, music bands, parents, trucks, sports teams, religions, podcast hosts, cultures, and so on. Human beings in their natural unrefined state are just really really weird.
Thus, if we criticize police brutality, we expect someone to angrily shout back “You must hate all cops!” If we criticize educational institutions, then we expect to be branded “Anti-intellectual!” or “Show me your credentials!” And so on.
When an institution becomes non-optional in society, then it is virtually inevitable that it will consist of both “good guys” and “bad guys” doing good and evil things. Consider the police. None of us has the choice to opt-out of their services. Our only choice at present is to impact the trend of the institution, to make it worse or better, according to some standard of “good.” There are various ways of doing this, but the most direct and immediately impactful way is to become a cop. But any good cop who refused to uphold bad laws would not be one for long – he’d be fired. This helps no one (except, perhaps, the fired good cop). What helps is when the good cop learns the art of exercising his discretion to its furthest extent, pulling back from the enforcement of bad laws where possible, and vigorously enforcing good laws where possible (i.e., being humane and decent). In this manner, the policing institution is improved, and the more good cops we have, the better it gets. Indeed, if all the cops were good, then their example would start impacting related institutions, such as the legislative and judicial systems. The War on Drugs – a heinous evil that should be completely abolished immediately – is only possible because bad cops support it.
And this is how my criticisms here should generally be taken. If I criticize Universities, I’m not intending to indict all professors (on the contrary – I know of many great ones); nor if I criticize the judicial system am I intending to indict all judges and lawyers; etc. The point of this essay is to judge the standards put forth, not people; everyone is individual and unique and has to do the best they can in our very imperfect society. But if some individual does violate such standards, then the proverb “if the shoe fits, wear it” applies.
“The ultimate result of shielding men from the effects of folly, is to fill the world with fools.”— Herbert Spencer
Why do people so readily yield their rights to corrupt authority? Because corrupt authority didn’t teach them not to.
It is interesting how little human societies have changed in this respect. Recorded history begins roughly 5000 years ago, with Ancient Egypt and Ancient India being prime examples. What we find socioeconomically is the caste system – a socioeconomic regime where a group of government-approved and regulated experts is permitted to monopolize in a given area of the economy, and where the “lower classes” may not participate, on pain of fine, imprisonment, or death.
Now, a “caste” is typically defined as hereditary: you could only be a medical doctor if your father was too. But from a human rights standpoint, it doesn’t matter whether you’re blocked from practicing medicine because your father wasn’t a doctor or for some other reason. The fact remains: your natural rights are being infringed. Now obviously, to the modern caste-infected mind, what I’ve just stated contains at least a few heretical premises, that, curiously enough, contradict the ones they learned from the educational caste. (A roughly equivalent term for the sense of “caste” I’m using here would be “cartel.”)
There is of course nothing whatsoever wrong with the idea of institutionally-licensed practitioners and institutionally-approved products and services. We all need to know that the products and services we receive are from trustworthy sources – our health and safety depends on this. However, when these institutions attack our health and safety by seeking to criminalize freely-chosen alternative institutions and sources, this is a confession: the ringing message they send is that we shouldn’t trust them, and that much of what they do we could do for ourselves, more efficiently and inexpensively. The fact that most of us have been brainwashed to think that the government gun to our heads is “for our own good” doesn’t change a thing; the message is clear enough to clear heads.
We do need medical institutions to exist. We need to know that our doctor has a license from one. We need to prosecute doctors who fraudulently claim to have institutional approval but do not. We do not, however, need to be blocked from accessing medical care via other routes. For example, thyroid patients know how to read their blood labs, they know how to adjust their dose, and so on. They don’t need to pay a fortune to doctors just to do the obvious. Contacting the professional for the actually difficult problems and doing the easy things for yourself is not only good for you, it’s good for the economy: why waste a highly-educated professional’s time on something trivial, which can only artificially drive up prices? Leave him be so he can do the difficult work, if indeed he’s truly fit for it.
Gullible ignoramuses claim “But some people are stupid, if they are allowed to exercise their natural right to treat themselves, then they will only hurt themselves.” This is obviously grossly illogical and authoritarian reasoning: 1) just because someone else might hurt himself, that doesn’t mean that my rights should be infringed; 2) if someone else really wants to do something you deem risky, that’s his own business, whether you believe the risk is “stupid” or not.
Of course the cartels are going to preach to us that the violence they sanction against us is “for our own good.” It is extremely difficult, if not impossible, to control a society through pure violence. Myth makes the bitter pill of exploitation go down much easier, for both sides. What doctor wants to believe that he is, at least in some part or respect, exploiting his patients? (And sometimes against his own will – there are indeed plenty of licensed medical doctors who agree with everything I’m saying here.) What patient wants his exploiter cutting him open? The whole process is less psychologically painful if we just decide to believe that unjustifiable and heinous violence is completely justified; or on another level, that a fervent faith in myths isn’t the reflection of gross intellectual incompetence and moral depravity that it is.
This clinical insanity of society is a hallmark of our era, and is the natural consequence of trying to sunder The True from The Good. Indeed, what else could result from the systematic dissection of Wisdom into two halves, arbitrarily labeling one as “rational” (science) and the other as “whimsical” (morality), than schizophrenia? As this sort of insanity is a hallmark of this era, a hallmark of the next era will be its opposite, namely that it will be seen as obscene and unprofessional for professionals to use violence against the population by forcing us to use their services and attacking alternative producers.
I am focusing on the medical profession here but we can multiply the examples to include all the castes-cartels, i.e. any profession where newcomers are forcibly blocked unless they get government approval first (in our insane era this even means hairdressers).
And again, there’s nothing wrong with professional licensing systems that do not block the non-licensed, or that prosecute them for fraudulently claiming to be licensed. Indeed, it is incredibly valuable to create institutions that can certify a given level of skill. But it not only undermines the institution’s credibility when it resorts to threats and violence – making it appear not to be made of professionals but of paranoid, jealous, authoritarian control-freaks – it’s plainly evil.
“The defense of the state in all civilized countries is quite as much in the hands of teachers as it is in those of the armed forces.”— Bertrand Russell
“Men are born ignorant, not stupid. They are made stupid by education.”— Bertrand Russell
“The function of education in the eyes of a dominant class is to make men able to do skilled work but unable to do original thinking (for all original thinking begins with destruction); the function of education in the eyes of a government is to teach men that eleventh commandment which God forgot to give to Moses: thou shalt love thy country right or wrong. All this, of course, requires some marvelous prestidigitation of the truth, as school text-books of national history show. The ignorant, it seems, are the necessary ballast in the ship of state.”— Will Durant
“An education that is purely scientific makes a mere tool of its product; it leaves him a stranger to beauty, and gives him powers that are divorced from wisdom.”— Will Durant
In the ideal, Academy forms the core of civilization: its conscience, and the guardian angel of the integrity of the various disciplines that permit our society to function. Consequently, educational institutions, as living branches of Academy, serve as a microcosm and leading example of how society should function generally. So when University insanity ratchets up, expect general social insanity to ratchet up in the coming decades.
Our education system reflects the era of which it is a part. Let’s be generous and say that it does a fair job at educating people interested in science and technology (leaving aside various and sundry bizarreries in the more abstract areas of these disciplines). There’s plenty to criticize regarding how we educate in science and technology, but at least the general thrust of it is in the right direction.
But it does a horrific job educating people in the ends to which technology ought to be put. The humanities preach precisely the opposite of the truth for many important things, such as that: war and hurricanes improve the economy, morality is arbitrary, genius is insanity, liberty is racist tyranny, all cultures are morally equal (except Western culture, which is the lowest type), rationality is a myth, degeneracy is art, racism is evil except when it is anti-white, sexism is evil except when it is anti-male, logic is white male supremacy, speech is violence, philosophy is delusion, law is social whim, what’s legal is moral, etc. etc. ad nauseam.
What’s to be done with this mess? The fact is that the humanities have mostly devolved into what can accurately be called both a state-sponsored religion and a colossal never-ending bullshitting session. By their very own sermons we know that they aren’t teaching that which follows from logic and evidence (the proper foundation of Academy), so why do we as a society fund them? Just like any other religion, they ought to be free to preach their dogmas, but to fund them using tax dollars is to violate the religious freedoms of the rest of society. Furthermore, to pretend to be professors when they really are only preachers is to perpetrate a massive fraud upon society, one that only brings the University system into disgrace.
“Academies that are founded at the public expense are instituted not so much as to cultivate men’s natural abilities as to restrain them. But in a free commonwealth arts and sciences will be better cultivated to the full if every one that asks leave is allowed to teach publicly, at his own cost and risk.”— Spinoza
“Do not send your children to the humanities! They’re corrupt! They won’t learn to think because the post-modernists don’t believe in thinking. They won’t learn logic because the post-modernists believe that logic is one of the tools that the oppressive patriarchy uses to sustain its oppressive patriarchal nature. They won’t learn to write [because] teaching people to write takes a tremendous amount of effort … [and they are more] interested in producing cult-like clones to go out and do their activist work… So you know things are not so pretty and it’s very embarrassing as a member of the Academy to come before a group of public citizens and say you’ve been betrayed by your institutions of higher learning…”— Jordan Peterson, Ph.D.
Preachers pretending to be professors have been using “Academic Freedom” to run amok, and it’s past time we put an end to it. Nothing entitles them to hardworking citizens’ tax dollars, and given the poor results they as an institution have achieved, it is better to cut the “professors” loose and see if they can fend for themselves through teaching, or whether they are perhaps better suited to flipping hamburgers or cleaning toilets.
There comes a time when the established authority has made itself so ridiculous that chaos is better than conformity to it. It is true that closing down our state-funded religion departments will create a vacuum, but this is an opportunity for those worthy of the “Humanities Professor” designation. Society desperately needs authentic, rational professors of the humanities, and it needs them to create a better system of institutional self-policing this time around. The foundation of any worthy humanities institution can only be: logic and evidence as the only acceptable currency of discourse.
People will pay for an education that gets them a better job, but not for one that gets them a better society. But this was not always so. In Ancient Greece, independent philosophers earned a living through teaching. Nowadays the humanities departments have managed to instill in the population an incredible cynicism regarding the value of the humanities. But by dismantling them on humanitarian grounds, we can have a rebirth of the field, and hopefully a rekindling of the populace’s appreciation for it. Combined with technological innovation, we may find that genuine humanities professors have even brighter job prospects than before and can rebuild their institutions to be better than before, especially since they won’t have to bite their tongues for the sake of those parasitical frauds they are forced to regard as peers.
“We might as well require a man to wear still the coat which fitted him when a boy as civilized society to remain ever under the regimen of their barbarous ancestors.”— Thomas Jefferson
Speaking of state-funded religion, we now come to the topic of patents, where the childish whine “But I thought of it first!” becomes the law of the land. In practice, at best some employee thought of it first but the corporation he was working for at the time gets to own it; at worst, a patent troll company buys patents cheaply and in bulk and specializes in extracting money from the economy in a purely parasitical way.
But aren’t patents for the sake of the inventor? Isn’t “his idea” the “fruit of his labor”, and isn’t he therefore entitled to “keep” it? Or perhaps you prefer the utilitarian take: Aren’t patents for the good of society, where patents give the inventor a motivation to invent, and give society a means to reap these rewards and also the reward of his public disclosure?
These so-called “arguments” are what you might call “good intentions.” They present a superficial veneer of a plausible basis for patents that, when scratched, yields nothing but rotted wood. But if you never intended to seriously investigate whether your intentions are actually good, then they aren’t actually good intentions. Or, if you prefer, “the road to hell is paved with good intentions”, not because God likes to punish those who “didn’t mean to cause evil”, but rather because advocating for things you do not really understand is sheer negligence – a species of evil. Better to keep your mouth shut and be thought innocent, than to open it and indict yourself as “negligent fool” or “state-sponsored charlatan.”
“Metaphors in law are to be narrowly watched, for starting as devices to liberate thought, they end often by enslaving it.”— Supreme Court Justice Cardozo (1926)
Yes, if something you create is actually the result of the “fruits of your labor”, then you deserve to keep it. However, patents are about taking things that aren’t actually the “fruits of your labor” on the grounds that, in the best case, they resemble something that was; or in the worst case, that they resemble something you imagined. In plain terms, patents embody legalized theft from those who actually made something to those who merely thought about making that thing. The gimmick of the argument for patents as “intellectual property” is sheer sophistry: it wants you to think that because we can create a metaphor – the comparison of abstract ideas to physical property – then that clinches the argument that ideas are property. But a metaphor isn’t an argument – it’s merely a comparison.
(I’ve never seen a pro-patent argument that wasn’t more flimsy than wet toilet paper, but I’ve met a lot of pro-patent people who were incapable of perceiving how awful their arguments are. But if you think you found an actually good pro-patent argument, please send it my way.)
Imagine yourself an Apple employee. You come to believe that iTunes is actually a horrible tyranny locking the user into Apple-anointed platforms and not providing obviously useful and necessary features. You notice that, instead of Apple engineering their music player to be the best possible system for the user, they are corrupting its utility by taking the advice of scheming MBA’s trying to extract the most money possible from the consumers. So you think to yourself “I can create something better.”
Not so fast. Without the backing of billions of dollars of venture capital and a massive patent portfolio to defend yourself using the tactic of mutually assured destruction, you’re a sitting duck – there are in fact thousands of software patents aimed right at your head. Now, you could go ahead and build your idea and take your chances, but this is really risky. Do you really want to spend your energy on a risky venture when any patent-holder who wants to can steal everything you made in the blink of an eye? Maybe you should just keep your head down at Apple and work there until retirement and leave the big ideas to the big boys. “Patents reward innovation” – Right.
Multiply this horror story across all sectors of our economy and millions of otherwise creative employees and you can start to grasp the enormity of the problem – patents are a mostly unseen crime against humanity.
Earlier in history we chained families to plots of land divvied up by a King (formerly known as “land patents”). If questioned about such land “ownership”, surely some knuckle-dragging right-winger at the time would have whined “It’s mine! It was granted to me by the King!” And probably no one at the time questioned how it was that the King came to own the land and whether he had a legitimate right to divvy it out in the first place.
In modern times, we chain employees to companies that own huge patent portfolios. These employees possess all the skills needed to create competing companies, which if not for the existence of patents, would constitute a huge bargaining chip in their favor and open up a plethora of alternatives for the rest of us.
“The proposal of any new law or regulation which comes from [businessmen], ought always to be listened to with great precaution, and ought never to be adopted till after having been long and carefully examined, not only with the most scrupulous, but with the most suspicious attention. It comes from an order of men, whose interest is never exactly the same with that of the public, who have generally an interest to deceive and even to oppress the public, and who accordingly have, upon many occasions, both deceived and oppressed it.”— Adam Smith, An Inquiry into the Nature and Cause of the Wealth of Nations, vol. 1, pt. xi, p.10 (1776)
“If exclusive privileges were not granted, and if the financial system would not tend to concentrate wealth, there would be few great fortunes and no quick wealth. When the means of growing rich is divided between a greater number of citizens, wealth will also be more evenly distributed; extreme poverty and extreme wealth would be also rare.”— Diderot, Wealth
From 1982 to 2017, the richest 400 people in the United States increased their wealth by 2,400%, whereas median wealth increased by only 180%. Now, there is no problem per se with wealth gaps – you do want the most competent to have the most access to capital, in order to have those best at producing wealth making the key economic investment decisions. But are these 400 really that much more competent relative to the general population than their predecessors were in 1982?
If we actually had a property-rights respecting political system, we’d not need to ask such a question. But in the current one we need to wonder how much of this wealth is really based on unjust exploitation – if we abolished patents (and other regulatory corporate handouts), how much of this gap would, over the next few decades, disappear? No simple study could determine the full scale of the answer, for there are many would-be wealth creators who did not bother putting in the effort, given that they could easily be squashed by illegitimate corporate privilege. The Left feeds on the Right’s impunity.
Massive factory automation is part of our destiny as a species, and the next era will be luxuriously overflowing with it; but given the threat of patents, this potential boon to humanity represents a dire threat in our era. (Note that this threat is also constrained by patents: the person automating is constantly under the threat of being sued for infringement by a parasitical troll, particularly when they don’t have VC backing.)
In a proper economic context, everyone who can build the relevant automation would be free to do so, and this puts a sane upper limit on how much profit the original automation builder can extract from the economy. But if we wind up with automation cartels (as we have in so many other hyper-regulated fields), then because of the patent system these cartels will be able to block all newcomers and raise their prices with near impunity.
I don’t say total impunity, because as history has shown, there is usually a political limit to economic tyranny (North Korea may be an exception, and a future dystopian era filled with highly efficient killing machines might be another). At some point, the population becomes so angry at the wealth gap, correctly sensing on some level that something is unfair even while not knowing precisely what, that they either tax the rich or lop off their heads. These are both extremely crude and destructive measures, for it is not (necessarily) the rich’s fault that the whole population is blind to the real causative evils. Like everyone else, the rich work within the system as it is, doing their best with the framework laid down for them by society, and indeed many of these rich (but not the patent trolls) are among the best at wealth creation and it only harms society when they are oppressed. The cure for oppression is ending the oppression, not reversing the role of oppressor and oppressed.
In spite of the foregoing indictment of patents, we do need a means of protecting investors and creators from those who would simply copy their work. But such protections must be moral, which means, they must be quite modest in comparison to the current patent racket. One such candidate is copyrights. Just as a copyright can protect a book from mindless expropriation, so too a “design copyright” could protect a wide variety of presently patented creations. Any creation that can clearly be proven to have its source in a particular creator (and not in an act of independently coming across the same idea) would be able to use the design copyright. (Copyrights as they are currently implemented come with their own set of insane tyrannies that flout legitimate property rights and right of individual consent, so these would have to be remedied.)
Jurisprudence is the field that seeks a full and rational justification for the edicts issued by the reigning authority; where those edicts have no rational justification, it offers a ringing indictment of them. In our era the practice of law offers some beacons of light and hope, usually rooted in some wise tradition, such as the recent Supreme Court ruling upholding the First Amendment (Matal v. Tam, 2017); and it also offers an ocean of criminal insanity, which is sometimes the legislator’s fault, and sometimes the judge’s fault.
But the theory of law (jurisprudence per se) is, in our times, a complete fraud. No one in any position of authority on this subject takes seriously the idea that they should be offering rational justifications; on the contrary, what they offer is excuses for why, they believe, there can never be any such things as rational justifications. This was not always the case – Enlightenment philosophers did attempt to give rational justifications for political power, and due to their sincerity, they substantially advanced the side of truth and justice, leading humanity to its ultimate political culmination in men like Thomas Jefferson and in the United States Bill of Rights.
Enlightenment philosophers failed, but they did at least try. In our era they don’t even bother trying anymore. Such is the nature of our educational institutions, that they do not permit the existence of that kind of moral leadership. Handmaidens of state impunity thrive, while staunch advocates of rational universal justice are (almost?) nowhere to be found (there are of course the counterfeits who pretend to stand for such – for a quick litmus test, ascertain their opinion on, say, patents). The case should of course be the reverse: handmaidens of tyranny should be disqualified; everyone in the Academy should be staunch advocates of Truth and Justice.
“The more laws, the less justice.”— Cicero
In this era of elephantine, byzantine, arbitrary legal codes (due precisely to the fact that jurisprudence nowadays is a fraud), ignorantia juris non excusat (“ignorance of the law is no excuse”) is a monstrously grotesque joke. If the law were actually based on justice, then there would be much to this saying, since everyone can know by reasoning from first principles whether what they are doing is or is not a violation of another person’s rights, without having to be explicitly told. But as it is, of course ignorance of the law is, most of the time, a very good excuse.
For every law, we should in principle be able to trace its rationale to the very foundation of the rules of reasoning; otherwise the law is an exercise of arbitrary authority, and therefore, illegitimate. Granted, nihilists can pretend to have dissolved every valid argument in the acid of their own unreflective skepticism, but there is a vast difference between the good faith attempt to provide a rationale and the authoritarian impunity we see in our systems of law today.
“If you’re not the customer, then you’re the product.”
“Collaborate on standards. Compete on quality.”
This is only partly the fault of technologists: The most perfect technology for creating the most perfect Internet could be publicly available right now, and yet if nobody chose to use or support it the Internet would in every practical sense still be broken. Ultimately then, it is our own preferences, multiplied a billionfold, pandered to by morally-neutered technology nerds, that are responsible for this major problem.
It is our roughly averaged expectations of how the Internet should work that determines how it does work. But certainly, not everyone’s expectations have the same sway. Those who form active coalitions influence it more than those who merely stand on the sidelines and complain. The sad truth is that those who wish to use the Internet as a means of exploitation and manipulation have, if not a moral purpose, a clear one. And they are active. They organize, they legislate, they create. So they win.
Sincere expression of ideas, however irreverent or unpopular or mistaken, is the proper moral backbone of the Internet. While the many mainstream sites that wish to collect and herd people (and often, profit thereby) should be free to exist, they’re backward and barbaric and would over time become relics, but for skulduggery and government malfeasance. At present, and after the same historical pattern of established castes-cartels stifling newcomers described earlier in this essay, they are trying to pass legislation to hamstring anyone who doesn’t have massive VC backing.
Bulk censorship or privacy-violating bulk data-gathering is morally obscene. If the government has a case against us, then it should make one; if it wants our data it should bring a warrant. A company that is essentially just a convenient “dashboard” for bulk governance (whether by government or corporations or advertisers) has no place in a civilized world.
Is “cloud computing” an inside job? In any case, it needs to be ruthlessly neutered. What you create, you should own and control. This includes your data and your relationships. Technically speaking, there is no reason why these “cloud services” should be anything but dumb plumbing and data warehousing that moves and backs up your optionally encrypted data. Granted, it is not easy to solve this problem, but neither is it easy to build “one neck ready for one leash” cloud applications that scale to billions of users and give the government free access to the ill-gathered information. It comes down to what enough people are motivated to build. Do they want tyrannical systems made for control-freaks and their mindless data serfs? Or would they prefer facilitating individual privacy and liberty and voluntary exchange of ideas? Generally speaking, we got what people of this era wanted.
Among those who didn’t want this, too many want a free lunch. It’s our choice: do we want to be held hostage by advertisers, or would we prefer to be free, and pay for what we use?
What we most need to resolve this problem is a rallying point of sanity: a vision of how the Internet should work, and a band of professionals who will forge ahead, doing the intellectual, moral, legal, technical, and educational work necessary to get this done. But here again, we see the state-sponsored charlatan class not doing their jobs. This negligence by our Universities is – again – mostly the fault of the humanities disciplines, who should be giving us the overarching ideals that determine the ultimate technical solutions and thereby shape our future for the better.
Those doing the heavy lifting here are a ragtag band of social misfits-heroes whose livelihoods are constantly being threatened by the very status quo that is responsible for the problems. If we close the humanities departments (as recommended earlier), it would stimulate a plethora of new and genuinely decent humanities activity and give this ragtag band an impetus toward better organization, including rallying around the problem of securing a better Internet for the future.
“Well, when the president does it, that means that it is not illegal.”— President Richard Nixon
Consider the refrain “It’s my property, I can do what I want with it!” Or “They’re my children, I can raise them how I want to!” Or “It’s my business and my ‘skin in the game’, I can run it how I want!” I want, I want, I want. The short distance from the infant’s scream to the barbarian adult’s demand measures just how little the mere passage of time results in intellectual maturation. But be careful: he might try to kill you if you point that out. There’s nothing per se wrong with “wants,” but a want that wants to shun all criticism is one that rational philosophy can only indict as impunity.
Impunity is a great national pastime, always lurking just beneath the surface. For some it is not disguised at all – even friendly criticism of their behavior results in a shrieking hysterical reaction. For other, more civilized types, the impunity lies hidden in corners of their mind, as bogus beliefs they refuse to question, but that ultimately result in serious national mayhem.
Flaunted impunity is the trophy of wealth, licensed professionals, and the nation-state. These believe they have earned the right to flamboyantly inflict their wills upon others without answering any probing philosophical questions. The rest of the citizens sometimes become outraged at this – and then tell their children “because I said so.” The allergic reaction to “reasons why” is a pandemic, leaving no one unscathed. It should be no wonder then that the problem of metaethics has been such an historic bother to philosophy in spite of its simplicity – philosophy learned early on that those who run society don’t tolerate rational methodology, since ultimately, rational methodology can never sanction impunity. (Consider the case of Plato’s near-death attempt to teach a young dictator that the basis of good governance can only be found by following reason.)
Hospital mistakes are a leading cause of death in America, yet when was the last time you heard of a doctor or nurse being prosecuted for negligent homicide? We certainly don’t want our doctors unfairly prosecuted, but can gross negligence possibly be that rare? The stories of policemen being held to a far lower standard in court than non-uniformed citizens, or being put on paid vacation for violating a citizen’s rights instead of being prosecuted, are legion. We can multiply the examples ad nauseam.
We live in a world where most people seem to want to do at least one wrong and don’t want to be questioned about it or held to account. “Who are you to say what’s wrong!” they say. So, they look the other way when other people are doing something wrong, on the premise that “Maybe if I look past his evils against others, then I can get away with mine.” The dictum “judge not lest ye be judged” has put people’s minds in the gutter, lowering their standards to the minimum possible, making them turn the other cheek and look the other way, desecrating civilization.
Scientific prowess gave the atomic bomb to a humanity that lacks the moral prowess to safely handle it. Never in the history of mankind has the urgency of resolving the cleavage between The True and The Good been so clear and consequential. We’ve been living with the situation for so long that we tend to tune out the epic scale of the risks; yet it looks like something straight out of fiction, a setup for the ultimate Greek tragedy: Mankind obtains the gift of fire from the gods, and then inadvertently uses it to burn down civilization.
The irony is that the key to moral prowess is contained in scientific prowess. The only way we could build an atomic bomb was that many scientists had practiced the virtue of rationality in at least a limited sphere. To use the key is to remove the arbitrary limitation. Unqualified rationality is the antithesis of and ultimate antidote for impunity.
“Close every door of escape, and the prisoners will forget they are in jail.”— Will Durant
Was it really necessary to Western civilization, and in every instance, to give the American Indian the choice between either joining it, or extermination? Did they really have no right of self-determination at all? Do we really want a world so uniform that it can’t possibly allow this ancient and radically different society to exist along with ours? Isn’t it tragic that we can never know what it is like to experience and learn from that different native culture, and that we spent more energy destroying them than gathering information about their history?
How on Earth can we justify our actions? Could we not have made a treaty with them that, in effect, required them as a society to learn how to respect our rights, while at the same time we respected theirs? Did we really need all the land for ourselves and for national parks?
Many would agree with my lament about the American Indian’s demise (something they can do nothing about), but then hypocritically deny the self-determination of their own contemporaries (something they can do something about).
At the root of this historic and unnecessary tragedy, one that has viciously subdued more than just the American Indian, is the megalomaniacal drive of Empire. Indeed, much of this essay is about the mental disease of megalomania – the megalomania of too many priests, politicians, doctors, professors, lawyers, MBAs, and on and on and down to the “good intentions” of sundry useful idiots. But in Empire, we find the “mother of all megalomania,” for it is through this megalomania that all the rest of the megalomania is made possible.
I have already demonstrated why federations of City-States are the only rationally justifiable form of governance. To the extent that history can prove anything, it has proven that City-States create the best type of human beings – witness the produce of Ancient Athens. But history also proved that City-States are not enough – we need honorable City-States to band together in federations, to protect themselves from menacing Empires (a few in Ancient Greece knew this and warned their fellow-citizens; but tragically, they were too partisan to heed the call for federation).
Many Founding Fathers of the United States of America knew this to a great degree, and attempted to create a system whereby locales had the right of self-determination and where the federation had the prerogative to protect everyone’s rights. The problem was that the self-determining locales were not local enough. But they didn’t need to abolish the States, they just needed to add the City-States. Individual rights are universal and should be enforced by all three levels (Federal/State/City-State), but only the City-State should make determinations concerning the specific character of their society.
But the arc of our country is from the virtuous federation to the vicious nation-state, where everyone is expected to conform not to the idea of respecting everyone’s individual rights, but to the arbitrary and multitudinous social constraints of the nationalist. Note that many of these social constraints would be fine to have in a given City-State (where, if you didn’t like them, you could always leave), but constitute tyranny at the national scale (where you cannot leave). This arc needs to be incrementally reversed, retaining the basic structure and function of federation, and permitting City-State locales to determine their own cultures and characters, and then either flourish or shrink, depending on how good these really are.
Thousands of experiments on how to best govern a locale, subject only to respecting the rights of others, would bring about a fantastic and ever-improving variety of social structures, catering to the individual tendencies and preferences of all of us. At this point, Western Civilization would have come full-circle, recapitulating the best that Ancient Greece had to offer, while having created the political structures that make it possible to sustain it.
The Life of Greece, by Will Durant.
Our Oriental Heritage, by Will Durant.
The Fountainhead, by Ayn Rand.
Special thanks to Rusty Scott, Kayla Fox, and Johnathan Hubbard for their generous feedback and corrections to Obstructions.
“An army of principles will penetrate where an army of soldiers cannot. It will succeed where diplomatic management would fail. It is neither the Rhine, the Channel, or the Ocean, that can arrest its progress. It will march on the horizon of the world, and it will conquer.”— Thomas Paine
The morally decent must ally. The way you know that someone is morally decent is that, in contrast with the parade of impunity going on all around them, they sincerely and rationally engage on any matter of principle, all the way down to fundamentals, and are always open in principle to the better argument. Although they are a minority, they are the natural leaders of humanity. When they stand united, they act like a lever of Archimedes, embodying the power of momentous cultural change.
With few exceptions (see below) REASON and LIBERTY and FOR INDIVIDUAL RIGHTS were conceived and written independently of the work of any other thinker. Most of the quotations used throughout my works were added retrospectively to the initial writing, with an eye toward underscoring the beauty and power of two minds seeing the same things in two unique ways.
Ayn Rand was my first real philosophical influence. It was her vision of the possibility of a civilization governed by the right use of reason that had inspired my own thinking. In this respect she herself (as I later discovered) had been inspired by and was carrying forward the tradition of philosophers such as Nietzsche, Schopenhauer, Hume, Aristotle, Plato, and Socrates. Where I agree with Rand’s epistemology, it is often the case that I am just agreeing with Aristotle – Rand had been the one that transmitted Aristotle to me. Where Rand has been philosophically original, I tend to disagree with her. Important areas of disagreement include her atavistic castigation of Hume’s epistemology, her unconsciously utilitarian metaethics, her particular theory of rights, and her incredibly negligent endorsement of patents. Generally I find that in the more abstract areas of her philosophy she is very careless, and unaware of critically relevant nuance.
As a young engineer in the mid-1990s I had the desire to understand how great thinkers had originated their works, and thus found myself reading portions of Isaac Newton’s Philosophiæ Naturalis Principia Mathematica. His Rules of Reasoning left a deep impression on me and influence every part of my philosophy.
Newton also inspired David Hume (and many other rational humanists, including John Locke, who influenced the American Founders). I read Hume’s Enquiry Concerning Human Understanding near the start of my work on this book and, as indicated herein, it greatly influenced Induction in a synergistic way with Newton’s prior influence. Most of Hume’s interpreters have bungled his epistemology; if you think you know what Hume stood for but haven’t carefully read his Enquiry for yourself, then you probably don’t know the first thing about Hume.
After the 2nd edition of this book was published, I began reading Will Durant (leading to the two references to him in Metaphysics). Durant has influenced my general sense of history and institution as reflected in Obstructions, including introducing me to the value and relevance of Francis Bacon’s Novum Organum to our own era. Although I disagree with Durant’s belittling of epistemology, I am thoroughly impressed with Durant, both as a grand thinker and as a human being. It is an indictment of our public education system that most Americans do not know who he is. If you want to be greatly entertained, educated, and inspired, start with his The Story of Philosophy and Philosophy and the Social Problem.
R&L Press; an imprint of Shayne Wissler.
Copyright © 2012-2017 Shayne Wissler.
For information about special discounts for bulk purchases or if you would like the author to speak at your event, contact the author at https://reasonandliberty.com.
Cover design by John Wissler.
Printed in the United States of America. First edition published 2013.