
Should Knowledge Be Free?

Whenever I read news stories about “activists” who demonstrate against nuclear weapons or nuclear power installations, or express outrage about toxic waste hazards, or allege the dangers to society and human life of research and development on frontiers such as biotechnology or ABC warfare (atomic, bacterial, and chemical), I cannot just dismiss it all as emotional imbalance. Instead a web of wonder starts to spin itself. This essay examines that web.

As an epigraph we might borrow the conclusion of Daniel Greenberg’s The Politics of Pure Science. He asks, “Is there such a thing as ‘socially responsible’ science? At one time most inhabitants of the pure science community would have doubted this. But in recent years, more and more scientists have come to question the comfortable notion that science is absolved of responsibility for what is done with their work.”

My own guess is that ethical neutralists still abound in scientific circles, even though there are some distinguished ones like Leo Szilard and Hans Bethe and Linus Pauling who have stoutly denied that the neutralist doctrine is valid. The grounds on which they reject it have never been fully enough spelled out.

The first thing to be said is that the issue is subtle and complex. At bottom the question is whether our knowledge is in any significant way subject to or should be conditioned by our value judgments. People in general, it seems, are perplexed about the relation between facts and values. So much so that the old saying fits, “If you aren’t confused you simply don’t understand the problem.” Two hundred years ago the canny Scot David Hume made it clear that what “is” does not determine what “ought to be,” and ever since philosophers have been arguing about what his point really means. What bearing do morality and ethics have, or should they have, on the things we know or suppose we know?

There are many scientists who still contend that facts and values are not only of different orders but poles apart; that what ought to be has no acceptable place in the business of learning what is. Along with their notion that everything that can be known ought to be known goes the notion that everybody should know it or at least be allowed to. Without trivializing the question, the inescapable logic of this is that if burglars for their own purposes want to know the principles of electronic security locks, physicists and chemists should publish abroad the “secrets” of it. But if some people have no right to know some kinds of knowledge, might it not also be that nobody ought to have some kinds of knowledge? (We will have to allow, of course, for knowledge that comes by chance—serendipity when it is welcome and bad when it is not).

A paradigm for the complexity of the problem comes out of my own personal experience. A few years ago I had a phone call from somebody in Washington. We can dispense with names. He asked me if it was true, as he had read in one of my books, that I knew practically nothing about technology or the mathematics of probabilities. Yes, I said, it was true. He then went on, “In that case we want you to sit on the panel for medical technologies in the Congressional Office of Technology Assessment.”

Maybe, just maybe, he thought, a humanist, a nonscientist, might have something worthwhile to say sometimes about technical decisions. OTA existed precisely because Congress felt research and innovations ought to be conditioned on the public interest, that facts and values should be taken into account together. As it turned out at OTA, ethical considerations arose almost constantly in trade-offs between values as one alternative was weighed against another; our reports were consciously shaped to guide Congress as it passed regulatory legislation in the public interest.

There is, however, one supereminent question, an ethical question that goes deeper than ordinary inquiries about this or that particular “break-through” of scientific research or the technological use to be based on it. This question is about the morality of research itself, as research.

Are there any limiting principles relevant when we seek to know things of which we are ignorant? Is there such a thing as forbidden knowledge, such a thing as undesirable inquiry? Or, to put it another way, is knowledge really an absolute good, subject to no limiting principles, subject to no values? May we say that we ought to know anything we can learn? There are some who say this, especially in the science-oriented pursuits.

I am not going to recite a long list of the sins of science and technology. The public, for which (or whom) I am an entirely unauthorized spokesman in this discussion, is perfectly familiar with, and I might add increasingly upset by, the stories of events such as the Soviet nuclear reactor’s breakdown in the power plant at Chernobyl, and the near thing earlier at Three Mile Island in Pennsylvania, the toxic waste scandals of the Valley of the Drums in Kentucky and Love Canal near Buffalo, Adirondack and Canadian lake fish dead of acid rain from coal burners in the Midwest, cattle wiped out in Michigan by PBB’s (polybrominated biphenyls), or the possibility (probability?) of the Greenhouse Effect and of holes in the ozone layer due to synthetic freon or CFCs (chlorofluorocarbons).

Anybody can scour the pages and cross-references of McGraw-Hill’s Yearbook of Science and Technology or The New York Times Index to the News and tie together a frightful story, making the kind of hazards our pioneer forebears had to live with seem like small potatoes indeed.

Science makes a difference to human beings through technology, sometimes in the form of unforeseen side effects, sometimes by deliberately ignoring them. The health of consumers and the general public has often been undermined by new technologies, allowing, for example, emissions to enter into the environment to pollute the soil, the water, and the air. All too often technology willfully minimizes the consequences of using scientific knowledge in ways which cause harm to human beings. This is the case in spite of so-called “hazard management” intended to avoid it.

Somebody will be sure to say at this point that if I have any blame to lay about these things it should be put at the door of industry or commerce, not technology’s door. Nonetheless, although business and technology are not coterminous they have become symbiotic functionally. Each lives by the other. Without subsidies from business, technology would starve to death, and in turn business without technology would be unable to go on multiplying the productivity of its human and machine labor.

There is, however, another symbiosis here. Just as technology and business are distinguishable but inseparable, so are science and technology. When marine biologists are funded by ocean farmers they are not just scientists; they are agents of the farmers. The same thing applies to industrial researchers, for example in pharmaceuticals. When scientists team up with technology are they not then in some measure responsible for the outcome? As parties (if not accomplices) do they not share whatever blame or moral culpability might attach to marketed products?

Out of a vast welter of things that might be said on the score of science and its practical partner technology there are two things which ethically speaking seem to be fundamental. One is the fact that science is so linked to technology that when the public interest is affected whatever gains or losses can be imputed to the one can be imputed in some measure to the other. The other thing is, as I see it, far more dangerous—the mistaken notion that science is “value free” and that scientific knowledge is adiaphorous, that is, morally neutral, and should not be subject to any limiting principles.

Here, then, is confrontation: the scientific ethos versus the humanistic ethos. I am myself by training geared to philosophy, especially moral philosophy or ethics. By definition and tradition the humanities are oriented to individual human beings, but this also entails their omnium-gatherum—the common interest. Surely social ethics (that is, value considerations on the public or “macroethical” scale) has every reason to be center stage. The private interest has to be weighed within the social context.

Before I go any further, then, I should plainly state the premise from which I approach questions of the public interest or, in a less politically flavored phrase, the social good. I am speaking not only of the humanistic disciplines but more significantly as a humanist, by confession. As the term indicates, this means that I put the interests of human beings, their well-being, in the prime position—in James Thurber’s “catbird seat.” Whether humanists find their ultimate support in a supernatural or a naturalistic worldview, human benefit is their first-order value. The interests of humankind are the summum bonum or highest good, and all other values are subtended.


By Their Consequences Ye Shall Know Them

Science and technology are tied together in two ways. First of all, technology is a creature of science because it is nourished by scientific knowledge, and the “higher” the technology becomes the more truly a creature of science it becomes. Second, science in its turn relies a great deal on technology for its apparatus, its research tools. Each would be crippled without the other; each in its own way is a symbiont. Imagine science without electron microscopes or microscopy without science. Speaking as a humanist, therefore, I want to insist that the distinction between science and technology is merely conceptual, not actual.

It follows from this that scientists cannot say, as many of them seem inclined to say, that the debits or disasters of technology are no concern of theirs. For example, faced with the contamination of wildlife by PCB’s (polychlorinated biphenyls), which led to the Toxic Substances Control Act of 1976, they might say, “Oh, well, that’s not the fault of science; blame the industrial technologists, or the corporations that hire them.” This line is almost naively defensive; psychologically it is a patent instance of displacement.

Spokesmen for science often express a belief that in the short or long run their successes and discoveries will always benefit mankind. This, surely, is only an assumption, and as a universal reassurance it ends up Pollyannaish and dubious. I suspect it is an assumption born of the familiar mechanism we call wishful thinking. To make such an assumption seems to me, indeed, quite unscientific. George Santayana remarked on this as a curious anomaly in his Life of Reason as long ago as 1905, the year I was born.

A part of the absolutist’s case for knowledge is that to question it is, after all, merely academic since we cannot suppress or reject knowledge anyway, even if we wanted to. The belief behind this is that the proliferation of learning is inevitable and unbounded, except for whatever limits to it are given per naturam, since man’s powers of cognition are at best only finite. For pessimists this belief leads to apocalyptic prophecies and scenarios of man’s doom, while for optimists it ends complacently in a Jules Verne expectation of endless technical victories.

I recall a scientific friend once waxing rhetorical and saying, “We are, for good or ill, caught up in extrapolations of what we know, and there is no stopping or turning back. We can say we won’t inquire across our present boundaries of knowledge at this point or that but there is always somebody who will, and even if we tried to shut the door on something we disliked we’d stumble across it anyway as we run down some other road. Knowledge is a seamless robe, a single tapestry. You either buy it all or hide cowering in the caves.”

My friend described his stance as Promethean. But was it? Too often people think that to be a “pure” scientist, searching for knowledge so basic and unknown that you hardly know exactly what it is you are looking for, is to be “Promethean”— willing to search out the as yet unknown—but let it come whatever it is. Actually, that kind of blind truth seeking is the exact opposite of Promethean.

Our habit is to fix mentally on the brave and compassionate Titan, who risked the fury of the gods by stealing fire from heaven to warm men and fuel their engines, but this habit is false. Even Webster’s is guilty of it. In truth the term “Promethean” is to be understood etymologically, not mythologically. It comes from two Greek words, pro, the prefix for first or beforehand, and mathein, the verb to learn or ponder. To be promethean means literally to act with forethought, to think ahead, to consider consequences. Prometheus got his name not because he was fearless but because he was prudent, because he looked ahead and decided what to do by the standard of human need rather than divine monopoly. This linguistic understanding of the term makes it the key to the virtue of prudence, a consideration of the consequences in moral judgments.

Twenty years ago there was a flurry of interest in learning how to extend the human life span by some such means as genetics or biochemistry. Sir George Pickering, Regius Professor of Medicine in Oxford, decided the prospect was horrifying because, unlike other body cells, brain cells cannot regenerate and the “longevitists” would therefore end up with a great population of young bodies and senile minds. That was consequential thinking. But Lord Brain, Pickering’s neurologist friend, condemned reasoning of this kind on the ground that nobody can see the whole future; we cannot be certain what will follow if we engage in research projects, nor on the other hand can we be certain what will follow if we do not do the research. Ergo, find out how to lengthen human lives and “damn all” (my phrase, not his) to the consequences. Explicitly, Brain said man “certainly has not the capacity to decide that some particular line of scientific research ought to be abandoned because of its supposed evil consequences for mankind.” This, surely, is a devout confession of faith, of piety, of whole-souled belief, a priori, without any testable grounding in empirical data.

Ethics as a discipline deals fundamentally with two problems. One is values (“What do I or we want?”) and the other is obligation (“What ought I or we to do?”). There are two ways to decide what we ought to do, to judge whether actions are right or wrong. One way is to hold that goodness and badness are intrinsic in things and actions, that they are not merely variably or relatively acceptable depending on circumstances. We have all heard people say, “Don’t give me any ifs, ands, or buts, or maybes; a thing is either right or it isn’t right.” There are those who on this model say that scientific knowledge is good in itself, or that a technological innovation based on it is right because the product is in and of itself valuable—not just relative to other values or by reason of its effects. On this formalistic ground sterilization, to take an illustration at random from sex ethics, is said by some to be wrong, categorically wrong, regardless of any clinical reasons for it in a particular case.

Over against this notion of intrinsic value is the consequential kind of ethics. Its canon is that values or moral qualities are predicates, not properties. Whether a thing is good or bad, or an act is right or wrong, would depend on the foreseeable consequences. This ethics requires moral agents to make a kind of “impact” study before they act.

Appraising consequences usually comes down to a relativistic calculation of value pluses and minuses in outcomes; very seldom is there a simple and exclusive cause-and-effect choice of means and ends. On this relative basis sterilization could be judged consequentially to be right, morally acceptable, at least in some cases.

The consequentialist’s critical guideline essentially is non nocere, do no harm—to borrow the Hippocratic principle of medical ethics down through the ages. It means, of course, that we ought not to harm human beings, except when it is done as the lesser evil in a trade-off (as in surgery).

The thrust of all this is evident enough. In the eyes of a great many sophisticated and concerned members of the public, science and technology are tied together by their consequences and share in each other’s praise or blame. Furthermore, since science is the source of technology’s major innovations it is science that bears the primary responsibility for the whole process. In biotechnology, for example, science is obviously the vera causa.

To use the current phrase, “This is where I come from,” I should say candidly that I am a consequentialist and a humanist. I evaluate consequences by whether they yield human benefits. This philosophical profile should help readers to correct for whatever they think is my bias. Let me not be like the ship’s captain who cried to Lord Nelson at the battle of Trafalgar, “My lord, I have no prejudices, but God knows I hate a Frenchman.”

The ethical obligation to weigh consequences—what I have just referred to as a “calculation of pluses and minuses in outcomes”—places on conscientious persons the serious burden of relativities. Once moral responsibility ceases to be a simplistic adherence to prefabricated rules based on the notion of intrinsic right and wrong, turning instead to consequences, the ethical problem becomes one of measurement—of having to quantify qualities, so to speak, in order to weigh them against each other. Mathematicians will probably perk up their ears at this point. Humanists, to be sure, need numeracy as well as literacy, to use a phrase coined by Garrett Hardin in his Filters against Folly.

Historians often remind us that economics as a discipline came out of what 17th-century people such as William Petty and Dudley North called “political arithmetick.” They called it that because they were faced with the perennial problem of distributive justice, the allocation of social resources, and they realized they had to have some hard numbers to work with when they were ranking and ordering things according to their consensual values. (In their time and place they were not compelled to deal with the problems of cultural pluralism and differing values as we are in our modern social potpourri—an open society which forces modern Americans to twist and turn around almost every policy problem.)

In our century economists have in much the same spirit come to see that institutional analysis is not enough—that the problem of choosing between competing values, which is of course the central function of both ethics and economics, cannot be carried out in a mass society without more exact quantitative measurements. Thus “econometrics” was born from the marriage of economics and statistics.

This turning to measurement—not only in economics and politics but in ethics as well—is aimed at a careful calculation of relative values and claims in an effort to find a fair balance. In the classical language of moral philosophy and jurisprudence we would say it is aimed at distributive justice. I have coined the term “ethimetrics” for the task of consequential reasoning combined with measured values, as the way to approach problems of justice and relativity in our modern rapid transport, mass society of science and technology.

Up to this point what has been said can be summed up in three propositions. 1. Science and technology are partners in a process which can lead to evil as well as good consequences for human beings. 2. Science is the primary source of both the triumphs and tragedies of technology. 3. To weigh or evaluate ethically the consequences of the science-technology partnership, on a macro as well as a micro scale, we need the help of mathematics to serve justice.

It follows from this that there is some merit in the charges often brought against science in the public forum, especially that science is indifferent to its applications; that it seems sometimes to prostitute itself to industry and government in order to get funds, and—finally and at bottom—that it is dangerous because its first loyalty is to knowledge, not to people.

This last complaint, that science values knowledge more than human beings, leads me to the second of the two problems we ought to examine.


The Naïveté of Scientific Dogma

One evening a few years ago I heard our charming “biology watcher,” Lewis Thomas, say something as a visiting lecturer at Rice University in Houston that made my hackles rise. He said in so many words that scientists are not morally responsible for the uses made of their discoveries. I recalled that he had also declared that we ought never to say there are things which human beings can get along happily without knowing. In the earlier statement I had supposed he was only decrying a complacent indifference to ignorance. (An ignoramus, after all, is not just ignorant but loves ignorance.)

But lo and behold, I now found in Houston that Thomas meant there is nothing we ought not to know. In answer to a question from the audience he drove his point home by saying, “I know of no scientific discovery for which we have any cause for regret, not even the atom bomb.” This could only mean, plainly and affirmatively stated, that we ought to seek any and all knowledge regardless of the consequences. As a value assertion this was both moralistic and absolutistic. It was certainly not a scientific statement; it was an act of faith, an auto-da-fé, and I was not inclined to take it lying down, nor were many others in the humanities present there that night.

Dr. Thomas had just at that time written that “since we humans are curious, knowledge seekers, it would be wrong to suppose that the human mind can rise above its ignorance by simply asserting there are things it has no need to know.” Then he added, “This, to my way of thinking, is the real hubris, and it carries dangers for us all.”

His position was a simple one, not to say oversimple. He called a careful scrutiny of the consequences of seeking new knowledge “hubris” (the Greek word for self-destructive pride), showing thus that he failed quite simple-mindedly to see the point of the Greek myths of Pandora’s box and Icarus’ collapse when his new wings were melted in the sun.

Scientists and science-oriented people should back off and take a hard look at the notion—a fairly commonly held one— that “there is no such thing as forbidden knowledge” or, in another kind of rhetoric, that knowledge is subject to no limiting principles (except, of course, that it be correct). What this amounts to is ethical absolutism; the claim, that is, that knowledge is a value supervenient and dominant over all others; that knowledge is in and of itself intrinsically good and desirable, self-validating; that like “art for art’s sake” and “good for good’s sake,” so “knowledge for knowledge’s sake” is a proposition needing no support.

The first thing to be said about this, obviously, is that it violates the consequential principle. Those who defend it will have to contend that all knowledge, without exception, has good consequences—that even if its immediate consequences are bad its remote result will, on the principle of proportionate good, balance out as desirable. Any such contention as this is so broad, and so arbitrarily ignores the force of variables, that it must be treated as an a priori assumption which violates the scientific method itself, as well as all principles of cognition based in science. To absolutize knowledge, and thus to pour holy water on it, is both unethical and illogical.

When Dr. Thomas spoke of the atom bomb he meant to suggest, I think, that nuclear power may well prove to be a blessing in the energy shortage now confronting our technological society, but beyond that he was presumably also implying either that the “balance of terror” in the possession of thermonuclear weaponry by rival superpowers will remain in statu quo, or that in any case if first and second strike attacks were unleashed the human and material destruction would stop satisfactorily short of being catastrophic. This is the presumption behind Ronald Reagan’s strategic defense initiative (“Star Wars”). Yet many top people in the scientific community refuse to accept it no matter what hard numbers its protagonists use to establish when the “catastrophe” range would be reached.

Let’s not get bogged down, however, in the problematic issues of nuclear warfare. For illustration we can look instead at the angry debate a few years ago over the risks, and hence the wisdom, of continuing research to master recombinant DNA—gene splicing and cloning.

It was Paul Berg and his fellow microbiologists who called for a moratorium on their own work, and they did it as scientists because they were consequentialists and humane. Even with careful peer review, could such biological research be carried out safely, without endangering the public interest? That was the question. It was the scientists themselves who called a halt, not the hysterics who were crying havoc if they went ahead. Although on their own part the biologists doubted that the fear of recombining DNA was realistic, if it could be shown that there was even a probability of disaster they were prepared to stop the research, stop seeking such knowledge. Their moratorium was by no means merely a dishonest pose, as some “anti-scientism” agitators charged.

In recent years I have often quoted DeWitt Stetten’s counsel, given when the flap about DNA flared up. He pointed out that when risk-benefit questions are problematic, as they often are, there are two eminently sensible indicators, without which a ban on research would not be justified. He borrowed his criteria from judicial reasoning. The first was that there must be a clear danger, and the second was that the danger must be present, not a merely speculative forecast of the sort used in “slippery slope” objections to new knowledge.

Recognizing, then, not only the ambiguities but also the ambivalences of so much human knowledge, actual or potential, are there any forms of knowledge we can rule out as outside the pale? Let me just mention three. Seeking to know how to destroy the biosphere would be wrong. Learning how to kill human beings by an uncontrollable means would be wrong. Seeking to know how to induce hatred of people, indiscriminate and irreversible hatred (that is, sociopathy or misanthropy), would be wrong. Thoughtful people could easily enough make the list longer.

This may at first sight (prima facie) seem to contradict what I have said elsewhere in expounding the theory of situation ethics. I have of course constantly insisted that there is no human act which is always wrong or without the possibility of being justified in conceivable circumstances, however rarely or atypically this might be the case. But I have also always contended that “unloving” or inhumane acts are immoral—unless in an unusual or untoward situation where the predominant consequences would prove to be of a loving or caring nature. The three kinds of research I have just noted as utterly culpable and outside the forum of conscience are nevertheless consistent with situation ethics theory: they fall under universal prohibitions precisely because by their very nature they exclude human benefit.

Here we may have a key of vital importance. What is forbidden or forbiddable is not knowledge per se, not knowledge as such. Data or facts are in some legitimate sense “value free.” But scientists—a word which literally means seekers of knowledge—are human beings and therefore moral agents. As human beings they may not act without value considerations. It is a very simplistic kind of reasoning, and illogical, to suppose as some scientists do that since data or empirical facts are morally neutral scientists may be, too, eo ipso.

What we have been saying is that properly there is no such thing as knowledge for its own sake. The slogan ars gratia artis (art for art’s sake) has always run the gauntlet among art critics, essentially because they are humanists and as such reject the notion that anything, including art, is good for its own sake. Things are good for whatever value they have for human beings, even if for only one human being. The notion of “good” cannot have any meaning apart from sentient beings. Things and knowledge have value solely because somebody values them; they are not self-existent. They are not intrinsically valuable. They do not themselves assign their value to themselves. They are worth something only because they are worth something to somebody, some sentient being.

Peter Drucker, the doyen of industrial management consultants, has put it very bluntly. “It is becoming abundantly clear,” he says, “that the value of knowledge is no more an absolute good in itself than is anything else.” Drucker’s assertion reminds us that what we are saying here may be the first time since Socrates established knowledge as the foundation of the Western world view that it has been so directly challenged; it has always been taken for granted. We have found the watershed. Which side do we stand on? The human side or the nonhuman side? If you are on the humanistic side you have to ask of all proposed inquiries, investigations, and research, “How will it help people, or at least not hurt them?”

We are not talking about censorship. The right to know what is already known seems on the whole well provided for in legislation and statute laws providing freedom of information, due regard for reasonable secrets, libel laws, and the like. No, we are looking at the question whether in particular matters what is not yet known should be left unknown ab initio, as far as choice permits.

The DNA moratorium and controversy was a dramatic episode in recent science history, and it provided us with a paradigm or model case. It showed the readiness of top-flight scientists to deny by their own initiative that knowledge is an absolute good. They refused to act first and think later. As responsible moral agents they looked carefully at the probable consequences before they proceeded. Knowledge, they said in effect, is not sacred, not a mystical holy of holies, untouchable by taboo and thus outside the realm of human choice and control.

Now at last I have used the language appropriate to any claim that knowledge is good in itself and needs no moral justification. Theologians and spokesmen for religion are always telling us in circular fashion that God is good because he is good; that God need not “justify his ways to men” and we therefore should not “reason with God” as Job in the Bible did. This posture works in religion, it would seem, but scientifically it is absurd. Philosophically speaking, scientists cannot ignore and violate the frontier between science and religion.

It is often said that religious beliefs have to stand before the bar of ethics, meaning that human beings will not believe in gods that fall short of their developing moral ideals and standards. Such gods, being outgrown, have always been cast aside. If knowledge is absolutized or divinized or sacralized, treated as godlike and beyond human judgment, it becomes a religious entity; in that case it is no longer open to science or subject to scientific analysis because it is neither verifiable nor falsifiable.

Science and technology have to stand before the bar of ethics, too. They are not self-authenticated; they are not their own judges and juries, nor is the knowledge they want or discover or build upon exempt from critical ethical examination. Those of us in the humanities sector too often find that individual scientists seem to speak and deport themselves in a very priestly style, a sort of New Priesthood, as when Lewis Thomas or Russell Brain says it is wrong to subject knowledge to any moral restraints. All attempts to absolutize the relative and finite are demonic, to use a classical word for it. Trying to lift knowledge “beyond good and evil,” as religion does gods, is entirely inappropriate. When science turns religious, it turns sour.


A Pro-Science Postscript

It might be objected, perhaps, that I have committed myself to a fatal contradiction. On the one hand I attack scientists for giving knowledge an absolute value—knowledge for its own sake—while on the other hand, as a humanist, I raise up another absolute—human well-being for its own sake. If the one cannot be absolutized, it might be asked, why should the other?

Let me answer this complaint. If I were to say that all human beings are sacrosanct and that the interests of all human beings, without exception, should always prevail, this would put me in the position of the pot calling the kettle black. In point of fact, I do not universalize human well-being, I do not claim it for everybody all of the time, and this is for the simple reason that to do so would land me in many impossible multilemmas, since human interests often cut across one another.

Rationally we can at most only seek to optimize human well-being as the good. At best we can only hope for a finite success in fairly distributing benefits. Sometimes we have to choose some people’s interests rather than others’—for example, by adhering to the democratic principle that if push comes to shove the interest of the many should prevail over the few. This is why mathematics or measurement is so important in distributive justice and so essential to just legislation. But however that might be, the ethical task of choosing between competing values and competing claimants is a far cry from the indiscriminate absolutism of the knowledge seekers who say, “There is no such thing as forbidden knowledge.”

We could not object to their having a “fix”—a first-order value in relation to which everything else, all other values, are relative. My objection is that their fix is the wrong one. And why wrong? How can I say it so positively? For a very simple and sufficient reason. It is that the survival of our species, Homo sapiens, is the necessary and indispensable prerequisite to the whole human enterprise, including both the sciences and humanities. Science, society, and ethics, along with man himself—in short, “the whole works”— would be cancelled out if the species or its biosphere were to be lost. In the last analysis the case for humanism is pragmatic and total.

Some defenders of “whole hog” research will be sure to contend that even though a given study or protocol would result predictably in a balance of bad over good, we ought to endorse it anyway because we cannot know or be sure that unforeseeable good consequences may not shift the balance and yield, in the final event, a predominantly good outcome overall. This, however, is not scientific reasoning; it is religious, an act of faith.

Such appeals to ignorance, for such they are, are a “spittin’ image” of the old religious argument that the all-powerful God’s ways are mysterious (remember William Cowper’s hymn “Light Shining out of Darkness”: “God moves in a mysterious way his wonders to perform”) so that what seems to be his indifference to human suffering turns out in the end to be protective love. In this kind of rhetoric terrible tragedies, cosmic or natural disasters, like epidemics and earthquakes, and heartbreaking cases of innocent suffering and death due to accidents or disease, are really “all to the good,” and we should know it if only we could share in God’s infinite knowledge. So the line runs.

Just as the clergy “comfort” victims of misfortune by saying that it is only because of our limited human knowledge that we think such things as cancer or an earthquake or a fatal automobile accident are evil, so some scientists try to reassure us that all new knowledge is really good and we doubt it only because of our finite understanding and lack of faith— that knowledge which in our limited perspective we think is dangerous will in fact bring benefit. As such scientists listen to humanistic objections or cautions they feel impatient and irritated, reminding us of the words of the unknown ironist: “People who think they know it all are very irritating to those who do.”

Scientific piety is often accompanied by a less attractive appeal to the idea that “if you can’t lick ’em, join ’em.” “Don’t suppress any knowledge—somebody else will be sure to find it out sooner or later.” Ethically regarded, this is equivalent to “somebody else will do it if you don’t,” which is often said if you protest against charging higher prices than cost accounting indicates, or against cheating on tax returns when the language of the law is imprecise, or if you urge retailers to forgo legally high prices for the sake of the consumers. Teen-agers in high school say, “Why not cheat on exams? Everybody else does.” The heart of the matter is, “If you don’t do it, somebody else will,” and as a companion buttress, “Since somebody is going to do it, I might as well do it myself.”

Looked at from another angle, to treat knowledge this way is to make it a self-sufficient Kantian thing-in-itself (Ding an sich)—a piece of sheer rationalism. It insists on reason to the exclusion of feeling. Reason is indeed essential to creative living and a sine qua non of humanness, but reason is not the whole of our understanding: it has its limitations. To be “coldly rational” is to be naïve. It falsifies the range and depth of human experience.

It is true, I suppose, that in a religious world view the human race could survive its extinction. This massive Armageddon would be what theologians call the “eschatological event,” the last and final state of things, and on that faith ground (fideism) we could, presumably, escape the “tyranny of survival,” as some have called it. We could stop worrying about our future, or our children’s, or human destiny in the large. This religious speculation, if it were to serve as one’s “bottom line” or the ultimate basis of a value system, has to be believed—believed, that is, in the genuine sense that those who profess it would be ready to act on it—as the Donatists did when they removed themselves from this life for the sake of life in heaven.

Only this eschatological outlook could make sense of the moral claim that knowledge is good regardless of the consequences. Biologically it would not be realistic; it would not make sense scientifically. Its only real appeal is to a handful of “true believers” who still wait hungrily and literally for the end of the world.

To make an end, I reiterate my thesis: the humanist subordinates everything, including science and knowledge, to the test of human benefit. My Parthian shot, therefore, is that to assert that knowledge is always good is immoral as well as unscientific. In the Western ethical heritage prudence has always been and still is a cardinal virtue.



The advocacy of free education, with no tuition fees, may appear at first sight irrational and ideologically driven – especially in the age of the ‘neoliberal consensus’, where market forces are revered and the prosperity of private capital and financiers is sanctified.

There is no question that the advocacy of free education is ideologically driven, insofar as it holds that access to higher education should not be contingent on parental income, background, ethnicity, or gender, and insofar as it sees free education as an excellent pathway to reducing inequality in the long term and fostering a more prosperous, cohesive society. However, while principles of justice and equality should indeed guide policy making, neither these principles nor the popular slogan “education is a basic right” automatically implies publicly funded tuition. This short essay therefore offers a defence of free education predicated on economic arguments.

The (im)practicality of loans.

Across advanced capitalist western economies, student loans are becoming increasingly common. According to data from the Department for Education in the UK, 48 percent of pupils from state schools go to university, as opposed to 90 percent from private schools (which educate 7 percent of the population). This is the premise on which the argument in favour of tuition fees rests: tax-funded higher education is regressive (the poor subsidise the education of the rich), and there are huge private financial benefits from a degree, so it makes sense for individuals to bear a significant part of the cost of their education.

While there may be a certain theoretical basis for tuition fees, in practice financial experts anticipate that 73 percent of UK students will never fully repay their loans, even including those who secure work in respectable professions (Garner, 2014). Countries with similar systems, such as the US and Australia, report similar problems (Barr, 2012, p. 318). Unlike a mortgage, a student loan has no physical collateral: there is no house for the bank to repossess, no asset to sell off to cover the debt. There is also an adverse selection problem: it is impossible to know in advance whether a student borrower will get and keep a job that can service the loan after university. With low growth and increasing job insecurity, the prospects for students repaying their debts are bleak. This raises the question of whether tuition fees even help ‘to balance the budget’ of governments committed to austerity.

Tuition as a problem in itself.

Even operating under the assumptions that the loan system works and that tuition fees have no impact on social mobility, better information for the least advantaged does not address the complexity of the issue. Firstly, there are sizeable opportunity costs in spending three or more years in education for people from low-income backgrounds, as these are years of forgone earnings. With student grants recently withdrawn and moves towards lifting the tuition fee cap in the UK, information obstacles are only a small part of what keeps poor families from accessing education (Streeting, 2015; Shaw, 2015).

Secondly, research has found that the poor tend to be more debt averse, meaning they are less likely to buy into the loan regime. Debt has also been found to be detrimental to the mental health and well-being of students, particularly those from low-income backgrounds (Abdallah, 2010). With the world still painfully dealing with the effects of the 2008 economic crisis, and with most financial crises revolving around bubbles of unsustainable debt, debt aversion is certainly not something we should be seeking to correct.

Finally, psychologists have found that intrinsically motivated activities (doing something because it fulfils you as an individual) lead to better outcomes than extrinsically motivated ones (done to obtain financial remuneration or status) (Abdallah, 2010). The commodification of education through tuition threatens not only people’s intrinsic motivations, as they put jobs in finance and ‘well paid’ careers above their real interests and passions, but the purpose of education as a whole. Martin Luther King rightly argued that, more than serving an instrumental purpose, education should be the medium through which we grow morally and clarify our central convictions as human beings (Luther King Jr., 1947).

Free education is of course not a panacea for equality of opportunity and a fairer society, and in isolation is still highly regressive. It has to be part of a holistic drive to address inequalities across the social system and early stages of the education system (Lawlor et al., 2011, p.6).


Rather than shaping the education system to look increasingly like a market, we should shape it to become more accessible, democratic, and fulfilling for people of all incomes, races, and backgrounds. A free, meaningful education is within our reach; we only have to be prepared to fight for it.




I believe that higher education is a right. I believe that it produces positive externalities for society as a whole and that the ideal higher education funding system is one of equality of opportunity. This is why I believe higher education should not be free.

This article addresses three common misconceptions about the rationale for free higher education and what it implies: 1) the idea that higher education can be free; 2) the notion that tuition fees prevent low-income students from accessing tertiary education; 3) the thought that, under the current HE funding system, graduates will crumble under unmanageable amounts of debt.

Addressing common myths about free HE

HE can be free

What does free higher education mean? Is it a system in which we ask academics and professional staff to work for free? Of course not. Free higher education refers to the taxpayer taking the full burden of the costs of higher education. In such a funding scheme, whether one attends university or not, one subsidises higher education through taxes. The question we need to ask ourselves is: who should be paying for higher education?

The benefits of HE are twofold: there is a substantial private benefit for the student attending HE, and a social benefit, which economists call a ‘positive externality’, from having a highly educated population. The net present value of the private benefit – that is, the ‘discounted’ long-term economic advantage of having a degree instead of a secondary education only, minus the associated costs – is estimated in the UK at £168,000 for men and £252,000 for women (Walker and Zhu, 2013). Society at large benefits from positive externalities through extra tax revenue (£89,000 extra per individual who gets a university degree) (BIS, 2013).
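The discounting behind such net-present-value figures can be sketched in a few lines. This is only an illustrative calculation, not Walker and Zhu’s actual model; the flat £6,000 annual graduate earnings premium, 40-year career, and 3.5 percent discount rate are hypothetical round numbers chosen for the example.

```python
# Illustrative net-present-value sketch (hypothetical figures, not the
# Walker and Zhu model): discount a stream of future yearly benefits
# back to today's money.

def npv(yearly_benefits, discount_rate=0.035):
    """Sum of benefits received in years 1, 2, ..., each discounted."""
    return sum(
        benefit / (1 + discount_rate) ** year
        for year, benefit in enumerate(yearly_benefits, start=1)
    )

# A flat 6,000-pound graduate earnings premium over a 40-year career:
premium_stream = [6_000] * 40
print(f"NPV: £{npv(premium_stream):,.0f}")
```

Because money received decades from now is worth less than money today, the £240,000 of undiscounted premiums in this toy example shrinks to roughly £128,000 once discounted – which is why headline lifetime-benefit figures depend heavily on the discount rate chosen.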

This should lead debates on HE funding to focus on the fairness and efficiency of different cost-sharing schemes between the state and the student.

Free HE would help students from low-income backgrounds

In 2010, the Sutton Trust (2010) reported that 16 percent of pupils who claim free school meals entered higher education, compared to 96 percent of independent school pupils. Jake Anders (2012) found that children born in the top income quintile are three times more likely to attend university than those born in the bottom quintile. In other words, the better off your family, the greater the likelihood that you will attend university. This means that, in a system in which higher education was fully tax-funded, the 84 percent of children claiming free school meals who do not enter university would essentially be subsidising the cost of the higher education of children from wealthier families. Economists call such a system ‘regressive’, and indeed it has been documented that before the introduction of fees in the UK in 1998 the HE system was regressive and did little to help those at the bottom of society (Mishkin and Straub, 2014).

A related point on this matter is that there is no evidence that the level of tuition fees leads to lower access to HE. In the past ten years the entry rate for the most disadvantaged students has increased by 72%, and the university entry ratio between the most advantaged relative to most disadvantaged students declined from 4.37 to 2.77 (Carr, 2014). Interestingly, Filipa Sá has found in her recent research that despite the tripling of fees in the UK, students from ethnic minorities “were actually less affected by the increase,” and that “there is no evidence that attendance has decreased more among students from local authorities with lower rates of participation in higher education” (Sá, 2015).

The burden of debt is too great for students to shoulder and is responsible for lower application rates

As we saw previously, the net private benefit to students is substantially larger than the amount of debt they incur. These are average figures, however, meaning that some students will end up with a better return than others. Ingeniously, under the current HE funding system, those students who do not reap much monetary advantage from HE will not find themselves in trouble because they cannot pay back their debt.

Picture a typical student on a three-year undergraduate programme costing £9,000 per year. Once she has graduated, her debt load will be around £35,000–£40,000 (Boursnell, 2015). At first sight you might think the student carries an unmanageable burden. However, our student repays, each year, only 9 percent of what she earns above £21,000, which means that if she earns less than the £21,000 threshold she repays nothing that year. The median graduate is expected by the IFS to repay, in net present terms, £26,731 (Chowdry et al., 2012, Table 4). This number is surely not insignificant, but it should be compared with the benefits the student gets from having a degree. Once we do so, it does not appear unreasonable.
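The income-contingent rule just described is simple enough to sketch directly. A minimal illustration, assuming the post-2012 English parameters quoted above (9 percent of income above a £21,000 threshold):

```python
# Sketch of the income-contingent repayment rule described above:
# graduates repay 9% of annual income above a 21,000-pound threshold,
# and nothing at all below it.

REPAYMENT_THRESHOLD = 21_000  # annual income floor (pounds)
REPAYMENT_RATE = 0.09         # marginal rate on income above the floor

def annual_repayment(income: float) -> float:
    """Repayment due for one year at the given annual income."""
    return max(0.0, income - REPAYMENT_THRESHOLD) * REPAYMENT_RATE

for income in (18_000, 21_000, 30_000, 45_000):
    print(f"income £{income:,} -> repay £{annual_repayment(income):,.0f}")
```

Note that the repayment depends only on income, not on the size of the debt: a graduate earning £30,000 repays £810 that year whether she owes £5,000 or £40,000, which is why the scheme behaves more like a graduate tax than a conventional loan.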


References – YES 

Abdallah, S. (2010) “Government in terrible debt? Let’s pass it on to the students.” Available: http://www.neweconomics.org/blog/entry/government-in-terrible-debt-lets-pass-it-on-to-the-students [Accessed 5th September, 2015]

Barr, N. A. (2012). Economics of the Welfare State. Oxford: Oxford University Press.

Department for Education (2013), “Statistical First Release, Destinations of Key Stage 4 and Key Stage 5 Pupils, 2010/11” Available: https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/207749/Main_text_-_SFR19_2013.pdf
[Accessed 5th September, 2015]

Garner, R. (2014), “Tuition fees: Three quarters of students won’t be able to pay off their debt.” Available: http://www.independent.co.uk/news/education/higher/tuition-fees-three-quarters-of-students-wont-be-able-to-pay-off-their-debt-9866446.html [Accessed 5th September, 2015]

Luther King Jr, M. (1947) “The Purpose of Education.” Available: http://kingencyclopedia.stanford.edu/encyclopedia/documentsentry/doc_470200_000/ [Accessed 5th September, 2015]

Shaw, C. (2015) “Maintenance grants scrapped and tuition fees to increase – your reaction.” Available: http://www.theguardian.com/higher-education-network/2015/jul/08/maintenance-grants-scrapped-and-tuition-fees-to-increase-your-reaction-budget-2015 [Accessed 5th September, 2015]

Stone, J. (2015) “Jeremy Corbyn: Scrap tuition fees and give students grants again, says Labour leadership contender.” Available: http://www.independent.co.uk/news/uk/politics/scrap-tuition-fees-and-give-students-grants-again-says-labours-jeremy-corbyn-10392769.html [Accessed 5th September, 2015]

Streeting, W. (2015) “Buy now, pay later as student grants become loans.” Available: https://www.timeshighereducation.co.uk/opinion/student-grants-become-loans-buy-now-pay-later [Accessed 5th September, 2015]

Webb, R., et al (2008) AS Level Sociology AQA. Brentwood: Napier Press


References – NO

Department For Business, Innovation And Skills (2013), “The Impact of University Degrees on the Lifecycle of Earnings: Some Further Analysis”, BIS Research Paper 102 (2013): Government of the United Kingdom, Aug. 2013. Web.

Department For Business, Innovation And Skills (2013), “The Benefits of Higher Education Participation for Individuals and Society: Key Findings and Reports ‘The Quadrants’”, BIS Research Paper 146 (2013): Government of the United Kingdom. Web.

Higher Education Academy (2015), “The HEPI-HEA 2015 Student Academic Experience Survey.” HEPI-HEA. Web.

Anders, Jake (2012), “What’s the Link between Household Income and Going to University?”, DoQSS Working Paper 12-01 (2012), Mar. 2012. Web.

Mishkin, Elizabeth, and John Straub (2014), “The Redistributive Effects of British Subsidies to Higher Education”, Social Policy and Society 13.3: 337–55. Cambridge University Press. Web.

Carr, Daniel (2014), “Higher Fees Don’t Mean Fewer Working Class Students – Look at the UK for Proof”, The Guardian, 27 May 2014. Web.

Sá, Filipa (2015), “What Happens to Student Applications When University Tuition Fees Go Up?”, IZA Newsroom, 15 May 2015. Web.

Boursnell, Peter (2015), “How Much Debt Will You Actually Get into by Going to University?”, Which? University, 17 Feb. 2015. Web.

Chowdry, Haroon, and Lorraine Dearden (2012), “Higher Education Funding Reforms in England”, Fiscal Studies 33.2 (2012): 211–36. May 2012. Web.


is a 3rd year BSc Political Economy student at King’s College London. He is also a political campaigner, a libertarian socialist, and a students’ union representative.


graduated with a BA in International Politics from King’s College London in 2015 and is now pursuing a Masters in Human Resources at Birkbeck University. He is a former President of King’s Think Tank.


