Tuesday, July 26, 2016

Three Human Dimensions of Conceptualization

The limits of your language are the limits of your world. -- Wittgenstein

Dimensions of Conceptual Definition: emotive, descriptive and communal. There are three often overlooked dimensions of importance to consider when attempting to define a concept used by humans. They are:

A. The Emotive-Evaluative Dimension. This focuses on the expression of feeling or evaluation, or the lack of such, e.g. creepy, cute, three pounds, light, 6 feet. These terms range from expressive to dispassionate, with either positive (celebratory) or negative (deprecatory) connotation. (For more on celebratory/deprecatory comparisons, see Philosophy, Race and Language.)


B. The Descriptive-Uncertainty Dimension. This focuses on the extent to which judgment is based on presumption as opposed to demonstration, e.g. honest, happy, graduate, felon, equilateral, topology. These terms range from defeasible (also characterized as presumptive, ascriptive or concessive) to summative (invoking the logical sufficiency of necessary conditions).

C. The Personal-Communal Dimension. This focuses on the breadth of community invoked (not necessarily actual) by the judgment, e.g. cute, annoying, criminal. These terms range from idiosyncratic to social.

It is important to note that these dimensions are not merely embellishments on the distinction in common parlance between “subjective” and “objective.” Daston and Galison in their book, Objectivity, present much evidence that this distinction has been historically problematic, particularly in the sciences. (For more on “objectivity,” see comments and references at Can Criminal or Immoral Behavior Be Dealt With Objectively?)

Unexamined, a vague concept of objectivity probably best serves the interests of those engaged in the academic, professional or commercial competition for status or markets. (For more on such considerations, see Tony Becher (2001) Academic Tribes and Territories: Intellectual Enquiry and the Cultures of Discipline.)

Note well that a disagreement among humans on how to describe something need not require a difference in sensory perceptions among them. A group of people could disagree, and often do, on whether a tarantula is creepy or cute, even though they all agree that it is black and hairy.

Those who find it creepy might disagree with those who think it cute on the conditions under which the tarantula would also be dangerous. Very individual fears (or indifferences) may not be shared among members of the same community. This is common experience.

Descriptive uncertainty, on the other hand, is often overlooked or assumed to indicate a (possibly mistaken) perceptual difference underlying a disagreement. What is ignored is the judgment of uncertainty that may be quite independent of agreed-on perceived characteristics.

For some problematic examples consider the uncertainty involved in deciding that:
1. John, a convicted embezzler just released from prison, can be treated as an honest man;

2. John, having memorized X number (3 or 3000?) of Spanish phrases, can speak Spanish;

3. John, having acquired some number of certificates and diplomas, is educated; and

4. John, playing above average basketball in his first season, is a star player.
Each of these claims, 1 through 4, is far more vulnerable to rejection (defeasible), and would require far more consensus on evidence than the following:
5. John is six feet tall;

6. John has low blood pressure;

7. John throws the ball well;

8. John speaks some Spanish.
Why? Recognized standards of practice or broad familiarity reduce uncertainty in judgment. But uncertainty in judgment, even if supported by some measure of mathematical uncertainty, is a psychological state evidenced by our (un)willingness to pursue action. Its connection to the "real world" is at best a "psychological" one. (See Cause and Effect: essential belief, misapprehension, moral hazard?)

Isn’t this just obvious, just old hat? Apparently not. In the frantic pursuit of the imagined treasures of creating robotic homunculi, defeasible concepts of mental capacity are reduced to a fool’s gold list of sensory inputs that, it is hoped, will produce sufficient conditions for the characteristics of human intelligence. The three dimensions discussed above are ignored as “unscientific,” e.g. too “sociological” or “philosophical” or “legalistic,” by researchers whose conceptual hammers have been tempered in the hard sciences.

So, for example, traditional myths about teaching and learning abound among some researchers in artificial intelligence – also among researchers in other fields, e.g. psychology, organizational studies, and education. (See Five Essays: Causation in Teaching & Learning )

So, we are advised, if we want to protect humans from killer robots – or robots who just don’t give a damn about humans -- we need only “teach them human values,” or “teach them right from wrong.” (For more on this, see United Nations tackles the 'fight' against killer robots; teaching them right from wrong may be the first step.)

The 3 Dimensions as Indicators of Human Capacities.
With artificial intelligence we’re summoning the demon. — Elon Musk (2014 MIT AeroAstro Centennial Symposium)
I suspect the demon is soon to be, if not already, loosed. There are too many highly intelligent and skilled people working in AI today who, but for a misorientation away from promising sources for modeling the human mind, would have come up with something by this time. (The present situation recalls the German attempt during WWII to develop an atomic bomb: they might have gotten it but for having gone down an unfruitful byway.)

Some promising sources, if one wants to create an artificial homunculus, are to be found in evolutionary biology, linguistic investigations and social, especially organizational, theory. (See, for example, Joan Bybee (2010) Language, Usage and Cognition.)

The three dimensions examined above lead fairly directly to those human – and not only human – capacities supporting the intelligence with which our evolutionary history has endowed us. But that is grist for a later discussion.



For more relating to these issues, see Trait vs. Behavior: the sometimes non-science of learning

Cordially, EGR

Wednesday, July 13, 2016

Belief, Disbelief, Truth, Falsehood and Faith

I would never die for my beliefs, because I might be wrong. -- Bertrand Russell
It seems simple common sense that a person’s belief is no guarantee of its truth. Nor does that person’s disbelief by itself establish any falsehood.

However, we use the verb “believe” in an interesting variety of ways. It need not merely indicate some person’s state of mind relative to a possible fact or falsehood, but in many contexts morphs into an expression of confidence; or, even more strongly, a declaration of commitment -- often called “faith.”

In some parts of our multicultural USA, prefacing a statement with "I believe that..." serves as a warning that a somewhat "Sacred Value" is being expressed and is not to be challenged. (See Thumper's mother on this etiquette at Goodspin.)

Compare this first set of expressions, (call it group A):
a. John is out taking a walk.
b. John will keep his promise to us.
c. The God of the Bible exists.
Consider, now, the following group B
a. I believe that John is out taking a walk.
b. I believe that John will keep his promise to us.
c. I believe that the God of the Bible exists.
Why use these longer forms? Prefacing a sentence with “I believe that…” seems to be hedging our bets. We are already somewhat on the defensive, since merely asserting something from group A would normally be understood to be something we believe.

Now, the group B expressions vary substantially in their relation to the following disclaimer: “…but I might be wrong.” They range from expressing almost throw-away facts (So what if I’m mistaken. Big deal!) to possibilities of disappointment (I hope John doesn’t let us down!). They can go even further into the area of deeply held convictions, where it is difficult for many to imagine adding Russell’s complement, “… but I might be wrong.”

For example, take a statement held by some community of believers (whether theistic, philosophical or secular) to be fundamental. (See Sacred Values) Let FB symbolize some fundamental belief. Anyone who would say “I believe that FB, but I might be wrong,” would not likely be perceived by that community as a member, especially if community members expressed their belief as “I believe in FB." (Even such as Credo quia absurdum est.) Just as Hope tends to deprecate Experience, so Fidelity does Truth.

“Believe in” indicates a commitment to a presumed consensus, a trust. Such commitments are not evaluated in terms of their clarity, or their truth, but in terms of their fervor, and the strength of the bonds of allegiance they are believed to generate to a community of consensus.

That someone bothers to preface an assertion with “I believe that …” is likely prima facie evidence of the shaky grounds upon which that belief rests. Or perhaps a signal-call for kindred spirits.

For references and to examine these issues further, see The Indeterminacy of Consensus: masking ambiguity and vagueness in decision.

Cordially
--- EGR

Friday, July 8, 2016

The “Mea Culpa” Culture in Public Education: failure is the worst sin.

Mea Culpa, Mea culpa, mea maxima culpa. (Through my fault, through my fault, through my most grievous fault — Prayer of Confession, RC Church.)


The media has something similar, except it’s known as ‘mea culpa’ culture. Essentially, it allows them to print and broadcast any amount of utter codswallop and then later hold their hands up and admit: “We got it wrong.” A good example of this phenomenon is the case of The New York Times and Saddam Hussein’s non-existent “weapons of mass destruction.” — Bryan MacDonald, RT Question More, 8-17-14
A Teacher’s Mea Culpa. On July 14, 2014 the Philadelphia Inquirer ran a front page story about a teacher who — after seven years in teaching — had decided to leave her job in the School District of Philadelphia.

Originally with ten or so friends from Teach for America, she decided to stay beyond their original two-year commitment and work in Philadelphia. “Starry-eyed” and looking forward to a decades-long career teaching in the School District, she started teaching in a middle school. (Teaching middle school is a hard way to begin. My first day was rough. See My First Classroom Teaching Experience.) She really seems to have worked hard at reaching her students. She spent a lot of her own money to buy needed school supplies when they were not provided by the school district.

She began her Philadelphia career in a middle school where I had supervised education students from Widener University in 1984 and ’85. The school was big, crowded, architecturally very difficult to provide supervision for and, by my observations, full of disorder, rough-housing and bullying in the stairwells and secondary halls. That she persisted as a teacher there is an indicator, I think, of her dedication — or hubris?

Then our teacher was transferred to a troubled high school (my wife spent a year there, ’95-’96) where she had to cope with trying to teach algebra to classes with special education students mixed in, despite her never having had the training to teach them. Electronic teaching aids — with missing pieces — were provided. She bought what she needed to make them work. Most of her “algebra students” knew nothing above the third grade level of math. Fire alarms would go off frequently, necessitating either evacuation of the school or illegal disregard by administration, risking a catastrophe. When she called for help with students’ fighting, it would take “seven minutes” for someone to come.

After being transferred again to an even more troubled high school, she needed to take a leave for medical reasons. After a 10-day absence for oral surgery, she was told by a school district doctor that she could not return to work for three months for fear that she might “get hit in the face.” The Inquirer printed a statement — on page 1A — she made after deciding to leave:
My students weren’t able to make progress this year. I did them an injustice. The School District did them an injustice. They did not get the education they should have.
One wonders whether the moral education she had as a child had ever sunk in: how could she possibly confess to having inflicted an “injustice” — an “injustice”! — on her students when she had continued to work so hard and long with them under very difficult conditions? What about the administration? Did they have no responsibility? The teacher “corrects” this tendency on our part to spare her a martyrdom:
It’s not the administration’s fault. “My principal was working hard, but there was one of him.”
The Evidence of Fault? There are several issues here:
a. What has the teacher construed as evidence of such injustice?
b. Has an injustice actually occurred?
c. Is there a deeper issue she’s just not ‘fessin’ up to?
What did that teacher take as evidence of injustice being suffered by students? Their behavior:
"Students don’t verbalize it. They act on it. They run the halls. They smoke in school. They cut class."
In other words, they acted like many students in schools, both public and private, in affluent neighborhoods, as well as in colleges and universities.

Has injustice actually occurred? Usually the justness of a person’s act is judged by whether they have the ability and resources to prevail in a situation. If the circumstances they faced would normally be overwhelming, their failure is not a case of injustice. Patients who die from Ebola have not been unjustly treated if they die despite earnest and professionally correct treatment by their doctors.

There is a deeper issue. Another comment the young teacher makes points to ambition, a motive which — though not uncommon in newcomers and often admired — throws a different light on the whole story. “I’m going to get a nursing degree. Teaching degrees just don’t get you anywhere!”

Is this teacher’s mea culpa a confession of injustice, or merely a stratagem to acquire new employment by showing herself ready to make excuses to cover for administrative and organizational faults? “Hire me and I’ll cover for you!” “Buffering” personnel are much in demand. (See Buffering: Enhancing Moral Hazard in Decision-Making?)

Failure is the Greatest Sin. In this Land of the Free and Home of the Brave there have appeared over the years numbers of — shall we be generous and call them — “confused, misguided or misinformed people,” who have promulgated the idea that despite all external circumstances that impede us, the positive thinking person will prevail in achieving whatever his or her heart truly yearns for.

Failure is the fault of the individual who fails in life, because positive thinking alone is sufficient to avoid failure. Education departments in many a university teach this dogma despite otherwise fervent secular commitments. (See Lurpofactsomosis: Reinvigorating American Education.)

The teacher featured in the Philadelphia Inquirer seems to have a “positive attitude” of that ilk. Initially, she joined Teach for America, helping her to avoid the training period and costs of more conventional university access to experimenting with public school pupils. (See Can Education Be Made A Science?) She was probably inculcated with the attitude that her above average personality and intelligence — as proven by joining TFA — as well as her avoidance of “typical educators,” would, if coupled with “positive thinking,” allow her a Michelle-Rhee-like ascent to the upper echelons of educational reformers and policy-makers.

Far from the problems of dealing with “unjustly” troubled kids in middle- or high school classrooms.

For more information and to examine these issues further, see Cannonfodder: preparing teachers for public schools.


Cordially
--- EGR




Tuesday, July 5, 2016

Can Education Be Made A Science?

No amount of experimentation can ever prove me right; a single experiment can prove me wrong. -- Albert Einstein
Let's specify more precisely what the question of the title, Can Education Be Made A Science?, is intended to mean. It should be understood to ask, Can technology and methods be developed which will give us (human beings) control over the production of outcomes designated by educational goal statements?

We can analyze this question by breaking it into several related questions:
a. Can we cause people (the average person?) to learn? That is,
b. Can we cause people to be either moral, skilled, or well-behaved?
c. Can we cause them to be this way whether or not they want to be this way, i.e. against their will?
d. Better yet, can we cause them to want to be this way? (Advertising seems to manage to do this surprisingly well -- check your teenager's clothing.)
The short answer to all of these questions is: it all depends. What might surprise you is what it depends on.
The first big, very big thing an education science would depend on is
1. our conceiving of educational goals whose outcomes would be the effects solely of our actions.
For example, suppose a religious community chose as a goal of education, Being accepted by God into Heaven. If they also believed that it is by an omnipotent God's grace alone that people achieve such acceptance, then, logically, they must relinquish the hope of a science of education, since to think that humans have the power to compel God to accept them is incompatible with this belief.

Two other conditions, no small things, either, are:
2. that there be a workable consensus on goal outcomes (and the steps toward them); and
3. that the outcomes be unambiguously defined. (This crucial step is too technical to discuss in a blog. See the article and linked articles given in the reference on the bottom.)
Many an educational reform proposal has died for failing to meet conditions 2 or 3. Just consider the widely encountered goal statement, Prepare the student for the 21st Century. Who knows what is being specified here? And how many people agree on it? Would it have come to be a political issue in the U.S. if much of any consensus existed? (See What does Consensus mean, anyway?)

Recalling Einstein's quote above we might wonder about another issue. How much reliable experimentation would we have to conduct to determine both ends and means of our sought-after "scientific" education? After much fanfare most so-called "Best Practices" research has yielded minuscule results despite grand expenditures! (For a set of pertinent blogs, see Best Practices Search Results.)

If we are desperate to answer yes to the question as to whether education could be made a science, we might wish that the words science and education slip from their traditional moorings and come to be applicable in ways that satisfy us. (Which "us?" Which "ways?")

Advertising campaigns might do the trick. Celebrities might be pressed into service to convince people that educators are primarily scientific technicians, rather than the kind of magician they are often made out to be in the sitcoms. (See Technician, or Magician: Can You Tell the Difference?) Free promotional samples of technology would no doubt be welcomed by cash-strapped school districts whose political governors are almost always happy to concede on the minor matter of language: science? education? Whatever!

I am reminded of the principal in a high school in Philadelphia (circa 1990) who, when directed that all students were to take a course in Algebra -- even as staff ratings were to depend on student success -- did the following. He arranged for the lowest accomplishing students to be given an arithmetic-review course, which was called "Algebra-X." This "reform" was promoted throughout the system's high schools. And so the superintendent could publicly claim that all students in Philadelphia high schools were studying algebra. (See Reform by Redefinition.)

For references and to examine these issues further, see Can There Be a Science of Education?

Cordially
--- EGR

Bigotry: A Cheap, Traditional Analgesic for Extremists.

Everything in moderation. -- Kleovoulos o Lindios (6th century BC)

From Mother's Milk to Hate-Monger's Bile. Bigotry promotes ideas of inferiority: moral, mental or physical. Bigotry develops from an almost universal natural predisposition for affiliation into often extreme ideologies and practices of exclusivity, suppression and abuse.

A child's natural attachment to its mother can, under certain circumstances of upbringing, develop into participation in the abuse of other human beings because Mother or family didn't "like" them. As the song from Rodgers and Hammerstein's South Pacific goes:
You've got to be taught before it's too late,
Before you are six or seven or eight,
To hate all the people your relatives hate,
You've got to be carefully taught!
Hate, in itself, may not be an evil. It is often a natural, understandable emotional response by those who suffer mistreatment. Hate gets to be morally suspect when it overgeneralizes to innocent people on the basis of characteristics common to both innocents and miscreants. But it is important not to treat every case of hatred, where inter-group relations are involved, as bigotry.

Extreme Group Antagonism: A root of evil? We grow up into groups often bound together by little more than habits of association built on fond hopes that, on occasion, may disregard bitter experience. In many a society around the world, where people experience inequality, suppression and alienation, there will be those -- giving lip-service only to equality, liberty and fraternity -- who believe they have found soul-mates merely because they all spout the same slogans. Unscrupulous leaders not infrequently exploit such misapprehensions of fraternity among potential bigots. (See Masks: Words that Hide Agendas)

Groups maintain their very existence through antagonism and competition, if not for survival needs, then for the attention of would-be members. So long as the cost of their membership is tolerable, these soul-mates look to build their self-esteem in friendly company pursuing what they take to be common concerns. (See SLOGANS: junkfood, dead-weight or poison?) The leadership gains in self-esteem, and very often in substantial benefits, at the likely cost of its followers. (See A Leader's Primary Pursuit: Incumbency) The task of leadership -- providing motive for the organization and for its continuation -- is made easier the more it can convince members that there is a threat from competing groups.

Enter, the Heroes? However, there are not a few who see this competitive dynamic as a primal evil. They seem to believe that doing away with all forms of competition would usher in peace and prosperity, as well as equality, liberty and fraternity the world over.

This is most likely a forlorn hope. The very virtues many of us pursue, e.g. respect for individuals, justice, rights, equity and the like, rest on the idea of a basic competition among individuals, e.g. children for their parents' attention, families for community status, personal self-esteem in comparison with friends and acquaintances.

This is an important conceptual point. Lacking competition, i.e. desire plus impediment to access between groups or individuals, there would be no issues to be raised with respect to equality, or justice; nor any issues of liberty or fraternity.
The moon belongs to everyone
The best things in life are free
The stars belong to everyone
They gleam there for you and me
Lyrics by J. Harris, T. Lewis, and M. Bivins

Because groups often facilitate or, more important, restrict access to desired scarce objects or situations, membership becomes important in and of itself, both for followers and leaders. The values of belonging, of group-membership, are inculcated not only within families, but in informal children's groups, in schools, through sports, in political parties, and, especially, in religions. The common sanction for deviants who resist the blandishments of belonging is love-withdrawal in one form or another. The common reward is an increase in esteem for the adherent. (A religious person might deprecate all of these as Sins of Pride, committing, ironically, the very sin he or she is criticizing.)

Would-be reformers require the existence of violators. The virtues they seek are to bring balance, fairness, to the competitions we cannot avoid. But there is a critical ambiguity in our notion of fairness to be taken into consideration: fair share vs. fair play. (See Fair Share vs. Fair Play: Two Competing Conceptions of Justice.)

A persistent confusion about the distinction between fair share and fair play undermines many attempts to compensate for past prejudices, particularly in policies of affirmative action. Another dilemma faced by uncareful reformers is that the cures offered for the effects of discrimination threaten to spread the disease. (See Positive Discrimination.)

Languages of Contempt. Almost every group uses words which help its members celebrate themselves or, more usually, disparage possible competitors. Social class or other in-group distinctions are expressed by such terms as mouth-breathers, not our kind, unhip commuter-student, hill-billy, redneck, townie, hippy, suits -- not to mention a whole host of words I will forego mentioning, for the sake of not confronting readers with obscenities and vulgarities.

Ethnic differences, often thought of as racial differences, can be treated derogatorily with such terms as polak, wop, slant, spic and the like. Such terms can be very upsetting. When I taught in a junior high school many years ago I had a Puerto Rican colleague who formed a multi-ethnic club for students called SPanish, Indian and Colored. Parents found the acronym, SPIC, objectionable. They would not accept the teacher's explanation that he had deliberately named the club SPIC in order to de-sensitize his students to the word's being used as an insult, inculcating, instead, pride of belonging. The parents thought such theoretical benefits unlikely, considering the insult potential that actually existed.

Political Correctness is an attempt to suppress the derogatory use of certain words in the hope that prohibiting the expression of inciting words will quench the contempt that might underlie them. So it is that no one -- in a public school, at least -- can be referred to as crippled, or stupid, or even ignorant. They are (merely) challenged. But so long as social practices of contact and association -- and funding thereof -- remain unchanged, word play is empty incantation.

The likely greatest influences on reducing some (while increasing other) kinds of social group conflict in the United States of America, I suspect, might well NOT have been policing, schooling, revivals, churches, or famous graduation speakers; but, rather MTV, new musical fads, sit-ins, and a general obstreperousness on the part of young people in challenging tradition, all this sound-bitten as sexndrugsnrocknroll. (Even so, the same kinds of debauchery their parents and grandparents indulged in can still be enjoyed but for a different content in a different manner.)

It is important to note that conflict may produce benefits as well as costs. The important question, however, is: Who benefits from the conflict and who pays the costs? (These issues may be followed up on here: The Functions of Conflict.)

Immorality and Extremism. Any intended intervention into people's lives that might be based on possible untruths or misperceptions demands prior careful examination. This is so not only to make such intervention effective, but to forestall injustice. Proceeding with such an intervention without that examination is not only likely to be ineffective, but immoral, in the eyes of many. (See Intervention: helping, interfering or just being useless?)

Bigotries of all kinds support such flawed interventions. They are most objectionable, not as individual hidden, inner seethings of hatred toward some target group, but as open action based on questionable assumptions of fact and biased perceptions of motive. It is exhilarating for the bigot to feel contempt for others; it gives the illusion of raising the spirit and invigorating the soul. Bigotry is the curdled, rotted residue of mother-love, socially fermented in falsehood and contempt, that serves as a cheap, easy fix for diseased or underdeveloped self-esteem.

For examples and to examine these issues further, see Philosophy, Race and Language

Cordially
--- EGR


Monday, July 4, 2016

Honor Killing: Serving the Idols of the Tribe

Pakistan: Pregnant woman stoned to death -- Philadelphia Inquirer, Wednesday, May 28, 2014, A1

Don't Just Blame Religion. So what's new? The United Nations Population Fund estimates that the annual worldwide total of honor-killing victims may be as high as five thousand women.[1]

But don't jump to any conclusions about religion, especially the likely one, for most Americans, that Islam is more prone to this than others. This jump relies on common ignorance about the histories of Christianity, Judaism, Confucianism, etc.

In a similar case in Nice, France in 1994, Koranic authorities testified that Koranic justice did not sanction the family's actions in killing their "westernized" daughter.[2] It's as much a matter of location, as of religion.

Wherever religious institutions, and particularly their leaders, rely on secular power for their protection or their survival, the universalist virtues they might espouse, e.g. mercy, forgiveness, etc, will be stifled to disarm emotions that might upset traditions of obedience to secular leaders. (See A Leader's Primary Pursuit: Incumbency)

Just consider, for example, how Constantine's demand of early Christian bishops for unified doctrinal criteria of membership brought about the Nicene Creed (325 CE). But even today in these United States of America, modern sectarian Constantines have been scarcely able to refrain from attempts to season the public scholastic pot with state-enforced "intelligent design" curricula. (See "Intelligent Design," "Unintelligent Curriculum.")

So Why Follow the Leader? A dumb question(!) ejaculated rarely except by those enjoying the frissons of self-righteousness. Answer: because our personal status, our individual pride is bolstered by those we profess to follow. (In how many congregations does seating reflect personal status?) With all too rare exception, the dominant tribal idol of every member is inevitably Ego. (See God, Church and Schooling for Democracy: American Faith in "Faith.")

Making Convenient Sacrifices: Our Kids. In Genesis 22, Abraham is commanded to offer, and then restrained from offering, Isaac as a burnt sacrifice. (Is God surprised, or gratified, by Abraham's obedience? Either interpretation belies His omniscience.) Some biblical scholars take this story to indicate the renunciation by Israel of the ancient practice of child sacrifice, commonly done in times of extremity by a variety of peoples.

Many of us point, with barely hidden Schadenfreude, to the "honor-killings" by others -- whom we secretly look down on -- of their own children. "Thank God we are not like them!"

The reality is that we, in "modern-day America" have reverted to that ancient practice of child sacrifice, not physically, perhaps, but psychologically and spiritually. And the gods we sacrifice to are unworthy ones: ambition, self-aggrandizement, and reputation.

How much more preferable for many parents to put their kids through the gauntlet of status-conscious early schooling, the grind of the academics-manqué of high school, and the circus of college admissions than to have to select a college of "lower standing" or have the student enter later than the freshman year.

Comfortable sacrifices of minor import? Alfred North Whitehead weighs in with this comment:
When one considers in its length and breadth the importance of a nation's young, the broken lives, the defeated hopes, the national failures, which result from the frivolous inertia with which [education] is treated, it is difficult to restrain within oneself a savage rage.[3]

It's a matter of family honor! Mom's honor and Dad's honor. Grandpa's and Grandma's, too.

To examine these issues further, see "Tracking" in Public Education: preparation for the world of work?


Cordially --- EGR

References

[1] See "Ending Violence Against Women and Girls" United Nations Population Fund http://www.unfpa.org/swp/2000/english/ch03.html

[2] See Terrill Jones, "Family Sentenced for killing westernized girl," Philadelphia Inquirer, Sunday, December 4, 1994. For more dismal information see LEADERSHIP AS USURPATION, at http://goo.gl/9SwYSp

[3] Alfred North Whitehead. The Aims of Education and Other Essays (New York: Macmillan, 1929) p.22.

Sunday, July 3, 2016

Are Terrorists Such as Suicide Bombers, Nazi or Khmer Rouge Killers Crazy? Is Evil-Doing Clearly Irrational?

The only good is knowledge and the only evil is ignorance. - Socrates as quoted in Diogenes Laertius' Lives of Eminent Philosophers

What are we to say about madness when it takes hold of an entire society? How are we to conceptualize madness that becomes normative within a particular culture? -- R. Koenigsberg (March 3, 2014) Social Madness/Collective Delusion [1]

Is Evil-Doing Madness? Underlying much comment on the horrendous genocides that continue into the present day appears to be the assumption that rationally acting persons could not deliberately choose to behave "evilly," no matter what we might take to be their values.

Thus, if, as many believe, God is the model of goodness and rationality -- The Book of Job notwithstanding -- then when people pursue "evil," they must be "mad," or at least "irrational." This assumption that "rational" people never -- by definition, as it were -- consciously, deliberately pursue evil is a "fantasy" that derives from well before the much-faulted Enlightenment philosophers. (See Koenigsberg, cited above, for the contrary view.)

Is Irrationality a Mental Condition? If you take something like "all carefully considered action is rational" to be something like Newton's First Law of Motion, then it is not so much descriptive of actual behavior as of ideal behavior. Newton's laws were applied to objects idealized as single point masses. They were meant to provide possibilities for investigating deviations from the (axiomatic) ideals postulated. The identification of Good with Rationality is part of a long tradition and is reflected in the Goya etching reproduced to the right of this text.

For example, we may say upon finding out a friend has started using heroin, "How is that going to solve anything?" In doing so we have already formulated a hypothesis "explaining" his behavior: he is trying to stand up to stress, perhaps; or, to forget what bothers him, etc. We try to understand the drug-user's behavior as somehow rational, even if ineffective in the long run. But if we find he lives a generally unproblematic life, full of happy occasions, we might wonder at the "irrationality" of his action. Perhaps he "needs help," whether he thinks he does or not!

There's a real issue here. People thought to be "really irrational" risk losing their freedom and even their possessions. For example, many people advocate compulsory supervision of the homeless, particularly when the weather is cold. To these would-be controllers -- generally described in the media as "saviors" or "helpers" -- it is irrational to sleep out under a bridge in a cardboard box when offered the hospitality of a "shelter." (See also Are Humans Rational? What's At Stake?)


Can We Agree on What is Good? In order to bypass a long, controversial discussion about what "really" is good, we adapt a framework such as that suggested by Paul Ziff [2], i.e. "good" means "answering certain interests." Thus, we can understand how a person who advocates strong control of firearms can still discriminate between, say, a "good gun" and an "inferior gun." More important, we can understand how a rational person can pursue evil. This is nothing new or startling. We all understand someone making a choice for the least evil of offered options. And we don't normally regard such a choice as "irrational." (See Choosing The Lesser Evil: a Moral Failure?)

(Idealists will likely find Ziff's analysis bothersome. It gives up on trying to discover what is "really" good, focussing on how people actually use the word in practical situations. It does not try to disregard the existence of conflicting "goods" since it concedes the existence of possibly conflicting interests.)

Our modern hope (fantasy?) is that the spread of knowledge, i.e. the near-universal acceptance of certain privileged sources of belief -- e.g. Science or, less likely, Religion -- will forestall the conflicts caused by the rational pursuit of dominance, a pursuit which can favor a minority of individuals, families, classes, or tribes without necessitating the extirpation of those less powerful.

Evil-Doing is not, per se, Irrational. The Nazis were generally not insane; nor were their enthusiasts or followers. Nor are the many people we deign to call criminals, perverts, and the like. And it does not follow purely logically that "evil-doers" ought to be punished. This takes additional commitments to values -- some of which a great many people adhere to -- to make such infliction rational. (See Nassir Ghaemi (2011) A First Rate Madness p.245.)

What Koenigsberg and others debating this issue attempt is to address -- and even "solve" -- moral problems while disavowing commitments to anything more substantial than a "scientific" pursuit of practical knowledge. But moral issues may not be sufficiently "objective" to permit of such solution.

The Really Basic Problem. Koenigsberg and others professionally concerned with mental health suggest that "irrationality" be redefined from a characteristic of individuals to one of collectivities. But "redefining" irrationality as a group madness is hardly going to make a great dent here. It would most probably only mask the basic problem.

What is this basic problem? Maintaining and recognizing for a selected cadre of professionals the ownership of certain human behavior -- and its treatment rights -- despite their lack of consensus on fundamental ethical warrants, terminology and therapy.

For examples and to examine these issues further, see Can Criminal or Immoral Behavior Be Dealt With Objectively?

See also, Is it Reasonable to be Logical?


Cordially
--- EGR

REFERENCES

[1] Richard A. Koenigsberg, Ph.D. (March 3, 2014) Social Madness/Collective Delusion. LIBRARY OF SOCIAL SCIENCE

[2] Paul Ziff, "On the Word Good" in Semantic Analysis (1960).