Hildegarde

Jane Haddam’s WordPress weblog


Too Early


Too early is what it is at the moment.  It’s election day in the US, and I got up very early in order to get to my polling place before seven, because I usually leave for school at seven, and…

Well, and.

I got to the polls before six thirty, voted in no time flat because there weren’t all that many of us there, got all the way to school before seven, got up to the office before the secretaries had opened the division, got all my copying done…and now it’s not even eight, or no more than five minutes past eight, and here I am.

It’s beginning to look like an interesting evening, at least in this state.  And I discovered something I hadn’t known before–turnout here in midterm elections usually tops 60%.  Which is a higher percentage than most states manage even in Presidential years.

But that’s not what we’re here for.

I will say that I’m a little too addled at the moment to go into the entire set of arguments I’ve been organizing over the past couple of days, but there are a couple of things I can’t pass up.

First, John’s claim that he doesn’t count on anything humans do is interesting on a number of levels.

To begin with, it’s not true.  He’s had medical care in his life, I know that for a fact.  He does expect the medicine aimed at his body to be objectively based, even though it’s not based on anything “out there,” but only on what’s going on inside human bodies.

What’s more, since he can read and write intelligibly, he must accept the objective basis of the rules of language, too–he must accept that there are some ways languages “work” and some ways they do not, and that what “works” is set irrespective of our wishes and desires.

But the real kicker here is this:  in order for the human mind and human behavior to be as arbitrary and as far beyond the reach of generally formulable rules as John says it is, God would have to exist.  Or something like God would.  Because what that would mean would be something like the existence of the soul in the most radical definition of the term–a part of the human being that is entirely divorced from nature.

“Objective” is not the same as “out there.”  It means “operating independently of our whims and wishes, something we discover, not something we invent.” 

Robert says this:

>>>

“When I say there is an objective basis for morality, all I am saying is this: human beings are not infinitely malleable. There are facts about human nature–about the way human beings feel and think and respond to events and other stimuli–that can be known.

These facts are beyond our ability to change, but they are within our ability to know.

And once we know these things, we can formulate rules about the way they behave.

And that, right there, is an objective basis for morality.”

NO. Cats, fish and ants have a fixed range of behaviors, too. But no one studying them speaks of “cat morality” or claims a deviant ant is “immoral.” The people who study them are “behaviorists” and some study people in a similar fashion. It’s an “objective basis for morality” only to the extent you can claim certain moral codes are literally humanly impossible.<<<

And to that I say–YES.  That is an objective basis for morality.  It tells us not only that certain moral codes are humanly impossible, but also that certain moral precepts will result in certain largely predictable forms of human behavior. 

And that–that predictability, however difficult it is to put it into practice–is, indeed, exactly an objective basis.

It matters not a flaming damn who considers what to be “immoral.”  Throughout the ancient and Medieval world, people routinely assumed both alchemy and astrology to be science–not supernatural, not a matter of magic and incantations, but actual science, part of the attempt to understand the natural world in natural terms.

They were wrong, but the fact that they were wrong does not mean that there is no actual science, or that science is impossible.

Or, as Lymaree put it:

>>>People CAN be taught that all sorts of terrible things are not only acceptable, but desirable and yes, moral.<<<

But people can be taught that the moon is made of green cheese and that if you sail your ship too far to the West, you’ll fall off the edge of the world.

That doesn’t mean that there aren’t objectively based facts about the moon and the shape of the earth.

I’ll say it again:   most of the people involved in this discussion have an absolute passion for the ad populum fallacy, at least when it comes to this one thing.  The entire argument against an objective basis for morality so far amounts to “well, lots of people think different things are moral and immoral.”

Lots of people think that the earth is flat and it was created ex nihilo in the last ten thousand years.  So what?

The big deal, though, is the constant assertion, by more than one poster, that–to put this as Lymaree did–

>>>We know that humans individually and societally, can and do live, thrive and reproduce with foot-binding, genital mutilation, slavery, suttee, and all sorts of morally objectionable behaviors<<<

Let’s put aside what should be obvious: morality concerns individuals.  It only concerns societies collaterally.  So I’m not talking about what effect knowing the moral rules will have on societies. 

But.

The simple fact is that the quotation above is wrong.  We do not know “that humans individually and societally, can and do live, thrive and reproduce with foot-binding, genital mutilation, slavery, suttee, and all sorts of morally objectionable behaviors.”

In fact, we know exactly the opposite. 

Take a look at those societies, historically, that practiced the kinds of things you’re talking about.  What do you see?  They can certainly reach a level where their aristocracies are fairly comfortable.  Most of their citizens, however, live those “nasty, brutish and short” lives Hobbes spoke of.

What’s more, those societies are largely stagnant.  They constitute worlds in which wealth is something you have, not something you do.  Not a single one of them developed anything like what we now call science.  Most of them were either entirely ephemeral–think of Shelley’s giant head in the sand–or utterly stagnant.

Hell, take a look, today, at societies with different and alternate codes of morality–you can start with North Korea, for instance, which consists of a small ruling clique that’s eating and a population that’s slowly starving to death.  Or you can go for something pleasanter to look at, like Saudi Arabia, where there’s lots of money, state of the art medicine and all the rest of it–it’s just that it’s all bought from, built by and run by foreigners with different moral codes and political convictions. 

Physics can tell you what the rules are if you want to build a bridge.  It cannot tell you if you want to build a bridge.  But the if question is another subject.  The rules for bridge building are objective.  If you follow them, your bridge will stand–and only to the extent that you both understand those rules and do follow them will it stand. 

And yes, you can certainly build a rickety bridge on partial information.  But so what?  The rule remains.  To the extent that you understand and follow the rules, you will build a bridge that will stand.

Virtually every poster to this blog would take exactly the opposite tack they do here on morality if the subject were, for instance, politics and government.  Most of you expect that certain kinds of actions in the world–overregulation and taxation by governments,  for instance–will yield predictable results in national economies.

If you didn’t assume that, there would be no point in going out to vote today or any other day.

Hell, if we didn’t assume that, there would be no reason to resist attempts at founding a Communist state–after all, if human beings can thrive and prosper willy-nilly, then there’s no reason they couldn’t thrive and prosper that way.

I’ve got to stop doing this.  I’m practically catatonic, and I’m teaching Shakespeare and Yeats today.

That’s always interesting.

Written by janeh

November 2nd, 2010 at 8:44 am


Definitions


So, it’s Sunday morning, and on Sunday morning I usually write a bit here and then go off and listen to harpsichords.

But this Sunday morning I have a lot to do, which means leaving the house by seven thirty.  That’s later than I leave the house in the middle of the week, but not by much.

And it’s not like I’m going to run out and do something and then run back.  I’m going to be gone most of the day.  So my mind is already boggled, if you know what I mean.

I was going to skip the blog altogether this morning, but I’ve got tea I’m waiting to cool down and some time here, and I thought I’d make something clear I’d never made clear before.

How do I get into these sentences?  Is it the time of day?  How long it’s been since I woke up?  What?

Anyway.

Yesterday, I got accused, in e-mail, of circular reasoning–of assuming what I need to prove.

That is, of assuming that there must be an objective basis for morality.

But my arguments of yesterday did not, in fact, require that there actually be an objective basis of morality.

They were arguments about why other arguments were invalid–and those arguments remain invalid whether we decide, in the end, that an objective basis for morality exists or not.

That means that even if an objective basis for morality does NOT exist, you can’t prove it by the kinds of arguments I outlined yesterday. 

The arguments themselves are false, no matter what might be the underlying truth or falsity of the proposition they are meant to “prove.”

There really are rules of rhetoric. 

But as to why I think there is an objective basis for morality–well, here it is.

First, you have only two possible choices.

Either human nature is largely fixed, or it is largely malleable.

That is, either we are born with certain predilections, responses, tendencies, and behaviors that no amount of education, training, or childrearing will change at their foundations.

Or we are born capable of being molded into anything at all.

I would think that it would be fairly obvious that the second option is wrong.  There have been dozens of experiments in attempting to “nurture” infant human beings into the kind of human beings we want them to be–to make them noncompetitive, without greed, without attachments to their particular children or parents.

Every one of those experiments has not only failed, it has failed spectacularly.  Sometimes–as in the “shared child” policies of the early Israeli kibbutzim–such experiments have simply fallen apart.  Sometimes–as in the long night of Soviet attempts to raise “perfect citizens” in state orphanages–they have caused untold damage and pain.

But there is no indication anywhere, in experiment or in history, that an infinitely malleable human nature exists.

There is no blank slate. 

But if there is no blank slate, then something else must be true–human nature must have fixed characteristics.

That is, there must be facts about human nature and how it operates that are simply true.

And these facts are objective–they are not constructed by us, because if they were, we’d have that blank slate after all.

But if there are facts about human nature that are fixed–just as there are facts about rocks, and outer space, and the digestive systems of house cats–then we can both discover those facts and derive from them rules about how human beings will behave under certain circumstances.

Since there are facts about steel, for instance, we can discover those facts and then use them to answer the question:  can I use this stuff to make a platform strong enough to carry five articulated lorries?

And after we answer that question, we can answer the question:  what rules do I have to follow to actually make that happen?

The same is true of things that might seem, on the surface, less cut and dried than steel.

Language has rules that are also objective–that operate outside us, that are not merely a matter of “we want to.”

Even people who try to invent entire languages of their own–think Tolkien, and Elvish–find that, in order to produce something that can be made comprehensible in any sense, they must follow certain broad rules.

And once they have so followed them, the language itself takes on its own internal logic, and that logic cannot be breached as long as you want the language to remain comprehensible.

I can say:  the cat was wearing a jetpack and hoping to get chocolate in Moscow in the morning.

I can say:  pig iron can be basted in butter and used to make an excellent omelet.

I can say:  London is a city in Venezuela where pink penguins live.

I can say all those things, and they can be wrong and silly and even idiotic.  But they are all comprehensible.

But if I say:  marshmallow coldcuts running then upmarket mushroom as.

It makes no sense at all.  Languages are not infinitely malleable.  You cannot do anything at all with them and still have them work.

When I say there is an objective basis for morality, all I am saying is this:  human beings are not infinitely malleable.  There are facts about human nature–about the way human beings feel and think and respond to events and other stimuli–that can be known.

These facts are beyond our ability to change, but they are within our ability to know.

And once we know these things, we can formulate rules about the way they behave. 

And that, right there, is an objective basis for morality. 

What those facts cannot tell us, of course, is whether we want to build a bridge or a house or a SnoCone. 

But the laws of physics can’t tell us if we want to build a bridge or an apartment complex or a prison.

We don’t say that, since they can’t, there are no laws of physics.

I’ll repeat what I said yesterday:  if you judged any other human endeavor by the standards some of you want to apply to morality, you would end with a world of no science at all.

And no knowledge.

So that’s why I think there’s an objective basis for morality.

If you can make a case for the blank slate–for the idea that we are born with no fixed human nature, that human beings are whatever they’re brought up to be and nothing else (all nurture and no nature), go right ahead.

But nobody on earth has ever managed to prove that yet, and I’m not holding my breath.

I’ve got to go start the car.

Sigh.

Written by janeh

October 31st, 2010 at 5:53 am


Stranger


So, it’s Saturday.  And, I know, I know.  I’ve been saying for days that I was going to get around to writing about Sam Harris’s new book, The Moral Landscape, and why it’s a train wreck.

And I am, sort of, because the book is a train wreck, on a number of levels.

At the moment, however, I’d actually like to write about the one way in which Mr. Harris and I agree–or, really the one way in which he and I came to the same conclusion about a particular area of moral thinking.

It was odd, really, because while I was reading this book–sort of, I’m finding it very difficult to get through, precisely because it should not be difficult to get through–

Anyway, while I was reading this book, I was having a discussion through e-mail about whether or not morality could be objectively based.  And my correspondent was saying it could not be, because:

*no matter what moral rules you devised, some people somewhere would not agree with them

*no matter what moral rules you devised, some people–maybe even most people–wouldn’t follow them.

Now, I want you to think about those requirements for a minute.

If we applied them to, say, chemistry–would we be able to say that chemistry has an objective basis?

After all, there are plenty of people who reject findings in the chemical sciences all the time, and even more who reject the technology (in, say, agriculture) arising from it. 

Does the fact that many people reject chemical fertilizers because they are potentially poisonous mean they don’t work?   Does the fact that many other people reject the entire idea of chemical science in favor of various religions (especially in Africa) mean that chemistry is not objectively based?

For no area of human study other than morality do we make the requirement that it cannot be objectively based unless everybody agrees with it.

If we did make such a requirement, all science would cease to exist.  There isn’t anything “everybody” agrees with.  There isn’t anything for which somebody cannot come up with another preference, idea, or counterfact.

The second requirement is even stranger.

Consider, for a moment, the science of nutrition.  We certainly study human nutrition, what the human body needs to be healthy and to function well.

And we certainly turn the results of these studies into rules for diet and exercise.

It’s even the case that most people accept that these rules are objectively based and rightly formulated.

Does that mean that most people follow them?

Of course not.  In fact, most people don’t follow them.  Steamed veggies and poached chicken may be good for us, but the vast majority heads for the fried chicken, spare ribs and chocolate layer cake every time.

Does that lead us to conclude that nutrition science is not objectively based?

No, of course not.  And if we did come to any such conclusion, we’d feel like idiots.

But you don’t have to go into the hard or even the applied sciences–as the word “science” is used in this century–to find the same phenomenon.

There are, for instance, rules for rhetoric and valid argumentation.  These rules are not arbitrary.  They are indeed objective, in that they accurately describe the ways in which language can be used to reach certain results, and the ways it cannot be used.

The ad populum argument is not invalid because we don’t like it.  It’s invalid because it’s objectively wrong.

If 50 million people declare that the earth is flat, the earth is still round.  Facts are not determined by their popularity.

The ad ignorantiam argument is not invalid because we don’t like it.  It’s invalid because it’s objectively wrong.

It is not true that ghosts must have moved my toaster this morning, just because nobody can think of any other explanation for its having been moved.

Why is it, then, that we are able to see the objectivity of so many other forms of human endeavor, even when lots of people don’t agree with them and lots of people will not follow the rules that result from them? 

Why is it only the objectivity of morality we try to reject on these grounds?

And I’ve been thinking about it, and I’ve come up with this:

I think we try to reject the objective basis of morality because we don’t want morality to be objectively based.

And we don’t want it to be objectively based, because we all fear that we will find, on at least some points, that we don’t want to follow that objectively based morality.

The moralist will tell us we need steamed veggies, and we’ll want the chocolate cheesecake instead.

The problem is that the rules of morality are not like the rules of nutrition.  When we want the chocolate cheesecake in spite of the nutritionists, we tend to feel justified, and more annoyed at the rules for making us worry about it.

But when we want the equivalent of chocolate cheesecake in morality, we feel guilty, and we feel guilty even when the rules we break are fairly minor.

To break the rules of nutrition is to get fat, or die young, or have an upset stomach.

To break the rules of morality is to be branded unfit to live as a human being.

Those are stakes with implications far more serious than looking like an idiot in Spandex.

I think that, deep down, most of us do feel unworthy to live as human beings.  I think we create an enormous safety barrier over those feelings, shoving them deep down inside, ignoring them as much as possible. 

But I think those feelings are there–hell, Christianity recognized them from the beginning, as did Hinduism from what I understand of it.  I’m willing to bet Islam understood them, too.

The fact that those feelings are there, however, means that we have an enormous stake in denying that an objectively based moral code is even possible–because if it’s not possible, then nobody has any grounds on which to judge that we are not fit to exist.

And we have no grounds on which we must pass that judgment on ourselves.

I think we’re all running from something we almost never let ourselves think about.

And that’s serious enough for a Saturday.  I’m going to run off and listen to something.

Maybe Bach.  Maybe Thelonious Monk.

Written by janeh

October 30th, 2010 at 9:44 am


Bride of Fan Mail


Okay, I have to explain something here, because it will tell you how I ended up in this particular place on this particular morning.

I have an alarm clock, bright yellow travel model, from L.L. Bean.  It runs on two batteries, a watch battery and a regular little cylindrical one. 

I probably just spelled that wrong.

I need a new watch battery for it.  Which means that at the moment, it does not reliably work as an alarm.  And getting the watch battery will require something of a drive, so I haven’t gotten around to it yet.

This would not matter, except that I need an alarm, and the only other available alarm in the house is on my phone.  So at night these days, I bring the phone to bed with an alarm switched on, and that wakes me up in the morning.

But here’s the thing:  my phone is also set up so that I can read e-mail on it.  And the first thing I do in the morning when the alarm goes off is to grab my glasses, grab my phone, and check my e-mail before I sit all the way up.

Most days, this is an exercise in boredom–I wake up at four thirty in the morning, at the latest.  There’s not much on the e-mail except new user registrations for the blog from Russia and little flurries of argument from my harpsichord list.

This morning, I had something else–a fan letter, of the kind that wants to beat me about the head for being on the wrong side in politics.

I get quite a few of these–as I’ve mentioned before–and usually they come down to a reader who has decided that anything any character says is “my” opinion, and therefore it is me, not Franklin or Catherine or Henry, who has X point of view about X subject.

This may be one of these, too, but I can’t tell, because for the first time, ever, I can’t figure out which side this writer thinks I’m on.

I know, I know.  Usually, there are clues.  In this case, though, there are clues going in both directions.  On the one hand, she says that I think anybody who doesn’t agree with me is an ignorant redneck hick–and that sounds like a conservative complaining I’m liberal.  On the other hand, she says I sound like I’m full of hate–and that sounds like a liberal deploring my conservatism.

What’s even more confusing to me is that she complains that my last few books have been full of political rhetoric–but of the last three books, only one has a political theme.  As far as I know, neither Cheating at Solitaire nor Wanting Sheila Dead contains any politics at all.

My best guess is that the book she’s taken offense to is Living Witness, in which there are, certainly, two characters who are portrayed as ignorant redneck hicks, and both of them are on the ID side–Franklin and Alice, a pair of hateful, stupid people.

But there are other people on the ID side–Gary and Nick–who are two of the three most admirable people in the book.  And neither of them is ignorant, redneck or hick-y.

Is that a word?

Nick certainly started off as a hick, a child of the Appalachian hills, but these days he reads English, Hebrew, Greek and Latin, takes on Thomas Aquinas for fun, and has a library twice the size of (and of considerably more heft than) the one the town runs.  He’s also dedicated his life to making life better for the people he grew up among and built an enormous, prosperous and well-run complex in the process.

Gary is a war hero, and a near moral saint.  He faced the kind of moral dilemma most of us would fail one way or the other (if only by dying out in the cold), and passed it at considerable cost to himself.  He’s a good, honest, intelligent small town lawman, with more integrity than anybody else in the novel except maybe Catherine Marbledale.

That being said, the other side is equally mixed. 

The woman who brought the lawsuit is a poisonous little twit, a snob and an idiot on almost every level.  Her best friend isn’t much better.  The town’s liberal and atheist icon–Henry Wackford–is worse than that, self-satisfied, smug, arrogant, dishonest and only about a tenth as well educated as he thinks he is. 

(Put Nick in a room with Henry and have them discuss books, and Nick would end up so far over Henry’s head it would be funny to see Henry explode.)

Then there’s Henry’s friend in the local “humanist” movement, a monumental ass with delusions of grandeur who wants to pass laws to stop parents from bringing their children up with religion and whose basic idea of the rights guaranteed by the Constitution is that they shouldn’t count if people use them to do things she doesn’t want them to do.

But then, of course, there’s Catherine Marbledale, who’s very admirable indeed.

Living Witness was unusual in the books I’ve written in that I did in fact signal which side of the basic political argument I’m on in the acknowledgements. 

But the reason that thing in the acknowledgements is there is that I tried very hard–as I do in all the books with topical themes–to make sure that I did not signal my opinion in the novel. 

When I’m working well, there should be no way for you to tell which side I’m on in any of the political battles that come up, or any of the philosophical ones, either. 

Of course, I’ve gotten used to the fact that most people simply decide that I must be on the other side from whichever one they’re on, as if only a book in which their side is portrayed as pristinely good and right could be anything but a partisan hatchet job from the other end of the spectrum.

And acknowledgements or not, I’ll stick with my feeling that I did a good job of portraying both good and evil characters on both sides of the debate in Living Witness.

The real divide between the Darwinists and the ID-ers in Living Witness is twofold:

First (and least important), the ID-ers are all “deep local,” people whose families have lived in town for generations.  The Darwinists are by and large people who’ve moved into town with the new high tech industries in the area, or people who, in spite of having been born deep local, have spent a significant amount of time living elsewhere.

Second (and most important), the issue for the ID-ers has little or nothing to do with science and everything to do with morality.  It is the moral implications of Darwinism that bother them.

And, come to think of it, that’s true for most of the people on the Darwinist side.  Judy and Henry know no more about evolution than I know about the internal combustion engine.  They “believe” in evolution the way Alice believes in God–they accept it on authority and take it on faith, because they certainly don’t understand it. 

With the exception of Catherine Marbledale, nobody in that book cares about evolution at all. 

The big conflict in Living Witness is not between ignorant rednecks and intelligent, educated cosmopolites.

There are intelligent, educated people on both sides, and vicious idiots on both sides, too.

Ah, well.

I find this an odd post to be writing when what I had intended to write about today was the new Sam Harris book.

I’ll get to that tomorrow.

It’s really, truly, and honestly a completely idiotic mess.

So that should be some fun.

Written by janeh

October 28th, 2010 at 6:08 am


Doom, Doom, Forever and Always Doom


Okay, maybe not that bad.

It’s just that it’s suddenly hit me that we have an election in the US this coming Tuesday, and the only way in which you can say I’m prepared for it is that I am, in fact, registered to vote.

But then, I’ve been registered to vote in the same place for the last decade, so it’s not like I’ve recently accomplished something.

I spent most of last evening watching various Intemperate Fuming Pundit Shows–Glenn Beck, Keith Olbermann and Bill O’Reilly (they’re on at the same time and I switch on and off), Rachel Maddow. 

Interspersed among the shows were various political campaign ads for various candidates.  In Connecticut at the moment, that means lots of negativity and virtually nothing at all about what any of the candidates is supposed to stand for. 

It’s the one thing I’ll give Dick Blumenthal, our Democratic attorney general, who is running for the Senate seat Chris Dodd is retiring from:  there are in fact a few ads out there about what a great guy you’re supposed to think he is.

Linda McMahon, his Republican opponent, used to have those kinds of ads, but lately I’ve seen nothing but the attack stuff. 

Of course, it’s not like Blumenthal lacks attack ads against McMahon.  It’s just that I also see other things.

But mostly, the ads this campaign year have been bizarre and beside the point.  In the governor’s race–our incumbent governor, Jodi Rell, is not seeking reelection–Dan Malloy, the Democrat, has ads claiming that Tom Foley, the Republican, supports letting health insurance companies drop your coverage when you’re sick.

I assume that that’s a reference to Foley’s lack of support for the recent health care reform bill.  As such a reference, it is, to say the least, wildly misleading.

But what gets me is that there’s no point to it.  Foley isn’t a Congressman or a Senator.  He’s a private citizen.  And he’s running for governor, not for a chance to go to Washington.  Who cares what he thinks of the health care reform bill?

For what it’s worth, I’m leaning towards Foley–not because he’s a Republican (although I don’t mind Connecticut Republicans), and not because I’m particularly for anything he’s selling.

Hell, I don’t know what he’s selling.  We’re back to the attack ad problem again.  Here’s the real problem with attack ads:  these days, I virtually never see anything about any candidate in a positive light.  When I see candidate X at all, he’s in candidate Y’s attack ad as a scumbag.  And the same for candidate Y.

This seems to me to be a counterproductive way to run political campaigns.

No, if I’m leaning to Foley, it’s not because of something I know about him, but something I know about Malloy:  there are indications, in the way he ran Stamford, that he’s on the wrong side (for me) of Kelo.

Of course, Foley could be, too, but I’d never know.

Blech. 

I’ve got tea and Sam Harris and all my correcting done. 

Maybe I should go start the day.

For what it’s worth, I think all the grave pronouncements–on both sides of the political divide–about how this is “not an election, but a referendum” on Obama’s policies are probably overblown.

Most of the races I’m interested in are at the state level, not the federal one, and have to do with things like religious freedom and free speech on college campuses (Colorado regents) and state policies about homeschooling (state boards of education, about four places).

Then, on Sunday, it’s Halloween.

I don’t get as big a kick out of that as I used to.

I do wish the pundits would go back to doing something besides calling each other names.

And more and more, I’m with half my students, who get all their news from Jon Stewart.

And that’s just the half that actually watches any news at all.

Written by janeh

October 27th, 2010 at 5:33 am


Book Report: Higher Education?


Way back in the mists of time, somewhere, my idea for this blog was to write about the things I was reading.  I do a lot of reading, and I don’t have many people to talk to about it around here.  I have been on some Internet discussion forums that were supposed to be about books, but mostly they weren’t.  They tended instead to function as reader recommendation sites, where the only question anybody was really interested in asking about the books in question was:  did you like it?

I don’t know if it’s something about the blog in particular, or just that I didn’t notice before–but since I’ve started writing this thing, I find myself not being happy with the books I read more and more often.

Part of that is my source of supply.  I don’t pick my own all that often.  One of the nice things about having worked in publishing for so long–and of having gone to the sort of college that produces absolute phalanxes of people who work for publishing companies–is that I have probably a dozen editors at different houses who know what I’m interested in and love to send it to me.

They especially love to send it to me if it’s something they didn’t acquire themselves and they know I’m going to hate.

I can’t say I actually hate  Andrew Hacker and Claudia Dreifus’s Higher Education? How Colleges Are Wasting Our Money and Failing Our Kids–and What We Can Do About It.

To tell you the truth, there isn’t enough there there to hate.

Okay, I shouldn’t quote Gertrude Stein in this particular instance.  But I have always loved that quote.  And I’ve always thought it said a lot in very few words.

The Hacker and Dreifus book also has very few words, lots of white space and type big enough that I can read it with my distance glasses on.

And it has absolutely nothing to say that you haven’t heard before. 

If you really want a book about the real mission of the university and the place of the liberal arts, the book is Allan Bloom’s The Closing of the American Mind, which is still the best defense of liberal learning in the post-War period.

What this is is a vague, meandering, nudgy sort of thing that makes a lot of points that have been obvious for years–colleges are too expensive and most of them aren’t worth their five figure price tags; the teaching of undergraduates at research universities is piss poor; the endless emphasis on vocational training undermines what true education might be available (if any); big time college sports are a financial and moral sinkhole.

There’s absolutely nothing wrong in all that, except that there’s also no point.  You can find much the same kind of thing elsewhere, and usually it doesn’t take up more of your time than it takes to read a brief article.

If the book holds any interest at all, it’s because of its peculiar myopia about issues of political and ideological bias. 

For instance, it makes a lot of vague references to McCarthyism, and then takes off from there to defend Ward Churchill against the University of Colorado’s attempts to fire him.

To make this defense, the authors not only cite “McCarthyism,” they cite a couple of other cases that have occurred recently, including the attempts of legislators to get a woman fired because she speculated, in a paper, that it wasn’t entirely impossible that we would one day accept pedophilia the way we now accept other sexual orientations we used to abhor.

But the Churchill case is nothing like that.  It’s certainly true that nobody would have checked into his scholarship much if he hadn’t made the comment about “little Eichmanns,” and that the University of Colorado probably wouldn’t have tried to fire him over those research depredations if they didn’t want to fire him first because of his comments.

But Churchill’s research depredations were very real, and not the sort of offhand thing that can be half explained by inattention.  At one point, the man had a book he had written himself published under another name so that he could cite it as a source in a book he was writing under his own.  He faked a Native American ancestry he didn’t have in order to get a job under Affirmative Action provisions that he would never have been offered otherwise.  His academic credentials were so weak, he wouldn’t have made it to the interview if he hadn’t claimed to be an American Indian.

I am perfectly willing to believe that the University of Colorado would not have acted on any of these things if the state legislature hadn’t been up in arms over the “little Eichmanns” comment–but the real scandal there is:  why the hell not?

Certainly if you’re going to stand up for higher standards and a return to the Western tradition, you shouldn’t be defending the retention of a man with no scholarship, no ethics, and no real credentials.

In much the same vein, the authors have nothing to say about campus speech codes, biased orientation sessions meant to indoctrinate entering freshmen into fashionable political pieties, or the near obliteration of alternate political and moral views on some college campuses. 

Towards the end of the book, they actually applaud Notre Dame for its courage in inviting an Islamic cleric to speak on campus when the Bush administration had tagged him as having ties to terrorism.

But no courage was involved in that instance.  Your standard college administration has nothing to fear from the Bush administration, and can actually get quite a lot of mileage out of being targeted by the “fascists.”   Hell, Columbia invited Mahmoud Ahmadinejad.

What’s much more telling about the “courage” of Notre Dame’s administration is the way it acquiesces in organized student attempts to shout down speakers whose views really are unpopular in academia–prolife speakers; speakers opposed to gay marriage; speakers opposed to Affirmative Action.

The rest of the book is, mostly, mush–a lot of vague happy talk about stretching minds and engaging intellects. 

It’s the kind of thing that makes no sense to me when I read it, mostly because I don’t think it’s meant to make any sense.  It’s the kind of thing that oozes from every article by every alumna looking back on her wonderful days on campus, and the kind of thing that makes up most college “mission statements.”

Have I ever said anything about how much I hate mission statements?

Anyway, there’s no point in reading this one.

It goes on the list I call TGIDHTPFI–thank God I didn’t have to pay for it.

Written by janeh

October 26th, 2010 at 4:56 am


Sleeping Grouchy


Okay, before I start this, you have to understand something.  It’s nearly nine o’clock in the morning–in fact, when I look at the computer clock, it’s 8:59–and I just got up.

I never get up this late unless I’ve been out very late the night before.  That’s unusual, but it didn’t happen this time.  I just slept, nonstop, for whatever reason my body felt like doing it this morning. 

And I find this very disorienting.  Never mind the fact that I’m going to start on the caffeine in a couple of minutes, which means that I haven’t had any yet.

At any rate, I was going to start in today on one of those things that just sort of catches my attention–things I write about because I don’t know about them rather than things I write about because I do.

You’re not supposed to do that, but I know from experience that at least under some circumstances it works quite well.   Maybe I should think of it as my own private Socratic method.  Socrates believed that we could “bring out” what we knew but didn’t know we knew–which was pretty much everything–if we only subjected ourselves to the right kind of questioning.

Of course, Socrates also seemed to believe that we possess all the knowledge in the world before birth and then forget it at birth, so there’s that.  It’s the kind of thing I usually have no patience for.

Anyway, what was on my mind to write about today was people who have either been convicted of famous crimes, or been tried for them and not convicted, who are now living among us like ordinary citizens.

I put that badly. 

I mean that there are people–think, for instance, of Mary Bell, or Caril Ann Fugate–who have, pretty certainly, done the crimes they were charged with, but who for some reason did not spend the rest of their lives in jail.  Some of them were acquitted.  Some of them went to jail for a time and were released.

Eventually, they became free people again, and they lived ordinary lives in ordinary places. 

Except, of course, that there is a sense in which they can never live ordinary lives in ordinary places.  There is always the chance that someone, somewhere, will find out who they are.

Mary Bell was married and the mother of children when the British press discovered where she lived and the assumed name she lived under–and proceeded to camp out on her doorstep for days. 

It must have been–as one of the writers I read on the subject said–an interesting mother-daughter talk that followed that one.

It seems to me, however, that outside of the most obvious and stupid of murders–the guy who gets drunk and bashes in the head of his girlfriend’s baby because it’s crying too much; the idiot who shoots up everybody in the convenience store because he doesn’t want to leave witnesses–there is a lot about any murder that cannot and never will be known except to the persons who were there.

And that means to the victim (or victims) and the perpetrators.

In some murders, what can be known is only known by the perpetrators, because the victims never see it coming.

I am, I realize, sort of blithering around with this. 

When I was younger, there was a play that was later made into a movie with Hayley Mills called The Chalk Garden.  In it, a mysterious woman is hired to be the governess to an out of control idiot of a girl, and it is later discovered that this governess is a notorious murderess, convicted in her teens of killing (I think) her stepsister, and now, having served her time, out and in the world again.

We don’t get much from the governess’s point of view in this thing, and we get virtually nothing at all about the crime.  The impression is left that there was some doubt, at the time, if the woman was really guilty.  Or at least guilty in a way that would make her criminally liable.

If I had written it, I would, of course, have concentrated much more on that.  But it is one of those things that I’m not sure I could write credibly from the inside of the character’s head. 

I do know what it feels like to be the only person to know the truth about some incident that everybody else is interested in–well, at least a small subset of everybody else, like a family.

I am not sure if that experience would be similar to the ones I’m talking about.

What went on in Lizzie Borden’s head, all those years after the trial, when she was acquitted but virtually shunned by the entire population of the town she’d grown up in?

If she could have changed her name and moved away to live in anonymity–what would that have been like?  Would she always have worried about being discovered?  What would it have been like if she had been discovered?

We’ve got lots of reports about Lizzie before and after the murders, and lots of speculation about whether she killed them or not–but nobody knows what went on in that house but Lizzie, and that’s especially the case if she did kill them.

Maybe I’m just being idiotic here, and there is nothing much to know.  Maybe these people are just psychopaths, a group I find singularly boring.

But it does seem to me that it’s an interesting circumstance.  And in some cases, there are enough questions about the crime itself that I wish I could get into people’s heads and have them explain it to me.

But I really am blithering this morning, and maybe I should have waited for the caffeine.

I’m going to go do that, and listen to Bach.

And I’ve got a copy of the Sam Harris book on morality, courtesy of Marguerite, for which I thank her.

Maybe I’ll have something to report in the next day or two.

Written by janeh

October 24th, 2010 at 9:35 am


Continuity Girls


In case you read fewer thirties novels than I do–a continuity girl was a person on a film set who kept notes on what every character was wearing, and how their hair and make-up were done, from scene to scene.  This helped stop big problems from happening when you filmed scenes out of sequence.  If Martha was wearing a red dress and carrying a straw handbag when she walked out of the living room to the dining room, she shouldn’t be wearing a blue dress and carrying a leather bag when she finally entered that same dining room.

If you see what I mean.

If you want to catch a really bad example of somebody not paying attention to continuity:  in the first Harry Potter movie, the positions of Harry, Hermione and Percy at the dinner table after the sorting ceremony are one thing when you first see them, and entirely another a few seconds later when the camera has gone off them and supposedly come back on.

The damned thing drives me nuts every time I see it.

The continuity that’s bothering me this morning, however, is the continuity of series characters in mystery novels.

The very minor characters cause one sort of problem–a series character that comes in only every third book or so tends to build up a backstory that the writer can’t always remember.   The most conscientious of writers keep charts and maps to make sure they remember that Susan broke her leg on a keg of ale in Book 3 and Tom was sent to juvenile hall for shoplifting when he was twelve but is now the world’s biggest stickler for honesty.

I’ll admit, right here, to not being one of the most conscientious writers.  Part of my problem has always been that I’m not always sure, when I introduce a minor character, that I ever intend to use him again. 

But the problems caused by characters of this kind are, well, minor.  The real problem with continuing characters in a series comes with the major but secondary ones–the sidekicks, the seconds-in-command.

And these problems come because, given the nature of a series with continuing characters, you can’t leave them out. 

Once a character is established as a vital and important part of the equation of the series as a whole, you need to say something about him every time you write. 

And in any case where that character would ordinarily play a part,  that character must play a part, even if the story itself has nothing for him (or her) to do.

If two detectives always work as partners, if a Detective Inspector is always assisted by a particular detective constable–well, they’d better be there, even if the main character is going to have no actual practical use for them on the case in question.

One of the ways of handling this is to give the unneeded secondary character a subplot, something well removed from the mystery, just to make it feel as if he’s doing something. 

That’s why secondary characters often have home lives or love lives that sound like the notes from some psychiatrist’s case file–think of Barbara in Elizabeth George’s Lynley books.

This sort of thing can get really annoying, and I have had people tell me that they stopped reading one series or another because the whole thing just got too confusing, or unbelievable, or worse.

The problem with secondary series characters is, often, a mirror of the problem with main series characters–there is only so much to say about any one person, at least in a way that will work in a novel.

Real people may live lives of quiet desperation–I always thought Thoreau was wrong about that one, at least in the case of the majority–but they almost never live lives of uninterrupted drama.  Most people don’t want to.  Only some people want the characters in their fiction to–some people must, because soap operas are not losing money.

In general, I have  very little patience with the kind of person who absolutely has to read an entire series in order, at least when it comes to mystery novels.

Series like Harry Potter or The Lord of the Rings do need to be read in order, but mystery novels have discrete and often self-contained plots.  Sleeping Murder and At Bertram’s Hotel are completely comprehensible, and satisfying as mysteries, even if you’ve never read The Body in the Library, or any of the books that came between it and those two at the front.

It’s things like the problem of the secondary characters that make me just a bit sympathetic to the people who need to read everything in order.   If you have read everything in order–or at least read all of it–you know why this perfectly useless person is running around the book getting divorced and remarried every five minutes.

I’ve been trying to think if mysteries are unique among the genres in this regard.  Most romance novels are not parts of series, science fiction novels can be but don’t have to be.  I don’t know enough about westerns to comment. I do know that most of the horror novels I’ve read have been stand alones.

But I’ve talked about this before–there are advantages to series novels.  For one thing, readers prefer them, and once a reader is drawn into the continuing story, he’s likely to come back for more.  Even if a particular novel falls flat, he’s still likely to be back for more.  He knows how good it can get, and he knows that everybody has a bad year.

Series novels also work very well as frames.  If you think of the mystery, as I do, as a kind of literary frame for other things–much the way the form of the sonnet is the frame of what the poet wants to say, but not the point in and of itself of the poem–then series work better than stand alones for any number of reasons.

A novel about the lives of six not very important people in a dying rust belt city is a hard sell.  It’s an easier sell when it’s a mystery novel.  It’s an even easier one when it’s part of a series whose readers are committed to the continuing character of the detective.

Welcome to Precious Blood, still, I think, one of the best of the Gregors.

Eck.  I don’t really know where I’m going with this.  I’m spending my mornings these days orchestrating the debut of a new series detective–in fact, two–and the first book in such a series has to be a lot of things subsequent books do not.  That means I’m spending a lot of my time flailing at things that, if this series sees publication, won’t matter much in a book or two down the line.

But I also find myself being brought up short again by my eternal question–whether it is possible for a genre novel to be good as a novel, and not just as an example of the genre.

On some metalevel, of course, the answer is yes.  I can name half a dozen genre novels that are also good as novels first.

On the ground, though, the question remains–if it is possible that a genre novel can be good as a novel, then genre novels should be good as novels every time.

But that doesn’t happen, and sometimes, novels that are good as genre, that are among the best examples of the genre, aren’t very good as novels at all.

Written by janeh

October 23rd, 2010 at 8:19 am


Typing in the Morning


So, yesterday, I actually wasn’t using a random title for a post.  I was thinking about writing about personification, which was a staple technique in writing in late antiquity and the Middle Ages, but one that sort of disappeared over time.

Personification was the use of dialogue for things like philosophical treatises, in which various qualities–Fortune, say, or Philosophy “herself”–would be presented as a character and speak to the author.

This is the way Boethius’s Consolation of Philosophy is written.  It proceeds as a dialogue between Boethius himself and Philosophy, who is a woman, and the whole thing is interspersed with poetry, sort of like the way, in a musical, the cast will suddenly break out into song.

There are some things I value greatly about the Middle Ages, and some things I think have been badly misrepresented and dismissed without a hearing, but this is not one of them.

I find this technique distracting and annoying, and I’m not sure why it was ever popular.  In Medieval morality plays, the personification of virtues and vices was designed primarily to make a theological message more palatable (and easier to understand) for a largely illiterate audience.

I can’t imagine that illiterate people would have listened to a dramatization of The Consolation of Philosophy, or any of the other half dozen or so serious investigations of the nature of life and fortune that take this form in this extended period.

If anything, I find the form making it more difficult to understand what is being said and more difficult still to retain it, so that confident pronouncements on things I don’t agree with at all–say, that if you have good fortune but later that fortune changes to bad, then there was no point in actively pursuing the good fortune to begin with–sort of zip by before I’ve realized what they mean.

I’ve never really understood the idea that everything is useless and futile unless it lasts forever.   I am, I think, definitely with the better-to-have-loved-and-lost people. 

I was trying to explain this to my younger son the other day.   I went to New York to be a writer, and it worked out.  But even if it hadn’t, I had a good time there, I learned a lot I didn’t know before, I met Bill and got married and had children.   The ride would have been worth it even if the goal hadn’t worked out.

Still, that idea–that what does not last, what fails in any way, is utterly worthless–pervades the thinking in classical Athens, and in Rome, and in the Middle Ages. 

Part of it may be simply the change in sensibility.  As late as the American Revolution, people still thought of death as something that could come at any time.  In the eras before that, all kinds of changes in fortune–loss of property as well as loss of life, loss of freedom due to being conquered by an enemy neighbor–seemed to happen more frequently and with less warning than they do now.

Aristotle said that we should “count no man happy until he is dead,” and I always thought that would make a good quote for the section heading of a Gregor.

I also think it’s a crock.

But as for Sam Harris and his book about morality–well, it’s published by Free Press, and I’ve got people at Free Press, so we’ll know in a few days.

In anticipation, I’m not worried that it will be utilitarianism.  In fact, I largely expect it to be.

But the problem with utilitarianism for me is in its assumptions.  “The greatest good for the greatest number.”  Well, okay.  Define “good,” and then tell me why it’s good. 

Or, for that matter, why I should care.

I have to commend Harris on having figured out that the assertion of relativism won’t get him where he wants to go, and on having made the attempt.

But I do worry that this book is going to be much like Paul Kurtz’s Forbidden Fruit,  so that Harris starts out with a laundry list of things he already finds to be morally right and then tries to plaster proofs all over them–rather the way Medieval theologians began with the assumption that God exists and then threw the kitchen sink at the problem in order to “prove” what they already thought they knew.

I think it would be interesting if, sometime, somebody would approach this project–finding a secular foundation for morality–the right way around. 

I mean if they’d just start with the raw evidence and not with a set of rules they already want to see installed as “moral.” 

For a moment, at the beginning of the Kurtz book, I had hopes that he was going to do it.  But, in no time at all, the search for “common human decencies” that every society everywhere had acknowledged broke down in the face of those very same near-universal moral precepts–the adamant rejection of homosexuality and of what we would call the rights of women; the near-universal rejection of abortion; the absolutely universal assumption of a double standard of sexual behavior between men and women.

In the examination of that kind of thing, I actually got more out of Steven Pinker’s The Blank Slate than I have ever yet gotten out of any modern philosopher.

But, like I said, let’s see.  There’s no point beating up on Harris before I’ve read the book, and I don’t really trust that reviewer to tell me what’s in it. 

That said, I have a pile of mystery novels (Christie and Grimes) and an almost bigger pile of Poirot, Marple and Perry Mason DVDs for the week-end.

I’m going to do that first.

Written by janeh

October 22nd, 2010 at 5:44 am


Persons


Some days, I just get up and everything seems to be calculated to annoy me.

It’s actually fall now, something I’ve been avoiding for a really long time.  But yesterday we had to turn our heat on for the first time, and this morning my office–which is in a sunroom, overlooking the back yard–is cold.

But that isn’t really the problem, of course.  The problem is that I signed online, checked my e-mail, and then went (as usual) straight to Arts and Letters Daily, to find a review of Sam Harris’s new book. 

Right there.  In the middle of everything.

And the review starts:

>>>It used to be a given that religion was the source of all important knowledge. Both the “how” of the universe—what it is like, and how it works—and the “why”—why it exists at all, and why human life has a place in it—were to be answered by referring to religious stories and authorities. With the rise of modernity questions of the first sort were removed from religion’s purview: we think of them now as scientific questions, to be answered by empirical investigation.<<<

Now, let me be fair here.  That’s the reviewer talking, a guy named Troy Jollimore.  I can’t blame Sam Harris for that bit of hash.  And you expect the author to be more knowledgeable than the reviewer.

But still.

Questions about how the world works became matters of scientific–or proto-scientific–investigation over three hundred years before the birth of Christ.  If you don’t believe me, go look at Aristotle’s On Nature.

Nor did the rise of Christianity suddenly return questions of “natural philosophy” to the purview of theology.   Any look through the surviving literature of the European Middle Ages could tell you as much, as could any perusal of, say, one of Norman Cantor’s histories of the period.

Western “philosophers”–in those days, all intellectual inquiry was called “philosophy,” even if it was what we would call “science” today–did practical experimentation in chemistry and biology.  They collected samples from all around the world as men began to travel through it.

If you want to find a time in European history when the preferred explanation for, say, how frogs give birth or how best to grow wheat in France was “God did it” or “pray,” you have to go back to prehistory, to the time before men and women could write.

In the literate West, science–meaning the search for natural explanations for natural phenomena–starts early, much earlier than the Enlightenment. 

If it hadn’t, there wouldn’t have been an Enlightenment.

I know that nobody believes much in learning about intellectual history these days, and I’m not going to go on another rant about how we should be teaching it starting in kindergarten.

And I also know that the myth of the Enlightenment is the founding narrative of a lot of what we do, including most probably the United States.

But do people hear themselves when they spout this kind of stuff?  Do they feel any need whatsoever to make any kind of sense?

Where do they think the Enlightenment came from? 

Do they honestly believe that one day some guy looked up from his Bible, a lightbulb went on over his head, and–eureka!

I’d find it a lot harder to swallow that than I would to swallow the idea that God did it.  God, after all, is supposed to be the creator ex nihilo of all things.  He’s supposed to be good at making something out of nothing.

How Harvey over at the butcher’s shop is supposed to have managed it is another question.

And, now that I think about it, what about alchemy?  Alchemy, like astrology, was assumed in the Middle Ages to be a science.  It was not magic, and it was not theology.  It was an attempt to understand the natural world by natural means.

The problem with both of those things is not that they were mystical or religious, but that they were wrong.

The people of the Middle Ages, the citizens of Greek city states and the Roman Empire, were fully invested in studying the natural world by natural means.   Their means were inadequate, and often wrongheaded, and sometimes–from our more sophisticated point of view–embarrassing.

But they were not religion.

Ack. 

It’s a dismal day, and I have much too much to do.

Tea.

Written by janeh

October 21st, 2010 at 8:25 am

