Tuesday, December 12, 2017

Dreidel: A seemingly foolish game that contains the moral world in miniature

[Also appearing in today's LA Times. Happy first night of Hannukah!]

Superficially, dreidel appears to be a simple game of luck, and a badly designed game at that. It lacks balance, clarity, and (apparently) meaningful strategic choice. From this perspective, its prominence in the modern Hannukah tradition is puzzling. Why encourage children to spend a holy evening gambling, of all things?

This perspective misses the brilliance of dreidel. Dreidel's seeming flaws are exactly its virtues. Dreidel is the moral world in miniature.

For readers unfamiliar with the game, here's a tutorial. You sit in a circle with friends or relatives and take turns spinning a wobbly top, the dreidel. In the center of the circle is a pot of several foil-wrapped chocolate coins, to which everyone has contributed from an initial stake of coins they keep in front of them. If, on your turn, the four-sided top lands on the Hebrew letter gimmel, you take the whole pot and everyone needs to contribute again. If it lands on hey, you take half the pot. If it lands on nun, nothing happens. If it lands on shin, you put one coin in. Then the next player takes a spin.
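For the programmatically inclined, here is a minimal sketch of those rules in Python. The number of players, the starting stake, the ante, and the rounding rule for hey are my own arbitrary choices -- as we'll see below, no two households agree on them -- and the spins are assumed fair.

import random

LETTERS = ["gimmel", "hey", "nun", "shin"]

def play_dreidel(num_players=4, stake=10, rounds=20, ante=1):
    # Everyone antes one coin into the pot from an initial stake.
    coins = [stake - ante] * num_players
    pot = num_players * ante
    for turn in range(rounds):
        player = turn % num_players
        spin = random.choice(LETTERS)  # assumes an unbiased dreidel -- see below
        if spin == "gimmel":           # take the whole pot
            coins[player] += pot
            pot = 0
        elif spin == "hey":            # take half the pot (rounding down -- one house rule among many)
            take = pot // 2
            coins[player] += take
            pot -= take
        elif spin == "shin":           # put one coin in (players may go negative; whether to allow borrowing is disputed)
            coins[player] -= 1
            pot += 1
        # "nun": nothing happens
        if pot == 0:                   # when the pot empties, everyone contributes again
            coins = [c - ante for c in coins]
            pot = num_players * ante
    return coins, pot

print(play_dreidel())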

It all sounds very straightforward, until you actually start to play the game.

The first odd thing you might notice is that although some of the coins are big and others are little, they all count just as one coin in the rules of the game. This is unfair, since the big coins contain more chocolate, and you get to eat your stash at the end.

To compound the unfairness, there is never just one dreidel — each player may bring her own — and the dreidels are often biased, favoring different outcomes. (To test this, a few years ago my daughter and I spun a sample of eight dreidels 40 times each, recording the outcomes. One particularly cursed dreidel landed on shin an incredible 27/40 spins.) It matters a lot which dreidel you spin.
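If you want to quantify just how cursed that dreidel was: under the null hypothesis of a fair four-sided top (shin with probability 1/4, so about 10 expected shins in 40 spins), 27 or more shins is wildly improbable. A quick check using only Python's standard library:

from math import comb

def binomial_tail(k, n, p):
    # P(X >= k) for X ~ Binomial(n, p)
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

print(binomial_tail(27, 40, 0.25))  # on the order of 10^-8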

And the rules are a mess! No one agrees whether you should round up or round down with hey. No one agrees when the game should end or how low you should let the pot get before you all have to contribute again. No one agrees how many coins to start with or whether you should let someone borrow coins if he runs out. You could try to appeal to various authorities on the internet, but in my experience people prefer to argue and employ varying house rules. Some people hoard their coins and favorite dreidels. Others share dreidels but not coins. Some people slowly unwrap and eat their coins while playing, then beg and borrow from wealthy neighbors when their luck sours.

Now you can, if you want, always push things to your advantage — always contribute the smallest coins in your stash, always withdraw the largest coins in the pot when you spin hey, insist on always using what seems to be the "best" dreidel, always argue for rule interpretations in your favor, eat your big coins and use that as a further excuse to contribute only little ones, et cetera. You could do all of this without ever breaking the rules, and you'd probably end up with the most chocolate as a result.

But here's the twist, and what makes the game so brilliant: The chocolate isn't very good. After eating a few coins, the pleasure gained from further coins is minimal. As a result, almost all of the children learn that they would rather enjoy being kind and generous than hoard the most coins. The pleasure of the chocolate doesn't outweigh the yucky feeling of being a stingy, argumentative jerk. After a few turns of maybe pushing only small coins into the pot, you decide you should put a big coin in next time, just to be fair to others and to enjoy being perceived as fair by them.

Of course, it also feels bad always to be the most generous one, always to put in big, take out small, always to let others win the rules arguments, and so forth, to play the sucker or self-sacrificing saint.

Dreidel, then, is a practical lesson in discovering the value of fairness both to oneself and to others, in a context where the rules are unclear and where there are norm violations that aren't rules violations, and where both norms and rules are negotiable, varying from occasion to occasion. Just like life itself, only with mediocre chocolate at stake. I can imagine no better way to spend a holy evening.

Friday, December 08, 2017

Women Have Been Earning 30-34% of Philosophy BAs in the U.S. Since Approximately Forever*

* for values of "forever" ≤ 30 years.

The National Center for Education Statistics has data on the gender of virtually all Bachelor's degree recipients in the U.S. back into the 1980s, publicly available through the IPEDS database. For Philosophy, the earliest available data cover the 1986-1987 academic year. [For methodological details, see note 1].

The percentage of Philosophy Bachelor's degrees awarded to women has been remarkably constant over time -- a pattern not characteristic of other majors, many of which have shown at least a modest increase in the percentage of women since 1987. In the 1986-1987 academic year, women received 33.6% of Philosophy BAs. In the most recent available year (preliminary data), 2015-2016, it was 33.7%. Throughout the period, the percentage never strays outside the band between 29.9% and 33.7%.

I have plotted the trends in the graph below, with Philosophy as the fat red line, including a few other disciplines for comparison: English, History, Psychology, the Biological Sciences, and the Physical Sciences. The fat black line represents all Bachelor's degrees awarded.

[if blurry or small, click to enlarge]

Philosophy is the lowest of these, unsurprisingly to those of us who have followed gender issues in the discipline. (It is not the lowest overall, however: Some of the physical science and engineering majors are as low or lower.) To me, more striking and newsworthy is the flatness of the line.

I also thought it might be worth comparing high-prestige research universities (Carnegie classification: Doctoral Universities, Highest Research Activity) versus colleges with much more of a teaching focus (Carnegie classification: Baccalaureate Colleges, Arts & Sciences focus or Diverse Fields).

Women were a slightly lower percentage of Philosophy BA recipients in the research universities than in the teaching-focused colleges (30% vs. 35%; and yes, p < .001). However, the trends over time were still approximately flat:

For kicks, I thought I'd also check whether my home state of California was any different -- since we'll be seceding from the rest of the U.S. soon (JK!). Nope. Again, a flat line, with women receiving 33% of graduating Philosophy BAs overall.

Presumably, if we went back to the 1960s or 1970s, a higher percentage of philosophy majors would be men. But whatever cultural changes there have been in U.S. society in general and in the discipline of philosophy in particular in the past 30 years haven't moved the dial much on the gender ratio of the philosophy major.

[Thanks to Mike Williams at NCES for help in figuring out how to use the database.]

-----------------------------------------

Note 1: I looked at all U.S. institutions in the IPEDS database, and I included both first and second majors. Before the 2000-2001 academic year, only first major is recorded. I used the major classification 38.01 specifically for Philosophy, excluding 38.00, 38.02, and 38.99. Only people who complete the degree are included in the data. Although gender data are available back to 1980, Philosophy and Religious Studies majors are merged from 1980-1986.

Friday, December 01, 2017

Aiming for Moral Mediocrity

I've been working on this essay off and on for years, "Aiming for Moral Mediocrity". I think I've finally pounded it into circulating shape and I'm ready for feedback.

I have an empirical thesis and a normative thesis. The empirical thesis is that most people aim to be morally mediocre. They aim to be about as morally good as their peers, not especially better, not especially worse. This mediocrity has two aspects. It is peer-relative rather than absolute, and it is middling rather than extreme. We do not aim to be good, or non-bad, or to act permissibly rather than impermissibly, by fixed moral standards. Rather, we notice the typical behavior of people we regard as our peers and we aim to behave broadly within that range. We -- most of us -- look around, notice how others are acting, then calibrate toward so-so.

This empirical thesis is, I think, plausible on the face of it. It also receives some support from two recent subliteratures in social psychology and behavioral economics.

One is the literature on following the (im-)moral crowd. I'm thinking especially of the work of Robert B. Cialdini and Cristina Bicchieri. Cialdini argues that "injunctive norms" (that is, social or moral admonitions) most effectively promote norm-compliant behavior when they align with "descriptive norms" (that is, facts about how people actually behave). People are less likely to litter when they see others being neat, more likely to reuse their hotel towels when they learn that others also do so, and more likely to reduce their household energy consumption when they see that they are using more than their neighbors. Bicchieri argues that people are more likely to be selfish in "dictator games" when they are led to believe that earlier participants had mostly been selfish, and that convincing communities to adopt new health practices like family planning and indoor toilet use typically requires persuading people that their neighbors will also comply. It appears that people are more likely to abide by social or moral norms if they believe that others are also doing so.

The other relevant literature concerns moral self-licensing. A number of studies suggest that after having performed good acts, people are likely to behave less morally well than after performing a bad or neutral act. For example, after having done something good for the environment, people might tend to make more selfish choices in a dictator game. Even just recalling recent ethical behavior might reduce people's intentions to donate blood, money, and time. The idea is that people are more motivated to behave well when their previous bad behavior is salient and less motivated to behave well when their previous good behavior is salient. They appear to calibrate toward some middle state.

One alternative hypothesis is that people aim not for mediocrity but rather for something better than that, though short of sainthood. Phenomenologically, that might be how it seems to people. Most people think that they are somewhat above average in moral traits like honesty and fairness (Tappin and McKay 2017); and maybe then people mostly think that they should more or less stay the course. An eminent ethicist once told me he was aiming for a moral "B+". However, I suspect that most of us who like to think of ourselves as aiming for substantially above-average moral goodness aren't really willing to put in the work and sacrifice required. A close examination of how we actually calibrate our behavior will reveal us wiggling and veering toward a lower target. (Compare the undergraduate who says they're "aiming for B+" in a class but who wouldn't be willing to put in more work if they received a C on the first exam. It's probably better to say that they are hoping for a B+ than that they are aiming for one.)


My normative thesis is that it's morally mediocre to aim for moral mediocrity. Generally speaking, it's somewhat morally bad, but not terribly bad, to aim for the moral middle.

In defending this view, I'm mostly concerned to rebut the charge that it's perfectly morally fine to aim for mediocrity. Two common excuses, which I think wither upon critical scrutiny, are the Happy Coincidence Defense and The-Most-You-Can-Do Sweet Spot. The Happy Coincidence Defense is an attractive rationalization strategy that attempts to justify doing what you prefer to do by arguing that it's also for the moral best -- for example, that taking this expensive vacation now is really the morally best choice because you owe it to your family, and it will refresh you for your very important work, and.... The-Most-You-Can-Do Sweet Spot is a similarly attractive rationalization strategy that relies on the idea that if you tried to be any morally better than you in fact are, you would end up being morally worse -- because you would collapse along the way, maybe, or you would become sanctimonious and intolerant, or you would lose the energy and joie de vivre on which your good deeds depend, or.... Of course it can sometimes be true that by Happy Coincidence your preferences align with the moral best or that you are already precisely in The-Most-You-Can-Do Sweet Spot. But this reasoning is suspicious when deployed repeatedly to justify otherwise seemingly mediocre moral choices.

Another normative objection is the Fairness Objection, which I discussed on the blog last month. Since (by stipulation) most of your peers aren't making the sacrifices necessary for peer-relative moral excellence, it's unfair for you to be blamed for also declining to make such sacrifices. If the average person in your financial condition gives X% to charity, for example, it would be unfair to blame you for not giving more. If your colleagues down the hall cheat, shirk, lie, and flake X amount of the time, it's only fair that you should get to do the same.

The simplest response to the Fairness Objection is to appeal to absolute moral standards. Although some norms are peer-relative, so that they become morally optional if most of your peers fail to comply with them, other norms aren't like that. A Nazi death camp guard is wrong to kill Jews even if that is normal behavior among his peers. More moderately, sexism, racism, ableism, elitism, and so forth are wrong and blameworthy, even if they are common among your peers (though blame is probably also partly mitigated if you are less biased than average). If you're an insurance adjuster who denies or slow-walks important health benefits on shaky grounds because you guess the person won't sue, the fact that other insurance adjusters might do the same in your place is again at best only partly mitigating. It would likely be unfair to blame you more than your peers are blamed; but if you violate absolute moral standards you deserve some blame, regardless of your peers' behavior.

-----------------------------------------

Full length version of the paper here.

As always, comments welcome either by email to me or in the comments field of this post. Please don't feel obliged to read the full paper before commenting, if you have thoughts based on the summary arguments in this post.

[Note: Somehow my final round of revisions on this post was lost and an old version was posted. The current version has been revised in an attempt to recover the lost changes.]

Wednesday, November 22, 2017

Yay Boo Strange Hope

Happy (almost) Thanksgiving (in the U.S.)! I want to share a recent family tradition which might help you through some of those awkward conversations with others around the table. We call it Yay Boo Strange Hope.

The rules:

(1.) Sit in a circle (e.g., around the dinner table).

(2.) Choose a topic. For example: your schoolday/workday, wilderness camping, Star Wars, the current state of academic philosophy.

(3.) Arbitrarily choose someone to go first.

(4.) That person says one good thing about the topic (the Yay), one bad thing (the Boo), one weird or unexpected thing (the Strange), and some wish for the future related to the topic in question (the Hope).

(5.) Interruptions for clarificatory questions are encouraged.

(6.) Sympathetic cheers and hisses are welcome, or brief affirmations like "that stinks" or "I agree!" But others shouldn't take the thread off in their own direction. Keep the focus on the opinions and experiences of the person whose turn it is.

(7.) Repeat with the next person clockwise around the circle until everyone has had a turn.


Some cool things about this game:

* It is modestly successful in getting even monosyllabic teenagers talking a bit. Usually they can muster at least a laconic yay boo strange and hope about their day or about a topic of interest to them.

* It gives quiet people a turn at the center of the conversation, and discourages others from hijacking the thread.

* Yay Boo Strange Hope typically solicits less predictable and more substantive responses than bland questions like, "So what happened at school today?" Typically, you'll hear about at least three different events (the Yay, Boo, and Strange) and one of those events (the Strange) is likely to be novel.

* The Boo gives people an excuse to complain (which most people enjoy) and the Yay forces people to find a bright side even on a topic where their opinion is negative.

* By ending on Hope, each person's turn usually concludes on an up note or a joke.


Origin:

When I was touring Pomona College with my son in the summer of 2016, I overheard another group's tour guide describing something like this game as a weekly ritual among her dormmates. I suspect the Pomona College version differs somewhat from my family's version, since I only partly overheard and our practice has evolved over time. If you know variations of this game, I'd be interested to hear from you in the comments.

Thursday, November 16, 2017

A Moral Dunning-Kruger Effect?

In a famous series of experiments Justin Kruger and David Dunning found that people who scored in the lowest quartile of skill in grammar, logic, and (yes, they tried to measure this) humor tended to substantially overestimate their abilities, rating themselves as a bit above average in these skills. In contrast, people in the top half of ability had more accurate estimations (even tending to underestimate a bit). The average participant in each quartile rated themselves as above average, and the correlation between self-rated skill and measured skill was small.

For example, here's Kruger and Dunning's chart for logic ability and logic scores:


(Kruger & Dunning 1999, p. 1129).

Kruger and Dunning's explanation is that poor skill at (say) logical reasoning not only impairs one's performance at logical reasoning tasks but also impairs one's ability to evaluate one's own performance at logical reasoning tasks. You need to know that affirming the consequent is a logical error in order to realize that you've just committed a logical error in affirming the consequent. Otherwise, you're likely to think, "P implies Q, Q, so P. Right! Hey, I'm doing great!"

Although popular presentations of the Kruger-Dunning effect tend to generalize it to all skill domains, it seems unlikely that it does generalize universally. In domains where evaluating one's success doesn't depend on the skill in question, and instead depends on simpler forms of observation and feedback, one might expect more realistic self-evaluations by novices. (I haven't noticed a clear, systematic discussion of cases where Dunning-Kruger doesn't apply, though Kahneman & Klein 2009 is related; tips welcome.) For example: footraces. I'd wager that people who are slow runners don't tend to think that they are above average in running speed. They might not have perfect expectations; they might show some self-serving optimistic bias (Taylor & Brown 1988), but we probably won't see the almost flat line characteristic of Kruger-Dunning. You don't have to be a fast runner to evaluate your running speed. You just need to notice that others tend to run faster than you. It's not like logic where skill at the task and skill at self-evaluation are closely related.

So... what about ethics? Ought we to expect a moral Dunning-Kruger Effect?

My guess is: yes. Evaluating one's own ethical or unethical behavior is a skill that itself depends on one's ethical abilities. The least ethical people are typically also the least capable of recognizing what counts as an ethical violation and how serious the violation is -- especially, perhaps, when thinking about their own behavior. I don't want to over-commit on this point. Certainly there are exceptions. But as a general trend, this strikes me as plausible.

Consider sexism. The most sexist people tend to be the people least capable of understanding what constitutes sexist behavior and what makes sexist behavior unethical. They will tend either to regard themselves as not sexist or to regard themselves only as "sexist" in a non-pejorative sense. ("Yeah, so what, I'm a 'sexist'. I think men and women are different. If you don't, you're a fool.") Similarly, the most habitual liars might not see anything bad in lying or just assume that everyone else who isn't just a clueless sucker also lies when convenient.

It probably doesn't make sense to think that overall morality can be accurately captured in a single unidimensional scale -- just like it probably doesn't make sense to think that there's one correct unidimensional scale for skill at baseball or for skill as a philosopher or for being a good parent. And yet, clearly some baseball players, philosophers, and parents are better than others. There are great, good, mediocre, and crummy versions of each. I think it's okay as a first approximation to think that there are more and less ethical people overall. And if so, we can at least imagine a rough scale.

With that important caveat, then, consider the following possible relationships between one's overall moral character and one's opinion about one's overall moral character:

Dunning-Kruger (more self-enhancement for lower moral character):

[Note: Sorry for the cruddy-looking images. They look fine in Excel. I should figure this out.]

Uniform self-enhancement (everyone tends to think they're a bit better than they are):

U-shaped curve (even more self-enhancement for the below average):

Inverse U (realistically low self-image for the worst, self-enhancement in the middle, and self-underestimation for the best):
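In case the images don't come through, here is a small Python sketch of one way to draw the four hypotheses. The exact curve shapes are my own illustrative guesses, not data: the x-axis is actual moral character, the y-axis is self-rated moral character, and the dashed diagonal marks perfect self-knowledge.

import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 1, 200)  # actual moral character (0 = worst, 1 = best)

# Illustrative guesses at the four hypothesized self-rating curves
models = {
    "Dunning-Kruger": 0.55 + 0.25 * x,                    # the worst overestimate the most; the best are roughly accurate
    "Uniform self-enhancement": np.clip(x + 0.15, 0, 1),  # everyone inflates by about the same amount
    "U-shaped": 0.70 - 0.55 * x + 0.75 * x**2,            # even more inflation at the low end
    "Inverse U": 0.85 * x + 0.30 * np.sin(np.pi * x),     # accurate at the bottom, inflated in the middle, deflated at the top
}

plt.plot(x, x, "k--", label="perfect self-knowledge")
for name, y in models.items():
    plt.plot(x, y, label=name)
plt.xlabel("actual moral character")
plt.ylabel("self-rated moral character")
plt.legend()
plt.show()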

I don't think we really know which of these models is closest to the truth.

Thursday, November 09, 2017

Is It Perfectly Fine to Aim to be Morally Average?

By perfectly fine I mean: not at all morally blameworthy.

By aiming I mean: being ready to calibrate ourselves up or down to hit the target. I would contrast aiming with settling, which does not necessarily involve calibrating down if one is above target. (For example, if you're aiming for a B, then you should work harder if you get a C on the first exam and ease up if you get an A on the first exam. If you're willing to settle for a B, then you won't necessarily ease up if you happen fortunately to be headed toward an A.)

I believe that most people aim to be morally mediocre, even if they don't explicitly conceptualize themselves that way. Most people look at their peers' moral behavior, then calibrate toward so-so, wanting neither to be among the morally best (with the self-sacrifice that seems to involve) nor among the morally worst. But maybe "mediocre" is too loaded a word, with its negative connotations? Maybe it's perfectly fine, not at all blameworthy, to aim for the moral middle?


Here's one reason you might think so:

The Fairness Argument.

Let's assume (of course it's disputable) that being among the morally best, relative to your peers, normally involves substantial self-sacrifice. It's morally better to donate large amounts to worthy charities than to donate small amounts. It's morally better to be generous rather than stingy with one's time in helping colleagues, neighbors, and distant relatives who might not be your favorite people. It's morally better to meet your deadlines than to inconvenience others by running late. It's morally better to have a small carbon footprint than a medium-size or large one. It's morally better not to lie, cheat, and fudge in all the small (and sometimes large) ways that people tend to do.

To be near the moral maximum in every respect would be practically impossible near-sainthood; but we non-saints could still presumably be somewhat better in many of these ways. We just choose not to be better, because we'd rather not make the sacrifices involved. (See The Happy Coincidence Defense and The-Most-I-Can-Do Sweet Spot for my discussion of a couple of ways of insisting that you couldn't be morally better than you in fact are.)

Since (by stipulation) most of your peers aren't making the sacrifices necessary for peer-relative moral excellence, it's unfair for you to be blamed for also declining to do so. If the average person in your financial condition gives 3% of their income to charity, then it would be unfair to blame you for not giving more. If your colleagues down the hall cheat, shirk, fib, and flake X amount of the time, it's only fair that you get to do the same. Fairness requires that we demand no more than average moral sacrifice from the average person. Thus, there's nothing wrong with aiming to be only a middling member of the moral community -- approximately as selfish, dishonest, and unreliable as everyone else.


Two Replies to the Fairness Argument.

(1.) Absolute standards. Some actions are morally bad, even if the majority of your peers are doing them. As an extreme example, consider a Nazi death camp guard in 1941, who is somewhat kinder to the inmates and less enthusiastic about killing than the average death camp guard, but who still participates in and benefits from the system. "Hey, at least I'm better than average!" is a poor excuse. More moderately, most people (I believe) regularly exhibit small to moderate degrees of sexism, racism, ableism, and preferential treatment of the conventionally beautiful. Even though most people do this, one remains criticizable for it -- that you're typical or average in your degree of bias is at most a mitigator of blame, not a full excuser from blame. So although some putative norms might become morally optional (or "supererogatory") if most of your peers fail to comply, others don't show that structure. With respect to some norms, aiming for mediocrity is not perfectly fine.

(2.) The seeming absurdity of tradeoffs between norm types. Most of us see ourselves as having areas of moral strength and weakness. Maybe you're a warm-hearted fellow, but flakier than average about responding to important emails. Maybe you know you tend to be rude and grumpy to strangers, but you're an unusually active volunteer for good causes in your community. My psychological conjecture is that, in implicitly guiding our own behavior, we tend to treat these tradeoffs as exculpatory or licensing: You forgive yourself for the one in light of the other. You let your excellence in one area justify lowering your aims in another, so that averaging the two, you come out somewhere in the middle. (In these examples, I'm assuming that you didn't spend so much time and energy on the one that the other becomes unfeasible. It's not that you spent hours helping your colleague so that you simply couldn't get to your email.)

Although this is tempting reasoning when you're motivated to see yourself (or someone else) positively, a more neutral judge might tend to find it strange: "It's fine that I insulted that cashier, because this afternoon I'm volunteering for river clean-up." "I'm not criticizable for neglecting Cameron's urgent email because this morning I greeted Monica and Britney kindly, filling the office with good vibes." Although non-consciously or semi-consciously we tend to cut ourselves slack in one area when we think about our excellence in others, when the specifics of such tradeoffs are brought to light, they often don't stand scrutiny.


Conclusion.

It's not perfectly fine to aim merely for the moral middle. Your peers tend to be somewhat morally criticizable; and if you aim to be the same, you too are somewhat morally criticizable for doing so. The Fairness Argument doesn't work as a general rule (though it may work in some cases). If you're not aiming for moral excellence, you are somewhat morally blameworthy for your low moral aspirations.

[image source]

Thursday, November 02, 2017

Two Roles for Belief Attribution

Belief attribution, both in philosophy and in ordinary language, normally serves two different types of role.

One role is predicting, tracking, or reporting what a person would verbally endorse. When we attribute belief to someone we are doing something like indirect quotation, speaking for them, expressing what we think they would say. This view is nicely articulated in (the simple versions of) the origin-myths of belief talk in the thought experiments of Wilfrid Sellars and Howard Wettstein, according to which belief attribution mythologically evolves out of a practice of indirect quotation or imagining interior analogues of outward speech. The other role is predicting and explaining (primarily) non-linguistic behavior -- what a person will do, given their background desires (e.g., Dennett 1987; Fodor 1987; Andrews 2012).

We might call the first role testimonial, the second predictive-explanatory. In adult human beings, when all goes well, the two coincide. You attribute to me the belief that class starts at 2 pm. It is true both that I would say "Class starts at 2 pm" and that I would try to show up for class at 2 pm (assuming I want to attend class).

But sometimes the two roles come apart. For example, suppose that Ralph, a philosophy professor, sincerely endorses the statement "women are just as intelligent as men". He will argue passionately and convincingly for that claim, appealing to scientific evidence, and emphasizing how it fits the egalitarian and feminist worldview he generally endorses. And yet, in his day-to-day behavior Ralph tends not to assume that women are very intellectually capable. It takes substantially more evidence, for example, to convince him of the intelligence of an essay or comment by a woman than a man. When he interacts with cashiers, salespeople, mechanics, and doctors, he tends to assume less intelligence if they are women than if they are men. And so forth. (For more detailed discussion of these types of cases, see here and here.) Or consider Kennedy, who sincerely says that she believes money doesn't matter much, above a certain basic income, but whose choices and emotional reactions seem to tell a different story. When the two roles diverge, should belief attribution track the testimonial or the predictive-explanatory? Both? Neither?

Self-attributions of belief are typically testimonial. If we ask Ralph whether he believes that women and men are equally intelligent, he would presumably answer with an unqualified yes. He can cite the evidence! If he were to say that he doesn't really believe that, or that he only "kind of" believes it, or that he's ambivalent, or that only part of him believes it, he risks giving his conversational partner the wrong idea. If he went into detail about his spontaneous reactions to people, he would probably be missing the point of the question.

On the other hand, consider Ralph's wife. Ralph comes home from a long day, and he finds himself enthusiastically talking to his wife about the brilliant new first-year graduate students in his seminar -- Michael, Nestor, James, Kyle. His wife asks, what about Valery and Svitlana? [names selected by this random procedure] Ah, Ralph says, they don't seem quite as promising, somehow. His wife challenges him: Do you really believe that women and men are equally intelligent? It sure doesn't seem that way, for all your fine, egalitarian talk! Or consider what Valery and Svitlana might say, gossiping behind Ralph's back. With some justice, they agree that he doesn't really believe that women and men are equally intelligent. Or consider Ralph many years later. Maybe after a long experience with brilliant women as colleagues and intellectual heroes, he has left his implicit prejudice behind. Looking back on his earlier attitudes, his earlier evaluations and spontaneous assumptions, he can say: Back then, I didn't deep-down believe that women were just as smart as men. Now I do believe that. Not all belief attribution is testimonial.

It is a simplifying assumption in our talk of "belief" that these two roles of belief attribution -- the testimonial and the predictive-explanatory -- converge upon a single thing, what one believes. When that simplifying assumption breaks down, something has to give, and not all of our attributional practices can be preserved without modification.

[This post is adapted from Section 6 of my paper in draft, "The Pragmatic Metaphysics of Belief"]

[HT: Janet Levin.]

[image source]

Tuesday, October 31, 2017

Rationally Speaking: Weird Ideas and Opaque Minds

What a pleasure and an honor to have been invited back to Julia Galef's awesome podcast, Rationally Speaking!

If you don't know Rationally Speaking, check it out. The podcast weaves together ideas and guests from psychology, philosophy, economics, and related fields; and Julia has a real knack for the friendly but probing question.

In this episode, Julia and I discuss the value of truth, daringness, and wonder as motives for studying philosophy; the hazards of interpreting other thinkers too charitably; and our lack of self-knowledge about the stream of conscious experience.

Thursday, October 26, 2017

In 25 Years, Your Employer Will Directly Control Your Moods

[Edit Oct. 28: After discussion with friends and commenters in social media, I now think that the thesis should be moderated in two ways. First, before direct mood control becomes common in the workplace, it probably first needs to become voluntarily common at home; and thus it will probably take more than 25 years. Second, it seems likely that in many (most?) cases the direct control will remain in the employee's hands, though there will likely be coercive pressure from the employer to use it as the employer expects. (Thanks to everyone for their comments!)]

Here's the argument:

(1.) In 25 years, employers will have the technological capacity to directly control their employees' moods.

(2.) Employers will not refrain from exercising that capacity.

(3.) Most working-age adults will be employees.

(4.) Therefore, in 25 years, most working-age adults will have employers who directly control their moods.

The argument is valid in the sense that the conclusion (4) follows if all of the premises are true.

Premise 1 seems plausible, given current technological trajectories. Control could be either pharmacological or via direct brain stimulation. Pharmacological control could, for example, be through pills that directly influence your mood, energy levels, ability to concentrate, feeling of submissiveness, or passion for the type of task at hand. Direct brain stimulation could be through a removable TMS helmet that magnetically stimulates and suppresses neural activity in different brain regions, or with some more invasive technology. McDonald's might ask its cashiers to tweak their dials toward perky friendliness. Data entry centers might ask their temp workers to tweak their dials toward undistractable focus. Brothels might ask their strippers to tweak their dials toward sexual arousal.

Contra Premise 1, society might collapse, of course, or technological growth could stall or proceed much more slowly. If it's just slower, then we can replace "25 years" with 50 or 100 and retain the rest of the argument. It seems unlikely that moods are too complex or finicky to be subject to fairly precise technological control, given how readily they can be influenced by low-tech means.

I don't know to what extent people in Silicon Valley, Wall Street, and elite universities already use high-tech drugs to enhance alertness, energy, and concentration at work. That might already be a step down this road. Indeed, coffee might partly be seen this way too, especially if you use it to give your all to work, and then collapse in exhaustion when the caffeine wears off and you arrive home. My thought is that in a few decades the interventions might be much more direct, effective, and precisely targeted.

Premise 2 also seems plausible, given the relative social power of employers vs. employees. As long as there's surplus labor and a scarcity of desirable jobs, then employers will have some choice about whom to hire. If Starbucks has a choice between Applicant A who is willing to turn up the perky-friendly dial and otherwise similar Applicant B who is not so willing, then they will presumably tend to prefer Applicant A. If the Silicon Valley startup wants an employee who will crank out intense 16-hour days one after the next, and the technology is available for people to do so by directly regulating their moods, energy levels, focus, and passion, then the people who take that direct control, for their employers' benefit, will tend to win the competition for positions. If Stanford wants to hire the medical researcher who is publishing article after article, they'll find the researcher who dialed up her appetite for work and dialed down everything else.

Employees might yield control directly to the employer: The TMS dials might be in the boss's office, or the cafeteria lunch might include the pharmacological cocktail of the day. Alternatively, employees might keep their hands on the dial themselves, but experience substantial pressure to manipulate it in the directions expected by the employer. If that pressure is high enough and effective enough, then it comes to much the same thing. (My guess is that lower-prestige occupations (the majority) would yield control directly to the employer, while higher-prestige occupations would retain the sheen of self-control alongside very effective pressure to use that "self-control" in certain ways.)

Contra Premise 2, (a.) collective bargaining might prevent employers from successfully demanding direct mood control; or (b.) governmental regulations might do so; or (c.) there might be a lack of surplus labor.

Rebuttal to (a): The historical trend recently, at least in the U.S., has been against unionization and collective bargaining, though I guess that could change.

Rebuttal to (b): Although government regulations could forbid certain drugs or brain technologies, if there's enough demand for those drugs or technologies, employees will find ways to use them (unless enforcement gets a lot of resources, as in professional sports). Government regulations could specifically forbid employers from requiring that their employees use certain technologies, while permitting such technologies for private use. (No TMS helmets on the job.) But enforcement might again be difficult; and private use vs. use as an employee is a permeable line for the increasing number of jobs that involve working outside of a set time and location. Also, it's easier to regulate a contractual demand than an informal de facto demand. Presumably many companies could say that of course they don't require their employees to use such technologies. It's up to the employee! But if the technology delivers as promised, the employees who "voluntarily choose" to have their moods directly regulated will be more productive and otherwise behave as the company desires, and thus be more attractive to retain and promote.

Rebuttal to (c): At present there's no general long-term trend toward a shortage of labor; and at least for jobs seen as highly desirable, there will always be more applicants than available positions.

Premise 3 also seems plausible, especially on a liberal definition of "employee". Most working-age adults (in Europe and North America) are currently employees of one form or another. That could change substantially with the "gig economy" and more independent contracting, but not necessarily in a way that takes the sting out of the main argument. Even if an Uber driver is technically not an employee, the pressures toward direct mood control for productivity ought to be similar. Likewise for computer programmers and others who do piecework as independent contractors. If anything, the pressures may be higher, with less security of income and fewer formal workplace regulations.

Thinking about Premises 1-3, I find myself drawn to the conclusion that my children's and grandchildren's employers are likely to have a huge amount of coercive control over their moods and passions.

-------------------------------------

Related:

"What Would (or Should) You Do with Administrator Access to Your Mind?" (guest post by Henry Shevlin, Aug 16, 2017).

"Crash Space" (a short story by R. Scott Bakker for Midwest Studies in Philosophy).

"My Daughter's Rented Eyes" (Oct 11, 2016).

[image source: lady-traveler, creative commons]

Thursday, October 19, 2017

Practical and Impractical Advice for Philosophers Writing Fiction

Hugh D. Reynolds has written up a fun, vivid summary of my talk at Oxford Brookes last spring, on fiction writing for philosophers!

-----------------------------------

Eric Schwitzgebel has a pleasingly liberal view of what constitutes philosophy. A philosopher is anyone wrestling with the “biggest picture framing issues” of... well, anything.

In a keynote session at the Fiction Writing for Philosophers Workshop that was held at Oxford Brookes University in June 2017, Schwitzgebel, Professor of Philosophy at the University of California, Riverside, shared his advice–which he stated would be both practical and impractical.

Schwitzgebel tells us of a leading coiffeur who styles himself as a “Philosopher of Hair”. We laugh – but there’s something in this – the vagary, the contingency in favoured forms of philosophical output. And it’s not just hairdressers that threaten to encroach upon the Philosophy Department’s turf. Given that the foundational issues in any branch of science or art are philosophical in nature, it follows that most people “doing” philosophy today aren’t professional philosophers.

There are a host of ways one could go about doing philosophy, but of late a consensus has emerged amongst those that write articles for academic journals: the only proper way to “do” philosophy is by writing articles for academic journals. Is it time to re-stock the tool shed? Philosophical nuts come in all shapes and sizes; yet contemporary attempts to crack them are somewhat monotone.

As Schwitzgebel wrote in a Los Angeles Times op-ed piece:

Too exclusive a focus on technical journal articles excludes non-academics from the dialogue — or maybe, better said, excludes us philosophers from non-academics’ more important dialogue.

[Hugh's account of my talk continues here.]

-----------------------------------

Thanks also to Helen De Cruz for setting up the talk and to Skye Cleary for finding a home for Hugh's account on the APA blog.

[image detail from APA Blog]

Tuesday, October 17, 2017

Should You Referee the Same Paper Twice, for Different Journals?

Uh-oh, it happened again. That paper I refereed for Journal X a few months ago -- it's back in my inbox. Journal X rejected it, and now Journal Y wants to know what I think. Would I be willing to referee it for Journal Y?

In the past, I've tended to say no if I had previously recommended rejection, yes if I had previously recommended acceptance.

If I'd previously recommended rejection, I've tended to reason thus: I could be mistaken in my negative view. It would be a disservice both to the field in general and to the author in particular if a single stubborn referee prevented an excellent paper from being published by rejecting it again and again from different journals. If the paper really doesn't merit publication, then another referee will presumably reach the same conclusion, and the paper will be rejected without my help.

If I'd previously recommended acceptance (or encouraging R&R), I've tended to just permit myself to think that the other journal's decision was probably the wrong call, and that it does no harm to the field or to the author for me to serve as referee again to help this promising paper find the home it deserves.

I've begun to wonder whether I should just generally refuse to referee the same paper more than once for different journals, even in positive cases. Maybe if everyone followed my policy, that would overall tend to harm the field by skewing the referee pool too much toward the positive side?

I could also imagine arguments -- though I'm not as tempted by them -- that it's fine to reject the same paper multiple times from different journals. After all, it's hard for journals to find expert referees, and if you're confident in your opinion, you might as well share it widely and save everyone's time.

I'd be curious to hear about others' practices, and their reasons for and against.

(Let's assume that anonymity isn't an issue, having been maintained throughout the process.)

[Cross-posted at Daily Nous]

Monday, October 16, 2017

New Essay in Draft: Kant Meets Cyberpunk

Abstract:

I defend a how-possibly argument for Kantian (or Kant*-ian) transcendental idealism, drawing on concepts from David Chalmers, Nick Bostrom, and the cyberpunk subgenre of science fiction. If we are artificial intelligences living in a virtual reality instantiated on a giant computer, then the fundamental structure of reality might be very different than we suppose. Indeed, since computation does not require spatial properties, spatiality might not be a feature of things as they are in themselves but instead only the way that things necessarily appear to us. It might seem unlikely that we are living in a virtual reality instantiated on a non-spatial computer. However, understanding this possibility can help us appreciate the merits of transcendental idealism in general, as well as transcendental idealism's underappreciated skeptical consequences.

Full essay here.

As always, I welcome comments, objections, and discussion either as comments on this post or by email to my UCR email address.

Thursday, October 12, 2017

Truth, Dare, and Wonder

According to Nomy Arpaly and Zach Barnett, some philosophers prefer Truth and others prefer Dare. I love the distinction. It helps us see an important dynamic in the field. But it's not exhaustive. I think there are also Wonder philosophers.

As I see the distinction, Truth philosophers sincerely aim to present the philosophical truth as they see it. They tend to prefer modest, moderate, and commonsensical positions. They tend to recognize the substantial truth in multiple different perspectives (at least once they've been around long enough to see the flaws in their youthful enthusiasms), and thus tend to prefer multidimensionality and nuance. Truth philosophers would rather be boring and right than interesting and wrong.

Dare philosophers reach instead for the bold and unusual. They want to explore the boundaries of what can be defended. They're happy for the sake of argument to champion unusual positions that they might not fully believe, if those positions are elegant, novel, fun, contrarian, or if they think the positions have more going for them than is generally recognized. Dare philosophers sometimes treat philosophy like a game in which the ideal achievement is the breathtakingly clever defense of a position that others would have thought to be patently absurd.

There's a familiar dynamic that arises from their interaction. The Dare philosopher ventures a bold thesis, cleverly defended. ("Possible worlds really exist!", "All matter is conscious!", "We're morally obliged to let humanity go extinct!") If the defense is clever enough, so that a substantial number of readers are tempted to think "Wait, could that really be true? What exactly is wrong with the argument?" then the Truth philosopher steps in. The Truth philosopher finds the holes and presuppositions in the argument, or at least tries to, and defends a more seemingly sensible view.

This Dare-and-Truth dynamic is central to the field and good for its development. Sometimes there's more truth in the Dare positions than one would have thought, and without the Dare philosophers out there pushing the limits, seeing what can be said in defense of the seemingly absurd, then as a field we wouldn't appreciate those positions as vividly as we might. Also, I think, there's something intrinsically valuable about exploring the boundaries of philosophical defensibility, even if the positions explored turn out to be flatly false. It's part of the magnificent glory of life on Earth that we have fiendishly clever panpsychists and modal realists in our midst.

Now consider Wonder.

Why study philosophy? I mean at a personal level. Personally, what do you find cool, interesting, or rewarding about philosophy? One answer is Truth: Through philosophy, you discover answers to some of the profoundest and most difficult questions that people can pose. Another answer is Dare: It's fun to match wits, push arguments, defend surprising theses, win the argumentative game (or at least play to a draw) despite starting from a seemingly indefensible position. Both of those motivations speak to me somewhat. But I think what really delights me more than anything else in philosophy is its capacity to upend what I think I know, its capacity to call into question what I previously took for granted, its capacity to cast me into doubt, confusion, and wonder.

Unlike the Dare philosopher, the Wonder philosopher is guided by a norm of sincerity and truth. It's not primarily about matching wits and finding clever arguments. Unlike the Truth philosopher, the Wonder philosopher has an affection for the strange and seemingly wrong -- and is willing to push wild theses to the extent they suspect that those theses, wonderfully, surprisingly, might be true.

But in the Dare-and-Truth dynamic of the field, the Wonder philosopher can struggle to find a place. Bold Dare articles and sensible Truth articles both have a natural home in the journals. But "whoa, I wonder if this weird thing might be true?" is a little harder to publish.

Probably no one is pure Truth, pure Dare, or pure Wonder. We're all a mix of the three, I suspect. Thus, one approach is to leave Wonder out of your research profile: Find the Truth, where you can, publish that, and leave Wonder for your classroom teaching and private reading. Defend the existence of moderate naturalistically-grounded moral truths in your published papers; read Zhuangzi on the side.

Still, there are a few publishing strategies for Wonder philosophers. Here are four:

(1.) Find a Dare-like position that you really do sincerely endorse on reflection, and defend that -- optionally with some explicit qualifications indicating that you are exploring it only as a possibility.

(2.) Explicitly argue that we should invest a small but non-trivial credence in some Dare-like position -- for example, because the Truth-type arguments against it aren't fully compelling.

(3.) Find a Truth-like view that generates Wonder if it's true. For example, defend some form of doubt about philosophical method or about the extent of our self-knowledge. Defend the position on sensible, widely acceptable grounds; and then sensibly argue that one possible consequence is that we don't know some of the things that we normally take for granted that we do know.

(4.) Write about historical philosophers with weird and wonderful views. This gives you a chance to explore the Wonderful without committing to it.

In retrospect, I think one unifying theme in my disparate work is that it fits under one of these four heads. Much of my recent metaphysics fits under (1) or (2) (e.g., here, here, here). My work on belief and introspection mostly fits under (3) (with some (1) in my bolder moments): We can't take for granted that we have the handsome beliefs (e.g., "the sexes are intellectually equal") that we think we do, or that we have the moral character or types of experience that we think we do. And my interest in Zhuangzi and some of the stranger corners of early introspective psychology fits under (4).

Friday, October 06, 2017

Do Philosophy Professors Tend to Come from Socially Elite Backgrounds?

To judge from the examples we use in our essays, we philosophers are a pretty classy bunch. Evidently, philosophers tend to frequent the theater, delight in expensive wines, enjoy novels by George Eliot, and regret owning insufficiently many boats. Ah, the life of the philosopher, full of deep thoughts about opera while sipping Château Latour and lingering over 19th-century novels on your yacht!

Maybe it's true that philosophers typically come from wealthy or educationally elite family backgrounds? Various studies suggest that lower-income students and first-generation college students in the U.S. and Britain are more likely to choose what are sometimes perceived as lower risk, more "practical" majors like engineering, the physical sciences, and education, than they are to choose arts and humanities majors.

To explore this question, I requested data from the National Science Foundation's Survey of Earned Doctorates. The SED collects demographic and other data from PhD recipients from virtually all accredited universities in the U.S., typically with response rates over 90%.

I requested data on two relevant SED questions:

  • What is the highest educational attainment of your mother and father?

and also, since starting at community college is generally regarded as a less elite educational path than going directly from high school to a four-year university,

  • Did you earn college credit from a community or two-year college?

Before you read on... any guesses about the results?


Community college attendance.

Philosophy PhD recipients [red line below] were less likely than PhD recipients overall [black line] to have attended community college, but philosophers might actually be slightly more likely than other arts and humanities majors to have attended community college [blue line]:

[click picture for clearer image]

[The apparent jump from 2003 to 2004 is due to a format change in the question, from asking the respondent to list all colleges attended (2003 and earlier) to asking the yes or no question above (2004 and after).]

Merging the 2004-2015 data for analysis, 17% of philosophy PhD recipients had attended community college, compared to 15% of other arts and humanities PhDs and 19% of PhDs overall. Pairwise comparisons: philosophy 696/4107 vs. arts & humanities overall (excl. phil.) 7051/45966 (z = 2.7, p = .006); vs. all PhD recipients (excl. phil.) 69958/372985 (z = -3.0, p = .003).
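Those counts are enough to reproduce the pairwise comparisons. Here is a minimal sketch of the standard pooled two-proportion z-test, using only Python's standard library (which, on my reading, is the test that matches the reported values):

from math import sqrt, erfc

def two_proportion_z(x1, n1, x2, n2):
    # Pooled two-proportion z-test with a two-sided p-value
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    return z, erfc(abs(z) / sqrt(2))

# Philosophy PhDs with community college credit vs. other arts & humanities PhDs
print(two_proportion_z(696, 4107, 7051, 45966))  # roughly z = 2.7, two-sided p = .006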

The NSF also sent me the breakdown by race, gender, and ethnicity. I found no substantial differences by gender. Non-Hispanic white philosophy PhD recipients may have been a bit less likely to have attended community college than the other groups (17% vs. 21%, z = -2.2, p = .03) -- actually a somewhat smaller effect size than I might have predicted. (Among PhD recipients as a whole, Asians were a bit less likely (14%) and Hispanics [any race] a bit more likely (25%) to have attended community college than whites (20%) and blacks (19%).)

In sum, as measured by rates of community college attendance, philosophers' educational background is only a little more elite than that of PhD recipients overall and might be slightly less elite, on average, than that of PhD recipients in the other arts and humanities.


    Parental Education.

    The SED divides parental education levels into four categories: high school or less, some college, bachelor's degree, or advanced degree.

    Overall, recipients reported higher education levels for their fathers (35% higher degree, 25% high school or less [merging 2010-2015]) than for their mothers (25% and 31% respectively). Interestingly, women PhD recipients reported slightly higher levels of maternal education than did men, while women and men reported similar levels of paternal education, suggesting that a mother's education is a small specific predictor of her daughter's educational attainment. (Among women PhD recipients [in all fields, 2010-2015], 27% report their mothers having a higher degree and 29% report high school or less; for men the corresponding numbers are 24% and 33%.)

    Philosophers report higher levels of parental education than do other PhD recipients. In 2010-2015, 45% of philosophy PhD recipients reported having fathers with higher degrees and 33% reported having mothers with higher degrees, compared to 43% and 31% in the arts and humanities generally and 35% and 25% among all PhD recipients (philosophers' fathers 1129/2509 vs. arts & humanities' fathers (excl. phil.) 11110/26064, z = 2.3, p = .02; philosophers' mothers 817/2512 vs. a&h mothers 8078/26176, z = 1.7, p = .09). Similar trends for earlier decades suggest that the small difference between philosophy and the remaining arts and humanities is unlikely to be chance.

    [Figure: parental education of PhD recipients -- philosophy, other arts and humanities, and all fields]

    Although philosophy has a higher percentage of men among recent PhDs (about 72%) than do most other disciplines outside of the physical sciences and engineering, this fact does not appear to explain the pattern. Limiting the data either to only men or only women, the same trends remain evident.

    Recent philosophy PhD recipients are also disproportionately non-Hispanic white (about 85%) compared to most other academic disciplines that do not focus on European culture. It is possible that this explains some of why parental educational attainment is higher among philosophy PhDs than among PhDs in other areas. For example, limiting the data to only non-Hispanic whites eliminates the difference in parental educational attainment between philosophy and the other arts and humanities: 46% of both recent philosophy PhDs and other arts and humanities PhDs report fathers with higher degrees, and 34% of both groups report mothers with higher degrees. (Among all non-Hispanic white PhD recipients, it's 41% and 31% respectively.)

    Unsurprisingly, parental education is much higher in general among PhD recipients than in the U.S. population overall: Approximately 12% of people over the age of 25 in the US have higher degrees (roughly similar for all age groups, including the age groups that would be expected of the parents of recent PhD recipients).

    In sum, the parents of PhD recipients in philosophy tend to have somewhat higher educational attainment than the parents of PhD recipients overall and slightly higher educational attainment than the parents of PhD recipients in the other arts and humanities. However, much of this difference may be explainable by the overrepresentation of non-Hispanic whites within philosophy, rather than by a field-specific factor.


    Conclusion.

    Although PhD recipients in general tend to come from more educationally privileged backgrounds than do people who do not earn PhDs, philosophy PhD recipients do not appear to come from especially elite academic backgrounds, compared to their peers in other departments, despite our field's penchant for highbrow examples.

    -----------------------------------------

    ETA: Raw data here.

    ETA2: On my public Facebook link to this post, Wesley Buckwalter has emphasized that not all philosophy PhDs become professors. Of course that is true, though it looks like a majority of philosophy PhDs do attain permanent academic posts within five years of completion (see here). If it were the case that people with community college credit or with lower levels of parental education were substantially less likely than others to become professors even after completing the PhD, then that would undermine the inference from these data about PhD recipients to conclusions about philosophy professors in general.

    Monday, September 25, 2017

    How to Build an Immaterial Computer

    I'm working on a paper, "Kant Meets Cyberpunk", in which I'll argue that if we are living in a simulation -- that is, if we are conscious AIs living in an artificial computational environment -- then there's no particularly good reason to think that the computer that is running our simulation is a material computer. It might, for example, be an immaterial Cartesian soul. (I do think it has to be a concrete, temporally existing object, capable of state transitions, rather than a purely abstract entity.)

    Since we normally think of computers as material objects, it might seem odd to suppose that a computer could be composed of immaterial soul-stuff. However, the well-known philosopher and theorist of computation Hilary Putnam has remarked that there's nothing in the theory of computation that requires that computers be made of material substances (1965/1975, pp. 435-436). To support this idea, I want to construct an example of an immaterial computer -- which might be fun or useful even independently of my project concerning Kant and the simulation argument.

    --------------------------

    Standard computational theory goes back to Alan Turing (1936). One of its most famous results is this: Any problem that can be solved purely algorithmically can in principle be solved by a very simple system. Turing imagined a strip of tape, of unlimited length in at least one direction, with a read-write head that can move back and forth along the tape, reading alphanumeric characters written on that tape and then erasing them and writing new characters according to simple if-then rules. In principle, one could construct a computer along these lines -- a "Turing machine" -- that, given enough time, has the same ability to solve computational problems as the most powerful supercomputer we can imagine.

    Now, can we build a Turing machine, or a Turing machine equivalent, out of something immaterial?

    For concreteness, let's consider a Cartesian soul [note 1]: It is capable of thought and conscious experience. It exists in time, and it has causal powers. However, it does not have spatial properties like extension or position. To give it full power, let's assume it has perfect memory. This need not be a human soul. Let's call it Angel.

    A proper Turing machine requires the following:

  • a finite, non-empty set of possible states of the machine, including a specified starting state and one or more specified halting states;
  • a finite, non-empty set of symbols, including a specified blank symbol;
  • the capacity to move a read/write head "right" and "left" along a tape inscribed with those symbols, reading a symbol inscribed at whatever position the head occupies; and
  • a finite transition function that specifies, given the machine's current state and the symbol currently beneath its read/write head, a new state to be entered and a replacement symbol to be written in that position, plus an instruction to then move the head either right or left.
    A Cartesian soul ought to be capable of having multiple states. We might suppose that Angel has moods, such as bliss. Perhaps he can be in any one of several discrete states along an interval from sad to happy. Angel’s initial state might be the most extreme sadness and Angel might halt only at the most extreme happiness.

    Although we normally think of an alphabet of symbols as an alphabet of written symbols, symbols might also be imagined. Angel might imagine a number of discrete pitches from the A three octaves below middle C to the A three octaves above middle C. Middle C might be the blank symbol.

    Instead of a physical tape, Angel thinks of the integers. Instead of having a read-write head that moves right and left in space, Angel thinks of adding or subtracting one from a running total. We can populate the "tape" with symbols using Angel's perfect memory: Angel associates 0 with one pitch, +1 with another pitch, +2 with another pitch, and so forth, for a finite number of specified associations. All unspecified associations are assumed to be middle C. Instead of a read-write head starting at a spatial location on a tape, Angel starts by thinking of 0 and recalling the pitch that 0 is associated with. Instead of the read-write head moving right to read the next spatially adjacent symbol on the tape, Angel adds one to his running total and thinks of the pitch that is associated with the updated running total. Instead of moving left, he subtracts one. Thus, Angel's "tape" is a set of memory associations like that in the figure below, where at some point the specific associations run out and middle C is assumed from there on to infinity.

    The transition function can be understood as a set of rules of this form: If Angel is in such and such a state (e.g., 23% happy) and is "reading" such and such a note (e.g., B2), then Angel should "write" such-and-such a note (e.g., G4), enter such-and-such a new state (e.g., 52% happy), and either add or subtract one from his running count. We rely on Angel's memory to implement the writing and reading: To "write" G4 when his running count is +2 is to commit to memory the idea that next time the running count is +2 he will "read" – that is, actively recall – the symbol G4 (instead of the B2 he previously associated with +2).
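
    To make the construction concrete, here is a minimal sketch in Python of a Turing-machine simulator along these lines: the "tape" is a mapping from integers to remembered pitches with middle C as the default, the head position is a running count, and the machine states are moods. The particular moods, pitches, and transition rules are illustrative stand-ins, not anything specified above.

    BLANK = "MiddleC"   # the blank symbol: middle C

    def run(tape, rules, state, halting, position=0, max_steps=10_000):
        """Run a Turing machine whose 'tape' is a dict from integers to symbols."""
        for _ in range(max_steps):
            if state in halting:
                return tape, state
            symbol = tape.get(position, BLANK)            # "read": recall the pitch at this count
            write, state, move = rules[(state, symbol)]   # look up the transition rule
            tape[position] = write                        # "write": commit a new association to memory
            position += 1 if move == "right" else -1      # add or subtract one from the running count
        raise RuntimeError("no halting state reached")

    # Toy example: starting sad at count 0, overwrite every non-blank pitch with G4,
    # then become happy (halt) upon reading the first blank (middle C).
    rules = {
        ("sad", "B2"):  ("G4", "sad", "right"),
        ("sad", "D3"):  ("G4", "sad", "right"),
        ("sad", BLANK): (BLANK, "happy", "right"),
    }
    tape, final_state = run({0: "B2", 1: "D3"}, rules, state="sad", halting={"happy"})
    print(tape, final_state)   # {0: 'G4', 1: 'G4', 2: 'MiddleC'} happy

    Nothing in the loop depends on spatial properties: all the "tape" requires is a memory of which pitch goes with which integer, plus the ability to update those associations.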

    As far as I can tell, Angel is a perfectly fine Turing machine equivalent. If standard computational theory is correct, he could execute any computational task that any ordinary material computer could execute. And he has no properties incompatible with being an immaterial Cartesian soul as such souls are ordinarily conceived.

    --------------------------

    [Note 1] I attribute moods and imaginings to this soul, which Descartes believes arise from the interaction of soul and body. On my understanding of Descartes, such things are possible in souls without bodies, but if necessary we could change to more purely intellectual examples. I am also bracketing Descartes' view that the soul is not a "machine", which appears to depend on commitment to a view of machines as necessarily material entities (Discourse, part 5).

    --------------------------

    Related:

    Kant Meets Cyberpunk (blogpost version, Jan 19, 2012)

    The Turing Machines of Babel (short story in Apex Magazine, July 2017)

    Tuesday, September 19, 2017

    New Paper in Draft: The Insularity of Anglophone Philosophy: Quantitative Analyses

    by Eric Schwitzgebel, Linus Ta-Lun Huang, Andrew Higgins, and Ivan Gonzales-Cabrera

    Abstract:

    We present evidence that mainstream Anglophone philosophy is insular in the sense that participants in this academic tradition tend mostly to cite or interact with other participants in this academic tradition, while having little academic interaction with philosophers writing in other languages. Among our evidence: In a sample of articles from elite Anglophone philosophy journals, 97% of citations are citations of work originally written in English; 96% of members of editorial boards of elite Anglophone philosophy journals are housed in majority-Anglophone countries; and only one of the 100 most-cited recent authors in the Stanford Encyclopedia of Philosophy spent most of his career in non-Anglophone countries writing primarily in a language other than English. In contrast, philosophy articles published in elite Chinese-language and Spanish-language journals cite from a range of linguistic traditions, as do non-English-language articles in a convenience sample of established European-language journals. We also find evidence that work in English has more influence on work in other languages than vice versa and that when non-Anglophone philosophers cite recent work outside of their own linguistic tradition it tends to be work in English.

    Full version here.

    Comments and criticisms welcome, either by email to my academic address or as comments on this post. By the way, I'm traveling (currently in Paris, heading to Atlanta tomorrow), so replies and comments approvals might be a bit slower than usual.

    Thursday, September 14, 2017

    What would it take for humanity to survive? (And does it matter if we do?) (guest post by Henry Shevlin)

    guest post by Henry Shevlin

    The Doctor: You lot, you spend all your time thinking about dying, like you're gonna get killed by eggs, or beef, or global warming, or asteroids. But you never take time to imagine the impossible. Like maybe you survive. (Doctor Who, “The End of the World”)

    It’s tempting to think that humanity is doomed: environmental catastrophe, nuclear war, and pandemics all seem capable of wiping us out, and that’s without imagining all of the exciting new technologies that might be lurking just over the horizon, waiting to devour us. However, I’m an optimist. I think there’s an excellent chance humanity will see this century out. And if we eventually become a multi-planetary species, the odds start looking really quite good for us. Nonetheless, in thinking about the potential value in human survival (or the potential loss from human extinction), I think we could do more first to pin down whether (and why) we should care about our survival, and exactly what would be required for us to survive.

    For many hardnosed people, I imagine there’s an obvious answer to both questions: there is no special value in human survival, and in fact, the universe may be a better place for everyone (including perhaps us) if we were to all quietly go extinct. This is a position I’ve heard from ecologists and antinatalists, and while I won’t debate it here, I find it deeply unpersuasive. As far as we know, humanity is the only truly intelligent species in the universe – the only species that is capable of great works of art, philosophy, and technological development. And while we may not be the only conscious species on earth, we are likely the only species capable of the more rarefied forms of happiness and value. Further to that, even though there are surely other conscious species on earth worth caring about, our sun will finish them off in a few billion years, and they’re not getting off this planet without our help (in other words: no dogs on Mars unless we put them there).

    However, even if you’re sympathetic to this line of response, it admittedly doesn’t show there’s any value in specifically human survival. Even if we grant that humans are an important source of utility worth protecting, surely there are intelligent aliens somewhere out there in the cosmos capable of enjoying pleasures just as fancy as those we experience. Insofar as we’re concerned with human survival at all, then, maybe it should just be in virtue of our more general high capacity for well-being?

    Again, I’m not particularly convinced by this. Leaving aside the fact that we may be alone in the universe, I can’t shake the deep intuition that there’s some special value in the thriving of humanity, even if only for us. To illustrate the point, imagine that one day a group of tiny aliens show up in orbit and politely ask if they can terraform Earth to be more amenable to them, specifically replacing our atmosphere with one composed of sulphur dioxide. The downside is that humanity and all other life on Earth will die out. On the upside, however, the aliens’ tiny size means that Earth could sustain trillions of them. “You’re rational ethical beings,” they say. “Surely, you can appreciate that it’s a better use of resources to give us your planet? Think of all the utility we’d generate! And if you’re really worried, we can keep a few organisms from every species alive in one of our alien zoos.”

    Maybe I’m parochial and selfish, but the idea that we should go along with the aliens’ wishes seems absurd to me (well, maybe they can have Mars). One of my deepest moral intuitions is that there is some special good that we are rationally allowed – if not obliged – to pursue in ensuring the continuation and thriving of humanity.

    Let’s just say you agree with me. We now face a further question: what would it take for humanity to survive in this ethically relevant sense? It’s a surprisingly hard question to answer. One simple option would be that we survive as long as the species Homo sapiens is still kicking around. Without getting too deeply into the semantics of “humanity”, it seems like this misses the morally interesting dimensions of survival. For example, imagine that in the medium term future, beneficial gene-modding becomes ubiquitous, to the point where all our descendants would be reproductively cut off from breeding with the likes of us. While that would mean the end of Homo sapiens (at least by standard definitions of species), it wouldn’t, to my mind, mean the end of humanity in the broader and more ethically meaningful sense.

    A trickier scenario would involve the idea that one day we may cease to be biological organisms, having all uploaded ourselves to computers or robot bodies. Could humanity still exist in this scenario? My intuition is that we might well survive this. Imagine a civilization of robots who counted biological humans among their ancestors, and went around quoting Shakespeare to each other, discussing the causes of the Napoleonic Wars, and debating whether the great television epic Game of Thrones was a satisfactory adaptation of the books. In that scenario, I feel that humanity in the broader sense could well be thriving, even if we no longer have biological bodies.

    This leads me to a final possibility: maybe what’s ethically relevant in our survival is really the survival of our culture and values: that what matters is really that beings relevantly like us are partaking in the artistic and cultural fruits of our civilization.

    While I’m tempted by this view, I think it’s just a little bit too liberal. Imagine we wipe ourselves out next year in a war involving devastating bioweapons, and then a few centuries later, a group of aliens show up on Earth to find that nobody’s home. Though they’re disappointed that there are no living humans, they are delighted by the cultural treasure trove they’ve found. Soon, alien scholars are quoting Shakespeare and George R.R. Martin and figuring out how to cook pasta al dente. Earth becomes to the aliens what Pompeii is to us: a fantastic tourist destination, a cultural theme park.

    In that scenario, my gut says we still lose. Even though there are beings that are (let’s assume) relevantly like us that are enjoying our culture, humanity did not survive in the ethically relevant sense.

    So what’s missing? What is it that’s preserved in the robot descendant scenario that’s missing in the alien tourist one? My only answer is that some kind of appropriate causal continuity must be what makes the difference. Perhaps it’s that we choose, through a series of voluntary, purposive actions, to bring about the robot scenario, whereas the alien theme park is a mere accident. Or perhaps it’s the fact that I’m assuming there’s a gradual transition from us to the robots, rather than the eschatological lacuna of the theme park case.

    I have some more thought experiments that might help us decide between these alternatives, but that would be taking us beyond the scope of a blogpost. And perhaps my intuitions that got us this far are already radically at odds with yours. But in any case, as we take our steps into the next stage of human development, I think it’s important for us to figure out what it is about us (if anything) that makes humanity valuable.

    [image source]

    Tuesday, September 12, 2017

    Writing for the 10%

    [The following is adapted from my advice to aspiring writers of philosophical fiction at the Philosophy Through Fiction workshop at Oxford Brookes last June.]

    I have a new science fiction story out this month in Clarkesworld. I'm delighted! Clarkesworld is one of my favorite magazines and a terrific location for thoughtful speculative fiction.

    However, I doubt that you'll like my story. I don't say this out of modesty or because I think this story is especially unlikable. I say it partly to help defuse expectations: Please feel free not to like my story! I won't be offended. But I say it too, in this context, because I think it's important for writers to remind themselves regularly of one possibly somewhat disappointing fact: Most people don't like most fiction. So most people are probably not going to like your fiction -- no matter how wonderful it is.

    In fiction, so much depends on taste. Even the very best, most famous fiction in the world is disliked by most people. I can't stand Ernest Hemingway or George Eliot. I don't dispute that they were great writers -- just not my taste, and there's nothing wrong with that. Similarly, most people don't like most poetry, no matter how famous or awesome it is. And most people don't like most music, when it's not in a style that suits them.

    A few stories do appear to be enjoyed by almost everyone who reads them ("Flowers for Algernon"? "The Paper Menagerie"?), but those are peak stories of great writers' careers. To expect even a very good story by an excellent writer to achieve almost universal likability is like hearing that a philosopher has just put out a new book and then expecting it to be as beloved and influential as Naming and Necessity.

    Even if someone likes your expository philosophy, they probably won't like your fiction. The two types of writing are so different! Even someone who enjoys philosophically-inspired fiction probably won't like your fiction in particular. Too many other parameters of taste also need to align. They'll find your prose style too flowery or too dry, your characters too flat or too cartoonishly clever, your plot too predictable or too confusing, your philosophical elements too heavy-handed or too understated....

    I draw two lessons.

    First lesson: Although you probably want your friends, family, and colleagues to enjoy your work, and some secret inner part of you might expect them to enjoy it (because it's so wonderful!), it's best to suppress that desire and expectation. You need to learn to expect indifference without feeling disappointed. It's like expecting your friends and family and colleagues to like your favorite band. Almost none of them will -- even if some part of you screams out "of course everyone should love this song it's so great!" Aesthetic taste doesn't work like that. It's perfectly fine if almost no one you know likes your writing. They shouldn't feel bad about that, and you shouldn't feel bad about that.

    Second lesson: Write for the people who will like it. Sometimes one hears the advice that you should "just write for yourself" and forget the potential audience. I can see how this might be good advice if the alternative is to try to please everyone, which will never succeed and might along the way destroy what is most distinctive about your voice and style. However, I don't think that advice is quite right, for most writers. If you really are just writing for yourself -- well, isn't that what diaries are for? If you're only writing for yourself you needn't think about comprehensibility, since of course you understand everything. If you're only writing for yourself, you needn't think about suspense, since of course you know what's going to happen. And so forth. The better advice here is write for the 10%. Maybe 10% of the people around you have tastes similar enough to your own that there's a chance that your story will please them. They are your target audience. Your story needn't be comprehensible to everyone, but it should be comprehensible to them. Your story needn't work intellectually and emotionally for everyone, but you should try to make it work intellectually and emotionally for them.

    When sending your story out for feedback, ignore the feedback of the 90%, and treasure the feedback of the 10%. Don't try to implement every change that everyone recommends, or even the majority of changes. Most people will never like the story that you would write. You wouldn't want your favorite punk band taking aesthetic advice from your country-music-loving uncle. But listen intently to the 10%, to the readers who are almost there, the ones who have the potential to love your story but don't quite love it yet. They are the ones to listen to. Make it great for them, and forget everyone else.

    [Cross-posted at The Blog of the APA]

    Tuesday, September 05, 2017

    The Gamer's Dilemma (guest post by Henry Shevlin)

    guest post by Henry Shevlin

    As an avid gamer, I’m pleased to find that philosophers are increasingly engaging with the rich aesthetic and ethical issues presented by videogames, including questions about whether videogames can be a form of art and the moral complexities of virtual violence.

    One of the most disturbing ethical questions I’ve encountered in relation to videogames, though, is Morgan Luck’s so-called “Gamer’s Dilemma”. The puzzle it poses is roughly as follows. On the one hand, we don’t tend to regard people committing virtual murders as particularly ethically problematic: whether I’m leading a Mongol horde and slaughtering European peasants or assassinating targets as a killer for hire, it seems that, since no-one really gets hurt, my actions are not particularly morally troubling (there are exceptions to this, of course). On the other hand, however, there are still some actions that I could perform in a videogame that we’re much less sanguine about: if we found out that a friend enjoyed playing games involving virtual child abuse or torture of animals, for example, we would doubtless judge them harshly for it.

    The gamer’s dilemma concerns how we can explain or rationalize this disparity in our responses. After all, the disparity doesn’t seem to track any actual harm – there’s no obvious harm done in either case – or even the quantity of simulated harm (nuclear war simulations in which players virtually incinerate billions don’t strike me as unusually repugnant, for example). And while it might be that some forms of simulated violence can lead to actual violence, this remains controversial, and again, it’s unlikely that any such causal connections between simulated harm and actual harm would appropriately track our different intuitions about the different kinds of potentially problematic actions we might take in video games.

    However, while the Gamer’s Dilemma is an interesting puzzle in itself, I think we can broaden the focus to include other artforms besides videogames. Many of us have passions for genres like murder mystery stories, serial killer movies, or apocalyptic novels, all of which involve extreme violence but fall well within the bounds of ordinary taste. However, someone who had a particular penchant for stories about incest, necrophilia, or animal abuse might strike us as, well, more than a little disturbed. Note that this is true even when we focus just on obsessive cases: someone with an obsession for serial killer movies might strike us as eccentric, but we’d probably be far more disturbed by someone whose entire library consisted of books about animal abuse.

    Call this the puzzle of disturbing aesthetic tastes. What makes it the case that some tastes are disturbing and others not, even when both involve fictional harm? Is our tendency to form negative moral judgments about those with disturbing tastes rationally justified? While I’m not entirely sure what to think about this case, I am inclined to think that disturbing aesthetic tastes might reasonably guide our moral judgment of a person insofar as they suggest that that person’s broader moral emotions may be, well, a little out of the ordinary. Most of us feel revulsion at, rather than fascination with, even the fictional torture of animals, for example, and if someone doesn’t share this revulsion in fictional cases, it might provide some evidence that they are ethically deviant in other ways. Crucially, this doesn’t apply to depictions of things like fictional murder, since almost all of us have enjoyed a crime drama at some point in our lives, and it's well within the boundaries of normal taste.

    Note that there’s a parallel here with one possible response to Bernard Williams’s famous example of the truck driver who – through no fault of his own – kills a child who runs into the road, and subsequently feels no regret or remorse. As Williams points out, there’s no rational reason for the driver to feel regret – ex hypothesi, he did everything he could – yet we’d think poorly of him were he just to shrug the incident off (interestingly paralleled by the recent public outcry in the UK following a similar incident involving an unremorseful cyclist). I think what’s partly driving our intuition in such cases is the fact that a certain amount of irrational guilt and regret even for actions outside our control is to be expected as part of normal human moral psychology. When such regret is absent, it’s an indicator that a person is lacking at least some typical moral emotions. In much the same way, even if there is nothing intrinsically wrong about enjoying videogames or movies about animal torture, the fact that it constitutes a deviation from normal human moral attitudes might make us reasonably suspicious of such people’s broader moral emotions.

    I think this is a promising line to take with regard to both the gamer’s dilemma and the puzzle of disturbing tastes. One consequence of this, however, would be that as society’s norms and standards change, certain tastes may cease to be indicative of more general moral deviancy. For example, in a society with a long history of cannibal fiction, people in general might lack the intense disgust reactions that we ourselves display, despite being in all respects morally upstanding. In such a society, then, the fact that someone was fascinated with cannibalism might not be a useful indicator as to their broader moral attitudes. I’m inclined to regard this as a reasonable rather than counterintuitive consequence of the view, reflecting the rich diversity in societal taboos and fascinations. Nonetheless, no matter what culture I was visiting, I doubt I’d trust anyone who enjoyed fictional animal torture with watching my dog for the weekend.

    [image source]