Thursday, July 23, 2009

This blog is now at

Friday, July 3, 2009

Who do we care about?

Humans exercise compassion regarding:

  • family more than anyone
  • people they know more than strangers
  • geographically close people more than distant people
  • visible people more than unseen people
  • culturally similar people more than culturally different people
  • few people more than many people (even one person more than two people, in total, if I recall)
  • people who can't be helped by others more than people who aren't being helped by others (bystander effect)
  • causing and stopping death more than stopping and causing birth
  • people who exist already more than potential people
  • actions more than inactions
  • those suffering more than those without as much pleasure as they could have
  • people who will recover health or wealth with our help more than those whose suffering will merely be reduced
  • high status people more than low status people
  • big animals more than small animals
  • women more than men
  • children more than adults
  • cute things more than ugly things
  • the innocent more than the guilty
Our moral feelings are not concerned for others' wellbeing per se. They are very contingent. What's the pattern? An obvious contender is whether we can be rewarded or punished by the beneficiary of our 'compassion'. Distant, helpless, non-existent and low status people can't easily return the favour or punish. Inaction and shared blame are hard to punish, as everyone is responsible. There are some things that don't fit this, but most can be explained e.g. children are weak, but if they are ours we genetically benefit by caring and if they are not they probably have someone powerful caring about them for that reason. Got a better explanation?

I don't decide what to do by guessing the pattern behind my moral emotions and trying to follow it better. If you do, perhaps try to care only for the powerful. If you don't, notice that your moral feelings are probably fooling you into what's tantamount to murder.

Wednesday, June 17, 2009

Morality is subjective preference, but it can be objectively wrong

People are often unwilling to think of ethics as their own preferences, rather than demands from something more transcendent. For instance it's normal to claim that one really wants to make one choice, but it's only ethical to make the other. My feelings agree, but my thoughts don't. If I follow something I call ethics, that demonstrates that I want to. It's not a physical law. So what's the difference?

Just that. Ethics is a preference for fulfilling preferences attributed to some other source. Popular external sources of values include Gods, nature, other people, transcendent moral truth, group norms, and leaders. If I prefer for your house not to burn down I will turn on the hose. If I think it's moral to stop your house burning down I will turn off the hose if I find out that you want to burn it down to collect insurance money. I care about your values, not the house.

One demonstration that having an external source is important for ethics is the fact that invented ethical systems (such as 'playing video games is virtuous') seem illegitimate and cheaty. Crazy seeming practices can be ordained by religion and culture, but if you decide independently that it's only ethical to eat cereal on Thursdays, most will feel you are missing the point and some marbles.

While ethics is a matter of choice then, it implies the existence of your preferred outside source of values. This means it can be wrong. The outside source of values might not exist, or might not have values. This is why evidence about evolution can influence whether a person likes gays marrying, despite it being an apparent value judgement.

This means moral intuitions aren't as useful as they seem for information about how to be moral. Gut reactions are handy for working out what you like, but if you find that you like serving someone else's purposes there is factual information about whether they exist or care to take into account. We have better ways to deal with facts than our emotional responses in most realms, so why not use the same here?

The only things that exist and care that I know of are other people and animals. Gods and transcendent values don't exist, and society as a whole and the environment don't care, as far as I know. So if I want to be ethical, preference utilitarianism (caring about other people's preferences) is my only option. Of course I could prefer not to be ethical at all. And I could prefer to follow what pass for other moral rules; being honest, protesting interference in the environment, keeping my dress long. But if these things benefit only my feeling of righteousness, I must admit they are no different to normal personal preferences. If you want to be ethical, these are probably not what you are looking for any more than 'it's virtuous to play video games' is.

Be your conformist, approval-seeking self

People recommend to one another that they 'be themselves' rather than being influenced by outside expectations and norms. Nobody suggests others should try harder to follow the crowd. They needn't anyway; we seem fairly motivated by impressing others and fitting in. Few seem interested in 'being themselves' in the sense of behaving as they would if nobody was ever watching. The 'individuality' we celebrate usually seems designed for observers. What do people do when there's nobody else around to care? Fart louder and leave their dirty cups around. This striving for unadulterated selfhood is not praised. Yes, it seems in most cases you can get more approval if you tailor your actions to getting approval. So why do we so commonly offer this same advice, which we don't follow, and don't approve of any real manifestation of?

Sunday, June 14, 2009

Explain explanations for choosing by choice

A popular explanation of why it's worse to seem stupid than lazy is that laziness seems like more of a choice, so less permanent. Similarly, innate artistic talent seems more admired and desired than trying hard despite less natural ability. Being unable to stand by and let a tragedy occur ('I had no choice!') is more virtuous than making a calm, reasoned decision to avoid a tragedy.

On the other hand, people usually claim to prefer being liked for their personality over their looks. When asked they also relate it to their choice in the matter; it means more to be liked for something you 'had a say in'. People are also proud of achievements they work hard on and decisions they make, and less proud of winning the lottery and forced moves.

The influence of apparent choice on our emotions is opposite in these cases, yet we often use it in the explanation for both. Is perceived level of choice really relevant to anything? If so, why does it explain effects in opposite directions? If not, why do we think of it so soon when questioned on these things?

Friday, April 24, 2009

A puzzle

What do these things have in common? Nerves, emotions, morality, prices.

Tuesday, April 7, 2009

Obvious identity fail

Paul Graham points out something important: religion and politics are generally unfruitful topics of discussion because people have identities tied to them.

An implication:

The most intriguing thing about this theory, if it's right, is that it explains not merely which kinds of discussions to avoid, but how to have better ideas. If people can't think clearly about anything that has become part of their identity, then all other things being equal, the best plan is to let as few things into your identity as possible.

This seems obvious. For one thing, if you are loyal to anything that incorporates a particular view of the world rather than to truth per se, you have to tend away from believing true things. 

Ramana Kumar says this is not obvious, and (after discussion of this and other topics) that I shouldn't care if things seem obvious, and should just point them out anyway, as they're often not, to him at least (so probably to most). This seems a good idea, except that a microsecond's introspection reveals that I really don't want to say obvious things. Why? Because my identity fondly includes a bit about saying not-obvious things. Bother. 

Is it dangerous here? A tiny bit, but I don't seem very compelled to change it. And nor, I doubt, would be many others with more important things. If you identify with being Left or Right more than being correct to begin with, what would make you want to give it up? 

Ramana suggests that if having an identity is inescapable but the specifics are flexible, then the best plan is perhaps to identify with some small set of things that impels you to kick a large set of other things out of your identity. 

What makes people identify with some things and use/believe/be associated with/consider probable/experience others without getting all funny about it anyway?

As a side note, I don't fully get the concept. I just notice it happens, including in my head sometimes, and that it seems pretty pertinent to people insisting on being wrong. If you can explain how it works or what it means, I'm curious.

Thursday, April 2, 2009

Constrained talk on free speech

I went to a public lecture last night on the question 'How do we balance freedom of speech and religious sensitivity?'. It featured four distinguished academics 'exploring legal, philosophical and cultural perspectives'. I was interested to go because I couldn't think of any reason the 'balance' should be a jot away from free speech on this one, and I thought if smart people thought it worth discussing, there might be arguments I haven't heard.

The most interesting thing I discovered in the evening was that something pretty mysterious to me is going on. The speakers implicitly assumed there was some middle of the road 'balance', without addressing why there should be at all. So they talked about how to assign literary merit to The Satanic Verses, how globalization might mean that we could offend more people by accident, whether it is consistent with other rights to give rights to groups, what the law can do about it now, etc. That these are the pertinent issues in answering the question wasn't questioned. Jeremy Shearmur looked like he might at one point, but his argument was basically 'I think I'd find Piss Christ pretty offensive if I were a Christian - it's disgusting to me that anyone would make it anyway - and so ignorant of Christianity'. More interesting discussion of the question could be found in any bar (some of it was interesting, it just wasn't about the question).

What am I missing here? Is it seriously the consensus (in Australia?) that censorship is in order for items especially offensive to religious people? Is there some argument for this I'm missing? What makes the situation special compared to other free speech issues? The offense? Then why not ban other things offensive to some observers? Ugly houses, swearing, public displays of homosexual affection... The religion? Is there some reason especially unlikely beliefs are to be protected, or just any beliefs that claim their own sacredness? Are these academics afraid of something I don't know about? Is it much more controversial than I thought to support free speech in general? Or is the question just a matter of balancing the political correctness of saying 'yay free speech' and of 'yay religious tolerance'?

Romance is magical

People seem to generally believe they have high romantic standards, and that they aren't strongly influenced by things like looks, status and money. Research says our standards aren't that high, that they drop if the standard available drops for a single evening, and that superficial factors make more of a difference than we think. Our beliefs about what we want are wrong. It's not an obscure topic though; the evidence should be in front of us. How do we avoid noticing? We're pretty good at not noticing things we don't want to - we can probably do it unaided. Here there is a consistent pattern though.

Consider the hypothesis that there is approximately one man in the world for me. I meet someone who appears to be him within a month of looking. This is not uncommon, though under my hypothesis it has a one in many million chance of happening, even if I look insanely hard. This should make me doubt my hypothesis in favor of one where there are several, or many million men in the world for me. What do I really do? Feel that since something so unlikely (under the usual laws of chance) occurred it must be a sign that we were really meant for each other, that the universe is looking out for us, that fate found us deserving, or whatever. Magic is a nice addition to the theory, as it was what we wanted in the relationship anyway. Romantic magic and there being a Mr Right are complementary beliefs, so meeting someone nice confirms the idea that there was exactly one perfect man in the world rather than suggesting it's absurd.
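The update being dodged here can be made concrete with Bayes' rule. This is only a sketch: the prior and likelihood numbers below are made up for illustration, not measured.

```python
# A concrete version of the update described above, with made-up numbers:
# meeting an apparently-right person within a month is far more likely if
# there are millions of suitable people than if there is exactly one.

priors = {'one_soulmate': 0.5, 'millions_suitable': 0.5}

# Assumed P(meet an apparent match within a month | hypothesis):
likelihoods = {'one_soulmate': 1e-7, 'millions_suitable': 0.3}

unnormalised = {h: priors[h] * likelihoods[h] for h in priors}
total = sum(unnormalised.values())
posteriors = {h: unnormalised[h] / total for h in unnormalised}

print(posteriors['one_soulmate'])  # tiny: the hypothesis is effectively dead
```

Whatever numbers you pick, as long as the event is vastly more probable under 'millions suitable', the soulmate hypothesis should collapse rather than be confirmed.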

I can't tell how serious anyone is about this, but ubiquitously, when people happen to meet the girl of their dreams on a bus where they were the only English-speaking people, they put it down to fate, rather than radically lowered expectations. When they marry someone from the same small town they say they were put there for each other. When their partner, chosen on grounds of intellectual qualities, happens to also be rich and handsome their friends remark on how fortune has smiled on them. When people hook up with anyone at all they tell everyone around how unlikely it was that they should both have been at that bus stop on that day, and how, since somehow they did, they think it's a sign.

We see huge evidence against our hypothesis, invoke magic/friendly-chance as an explanation, then see this as confirmation that the original magic-friendly hypothesis was right.

Does this occur in other forms of delusion? I think so. We often use the semi-supernatural to explain gaps caused by impaired affective forecasting. As far as I remember we overestimate the strength of future emotional responses, tend to think whatever happens was the best outcome, and that whatever we own is better than what we could have owned (e.g. you like the children you've got more than potential ones you could have had if you had done it another day). We explain these with 'every cloud has a silver lining', or 'everything happens for a reason', or 'it turns out it was meant to happen – now I've realised how wonderful it is to spend more time at home', 'I was guided to take that option - see how well it turned out!' or as happens often to Mother; 'the universe told me to go into that shop today, and uncannily enough, there was a sale there and I found this absolutely wonderful pair of pants!'.

Supernatural explanations aren't just for gaps in our understanding. They are also for gaps between what we want to believe and are forced by proximity to almost notice.

Thursday, March 12, 2009

The origins of virtue

I read Matt Ridley's 'The origins of virtue' just now. It was full of engaging anecdotes and irrelevant details, which I don't find that useful for understanding, so I wrote down the interesting points. On the off chance anyone else would like a summary, I publish it here. I recommend reading it properly. Things written in [here] are my comments.



The aim of this book: How did all this cooperation and niceness, especially amongst humans, come about evolutionarily?

Chapter 1

There are benefits to cooperation: can do many things at once, [can avoid costs of conflict, can enjoy other prisoners' dilemmas, can be safer in groups]

Cooperation occurs on many levels: allegiances, social groups, organisms, cells, organelles, chromosomes, genomes, genes.

Selfish genes explain everything.

Which means it's possible for humans to be unselfish.

There are ubiquitous conflicts of interest to be controlled in coalitions at every level.


Relatedness explains most groupishness ( = like selfishness, but pro-group). e.g. ants, naked mole rats.

Humans distribute reproduction, so aren't closely related to their societies. They try to suppress nepotism even. So why all the cooperation?

Division of labour has huge benefits (trade isn't zero sum)

[cells are cool because they have the same genes, so don't mutiny, but different characters so benefit from division of labour]

Division of labour is greater in larger groups, and with better transport.

There is a trade-off between division of labour and benefits of competition.

By specialising at individual level a group can generalise at group level: efficiently exploit many niches.

Division of labour between males and females is huge and old.


Prisoners' dilemmas are ubiquitous.

Evolutionarily stable strategies = Nash equilibria found by evolution.

Tit-for-tat and related strategies are good in iterated prisoners' dilemmas.

This is because they are nice, retaliatory, forgiving, and clear.

If a combination of strategies play against one another repeatedly, increasing in number according to payoffs, the always-defectors thrive as they beat the always-cooperators, then the tit-for-taters take over as the defectors kill each other off.
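That dynamic can be sketched with a toy replicator simulation. The payoff values (T=5, R=3, P=1, S=0), the game length, and the starting frequencies below are all illustrative assumptions, not from the book.

```python
# Toy replicator dynamics for the story above: always-defect booms by
# exploiting always-cooperate, then tit-for-tat takes over once the
# defectors are left mostly facing each other.

ROUNDS = 50  # pairings must be long enough for reciprocity to pay off

def payoff(me, other):
    """Single-round prisoners' dilemma payoff to the player moving `me`."""
    return {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}[(me, other)]

def all_c(opp_history): return 'C'
def all_d(opp_history): return 'D'
def tit_for_tat(opp_history): return opp_history[-1] if opp_history else 'C'

STRATEGIES = {'AllC': all_c, 'AllD': all_d, 'TFT': tit_for_tat}

def match_score(s1, s2):
    """Total payoff to s1 over an iterated game against s2."""
    h1, h2, total = [], [], 0
    for _ in range(ROUNDS):
        m1, m2 = STRATEGIES[s1](h2), STRATEGIES[s2](h1)
        total += payoff(m1, m2)
        h1.append(m1)
        h2.append(m2)
    return total

def step(freqs):
    """One generation: each strategy grows in proportion to its average payoff."""
    fitness = {s: sum(freqs[o] * match_score(s, o) for o in freqs) for s in freqs}
    mean = sum(freqs[s] * fitness[s] for s in freqs)
    return {s: freqs[s] * fitness[s] / mean for s in freqs}

freqs = {'AllC': 0.8, 'AllD': 0.1, 'TFT': 0.1}
history = [freqs]
for _ in range(200):
    freqs = step(freqs)
    history.append(freqs)

print(max(f['AllD'] for f in history))  # defectors surge first (near 0.9)
print(freqs['TFT'])                     # ...then tit-for-tat ends up dominant
```

With short games (small ROUNDS) tit-for-tat's share falls below the threshold needed to invade the defectors and it loses, which matches the later note that reciprocity needs repeated meetings.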

Reciprocity is ubiquitous in our society.

Hypothesis: it's an evolutionarily stable strategy. It allowed us to benefit from cooperation without being related. This has been a major win for our species.

Reciprocity isn't as prevalent between related individuals (in ours and other species).

Tit-for-tat can lead to endless revenge :(


Reciprocity requires remembering many other individuals and their previous behavior. This requires a large brain.

Reciprocity requires meeting the same people continually. Which is why people are nastier in big anonymous places.

Other strategies beat tit-for-tat once tit-for-tat has removed nastier strategies. Best of these is Pavlov, or win-stay/lose-shift, especially with learned probabilities.
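A sketch of how Pavlov behaves, again assuming the standard payoff values (T=5, R=3, P=1, S=0): unlike tit-for-tat, a pair of Pavlov players recovers from a single mistaken defection in a couple of rounds rather than trading revenge forever.

```python
# Win-stay/lose-shift ('Pavlov'): repeat your last move after a good
# payoff (R=3 or T=5), switch after a bad one (S=0 or P=1).
# Payoff values are the standard PD assumptions, not from the book.

def pavlov(my_history, opp_history):
    if not my_history:
        return 'C'
    payoffs = {'CC': 3, 'CD': 0, 'DC': 5, 'DD': 1}
    won = payoffs[my_history[-1] + opp_history[-1]] >= 3
    return my_history[-1] if won else ('D' if my_history[-1] == 'C' else 'C')

def play(strat1, strat2, rounds, h1, h2):
    """Continue a game from the given move histories."""
    for _ in range(rounds):
        m1, m2 = strat1(h1, h2), strat2(h2, h1)
        h1.append(m1)
        h2.append(m2)
    return h1, h2

# Two Pavlov players, starting just after player 1 defected by mistake:
h1, h2 = play(pavlov, pavlov, 4, ['D'], ['C'])
print(h1)  # ['D', 'D', 'C', 'C', 'C'] -- back to mutual cooperation
print(h2)  # ['C', 'D', 'C', 'C', 'C'] (two tit-for-taters would alternate forever)
```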

In asynchronous games 'firm-but-fair' is better – similar to pavlov, but cooperates [once presumably] after being defected against as a cooperator in the last round.

In larger populations reciprocity should be less beneficial – most interactions are with those you won't see again.

Boyd's suggestion: this is the reason for moralistic behaviour: punishing those who don't punish defection.

Another solution: social ostracism: make choosing who to play with an option.

A strategy available to humans is prediction of cooperativeness in advance. [Why can we do this? Why don't we evolve to not demonstrate our lack of cooperativeness? Because others evolve to show their cooperativeness if they have it? There are behaviours that only make sense if you intend to be cooperative.]


We share food socially a lot, with strangers and friends. Not so much other possessions. Sex is private and coveted.

Meat is especially important in shared meals.

Hypothesis: meat hunting is where division of labour was first manifested.

Monkey males share meat with females to get sex, consequently hunting meat more than would be worth it for such small successes otherwise.

Hypothesis: humans do this too (some evidence that promiscuous natives hunt more), and the habit evolved into a sexual division of labour amongst married couples (long term relationships are usual in our species, but not in chimps). Males then benefit from division of labour, and also feeding their children.

Hypothesis: sexual division of labour fundamental to our early success as a species – neither hunting or gathering would have done alone, but together with cooking it worked.

Hypotheses: food sharing amongst non-relatives could have descended from when males of a tribe were mostly related, or from the more recent division of labour in couples.

Chimps share and show reciprocity behaviour, but do not offer food voluntarily [doesn't that suggest that in humans it's not a result of marriage-related sexual division?]

Why do hunter-gatherers share meat more, and share more on trips?

Hypotheses: 1. meat is cooperatively caught, so have to share to continue cooperation. 2. High variance in meat catching – sharing gives stable supply.

What stops free-riding then?


Mammoth hunting introduced humans to significant public goods. You can't not share a mammoth, especially if others have spear throwers. [mammoth hunting should have started then when it became easier to kill a mammoth than to successfully threaten to kill a tribesman who killed a mammoth]

Tolerated theft: the idea that people must share things where they can't use all of them, and to prevent others from taking parts is an effort. That is, TT is what happens once you've caught a public good (e.g. mammoth). Evidence that this isn't what happens in reality; division seems to be controlled. Probably reciprocity of some sort (argument over whether this is in the form of material goods or prestige and sex). Evidence against this too; idle men are allowed to share (if the trade is in sex, they aren't the ones the trading is aimed at, and miss out on the sex trade).

Alternative hypothesis: meat is treated as a public good, but is so big that it's possible to sneak the best bits to girls and get sex.

Trade across time (e.g. in large game) reduces exposure to fluctuations in meat.

Hypothesis: hunter-gatherers are relatively idle because they have to share what they get, so stop getting things after their needs are fulfilled.

Hypotheses: when people hoard money they are punished by their neighbours because they are defecting in the reciprocal sharing that usually takes place, yet they have no incentive to share if they have an improbably large windfall – the returned favours won't be as good. Alternatively can be seen as tolerated theft: punishment for not sharing is an attempt to steal from huge good.

When instincts for reciprocity are in place, gifts can be given 'as weapons'. That is, to force future generosity from the recipient.

Gift giving is less reciprocal (still prevalent, just not carefully equal) amongst human families than amongst human allies.

Gifts can then also signal status; ostentatious generosity demands reciprocity – those who can't reciprocate lose face. The relative benefit of buying status this way depends on the goods – perishables may as well be grandly wasted. In this case reciprocity is zero sum: no benefits from division of labour, status cycle is zero sum.


Humans are better at solving the Wason test when it is framed as noticing cheating than when it is in terms of other social contexts, or abstract terms.
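For reference, the bare logic of the task, which the 'cheating' framing makes easy (e.g. P = 'drinking beer', Q = 'over 18'):

```python
# The abstract Wason selection task: given the rule 'if P on one side,
# then Q on the other', which cards must be turned over? Exactly those
# whose hidden face could falsify the rule: the P card and the not-Q card.

def must_turn(visible, p='P', q='Q'):
    """True iff the card's hidden side could reveal a violation of P -> Q."""
    return visible == p or visible == 'not-' + q

cards = ['P', 'not-P', 'Q', 'not-Q']
to_turn = [c for c in cards if must_turn(c)]
print(to_turn)  # ['P', 'not-Q'] -- the common wrong answer is P and Q
```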

Hypothesis: humans have an 'exchange organ' in their brains, which deals with calculating related to social contracts. This is unique amongst animals. Evidence: brain damage victims and patients who fail all other tests of intelligence except these, anthropomorphic attitudes to nature heavily involve exchange, anthropomorphizing of objects heavily involves exchange related social emotions (anger, praise).

Moral sentiments appear irrational, but overcome short term personal interests for long term genetic gains.

Commitment problem: when at least one side in a game has no credible threat if the other defects, how can cooperation occur? The other can't prove they will commit. e.g. kidnap victim can't prove she won't go to police, so kidnapper must kill her even if both would be better off if he let her go in return for her silence.

Various games have bad equilibria for rational players in one off situations, but emotions can change things. e.g. prisoners' dilemma is solved if players have guilt and shame. Where player would be irrational to punish other for defection (punishment costly to implement, loss already occurred), anger produces credible threat (will punish in spite of self).

Many emotions serve to alter the rewards of commitment problems, by bringing forward costs and benefits.

For this to work, emotions have to be hard to fake. Shouldn't defectors who are good at faking emotions invade a population of people who can't? No, because in the long run the people who can't fake them find each other and cooperate together. [that's what would happen anyway – you would cooperate the first time, then don't go back if the other defects. Commitment should be a problem largely in one off games – are more emotions shown in those things? In one off games can't have the long run to find people and make good liars pay].

Emotions make interests common, which stops prisoners' dilemmas. Interests of genes are not common, so emotions must be shared with other emotional ones.

Ultimatum game variations suggest that people are motivated more by reciprocity than by absolute fairness.

People lacking social emotions due to brain damage are paralyzed by indecision as they try to rationally weigh information.

We like and praise altruism much more than we practice it. Others' altruism and our looking altruistic are useful to us, as is our own selfishness. [why aren't people who behave like this invaded by slightly more altruistic ones who don't cooperate with them? Why is the equilibrium at being exactly as selfish as we are? Signaling means that everyone looks more altruistic than they are, so everyone is less altruistic than they would be if others were maximally altruistic?]

Hypothesis: economics and evolutionary biology are held in distrust because talking about them doesn't signal belief in altruism etc. Claiming that people or genes are selfish suggests that you are selfish.


Cooperation began (or is used primarily in monkey society) in competition and aggression.

The same 'tricks' will be discovered by evolution as by thought [if their different aims don't matter], so if we share a behaviour with animals it's not obvious that it's evolved in our case, though often it is.

Our ancestors were: social, hierarchical (especially amongst males), more egalitarian and with less rigid hierarchies than monkeys.

Differences between primates:

Monkey hierarchies rely on physical strength more than chimp ones, which rely on social manipulation.

Baboons use cooperation to steal females from higher ranking males, chimps use it to change the social hierarchy.

Chimp coalitions are reciprocal, unlike monkeys'.

Power and sexual success are had by coalitions of weaker individuals in chimps and humans.

Bottlenose dolphins (the only species other than us with brain:body ratio bigger than chimps): males have coalitions of 2-3 which they use to kidnap females. All mate with her. These coalitions join to form super-coalitions to steal females from other coalitions. This is reciprocal (on winning, one coalition will leave the female with the other coalition in the super-coalition, in return for the favor next time)

Second order alliances seem unique to dolphins and humans.

Chimp males stay in a troop while females leave, with monkeys it is the other way around. Could be related to aggressive xenophobia of chimp males. Seems so in human societies: matrilineal societies are less fighty.

Chimp groups, rather than individuals, possess territory (rare, but not unique: e.g. wolves).

Hypothesis: this is an extension of the coalition building that occurs for gaining power in a group. Alpha males prevent conflict within the group, making it stable, which is good for all as they are safer from other groups if they stick together.

Humans pursue status through fighting between groups, whereas chimps only do it within groups [how do you know?]


Group selection can almost never happen.

Large groups cooperating are often being directly selfish (safer in shoal than alone).

50:50 sex ratio is because individual selection is stronger than group selection. A group would do better by having far more females, yet in such a group a gene to produce males would make you replicate much faster, bringing the ratio back.
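The arithmetic behind that argument (Fisher's), with made-up numbers:

```python
# Why a female-biased group is invadable: every child has exactly one
# mother and one father, so each sex is credited the same total number of
# offspring, and the rarer sex does better per head. Numbers illustrative.

births = 1000              # children born in the next generation
females, males = 900, 100  # a female-biased, 'group-optimal' population

per_female = births / females  # expected offspring per female
per_male = births / males      # expected offspring per male

# A gene biasing parents toward sons therefore spreads until 50:50.
print(per_male / per_female)  # 9.0 -- sons are nine times as valuable here
```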

Humans appear to be exception: culturally, not genetically, different groups compete.

Conformism would allow group characteristics to persist long enough that there would be group selection before groups dissolved or were invaded by others' ideas.

Why would conformism evolve?

Hypothesis: we have many niches which require different behaviors. If you move it's beneficial to copy your behavior from your neighbors.

Imitation should be more beneficial if there are more people doing it; better to copy something tried by many than the behavior of one other. How did it get started then?

Hypothesis: seeing what is popular amongst many gives you information.

Hypothesis: keeps groups together. If receptive to indoctrination about altruism we will find ourselves in more successful groups [I don't follow this].

Humans don't actually live in groups; they just perceive everything in terms of them.

A person's fate isn't tied to that of their group. They don't put the group's wellbeing first. They are groupish out of selfishness – it's not group selection.

Ritual is universal, but details of it are particular.

Hypothesis: Ritual is a means to conformity, which keeps groups together in conflict, so they survive [How would this begin? Why ritual? Why do the rituals have to be different? Why is conformity necessary to keep groups together? Seems that because we are used to conformity being linked to staying together, we assume one leads to the other].

Music and religious belief seem to have similar group-binding properties.

Cooperation within groups seems linked to xenophobia outside them [cooperation for safety in conflict is of course. What about cooperation for trade? Has that given us non-xenophobia induced cooperative feelings? Earlier chapters seemed to imply so].


Weak evidence of trade 200,000 years ago – not clear when it started.

Trade between groups is unique to humans.

Trade is the glue of alliances between groups; it appears that some trade is just an excuse for this.

Trading rules predate governments. Governments nationalize preexisting trading systems. e.g. 11th C Europe merchant courts [is this a general trend? why is everything in anecdotes? aargh].

Speculation isn't beneficial because there is no division of labour [?].


Natives are not ecologically nice. They do not conserve game. They sent many species extinct.

We tie environmentalism up with other morality [is it pro-social morality, as the book has been about, or purity?].

As with other morality, we are more programmed to preach than to practice.

It doesn't look like people have an instinctive environmental ethic [it's a big prisoners' dilemma – can't we make use of something else in our repertoire?].


Property rights emerge unaided where it is possible to defend them [if you see a tragedy of the commons coming, best to draw up property rights – no reason you will be the free rider].

Nationalization often turns property-divided 'commons' into a free for all, as the govt can't defend it and nobody has reason to protect what they are stealing from.

Ordered and successful systems can emerge without design. e.g. Bali subak traditions could have resulted from all copying any neighbour who did better than them.

Lab experiment suggests that communication encourages a lot of cooperation in tragedy of commons games (better than ability to fine defectors)

If humans can arrange property rights unaided, why all the extinctions last chapter?

Hypothesis: property rights can't be enforced on moving things. Animals that could have property rights asserted on them did have in some cases. e.g. Beavers.

Hoarding taboo (as a result of reciprocity instinct) is to blame for environmentalist dislike of privatisation as a solution.

Hoarding isn't allowed in primitive tribes, but as soon as more reliable lifestyle allows powerful individuals to do better by hoarding than relying on social insurance, they do. Yet we retain an aversion to it.


Humans are born wanting to cooperate, discriminate trustworthiness, commit to trustworthiness, gain a reputation, exchange goods and info, and divide labour.

There was morality before the church, trade before the state, exchange before money, social contracts before Hobbes, welfare before rights, culture before Babylon, society before Greece, self interest before Adam Smith and greed before capitalism.

Also tendency to xenophobic groups is well inbuilt.

How can we make use of our instincts in designing institutions?

Trust is fundamental to cooperative parts of human nature being used.

This has been part of an endless argument about the perfectability of man, famously between Hobbes and Rousseau. Also about how malleable human nature is. [The book goes into detail about the argument over the centuries, but it's an irrelevant story to me].

To say that humans are selfish, especially that their virtue is selfish, is unpopular because saying so supposedly encourages it.

Big state doesn't make bargains with the individual, engendering responsibility, reciprocity, duty, pride – it uses authority. How do people respond to authority?

Welfare state replaces existing community institutions – based on reciprocity, encouraging useful feelings, and having built up trust over the years – with centralised substitutes like the National Health Service. Mandatory donation → reluctance, resentment. Client feelings changed from gratitude to apathy, anger, drive to exploit the system.

∴ Government makes people more selfish, not less.

We must encourage material and social exchange between people, for that is what breeds trust, and trust is what breeds virtue.

Our institutions are largely upshots of human nature, not serious attempts to make the best of it.

Sunday, January 4, 2009

Repeated thought

Eliezer Yudkowsky of OB suggests thinking and doing entirely new things for a day: 

Don't read any book you've read before.  Don't read any author you've read before.  Don't visit any website you've visited before.  Don't play any game you've played before.  Don't listen to familiar music that you already know you'll like.  If you go on a walk, walk along a new path even if you have to drive to a different part of the city for your walk.  Don't go to any restaurant you've been to before, order a dish that you haven't had before.  Talk to new people (even if you have to find them in an IRC channel) about something you don't spend much time discussing.

And most of all, if you become aware of yourself musing on any thought you've thunk before, then muse on something else. Rehearse no old grievances, replay no old fantasies.
The comments and its reposting to MR suggest that this is popular advice. 

It's interesting that, despite the warm reception, this idea needs pointing out, and trying for one experimental day. 

Having habits for things like brushing teeth is useful - the more automatic uninteresting or unenjoyable experiences are, the more time and thought can be devoted to other things. Habits for places to go could be argued for - if you love an experience, why change it? 

But why should we want to repeat thoughts a lot? Seems we say we don't. So, why do we do it? Do we do it? If we can stop when Eliezer suggests it, why don't we notice and stop on our own? Is it that habits are unconscious; a state that doesn't lend itself to noticing things? Has the usefulness of other habits made us so habitual that our thoughts are caught up in it? 

What can we do about it?

As a side note, perhaps the quantity of unconscious habit in a life is related to the way time speeds up as you age.