Summary: “Moral foundations and political orientation: systematic review and meta-analysis”

After a long hiatus due to personal reasons, I return to active research with a proud announcement: my article “Moral foundations and political orientation: systematic review and meta-analysis” has been accepted for publication in Psychological Bulletin – a journal which, according to Wikipedia, ranks 1st out of 138 journals in the category “Psychology, multidisciplinary”. My coauthors are Belén Fernández-Castilla, Simo Järvelä, Niklas Ravaja, and Jan-Erik Lönnqvist. This post is going to be a long one, because hey, the manuscript is 102 pages long (80 pages without references, figures, and tables).

In the article, we investigate the widely reported finding that liberals and conservatives rely on different moral foundations (see Haidt’s TED Talk on the topic). We review how this issue has been studied and what the potential problems with its measurement are. From the original haul of over 1,000 records found with searches, we meta-analyze 89 samples, which contain 605 effect sizes and a total of 226,674 participants – of which 192,870 are from the widely used YourMorals.org website and 33,804 from independent studies.

Main result

The main finding was that the moral foundations were associated with political orientation generally in the way that has been reported earlier: liberals are more concerned with care (that nobody is harmed) and fairness (that nobody cheats and everyone’s rights are protected), and much less concerned with the other three foundations. Conservatives are about equally concerned with care and fairness as with loyalty (that one’s ingroup is not betrayed), authority (that customs, traditions, and hierarchies are respected by everyone), and sanctity (that the intrinsic purity of certain things is not degraded). In terms of correlations, our primary measure, this means that for care and fairness, people agree more, so the correlations are low (from about zero to about |r| = .3 at maximum), while for loyalty, authority, and sanctity, people disagree more, so the correlations are higher (from about |r| = .2 to a bit over .6 in extreme cases). These effects do not sound large, and in absolute terms, they aren’t. But considering that the average effect size in psychology has been estimated as r = .21 (Richard et al., 2003), they are not meaningless by any means.

As Moral Foundations Theory is based on a social intuitionist account of morality, this difference – liberals higher in care and fairness, conservatives higher in loyalty, authority, and sanctity – explains a lot of the divide in politics. In MFT, morality is seen primarily as a feeling that the person cannot control, and which is justified with rational explanations only afterwards. That is, first you feel that something is wrong, and only then do you learn what others say the reasons are for it being wrong and begin believing those reasons – not the other way around. The problem is that when liberals say that not helping immigrants is wrong (care), or when conservatives say that breaking the traditional marriage is wrong (authority), they both feel this wrongness strongly, but they try to convince the others with rationalizations that are not the original reason for the feeling. If someone says dogs are terrifying but you just don’t feel it, you don’t suddenly develop a fear of dogs just because they tell you their reasons – you shrug and continue as you were. But while in our culture it is understood that others may have different feelings than you, it is not widely understood that morality, being largely a feeling, works like this as well. When you tell someone that something is wrong, and they shrug and continue as they were rather than agree with you, you will likely not understand how they can fail to see it. You explain more, they become defensive (after all, dogs are just fine, right? what are you on about?), and you begin to wonder: is there a reason they don’t agree? Are they just stupid? Or do they actually see it but don’t agree because they profit from not agreeing? Yet the whole thing happens just because you two feel differently – and without understanding that fact, no amount of explaining reasons will change it.

According to MFT, this is (one) fundamental reason for political division, and our evidence supports the core assumption behind it: that people with different political orientations do have different feelings about morality. But we also found other things, some of which cast some doubt on this core assumption.

Bias in the YourMorals.org sample and political interest

The comparison between the YourMorals sample and independent samples is one focus of the study, one more relevant to other researchers in the field. The YM sample is huge, and most people have likely assumed that a large sample is more accurate than smaller ones. But when we did the meta-analysis over the independent samples, we found that their effect sizes (the correlation between the MFs and PO, or how different liberals are from conservatives regarding their feelings about a particular moral foundation) are about half of those in the YM sample – and this difference holds whether the independent samples are the typical WEIRD college student samples or highly regarded nationally representative samples such as the American National Election Studies or the New Zealand Attitudes and Values Study. Furthermore, Many Labs 2 (Klein et al., 2018), a large-scale project that studied the replicability of psychological science across 125 laboratories, found even smaller effects.
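To make concrete why a huge sample can dominate a naive pooled estimate, here is a minimal sketch of pooling correlations via Fisher’s z transform. This is an illustration only – the article itself uses three-level random-effects models, not this simple fixed-effect pooling – and all numbers below are invented, not taken from the data.

```python
import math

def pool_correlations(rs, ns):
    """Pool correlations via Fisher's z, weighting each sample by its
    inverse variance (n - 3). A simplified fixed-effect sketch; the
    article's actual analyses use three-level random-effects models."""
    zs = [math.atanh(r) for r in rs]   # Fisher z transform of each r
    ws = [n - 3 for n in ns]           # inverse-variance weights
    z_pooled = sum(w * z for w, z in zip(ws, zs)) / sum(ws)
    return math.tanh(z_pooled)         # transform back to the r metric

# Invented numbers: one huge self-selected sample with a large effect
# next to two smaller independent samples with about half the effect.
print(round(pool_correlations([0.50, 0.25, 0.20], [190_000, 2_000, 1_500]), 3))
# ≈ 0.496 – the pooled estimate is almost entirely the big sample's effect
```

The point of the toy numbers is that inverse-variance weighting makes the pooled correlation track the 190,000-person sample almost exactly, which is why a systematic bias in one very large dataset matters so much.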

The YM sample has been criticized for its lack of diversity (Davis et al., 2016, reported that the associations do not replicate well in Black samples), potentially leading to biased results despite the sample size. The moderating effect of race was small, however, and other demographics did not have a moderating effect at all. One variable did have a sizable moderating effect, though: political interest. Respondents who reported no interest in politics had clearly smaller effects than the very interested respondents. This is in line with classic results from the sixties, where it was found that highly politically engaged people are much more consistent on policy issues than people with low political engagement (Converse, 1964). For the former, issues like immigration, welfare, and minority rights are tied to each other, so if you have a particular view on one of them, it’s easy to guess your views on the rest. People not very engaged with politics, however, might have one opinion on one issue but completely “inconsistent”* views on the others. In our data, this means that the results in the YM sample, more than in other samples, are strongly influenced by the people interested in politics, which tends to accentuate the differences between political orientations. Our interpretation is that the method of collecting the data – instead of researchers selecting the sample, people come seeking the questionnaire because they are interested in the topic – likely causes the bias.
This is supported by the finding that the effect sizes are larger, although still clearly smaller than in the YM sample, in samples collected online (susceptible to the same self-selection problem); smaller in college samples (where the questionnaires are typically filled in during courses that the students may or may not have selected themselves); smaller still in samples that have paid attention to their representativeness (but may still have a small component of self-selection, as a representative sample is typically drawn from a panel of volunteers, who have some say in which studies they participate in); and the smallest in the Many Labs 2 study (where the main point was something completely different from MFs and PO, so there is no selection based on interest).

This does not mean the results from the YM sample are wrong; it just means that they are less representative of people in general, who are not very politically engaged. Which in turn means that one should not draw conclusions about people in general based on these people who have this systematic bias.

Cultural differences and the political dimensions

We also had findings more relevant to a layperson. Although race alone did not explain the difference between the YM sample and independent samples, combined with the political interest variable it produced interesting results. For Black and Hispanic respondents not interested in politics, the association between moral foundations and political orientation was zero or negligible – meaning that Black and Hispanic liberals and conservatives who are not very interested in politics have very similar feelings about morality. For White people, the moderating effect of political interest was less strong, indicating that they are more divided regardless of their level of engagement, while people of color are divided only when they are very much into politics. Seeing how this corresponds to the current political climate and the US presidential election polls, it is worth noting that these data are from 2009–2017 and have not been influenced much by the divisive nature of the Trump administration.

That was the US, though, as it is the only country where studies reported race. What about the rest of the world? The data as a whole are heavily skewed towards the US, and other countries were not strongly represented – 47 of the 89 samples were American, only seventeen were from Europe, and the rest came from individual countries around the world, were international, or were unspecified – so our cultural comparisons are only suggestive. However, we found some interesting trends.

First, we compared how the US and European samples use political orientation labels. Three major labeling schemes were used for the political dimension: liberal-conservative, left-right, and separate social and economic liberal-conservative dimensions used together. Different studies used these in different ways, so we could check whether the way the political dimension was labeled mattered for its relationship to moral foundations – and it does. In the US, liberal-conservative and left-right mean practically the same thing, while in Europe, left-right corresponds specifically to the economic dimension (taxes, welfare, spending, etc.) and liberal-conservative to the social dimension (immigration, minority rights, etc.). But US respondents are not blind to the difference between the two political dimensions: when the dimensions were named explicitly, the associations with moral foundations closely followed those found in Europe with the other labels.

All this might sound obvious to, for example, people in my home country, Finland, where liberal-conservative and left-right are routinely used as orthogonal dimensions in the largest national newspaper when it reports on elections. But this is actually a valid research question in political psychology, because – perhaps due to the strong position of American researchers in the field – it has long been argued that a unidimensional political axis is sufficient to represent political differences. Two-dimensional models are used as alternatives, but they are not as prominent as the unidimensional one. And admittedly, the differences between the dimensions were rather modest even where they were apparent. The takeaway is that although the social and economic dimensions are correlated, when looked at in detail, they have clear differences – but it depends on the political culture which labels people understand these dimensions by.

But what were the differences? For liberal-conservative or the social political orientation, the associations with moral foundations were the ones described at the beginning of this post: the differences between liberals and conservatives are small in care and fairness (everyone pretty much agrees that not harming people and being fair is important) but larger in the other three (liberals don’t consider things like patriotism, tradition, and purity as morally important as conservatives do). However, left-right or the economic dimension was more equally associated with all five foundations, at a lower level than the liberal-conservative dimension (a bit smaller difference in care, where most people agree, and a bit larger difference in authority). This indicates that while care and fairness are more important to the left wing and loyalty, authority, and sanctity to the right wing, the differences are not clearly smaller for the former and larger for the latter, as they are on the social dimension. This trend is most interestingly apparent in another analysis, where we used alternative operationalizations of political orientation. According to the dual-process model of political orientation (Duckitt & Sibley, 2009), the social dimension is rooted in a trait called right-wing authoritarianism (RWA; a misleading name, but let’s ignore that here), while the economic dimension is rooted in another trait called social dominance orientation (SDO). The relationships between moral foundations and these two are similar to what I just described above, but in a more extreme way: RWA (related to the social dimension) is only negligibly related to care and fairness – people both high and low in RWA largely agree that not harming people and being fair is important** – but more strongly related to loyalty, authority, and sanctity (r = .45, .55, and .58), indicating larger disagreement.
SDO, on the other hand, is moderately (r = -.32 and -.40) associated with care and fairness, indicating that people who hold hierarchies in high esteem (related to the economic dimension) care clearly less about harming people or people’s rights than people who favor stronger equality. And compared to the stronger correlations between RWA and loyalty, authority, and sanctity, SDO is clearly less strongly associated with them (r = .22, .30, and .19).

In addition to general differences between the US and Europe, we studied individual countries by applying multilevel modeling to individual-level data. Only five of the countries for which we had individual-level data had studies using both liberal-conservative and left-right labels (explicit social and economic dimensions were only used in the US). Of these, the Anglosphere countries New Zealand and the US showed mostly negligible differences between the dimensions, and the Nordic countries Finland and Sweden somewhat larger ones, especially in sanctity (which is moderately related to conservatism, but only weakly to right-orientation). But the most interesting differences were found in Latvia: they were not very large, but their directions were unexpected, suggesting that right-orientation might not only have a smaller association with authority and sanctity than conservatism, but a negative one – indicating that in Latvia, right-oriented respondents endorsed these moral foundations less, not more, than left-oriented people (the opposite of practically all other countries). Although this was only one sample, it follows findings from other research showing that the communist history of Eastern European countries has influenced their political culture in ways that make many US-originating assumptions unreliable (e.g., Piurko et al., 2011).

All in all, the findings related to culture suggested that the simple difference between liberals and conservatives is not so simple when respondents other than White Americans are involved. Without data, we can only speculate about other countries and political cultures. Although the US is the “default” due to the majority of research being done there, the political circumstances there are not universal and should not be assumed to be. Likely, the more a political culture differs from the US, the more divergent the associations with moral foundations are. For instance, when we contacted a Chinese researcher about one of their studies, we got the answer that “the liberal-conservative dimension is not really a thing in China”. Even in Western countries, local politics differ from place to place and may involve important questions about regional issues, political parties that do not fit well even within the two-dimensional model (such as many populist parties or the greens), and so on. How would moral foundations be related to completely different dimensions? An interesting question for future research.

Methodological issues

Publication bias is a big problem, in the field in general but in reviews in particular. It means that due to perverse incentives in academia, researchers publish studies where they have found something interesting but abandon studies that are unsurprising or that don’t find “significant effects” (jargon meaning that a rather arbitrary set of statistical methods happens not to meet a rather arbitrary threshold). This contributes to problems like the replicability crisis: as much as half of the results in psychology cannot be repeated, because the incentives support publishing only surprising or impressive studies, which are often surprising or impressive because they are wrong. Meta-analyses are (rightly) required to analyze whether publication bias has skewed the results. A priori, we assumed that we should not have this problem, because none of the independent studies were primarily about MFs and PO; these were rather measured auxiliary to whatever the studies were actually about – thus, no selection on the basis of whether specifically the MF–PO associations were surprising or impressive was expected. This is also what we found: no publication bias, according to several different tests.
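For readers unfamiliar with such tests, one widely used check is Egger’s regression test for funnel-plot asymmetry. The sketch below is illustrative only – it is not necessarily one of the specific tests run in the article: each effect, standardized by its standard error, is regressed on its precision, and an intercept far from zero signals that small studies report systematically different effects than large ones.

```python
import numpy as np

def egger_intercept(rs, ns):
    """Egger's regression test for funnel-plot asymmetry (illustrative
    sketch of a common publication-bias check; not necessarily the exact
    tests used in the article). Effects are Fisher-z correlations with
    standard error 1/sqrt(n - 3); regress z_i/se_i on 1/se_i, and an
    intercept far from zero suggests small-study asymmetry."""
    z = np.arctanh(np.asarray(rs, dtype=float))
    se = 1.0 / np.sqrt(np.asarray(ns, dtype=float) - 3)
    y = z / se                      # standardized effects
    x = 1.0 / se                    # precision
    X = np.column_stack([np.ones_like(x), x])
    (intercept, _slope), *_ = np.linalg.lstsq(X, y, rcond=None)
    return intercept

# With identical true effects at every sample size the funnel is
# symmetric, the fit is exact, and the intercept is numerically zero.
print(egger_intercept([0.3] * 5, [50, 100, 200, 400, 800]))
```

In a real application the intercept would be tested against its standard error; this sketch only shows the mechanics of the regression.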

The main methodological limitation in our study concerns the primary measures. Moral foundations are measured by a single instrument, the Moral Foundations Questionnaire (MFQ), with some variation in how it was used. While the MFQ has been validated by the regular standards of the field, the main data its validation relied on was the YM sample. In addition, although the motivation behind the theory was to take morality into account more broadly than before, the MFQ as an instrument has been criticized for poor psychometric properties (i.e., does it measure what it purports to measure, and does it do so in an optimal way?), as well as for item wordings that may be better suited to a White US audience than an international one (e.g., references to God, and the loyalty foundation’s emphasis on patriotism rather than family, racial/ethnic group, or religion). Other measures for moral foundations, as well as other conceptualizations of morality altogether, exist, but none of them has reached such popularity as MFT and the MFQ, so comparing them would have been very difficult. Likewise, we only included studies that measured political orientation with clearly the most popular instrument: a simple self-placement scale, often from 1 to 7, where the respondent chooses where they see themselves on the liberal-conservative axis (or one using the other labels). It has been widely considered sufficient, but a number of problems remain (such as it being unclear whether the political labels mean the same thing to the politically engaged and unengaged, or to people from different political cultures). These problems we could not fix, only acknowledge.

Finally, we noted – as has been noted numerous times since the replicability crisis was recognized – that the sample sizes of many studies were far too small for reliable estimates. “[T]o detect our chosen smallest relevant difference of r = .10 with a power of the recommended 80 %, one would need a sample size of 783. The median sample size in the samples included in this meta-analysis was 250, and only eleven samples—including the YourMorals dataset—had a sample size larger than required.” (p. 45) Note that meta-analyses are done exactly to overcome shortcomings like too-small sample sizes, so this does not invalidate our results, but it does imply problems for drawing conclusions about smaller effects within the individual studies.
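The quoted figure can be reproduced with the standard Fisher-z approximation for the power of a correlation test; the short sketch below, using only the Python standard library, is one way to do the calculation.

```python
from math import atanh, ceil
from statistics import NormalDist

def n_for_correlation(r, alpha=0.05, power=0.80):
    """Approximate sample size needed to detect a correlation r with a
    two-tailed test at the given alpha and power, via the Fisher z
    approximation: n = ((z_{1-alpha/2} + z_{power}) / atanh(r))**2 + 3."""
    nd = NormalDist()
    z_a = nd.inv_cdf(1 - alpha / 2)   # two-tailed critical value (1.96)
    z_b = nd.inv_cdf(power)           # quantile for the desired power
    return ceil(((z_a + z_b) / atanh(r)) ** 2 + 3)

print(n_for_correlation(0.10))  # 783, matching the figure quoted above
```

Running the same function with the median sample size in mind shows how underpowered n = 250 is for effects of this magnitude: such a study can reliably detect only correlations of roughly |r| = .18 or more at 80 % power.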

Final words

Thank you to all the original authors who shared their articles, results, and often raw data! Special thanks to Jesse Graham, who both shared the YourMorals data and refereed our manuscript – the results are much more meaningful after his comments and the others received during the review process.

I started this work in the winter of 2015–2016, when I was writing an article on the topic (later published as Kivikangas et al., 2017) and noted the varying effect sizes and labels. I did a small and simple meta-analysis, which I presented at SPSP 2016, but was encouraged to do a more systematic one with more sophisticated analyses. We had to begin the data collection from the start, because I had no experience with meta-analyses and had made some critical mistakes. It took time to correspond with a hundred researchers all over the world; I did a lot of other work and then returned to this one again and again; and the review process was very… thorough. I am happy with the results, although there were a ton of other analyses I wanted to do but was forced to abandon – nobody would have read the tome that it would have been. Some of them I might return to at a later point.

Personally, this work was somewhat of a detour from my true passion: the affective system and how it relates to morality. MFT argues that moral foundations are inherent learning modules that produce moral feelings. Originally I was very much convinced, but the more I have studied it, the more I feel it has a lot to improve. One new piece of research (Curry et al., 2019) has suggested that morality provides universal types of solutions to problems of cooperation. Although the research is anthropological in its origins, it appears to provide more concrete potential adaptations that could be linked to affective mechanisms. I’d very much like to study these links more closely, preferably with experimental methods, but there’s a lot to learn about it first. And there is also a new version of the MFQ coming, and I’m keen to see how it has improved. Presumably the truth lies in neither, and I’m enthusiastic about trying to dig closer to it with these new tools. From the political point of view, another interesting approach is Crimston and colleagues’ (2016) moral expansiveness: in many cases, the difference between political poles may not be in what kind of morality (moral foundation) the person endorses more strongly, but rather in whom they extend their feelings of morality to. It has long been known that liberals have a more universalist view of morality: all humans – and even some non-humans, like livestock and nature in general – should be considered equally, while for conservatives morality appears to be about the people closer to them. How does this interact with types of morality? For instance, a common row in political discussion is that liberals express care about refugees trying to come into our country, while conservatives express care about compatriots who they feel are unnecessarily burdened and endangered. A typical accusation from the liberals is that the conservatives are simply racist and the care about women being raped is just window dressing.
But according to the moral expansiveness view, this might be a genuine difference in whom the moral feelings extend to. It does not help anybody to be disgusted by this difference, which, according to the theory, is emotional, not rational. You cannot reason someone out of opinions they did not reason themselves into – and this does not apply only to your political opponents but to you, reader, just like everyone else. Rather, understanding the differences should help us come up with ways to live with each other without purposely stepping on each other’s feelings just because we don’t happen to share them.

Scientifically, one of the anonymous reviewers expressed strong doubt about whether our results were worth anything. The measures are dubious, the scope is very limited, the topic is not new. Susceptible to imposter syndrome as much as researchers often are, I had to concede that they had a point in almost all of their critiques, but after thinking it through, I still had to disagree. Knowing, rather than assuming, that widely cited results are correct on a general level is valuable, for the larger audience but also for researchers reading old studies and designing new ones. Knowing better estimates for the results is valuable, for researchers in the field but also for meta-science. Knowing the limitations of the widely cited results and datasets, and what might influence them and how, is valuable. Knowing how the exceptions link to other scientific research is valuable. I say knowing, but of course strictly speaking we never “know” in science; we are only more confident because we have better data. And these are now the best data available on this topic.


*) Inconsistent, not necessarily philosophically, but in terms of how these issues tend to empirically group in local political parties.

**) This is actually interesting, as characteristics of authoritarianism include punitiveness and aggressively “putting the people who are different in their place”, which could reasonably be interpreted as a proneness to harm others and a disregard of others’ rights. I would speculate that this might be because the moral foundations are considered separately here. A high-authoritarian person might agree that not harming people and fairness are important when asked about these alone, but when they are brought into the context of authoritarian aggressiveness and punitiveness, they might say that punishing evil and putting people in their place is clearly more important than an individual person’s rights.

References

Converse, P. E. (1964). The nature of belief systems in mass publics. In D. E. Apter (Ed.), Ideology and discontent (pp. 206–261). Free Press.

Crimston, D., Bain, P. G., Hornsey, M. J., & Bastian, B. (2016). Moral expansiveness: Examining variability in the extension of the moral world. Journal of Personality and Social Psychology, 111(4), 636–653. https://doi.org/10.1037/pspp0000086

Curry, O. S., Chesters, M. J., & Van Lissa, C. J. (2019). Mapping morality with a compass: Testing the theory of ‘morality-as-cooperation’ with a new questionnaire. Journal of Research in Personality, 78, 106–124. https://doi.org/10.1016/j.jrp.2018.10.008

Davis, D. E., Rice, K., Van Tongeren, D. R., Hook, J. N., DeBlaere, C., Worthington, E. L., & Choe, E. (2016). The moral foundations hypothesis does not replicate well in Black samples. Journal of Personality and Social Psychology, 110(4), e23–e30. https://doi.org/10.1037/pspp0000056

Duckitt, J., & Sibley, C. G. (2009). A Dual-Process Motivational Model of Ideology, Politics, and Prejudice. Psychological Inquiry, 20(2–3), 98–109. https://doi.org/10.1080/10478400903028540

Klein, R. A., Vianello, M., Hasselman, F., Adams, B. G., Adams, R. B., Alper, S., Aveyard, M., & Axt, J. R. (2018). Many Labs 2: Investigating Variation in Replicability Across Sample and Setting [Preprint manuscript]. https://psyarxiv.com/9654g/

Piurko, Y., Schwartz, S. H., & Davidov, E. (2011). Basic Personal Values and the Meaning of Left-Right Political Orientations in 20 Countries. Political Psychology, 32(4), 537–561. https://doi.org/10.1111/j.1467-9221.2011.00828.x

Richard, F. D., Bond, C. F., Jr., & Stokes-Zoota, J. J. (2003). One hundred years of social psychology quantitatively described. Review of General Psychology, 7(4), 331–363. https://doi.org/10.1037/1089-2680.7.4.331

Loss aversion, one more victim to replication crisis? (no)

I saw somewhere a link to a working paper by Gal & Rucker entitled The Loss of Loss Aversion: Will It Loom Larger Than Its Gain?, with the comment that loss aversion is one more psychological phenomenon that fails to replicate. I was surprised by this claim, because the mechanisms behind loss aversion, or something like it, are very much related to affect psychology. The replicability framing implies that loss aversion was never a thing – that it was just a statistical fluke resulting from questionable research practices, like social priming or Bem’s psi findings. But that’s not what the linked manuscript says. It’s a review that never actually questions whether such a phenomenon exists at all – rather, it discusses the scope and an alternative conceptualization of the phenomenon. I’m not familiar enough with this literature to assess whether the review really is impartial or whether it cherry-picks its findings (it’s quite obviously written with a particular conclusion in mind, and self-cites a lot), but it clearly should not be cited as evidence that loss aversion as a phenomenon is a result of QRPs. I’m a bit annoyed that the title and even the abstract play like clickbait and make it easy to link a normal theoretical discussion about the limits of a phenomenon to the replicability issue.

In addition to enabling the misreading of this manuscript’s position in the literature, I was slightly miffed that it’s at least partly based on a fundamental misunderstanding of how the mind works. The authors describe a strong and a weak form of loss aversion in order to compare them to evidence, both framed as a general, universal principle that can be applied to any human behavior: the strong form as absolute (that “one should not observe cases where gains have a propensity to be weighted more than losses of similar magnitude”, p. 9), and the weak form as relative (“on average, one expects the data would largely reveal a greater impact of losses than of gains”, p. 10). It may be that this is a feature of decision-making research in general rather than a view held only by the authors, but from the point of view of affect psychology, it makes little sense. It’s a strawman, because I don’t think there is anything in psychology that can be considered a universal law like this at the level of observable outcomes. The human mind does not work on “principles” or “laws” like this, because it is an immensely complex system of reacting, predicting, and self-correcting processes. There is no single process reaching through the whole of the human mind, always (or even mostly, on average) producing the same results regardless of circumstances, because that would not be adaptive in the complex physical and social environment our mental machinery evolved for. And even at a very high level, it’s a dubious notion to begin with that all decision making would be governed by a single process translating all kinds of decisions into simple losses and gains.

I admit that I think of some things as “principles” of the human mind, and negativity bias (related but not identical to loss aversion) sounds like a good candidate. But that does not mean that at the level of observable outcomes, regardless of circumstances, we should see (absolutely or on average) a particular pattern of behavior. Rather, it means that some parts of the system tend to process information in certain ways, and in specific circumstances – where we can somehow control that specifically these processes are the ones influencing the outcomes the most – we can indeed see patterns in behavior.

That said, the alternative conceptualizations – such as a propensity towards inaction or the status quo – are interesting and worth considering (assuming the review is not horribly biased) for anyone working with loss aversion. It is very likely true that an intuitively appealing conceptualization tends to be overgeneralized, and that scientists easily persist with it even in the face of evidence to the contrary.

 


Gal, D., & Rucker, D. (2017). The Loss of Loss Aversion: Will It Loom Larger Than Its Gain? (SSRN Scholarly Paper No. ID 3049660). Rochester, NY: Social Science Research Network. Retrieved on 12 Jul 2018 from https://papers.ssrn.com/abstract=3049660

TIL depression as an unfortunate result of emotional recalibration

(And re: previous post – no, I’m not horribly depressed, nor am I working full-time again. I’m doing things I enjoy in order to get better, and emotion theory happens to be one of them.)

Reading Tooby & Cosmides (2005), Conceptual Foundations of Evolutionary Psychology, as part of refamiliarizing myself with the basics of evopsych. It is a very good description of many of the basic ideas behind evopsych – mostly familiar material, and surprisingly little that I disagree with – but the part that was new to me was the idea of recalibrational emotion programs.

The core idea is that unlike many other emotions*, emotions such as guilt, grief, shame, gratitude, and depression did not evolve to produce any immediate behavior change. Instead, drawing on the computational approach to psychology, the idea is that behavior generally depends on a lot of (nonconscious) regulatory variables that track the relatively stable circumstances of one’s life. This way the brain does not have to calculate things like estimates of social support, a particular person’s likelihood of reciprocating kindness (or aggression), or the present health and energy of one’s own body on the fly, when already in a situation. But these variables need to be updated constantly, and sometimes the act of updating a variable should itself cause changes in other evaluations, in default modes of behavior in related situations, and so on. The authors use guilt as an example (p. 59):

Imagine a mechanism that evolved to allocate food according to Hamilton’s rule, situated, for example, in a hunter-gatherer woman. The mechanism in the woman has been using the best information available to her to weight the relative values of the meat to herself and her sister, perhaps reassuring her that it is safe to be away from her sister for a short time. The sudden discovery that her sister, since she was last contacted, has been starving and has become sick functions as an information-dense situation allowing the recalibration of the algorithms that weighted the relative values of the meat to self and sister. The sister’s sickness functions as a cue that the previous allocation weighting was in error and that the variables need to be reweighted—including all of the weightings embedded in habitual action sequences. Guilt functions as an emotion mode specialized for recalibration of regulatory variables that control trade-offs in welfare between self and others […] Previous courses of action are brought to mind (“I could have helped then; why didn’t I think to?”), with the effect of resetting choice points in decision rules.
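
Hamilton’s rule, which the quoted mechanism allocates food by, says that an altruistic act is favored by selection when r × B > C: relatedness times the benefit to the recipient exceeds the cost to the actor. A minimal sketch of the rule (the fitness numbers are purely illustrative, not from the book):

```python
def favors_sharing(relatedness: float, benefit: float, cost: float) -> bool:
    """Hamilton's rule: altruism is selected for when r * B > C."""
    return relatedness * benefit > cost

# Full sisters share on average half their genes (r = 0.5).
# Illustrative numbers: giving meat costs the giver 1 unit of fitness.
print(favors_sharing(0.5, 3.0, 1.0))  # True:  a starving sister benefits a lot
print(favors_sharing(0.5, 1.5, 1.0))  # False: 0.5 * 1.5 = 0.75 < 1
```

The recalibration in the example amounts to updating the inputs of exactly this kind of weighting: the sister’s sickness is a cue that the benefit term had been badly underestimated.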

The authors briefly mention depression as well: “Former actions that seemed pleasurable in the past, but which ultimately turned out to lead to bad outcomes, are reexperienced in imagination with a new affective coloration, so that in the future entirely different weightings are called up during choices.”

I have always considered the functional explanations of depression (or sadness) suspect, because they have typically only briefly mentioned “detachment” or something similar as the function, and that has sounded… not right. Why would depression feel so horrible if its function was simply to detach from or realign goals? An adaptation that seems to make you passive, drives you to ruin your relationships, and ultimately to kill yourself does not seem very adaptive**.

The idea of recalibration makes this so much more understandable! It is not that depression is an adaptation in itself. Instead, (I now hypothesize,) it is the result of the recalibration program accidentally recalibrating all the core motivations to zero at the same time. Normally, recalibration operates on one core motivation at a time – it gets set to zero, but other motivations are still running, so behavior is directed more adaptively. But if you happen to have only a couple of core motivations, and they all get set to zero by some recalibration (which may not be based on the most objective of evaluations if the preceding states are already biased), you end up in a state where you have no core motivations left: depression. The program signals that you should stop doing these things and do other things instead, but there are no other things left to do.
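To make the hypothesis concrete, here is a toy model of it – every variable name and number below is invented purely for illustration, not anything from Tooby & Cosmides. Recalibration zeroes some set of core motivations; behavior stays directed as long as at least one motivation remains above zero, and the hypothesized depressive state is the case where recalibration hits all of them at once.

```python
# Core motivations as (hypothetical) regulatory variables; values are
# made-up motivation strengths.
motivations = {"social": 0.8, "work": 0.6, "hobby": 0.7}

def recalibrate(motivations, targets):
    """Set the targeted motivations to zero after a negative outcome."""
    return {k: (0.0 if k in targets else v) for k, v in motivations.items()}

def has_direction(motivations):
    """Behavior stays directed while some motivation is above zero."""
    return any(v > 0 for v in motivations.values())

# Normal case: one motivation is recalibrated, the others still guide behavior.
after_one = recalibrate(motivations, {"work"})
print(has_direction(after_one))   # True

# Pathological case: every core motivation is recalibrated at once.
after_all = recalibrate(motivations, {"social", "work", "hobby"})
print(has_direction(after_all))   # False - the hypothesized depressive state
```

The toy makes the structural point visible: the same recalibration operation is adaptive or pathological depending only on how many independent motivations are left running afterwards.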


*) The authors strongly subscribe to a discrete emotion view, but their arguments can be read as somewhat inconsistent with basic emotion theories.

**) Yes, yes, suicide can be adaptive from the gene’s point of view. But depression does not seem a reliable result of situations where suicide would actually be adaptive.

Reference

Tooby, J., & Cosmides, L. (2005). Conceptual Foundations of Evolutionary Psychology. In D. M. Buss (Ed.), The handbook of evolutionary psychology (pp. 5–67). Hoboken, NJ: Wiley.