My research aims to be descriptive, not normative

“[O]ur project is descriptive, not normative. We are not trying to say who or what is morally right or good. We are simply trying to analyze an important aspect of human social life. Cultures vary morally, as do individuals within cultures. These differences often lead to hostility, and sometimes violence. We think it would be helpful for social psychologists, policy makers, and citizens more generally to have a language in which they can describe and understand moralities that are not their own.” (Graham et al., 2013, p. 57-58)


The quote above comes from a central scientific publication by the developers of Moral Foundations Theory, and I endorse it fully. I also aim to keep my own research as non-normative as possible: although I hold private moral views about what is right and good [a link to a blog post describing them more openly will be added here later], I try to keep them separate from my research and to keep the research as descriptive as possible. This is difficult, especially since one of my research premises is that people find it hard to recognize the actual causes of their moral views, but the difficulty of a goal is no reason not to try.

If you ever notice that my science-related blog posts (I may occasionally blog about my values as well, but I try to mark that openly and clearly), or, more seriously, my research papers contain obvious or even possible biases with respect to value questions, I ask you, reader, to write a comment or send me an email and explain how you see things differently. I promise to read and consider all constructive feedback, and where possible to correct the biases that are pointed out.

Learning: measures of political orientation

Writing a meta-analysis on political orientation and MFT, I’ve learned that there are (self-report) measures of political orientation that have completely different approaches to the question, and they seem to be favored by different kinds of users. I’m sure there are interesting philosophical writings on the subject, because topics like “what is political orientation” and “what is ideology” (not to mention “what is measurement”) are probably inexhaustible. This is just my quick observation on the subject.

Four (and a half?) different types of self-report measures:

Self-identification measures use broad labels (such as “liberal vs. conservative” or “left vs. right”), asking people where they would place themselves, practically always on a bipolar scale. A pragmatic choice – the focus is elsewhere, but something about PO needs to be measured, so the easiest and simplest option is used. This views PO as a matter of identification and thus skips the difficult questions of how to measure PO by settling for the idea that PO is whatever people think it is. However, because the scale is predefined rather than generated by the respondents themselves, this may lead to problems when the measure is used outside the population it emerged from – e.g. “lib–cons” and “left–right” are used differently in the US and in Europe. Single-item self-placement is also psychometrically problematic.

Issue-based measures ask an array of questions about specific political issues. They seem to be favored by researchers in political science – most likely because they know about the inconsistencies between people’s identification and their political behavior (and their focus is probably more often specifically on political behavior) and about the problems with minor groups that do not fit the bipolar scale, so a single-item scale is viewed as inadequate. Issue-based measures are often combined with self-identification measures in order to empirically link self-identification labels to patterns of issue positions. The underlying idea is that one’s PO is determined by similarity to other people (a data-driven approach). A general problem with an issue-based measure arises when the issues in the items are not relevant for the respondents (e.g. they are too old or come from a foreign political culture: abortion may be a hot topic in the US and in Catholic countries, but it is a non-issue in Protestant Western/Northern Europe). Another problem emerges when the identifications and the response patterns diverge.
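To make the data-driven idea concrete, here is a minimal sketch (my own illustration, not taken from any particular study) of how issue responses could be clustered and the clusters then labeled by the modal self-identification; all data and variable names are hypothetical.

```python
# Minimal sketch of empirically linking self-identification labels to issue
# patterns: cluster respondents by their issue positions, then name each
# cluster by its most common self-identification. Hypothetical data only.
import numpy as np
from collections import Counter
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Hypothetical data: 300 respondents x 10 issue items (1-7 agreement scale)
issue_responses = rng.integers(1, 8, size=(300, 10))
# Hypothetical single-item self-identification
self_id = rng.choice(["left", "center", "right"], size=300)

# Group respondents purely by the similarity of their issue patterns
cluster = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(issue_responses)

# Attach an empirical label to each cluster: the modal self-identification
for c in range(3):
    labels_in_cluster = self_id[cluster == c]
    modal_label, count = Counter(labels_in_cluster).most_common(1)[0]
    print(f"cluster {c}: n={labels_in_cluster.size}, "
          f"modal self-identification={modal_label} ({count} respondents)")
```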

Theory-based measures are based on a priori definitions from a particular theoretical approach, and in practice consider the ways people identify themselves more or less irrelevant. One’s PO is determined by its correspondence to the theoretically important factors; the items may be issue-based, concern more abstract principles, or both. If you answer in a particular pattern, you can be labeled with a name derived from the theory (such as “liberal vs. authoritarian/statist”), whether you like it or not. I have seen this especially in libertarian opinion writing with no empirical research, but it is also used in research. The problem with the former is that even though a label opposite to the writer’s own favored position may seem objective to the writer (like “authoritarian”), people who are not already on the same side surprisingly do not appreciate being called that, so the writing does more to inflame than to foster discussion. The tradeoff of the theory-based research approach (in addition to those it shares with the issue-based approach) is naturally that you often find what you look for, so the measure can only be as strong as the theory – and I am not aware of particularly good psychological theories that take a realistic view of the human mind and political behavior into account.

Proxy measures use a measure not intended specifically for political orientation – such as values, particular personality traits like SDO or openness, or moral foundations – that has previously established, reliable relationships with PO that the user can rely on. Psychologists seem to be especially fond of this, probably because they are not interested in political issues per se. Obviously, the problem is that such a measure can only capture the facets of PO that happen to correlate with the proxy, so it may miss important information.

From an affective-psychological point of view, the different types are not just different ways to answer the same question, but (to some extent) reflect fundamentally different processing. My assumption is that there is a (probably largish) number of nonconscious processes at different levels that produce the range of phenomena that may go under “political orientation”, and that some of the differences in the operating parameters of the low-level processes produce an important portion of the stable differences between political orientations. These trait differences behind political orientation are what I am mainly interested in. It is not that the high-level differences are unimportant, but I assume that their operation in interaction with cultural influences is more complex, and therefore more difficult to study, so it is better to at least start with the easier ones.

Self-identification is the result of conscious processing layered on top of the automatic construction of identity categories and of the processes linking oneself to one of them. As such, it is a high-level process, and although it will have some relationship to the trait differences at the lower levels, the heavy processing of the higher levels mostly serves to confound that relationship. So self-identification tells us something about how people self-identify, but less about how this identification fundamentally works – although mapping the identification patterns may of course provide good information for further study. Proxy measures may target the low-level processes better, but they leave the relationship to PO itself unclear, if we assume that PO is something more than just the proxy (and we do).

Issue- and theory-based measures require judgments instead of self-reflection, which is in principle a better way to probe nonconscious processes. The issue-based approach cannot escape the problem that you need a pretty good theoretical idea of what matters when you choose the items, or else your measure is just a collection of items that are related to each other on the surface but do not necessarily tell much about the processes underneath. Of course, if the research is about the surface level, that is completely fine. A theory-based measure with abstract principles comes with the disadvantage that asking people how they would make judgments may give different answers than having them actually make the judgments. So if my assumption about the underlying structure is correct and we want to study the low-level processes with self-report measures, judgment items about issues, guided by a good theory, seem to be the way to go (without spending too much time on analyzing this).

 

Why do we get angry at a group even though the perpetrator is an individual?

A friend of mine asked on Facebook a while ago:

I wonder what it is that sometimes, when someone does something bad (stabs random people at a market square, say), instead of getting angry only at that one person, we get angry at completely different people who in our view belong to some same group as that one person? Or even at some other person who doesn’t belong to that group at all but has at some point expressed support for it? Totally weird.

I recognize this phenomenon in myself, and I see it in other people. Of course it varies when it happens. For example, in the case of the Turku stabbings I did not get angry at all Muslims, but some blurt by Päivi Räsänen, say, can momentarily make me think something bad about all religious people that they don’t deserve. For someone else these situations and reactions would go exactly the other way around, but we both have this same tendency to direct our anger at the wrong target.

I gave a quick answer, the content of which I have slightly expanded and referenced below*.

The question seems to rest on an (unrealistic) “rational person” view of human nature, according to which this kind of behavior is strange, as if a bug in the functioning of a rational person. On the contrary, most of the intuitive notions (of a person raised in a Western culture?) about how the human mind works are wrong, and the behavior described here results from processes that work perfectly “correctly”**.

The brain generalizes and categorizes [1]. It does so for efficiency: so that every single observation does not need to be processed separately, but, based on identifying features, the observed thing can be assumed to be similar to what has been observed before, and to probably lead to similar consequences as before. This kind of assumption saves an enormous amount of the work the brain has to do, and if the categorization works well, it leads to the extremely useful ability to make, most of the time, sufficiently accurate predictions about the future [2]. Individual things are processed in more detail the more often they are encountered, because this probably means they are more important in our environment. The same of course applies to people.

Thus people are processed as parts of categories, that is, through their group memberships, and the more so the more psychologically distant they are from you***. Even though you can consciously think that Räsänen does thing x, your emotional system processes the situation, without your awareness, as group A, of which Räsänen is just one instance, doing thing x. That is why your feelings are directed at group A as a whole, not at its individual instance Räsänen.

 


*) Through the literature I have devoured, I have exact sources only for the claims of my own specialty, that is, the ones I have dealt with a lot, whereas a kind of research osmosis produces a vague understanding of a big pile of things for which I cannot name any particular source. I am adding the tag “shotgun references”, which means that I googled something that at a quick glance seems to address the topic, but I have not read most of the papers, at least not carefully. At the same time these serve as a reminder to myself that I can return to a reference I once found if it later turns out to be important.

**) By saying that the processes work “correctly” I mean that the mind has evolved to be a certain kind, adapted to run certain kinds of processes within certain limits and under certain conditions. The phenomenon is a “bug” only relative to the expectations produced by the unrealistic “rational person” view of human nature, not any real deviation from the normal.

***) This yields an interesting hypothesis: is the social-psychological concept of the outgroup a consequence of the brain process whereby the people belonging to it are not close enough to you for your brain to process them as individuals? Apparently there is at least no consensus on this [3]. Clearly this cannot explain the whole phenomenon: in the minimal group paradigm, previously unknown people are divided into groups, and one’s own group is treated as an ingroup on this basis alone, so its members have not been processed any more than the non-members have. Perhaps conscious group categorization is nevertheless another (top-down) use of the same, more fundamental mechanism (categorization)?

References

[1] Seger, C. A., & Peterson, E. J. (2013). Categorization = Decision Making + Generalization. Neuroscience and Biobehavioral Reviews, 37(7), 1187–1200. https://doi.org/10.1016/j.neubiorev.2013.03.015

[2] Bar, M. (2009). The proactive brain: memory for predictions. Philosophical Transactions of the Royal Society B: Biological Sciences, 364(1521), 1235–1243. https://doi.org/10.1098/rstb.2008.0310

[3] Shkurko, A. V. (2013). Is social categorization based on relational ingroup/outgroup opposition? A meta-analysis. Social Cognitive and Affective Neuroscience, 8(8), 870–877. https://doi.org/10.1093/scan/nss085

Economists’ moral views affect their conclusions

In brief: economists’ moral views affect their conclusions, and if (when) economists are more right-wing than average, their conclusions are colored by this set of values.

 

I remember commenting in some online discussion a while back that there is reason to assume that the people who go into economics already have a certain kind of worldview to begin with, and that this affects what kinds of results economics produces. I based my view on the fact that US social psychology has a strong bias toward politically left-wing thinking by local standards (Duarte et al., 2015), and there are no grounds to assume that all of this bias originates in what is learned within the field; rather, it is probably at least partly a matter of what kinds of people become social psychologists (cf. the discussion in Finland about the party preferences of police officers, i.e. of those who seek to become police officers – on average quite right-wing).

Now I happened to come across a study by Randazzo and Haidt (2015), which examines how the differences between schools of thought within economics relate to economists’ moral views and to their views about economic facts*. Before getting to the points of the paper, a couple of disclaimers: it is based on a survey, the number of respondents is rather small, the response rate is low, and at least this version was still a working paper, i.e. not yet accepted as a scientific publication (although, as I understand it, working papers are valued more highly in economics than in psychology, where they count for very little).

Background:

[The] divergence of economic judgment is typically chalked up to competing methodological approaches (Schmitt 2013; Neumark and Wascher 2006). Methodological disagreement has long been the  mainstream explanation given for why economists disagree. In 1953, Milton Friedman wrote that economists should and could focus on an objective, empirical science, developing theories and hypotheses “that yield valid and meaningful (i.e., not truistic) predictions about phenomena” (1953, p. 7). Thus, disagreements among well-trained economists are due to methodological differences—or “scientific judgments”—not competing value judgments (Friedman 1968, p. 2).

[This is what they call the Samuelson-Friedman hypothesis]

vs.

Taken together, we might speak of a Myrdal-Heyne hypothesis: Economics is a social activity where testing and prediction are only possible in the context of humanly framed questions and problems, and therefore economics is permeated by value judgments. On this view, economists bring their subjective biases into the work of social science. Sometimes economists disagree because they are looking at different data or using different methodologies—but what draws economists to a particular data set, model, methodology, or definition of a term is often rooted in their normative worldviews.

Results in brief:

  • The schools of economic thought (Keynesian, neoclassical, Austrian, New Institutional – I don’t know what the last one is in Finnish) differ significantly in which moral foundations they endorse – and in some cases more than the parliamentary parties in Finland differ from each other. Keynesians consider care and especially equality more important than the others do, while Austrians value these particularly little. Austrians, in turn, consider liberty especially important.

[Figure: moral foundation scores by school of economic thought]

  • When normative and positive economic claims are compared across the schools, we see that some of these differences are just as large – i.e. the schools disagree about some “factual claims” just as much as they disagree about explicit value claims.
  • Moral views, specifically care and equality, predict these differences equally well for both value claims and positive claims.
  • The more consensus there is among economists about a claim, the less well moral views explain the remaining differences.
  • There is most consensus about the neoclassical views.

“we can clearly see that the disagreements between the schools of economic thought are predictable by the moral judgments of economists in those groups. The Myrdal-Heyne hypothesis says that economists frame questions from their own particular, subjective worldview, and that this leads to value judgments infiltrating the methods used to consider economic phenomena. Its not that economists are picking and choosing their data explicitly to get the findings they want (though an unscrupulous economist could), instead what the moral judgments do is shape the approach an economist takes to considering a research subject like the effect of the minimum wage.”

My own addition: although the Keynesians are higher on care and equality, all the schools are lower on these (and on proportionality) than the Finnish parties on average, including the right-wing ones. Since care and fairness – the latter being half equality, half proportionality – are negatively associated with right-wing orientation, economists are on average clearly more right-wing than Finns (though of course so are Americans in general).

Then the most important point (bolding mine):

To reiterate, this should not be interpreted as a condemnation of economics. Economics is a social science, and its practitioners are subject to political bias just like practitioners of the other social sciences (e.g., social psychology; see Duarte et al., 2015). Our findings therefore raise a warning flag when politicians and journalists cite economic research to bolster their favored conclusions. In the development of legislative language, or citation of economic research, or quoting of economists as experts, there should be recognition that value judgments are present in the economics under consideration.

This does not directly answer the question I posed at the beginning, but it relates to the same theme.

 


* ) The paper talks about “positive claims”: “Positive economics is in principle independent of any particular ethical position or normative judgments” (p. 2, a quote from Friedman, whom I have not read).

References

Duarte, J. L., Crawford, J. T., Stern, C., Haidt, J., Jussim, L., & Tetlock, P. E. (2015). Political diversity will improve social psychological science. Behavioral and Brain Sciences, 38. https://doi.org/10.1017/S0140525X14000430

Randazzo, A., & Haidt, J. (2015). Are Economists Influenced by Their Moral Worldviews? Evidence from the Moral Foundations of Economists Questionnaire (SSRN Scholarly Paper No. ID 2700889). Rochester, NY: Social Science Research Network. Retrieved from https://papers.ssrn.com/abstract=2700889

Reading: 27 semantic categories for emotional experiences (Cowen & Keltner, 2017)

“Self-report captures 27 distinct categories of emotion bridged by continuous gradients” by Cowen and Keltner (2017) in PNAS has raised interest but also ridicule. Sure, if your preconception is that discrete theories are bs anyway, then “there are 27 categories instead of the traditional six!” may seem funny. But I have argued (Kivikangas, in review; see also Scarantino & Griffiths, 2011) that discrete emotion views (not the same as basic emotion theories) have their place in emotion theory, and I find a lot of good in the article – as long as it is kept in mind that it is about self-reports of emotional experiences.

To summarize, the participants were shown short video clips, and different ratings of emotional experiences resulted in a list of 27 semantic categories that overlap somewhat, implying that both discrete and dimensional views are right. The sample could have been bigger, the list they started with seems somewhat arbitrary, and the type of stimuli probably influences the results. But the article supports many of my own ideas, so my confirmation bias says it’s valid stuff.

In a bit more detail:

  • They had 853 participants (which, IMO, they should mention in the main text as well) from MTurk watching a total of 2,185 videos (5 s on average) and judging their feelings. The participants were divided into three groups [1]:
    • the first group provided free responses to 30 randomly assigned videos (although the supporting information says this was not entirely free, but a predefined set of 600 terms which were autocompleted when the participant typed into a blank box);
    • the second group rated 30 videos according to a predefined set of 34 discrete categories (they had to choose at least one, but could choose more – apparently this choice was dichotomous) that the authors had gathered from bits and pieces around the literature;
    • the third group rated 12 videos according to a predefined set of 14 dimensional [2] scales (9-point Likert scales).
  • I don’t pretend to know the statistical methods well enough to vouch for their verity [3], but the authors report that 24 to 26 of the 34 discrete categories from the second group were independent enough [4] to rate the videos reliably. The “free” responses from the first group provided 27 independent [5] descriptions, which were then factored into the 34 categories to find the independent categories (a toy sketch of this kind of CCA step follows after this list). Apparently these three analyses are taken as evidence that categories beyond the 27 are redundant (e.g. used as synonyms; statistically not reliably independent).

[Figure: Cowen & Keltner (2017), Fig. 2C – the list of emotion categories]

  • Their list above (Fig. 2C) dropped the following from the original 34 categories: contempt and disappointment (coloading on anger), envy and guilt (unreliable, but not loading on any other factor), pride [6] and triumph (coloading on admiration), and sympathy (coloading on empathic pain and sadness). I discuss the list and these dropped categories below.
  • These categories are not completely distinct, but transition smoothly into some (but not just any!) of the neighboring categories; e.g. anxiety overlaps with fear, and fear overlaps with horror, but horror does not noticeably overlap with anxiety. When the 27 mathematical dimensions these categories load on are collapsed into two, we get this map:

[Figure: Cowen & Keltner (2017), Fig. 2B – the two-dimensional map of the categories]

  • The map is also available in higher resolution here, and it’s interactive! You can point at any letter and see a gif of the rated video!
  • The authors compare these categories to the 14 dimensions from the third group, and report that while the affective dimensions explained at most 61 % of the variance in the categorical judgment dimensions, the categories explained 78 % of the variance in the affective dimension judgments. When factored into the categorical data, valence is the strongest factor, “unsafety+upswing” is the second (I read this as threat + excited arousal), and commitment (?) is the third.
  • A final claim is that the emotion reports are not dependent on demographics or personality or some other psychological traits (except perhaps religiosity).
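The CCA step mentioned in the list above can be illustrated very roughly with the following toy sketch (my own, with simulated data; the authors actually used a variant they call SH-CCA, which I am not reproducing here).

```python
# Toy sketch of the CCA idea: find linear combinations of the categorical
# ratings and of the free-response ratings that maximally correlate across
# videos. Simulated data; not the authors' actual SH-CCA procedure.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(1)
n_videos = 200

latent = rng.normal(size=(n_videos, 5))   # shared "true" dimensions of experience

# Two views of the same videos: mean categorical ratings (34 categories) and
# mean free-response term usage (reduced here to 40 terms for brevity)
categorical = latent @ rng.normal(size=(5, 34)) + 0.5 * rng.normal(size=(n_videos, 34))
free_resp = latent @ rng.normal(size=(5, 40)) + 0.5 * rng.normal(size=(n_videos, 40))

cca = CCA(n_components=5)
cat_scores, free_scores = cca.fit_transform(categorical, free_resp)

# Correlation of each canonical pair: high values indicate dimensions of
# reported experience captured reliably by both response formats.
for i in range(5):
    r = np.corrcoef(cat_scores[:, i], free_scores[:, i])[0, 1]
    print(f"canonical pair {i + 1}: r = {r:.2f}")
```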

The article begins with the remark that “experience is often considered the sine qua non of emotion”, and the general language in the article firmly places it on the experiential level: the study does not focus on the hypothesized psychological processes behind experiences, nor on the neural structures or evolutionary functions comprising the whole affect system. I mention this specifically, because IMO the inability to differentiate between these different levels is one of the main reasons the wide range of emotion theories seem so incompatible (which is one of the main points in Kivikangas, in review). The article recognizes the limits of the semantic approach admirably without making overextending claims, although in the discussion they do speculate (pretty lightly) about how the findings might relate to the neural level. However, although the authors avoid overextending and list relevant limitations (should be studied with other elicitors, other languages and cultures, etc.), the paper is probably still going to be read by many people as a suggestion that there are 27 strictly discrete categories (no, they are somewhat continuous) of emotions (no, these are only self-reports of emotional experiences – and even self-report “is not a direct readout of experience”, as the authors point out).

Furthermore, I like the position (similar to my own) of saying that – although Keltner is a known discrete theorist – both discrete and dimensional views have some crucial parts right, but that the strictest versions are not supported (“These findings converge with doubts that emotion categories “cut nature at its joints” (20), but fail to support the opposite view that reported emotional experiences are defined by entirely independent dimensions”). The authors also start from the astute observation that “the array of emotional states captured in past studies is too narrow to generalize, a priori, to the rich variety of emotional experiences that people deem distinct”. Another point I recently made (Kivikangas, in review) was that although the idea of early evolutionary response modules for recurrent threats and opportunities is plausible, the claim that the number of these modules would be the traditional six, or even 15 (Ekman & Cordaro, 2011), is not. My view is that affects (not experiences, but the “affect channels” of accumulating neural activation; Kivikangas, 2016) are attractor states, produced by a much larger number (at least dozens, probably hundreds) of smaller processes interacting together. And there is definitely no reason to believe that the words of natural language – typically English in these studies – would describe them accurately (as pointed out by, among others, Russell, 2009, and Barrett, pretty much anything from 2006 onwards).

So there is a lot I like in the article. However, there are some obvious limitations they do not explicitly state. First, they begin from a list of 34 terms drawn from different parts of the literature, which is a wider array than normally used, but it still rules out a lot of affective phenomena. From my own research history, media experiences involve other relevant feelings, like frustration or tension (anticipation of something happening, typically produced with tense music). Of course one can say that those are covered by anger and anxiety, for example, but then I would have to point out the relatively small number of participants – the factors might be different with another sample. (A side point is that while this seems to be a fine study, for a more specific purpose, such as wider use in game research, one would probably want to conduct one’s own study with that particular population, because the usage would probably be different.)

A theoretically more interesting point is that they include categories like nostalgia and sexual desire, and even craving and entrancement, which many theorists would argue vehemently against including in a list of emotions. Me, I am happy for their inclusion, as I think that “emotion” is a somewhat arbitrary category anyway, and if we look at the affective system as a whole, we notice a lot of things that are certainly affective but are not thought of as emotions (one more point I made in Kivikangas, in review…; also mentioned by Russell, 2009). But it raises the question of why many other less traditional categories were not included. Schadenfreude, bittersweetness, moral anger/disgust (interestingly, one of the dimensions was “fairness”)? What about thirst? It could be included in craving, but we don’t know, because it wasn’t in the study. I have stated (Kivikangas, 2016) that startle is affective, as is the kind of curiosity Panksepp refers to as “seeking” (Panksepp & Biven, 2012). Would the youtube generation differentiate between amusement and lulz? Naturally, some decisions must be made about what to keep and what to ignore, but if they were going for the widest possible array (with things like nostalgia, craving, and entrancement), I think it could still be considerably wider. I have not looked at the list of 600 “free” responses, but apparently the authors checked which of the 34 categories were supported by the free responses, and did not check what other potentially relevant categories the free responses might have contained.

A second obvious limitation is the stimulus type: 5-second (on average) video clips. The authors state that this should be studied with other kinds of elicitors, sure, but they don’t explicitly mention that some of their results may be due to this choice. Specifically, the reason for dropping quite common emotions – contempt, disappointment, envy, guilt, pride, triumph, and sympathy – from their list might be that they (at least some of them) need more context. Guilt, pride, and triumph are related to something the person does, not something they simply observe in the third person. Contempt is related to a more comprehensive evaluation of the target’s personality, and envy relates to one’s own possessions or accomplishments. Actually, I was surprised that they found anger, which may also be difficult to elicit without context (as anger is traditionally thought to relate to the person’s own goals) – and indeed, in the supporting information it was the second-next category to drop when they tested models with fewer (25) factors. I suspected that there might be clips with angry people and that participants had recognized anger instead of felt it, but this seems not to be the case. Clips in the interactive map classified with E for anger are either unjustified and uncalled-for acts of violence, or Trump or Hillary Clinton – which are probably closer to moral anger than to the traditional blocked-goals anger. Anyhow, the list of found factors would be even longer if the type of stimulus did not limit it.

As a conclusion, although I have been more interested in the affective system underlying the emotion experiences and haven’t seen much point in the arguments over whether the experience can be best described as discrete emotions or dimensions, the empirical map combining aspects of both is much more plausible to me than a strictly discrete list or a too tidy circumplex. And even though the reports of (a priori restricted) emotions are not the same as the affective system underlying them, I am hopeful that this paper helps the discussion that perhaps the different models are not incompatible, and that perhaps the models may be different on different levels of scrutiny (i.e. experience vs. psychological vs. neural vs. evolutionary).

 


Footnotes

[1] The numbers are a bit unclear. The authors flaunt: “these procedures yielded a total of 324,066 individual judgments (27,660 multiple choice categorical judgments, 19,710 free-response judgments, and 276,696 nine-point dimensional judgments)”.

They say that “Observers were allowed to complete as many of versions of the survey as desired, with different videos presented in each”, and that “Each of the 2,185 videos was judged by 9 to 17 observers in terms of the 34 categories”, but the repetitions per participant, or per video for the other response types, are unclear. Without further information, then, 853 participants in 3 groups = roughly 284 participants per response type, which is barely above what Lakens & Evers (2014) say is needed for finding a stable (see the quotation below) small effect size (r = .1; required n = 252), but below what is required for 80% power (n = 394) for that effect size. According to the within-subjects power calculations I remember, 9–17 repetitions per video hardly improves the power at all.

“With a small number of observations, effect size estimates have very wide CIs and are relatively unstable. An effect size estimate observed after collecting 20 observations can change dramatically if an additional 20 observations are added. An important question when designing an experiment is how many observations are needed to observe relatively stable effect size estimates, such that the effect size estimate will not change considerably when more participants are collected.” (Lakens & Evers, 2014, p. 279)
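For what it is worth, the 80% power figure above can be reproduced with a quick back-of-the-envelope calculation if the quoted n = 394 refers to a two-group comparison at a small effect of d ≈ 0.2 (roughly equivalent to r = .1); this is my own check with statsmodels, not anything taken from the paper or from Lakens & Evers.

```python
# My own back-of-the-envelope check (not from the paper): sample size needed
# for 80% power to detect a small effect, treating r = .1 as roughly d = 0.2.
import numpy as np
from statsmodels.stats.power import TTestIndPower

r = 0.1
d = 2 * r / np.sqrt(1 - r**2)   # convert correlation to Cohen's d (~0.20)

n_per_group = TTestIndPower().solve_power(effect_size=d, alpha=0.05, power=0.80)
print(f"d = {d:.3f}, n per group for 80% power = {n_per_group:.0f}")  # roughly 390-400
```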

 

[2] Mostly appraisal dimensions in addition to traditional dimensions: approach, arousal, attention, certainty, commitment, control, dominance, effort, fairness, identity, obstruction, safety, upswing, valence.

[3] One thing I found weird was the median-split correlation for demographics and other traits. They used it to show that traits do not explain differences in emotional responding, but a quick googling only turns up recommendations that median splits should not be used because they lose a lot of information. I hope this is not a sign that the method has been used purposefully in order to find no differences.
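My worry can be illustrated with a quick simulation (my own sketch, unrelated to the authors’ data): dichotomizing a continuous trait at its median systematically attenuates its correlation with an outcome, which makes “no relationship” findings easier to obtain.

```python
# Quick simulation (my own, not the authors' data): median-splitting a
# continuous trait attenuates its observed correlation with an outcome.
import numpy as np

rng = np.random.default_rng(2)
n = 100_000

trait = rng.normal(size=n)                        # e.g. a personality trait
outcome = 0.3 * trait + rng.normal(size=n)        # true correlation ~ .29

split = (trait > np.median(trait)).astype(float)  # median-split version

r_full = np.corrcoef(trait, outcome)[0, 1]
r_split = np.corrcoef(split, outcome)[0, 1]
print(f"continuous trait: r = {r_full:.2f}")      # ~0.29
print(f"median-split:     r = {r_split:.2f}")     # ~0.23, i.e. about 20% smaller
```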

[4] “Using SH-CCA we found that between 24 (P < 0.05) and 26 (P < 0.1) statistically significant semantic dimensions of reported emotional experience (i.e., 24–26 linear combinations of the categories) were required to explain the reliability of participants’ reports of emotional experience in response to the 2,185 videos.” I don’t immediately understand how this method works.

[5] “In other words, we determined how many distinct varieties of emotion captured by the categorical ratings (e.g., fear vs. horror) were also reliably associated with distinct terms in the free response task (e.g., “suspense” vs. “shock”). We did so using CCA, which finds linear combinations within each of two sets of variables that maximally correlate with each other. In this analysis, we found 27 significant linearly independent patterns of shared variance between the categorical and free response reports of emotional experience (P < 0.01), meaning people’s multiple choice and free-response interpretations identified 27 of the same distinct varieties of emotional experience.”

[6] Fig. 1 and its caption show pride as not loading on its own factor and relief as loading, but the text describes these the other way around, and relief appears in the other figures, so most likely Fig. 1 is mistaken.


References

Barrett, L. F. (2006). Are Emotions Natural Kinds? Perspectives on Psychological Science, 1(1), 28–58. https://doi.org/10.1111/j.1745-6916.2006.00003.x

Cowen, A. S., & Keltner, D. (2017). Self-report captures 27 distinct categories of emotion bridged by continuous gradients. Proceedings of the National Academy of Sciences, 114(38), E7900–E7909. https://doi.org/10.1073/pnas.1702247114

Ekman, P., & Cordaro, D. (2011). What is meant by calling emotions basic. Emotion Review, 3(4), 364–370. https://doi.org/10.1177/1754073911410740

Kivikangas, J. M. (2016). Affect channel model of evaluation and the game experience. In K. Karpouzis & G. Yannakakis (Eds.), Emotion in games: theory and praxis (pp. 21–37). Cham, Switzerland: Springer International Publishing. https://doi.org/10.1007/978-3-319-41316-7_2

Kivikangas, J. M. (in review). Negotiating peace: On the (in)compatibility of discrete and constructionist emotion views. Manuscript in review.

Lakens, D., & Evers, E. R. (2014). Sailing from the seas of chaos into the corridor of stability: Practical recommendations to increase the informational value of studies. Perspectives on Psychological Science, 9(3), 278–292.

Panksepp, J., & Biven, L. (2012). The archaeology of mind: neuroevolutionary origins of human emotions. New York, NY: W.W. Norton & Company, Inc.

Russell, J. A. (2009). Emotion, core affect, and psychological construction. Cognition & Emotion, 23(7), 1259–1283. https://doi.org/10.1080/02699930902809375

TIL several first authors

I was just informed by my colleague that the point of having several “first” authors – first authorship being, in practice (in our field), what publication lists use to show who has done the most work – is that I could swap the order of the names in my publication list / resume (perhaps with a note explaining this so that people searching for the paper don’t get confused) to show that I’m a first author as well. Is this a thing? I’ve never heard of it, but then I’ve never understood the point of several first authors either, and this would explain the sense in it. Do people do this?

Reading: “Moral outrage in the digital age”

“Moral outrage in the digital age” by M.J. Crockett is a short theoretical paper, drawing together from several lines of research a [model? hypothesis? theoretical framework? theory?] to explain how the operation of moral outrage is transformed by digital media.

I’m not particularly keen on the underlying view of moral outrage, which seems to be based on a basic-emotion interpretation of anger and disgust (from Fig. 2: “For each immoral act, moral outrage was calculated by multiplying self-reported anger and disgust” – by the way, why multiplying rather than averaging or summing?), but otherwise it makes a nice and plausible case for the differences that digital media might make. I’m not familiar with most of the empirical research it refers to, so I can’t say much about how convincing the actual evidence is, but the overview fits my preconceptions.
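To make my aside about the composite concrete, here is a toy example (my own, not the paper’s data) of how the choice matters: a product rewards the co-occurrence of anger and disgust, whereas a mean lets one component compensate for the other.

```python
# Toy illustration (my own, not the paper's data): multiplication requires both
# anger and disgust to be high, whereas a mean treats them as interchangeable.
ratings = [
    {"anger": 7, "disgust": 1},   # pure anger
    {"anger": 4, "disgust": 4},   # moderate both
    {"anger": 7, "disgust": 7},   # strong both
]
for r in ratings:
    product = r["anger"] * r["disgust"]
    mean = (r["anger"] + r["disgust"]) / 2
    print(f"anger={r['anger']}, disgust={r['disgust']}: "
          f"product={product:>2}, mean={mean:.1f}")
# product: 7, 16, 49  -> strongly favors joint anger + disgust ("outrage")
# mean:    4.0, 4.0, 7.0 -> pure anger and moderate-both are treated alike
```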

The main points can be summarized (Fig. 1 is not immediately clear):

  • Humans have psychological processes to react with an emotional condemnation when they think a moral norm has been violated.
  • Digital media
    • gives us a greatly increased access (removes physical constraints) to information about moral violations than traditional social communication (like gossip)
    • lowers the costs (effort; the article talks about the possibility of physical retribution, but I’d generalize that as the risk of potentially wasting the social capital) for expressing outrage
    • lowers the inhibitions (no face-to-face feedback means we don’t have to deal with the fact of causing emotional distress in others, which is a negative experience for most) of expressing outrage
    • increases the potential benefits (reputational rewards of moral quality and trustworthiness; successful regulation of group behavior).
  • These factors drive more moral outrage in digital media, which increases social polarization, dehumanizes the targets (and their groups?), and reduces societal trust.

The short paper does not suggest any interventions, but if these mechanisms hold, then it seems to me that potential ways to inhibit this process would be to increase the costs and inhibitions, as the access and the potential benefits are more difficult to control (and the latter perhaps should not be controlled?). Especially effort, but perhaps the costs in social capital as well, could be increased via technological solutions. These are testable predictions for cutting out the most low-effort outrage, and it would be interesting to see what portion of the outrage they would influence. For instance (a purely hypothetical sketch of the first idea follows after the list):

  • Minimally increase the effort by adding steps to, or introducing a small waiting period before, sharing.
  • Introduce a way to incur a minimal social cost for sharing, e.g. a downvote, perhaps limited to the friends of the sharer only, so that a downvote would actually carry the meaning of “people I care about think somewhat less of me” and maybe would not be constantly abused like on anonymous platforms?
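As a purely hypothetical illustration of the first idea, a share flow could queue an outgoing post and publish it only after a short cooling-off delay during which the user can still cancel; nothing below corresponds to any real platform API, and the delay length is just a placeholder parameter.

```python
# Purely hypothetical sketch of the "small waiting period" intervention:
# shares are queued and published only after a cooling-off delay, during
# which the user can still cancel them. No real platform API is implied.
import time
from dataclasses import dataclass, field

COOL_OFF_SECONDS = 60  # assumed delay; a design parameter to be tested empirically

@dataclass
class PendingShare:
    user: str
    content: str
    created_at: float = field(default_factory=time.time)
    cancelled: bool = False

    def ready(self) -> bool:
        return not self.cancelled and time.time() - self.created_at >= COOL_OFF_SECONDS

def publish_ready(queue: list[PendingShare]) -> list[PendingShare]:
    """Publish shares whose cooling-off period has passed; return the rest."""
    remaining = []
    for share in queue:
        if share.ready():
            print(f"published: {share.user}: {share.content[:40]!r}")
        elif not share.cancelled:
            remaining.append(share)
    return remaining

# Usage: queue a share; during the delay the user reconsiders and cancels it.
queue = [PendingShare("alice", "This is outrageous, everyone must see this!")]
queue[0].cancelled = True
queue = publish_ready(queue)  # nothing is published; the cancelled share is dropped
```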

 


 

Reference:

Crockett, M. J. (2017). Moral outrage in the digital age. Nature Human Behaviour. Retrieved from https://www.nature.com/articles/s41562-017-0213-3