“Moral outrage in the digital age” by M.J. Crockett is a short theoretical paper that draws together several lines of research into a [model? hypothesis? theoretical framework? theory?] explaining how the operation of moral outrage is transformed by digital media.
I’m not particularly keen on the underlying view of moral outrage, which seems to rest on a basic-emotions interpretation of anger and disgust (from Fig. 2: “For each immoral act, moral outrage was calculated by multiplying self-reported anger and disgust” – btw, why multiplying rather than averaging or summing?), but otherwise the paper makes a nice, plausible case for the differences digital media might make. I’m not familiar with most of the empirical research it refers to, so I can’t say much about how convincing the evidence actually is, but the overview fits my preconceptions.
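As an aside on that aggregation choice: multiplication makes outrage conjunctive, so it collapses to zero whenever either anger or disgust is absent, whereas an average or a sum treats the two as interchangeable. A minimal sketch of the difference (my own illustration; the ratings and the 0–10 scale are made up, not taken from the paper):

```python
# Toy comparison of aggregation choices for self-reported anger and disgust.
# Illustrative only; the ratings and the 0-10 scale are assumptions, not from the paper.

def outrage_product(anger: float, disgust: float) -> float:
    """Conjunctive: zero whenever either component is zero."""
    return anger * disgust

def outrage_mean(anger: float, disgust: float) -> float:
    """Compensatory: high anger alone still yields moderate outrage."""
    return (anger + disgust) / 2

ratings = [(8, 0), (8, 8), (4, 4)]  # (anger, disgust) pairs on a hypothetical 0-10 scale
for anger, disgust in ratings:
    print(f"anger={anger}, disgust={disgust}: "
          f"product={outrage_product(anger, disgust):5.1f}, "
          f"mean={outrage_mean(anger, disgust):.1f}")
# The product treats (8, 0) as no outrage at all; the mean treats it as moderate outrage.
```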
The main points can be summarized as follows (Fig. 1 is not immediately clear to me):
- Humans have psychological processes that produce emotional condemnation when they think a moral norm has been violated.
- Digital media
- gives us far greater access to information about moral violations than traditional social communication (like gossip) does, by removing physical constraints
- lowers the costs of expressing outrage (effort; the article talks about the possibility of physical retribution, but I’d generalize that to the risk of wasting social capital)
- lowers the inhibitions against expressing outrage (no face-to-face feedback means we don’t have to deal with causing emotional distress in others, which is a negative experience for most)
- increases the potential benefits (reputational rewards for displaying moral quality and trustworthiness; successful regulation of group behavior).
- These factors drive more moral outrage in digital media (a toy sketch of this cost-benefit logic follows the list), which in turn increases social polarization, dehumanizes the targets (and their groups?), and reduces societal trust.
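To make that summary concrete, here is a toy cost-benefit sketch of how the four factors could combine. This is entirely my own illustration, not a model from the paper, and every parameter value is made up: a violation gets called out whenever a noisy estimate of the benefit exceeds cost plus inhibition, and shifting the parameters in the “digital” direction produces far more expressed outrage.

```python
import random

def expressed_outrage(access: int, cost: float, inhibition: float,
                      benefit: float, trials: int = 10_000,
                      seed: int = 0) -> float:
    """Toy model: average number of outrage expressions per person per period.

    `access` is how many moral violations a person encounters; each one is
    expressed if a noisy perceived benefit exceeds cost + inhibition.
    All numbers are illustrative assumptions, not estimates from the paper.
    """
    rng = random.Random(seed)
    expressed = 0
    for _ in range(trials):
        for _ in range(access):
            perceived_benefit = benefit * rng.random()
            if perceived_benefit > cost + inhibition:
                expressed += 1
    return expressed / trials

# Hypothetical parameter shifts in the direction the paper describes:
offline = expressed_outrage(access=2, cost=0.5, inhibition=0.4, benefit=1.0)
online = expressed_outrage(access=20, cost=0.1, inhibition=0.1, benefit=1.5)
print(f"offline: {offline:.2f} expressions per period")
print(f"online:  {online:.2f} expressions per period")
```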
The short paper does not suggest any interventions, but if these mechanisms hold, it seems to me that potential ways to inhibit the process would be to increase the costs and the inhibitions, since access and the potential benefits are harder to control (and the latter perhaps should not be controlled?). Effort in particular, but perhaps the social-capital cost as well, could be increased via technological solutions. These are testable predictions for cutting out the most low-effort outrage, and it would be interesting to see what portion of the outrage they would influence. For instance:
- Minimally increase the effort, by adding steps to sharing or by introducing a small waiting period before a share goes through (see the sketch after this list).
- Introduce a way to incur a small social cost for sharing, e.g. a downvote, perhaps limited to the friends of the sharer only, so that a downvote would actually carry the meaning of “people I care about think somewhat less of me” and maybe would not be constantly abused the way it is on anonymous platforms?
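A minimal sketch of the first idea, the waiting period (hypothetical code; the two-step share flow, the function names, and the 60-second cool-down are my assumptions, nothing like this appears in the article):

```python
import time

# Hypothetical two-step share flow that adds a small amount of friction.
PENDING_SHARES: dict[str, float] = {}  # post_id -> time the share was requested
COOL_DOWN_SECONDS = 60  # small but deliberately noticeable waiting period (assumed value)

def request_share(post_id: str) -> str:
    """First step: record the intent to share and start the waiting period."""
    PENDING_SHARES[post_id] = time.time()
    return f"Share queued; confirm again in {COOL_DOWN_SECONDS} seconds."

def confirm_share(post_id: str) -> str:
    """Second step: the share only goes through after the cool-down has passed."""
    requested_at = PENDING_SHARES.get(post_id)
    if requested_at is None:
        return "No pending share for this post."
    if time.time() - requested_at < COOL_DOWN_SECONDS:
        return "Still in the waiting period; try again later."
    del PENDING_SHARES[post_id]
    return "Shared."  # on a real platform this is where the post would be published
```

Even a trivial delay like this separates the impulse to share from the act of sharing, which is exactly the low-effort outrage these predictions target.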
Reference:
Crockett, M. J. (2017). Moral outrage in the digital age. Nature Human Behaviour, 1, 769–771.