Nudging Compliance Vs. Nudging Creativity?


The burgeoning literature on nudges (Thaler & Sunstein, 2008; Kamenica, 2012) in many ways speaks to the power of simple, small changes to choice architecture (built on an understanding of underlying mechanisms like priming, framing, defaults, “less is more”, loss aversion and various other biases/ heuristics) and to how these small changes can be used to influence people’s choices and behavior, including increasing compliance with seemingly benign corporate and government policies.

While many of these nudges are designed to influence day-to-day, mundane behavior (from driving habits to consumption habits) and compliance, I wonder if the future of research on nudges entails asking how they can be leveraged to drive seemingly non-compliant and creative behaviors, and thus tap into rich, latent, inner psychological resources?

There seem to be indications that external motivation/ resources and incentives (for example, monetary payouts) may suppress internal motivation/ resources (for example, pro-social behavior) (Kamenica, 2012). One interpretation is that an abundance of extrinsic motivation/ resources may crowd out intrinsic motivation/ resources, rendering these intrinsic resources latent and/ or defunct (for example, intrinsic resources like creativity and altruism may be suppressed in the presence of monetary payouts). Given this, one has to wonder how behavioral research on nudges can help tap into and activate these rich, latent, intrinsic resources, which on the surface may lead to non-compliant behavior but at the same time may hold potential for creative, non-traditional solutions to socio-economic problems? And is there a risk of nudges, in their benign pursuit of driving compliance, inadvertently influencing the evolutionary process by reducing diversity and variation?

While nudges can surely help drive compliance with seemingly benign corporate and government policies (hence the keen interest expressed by various national governments in jumping on board the “nudges” bandwagon, the Behavioural Insights Team in the U.K. being one example – http://www.behaviouralinsights.co.uk/), I wonder if the future of research on nudges entails figuring out ways of tapping into deeply entrenched human creative instincts that may lead to seemingly non-traditional and non-conformist, albeit highly creative, solutions to various complex socio-economic problems?


Bounded Ethicality


The Carnegie school (read: Herbert Simon and his conception of “bounded rationality”) has influenced scholars like Max Bazerman (PhD, Carnegie Mellon, 1979) to extend the idea of cognitive limitations to the sphere of morality and ethics as well. Bounded ethicality thus suggests that a human being’s capacity to behave ethically is bounded/ constrained/ limited and hence prone to errors under various conditions (Chugh, Bazerman and Banaji, 2005; Kern and Chugh, 2009).

Further, scholars have identified the role of “automaticity”, i.e., the role of heuristics, visceral affect/ emotions and System 1 (fast/ intuitive) thinking, as a driver of moral/ ethical behavior, as against the slow, deliberate process of moral reasoning.

This concept of “bounded ethicality” then helps us better appreciate why “intentional harm” may be perceived to be worse than “unintentional harm”. Based on our appreciation of the processes of social decision making (bounded rationality, heuristics, etc.), it is perhaps easy to imagine how a foundational heuristic of intentional harm = unforgivable (vs. unintentional harm = forgivable) may be at play when we consider people’s “oversensitivity to intent” (Ames and Fiske, 2013).

This “automaticity”/ heuristic approach to ethics/ morals may also help explain why the mere mention of the term “climate change” may trigger different automatic heuristics depending on one’s political stance, which in turn may lead to highly polarized responses and attitudes vis-a-vis the environment (Feinberg and Willer, 2013).

Further, this “automaticity” may perhaps be exacerbated/ activated under conditions of scarce resources in general, and scarce time in particular. Under time pressure, we are perhaps more likely to operate from automatic/ heuristic responses (both pro and con, depending on our entrenched/ habitual responses) to unethical behaviors by others. On the same lines, we can perhaps also see and empathize with how, under time pressure, people, including ourselves, may be more likely to act in ways that may be perceived as “unethical” (take short-cuts or engage in other acts of self-interest/ self-preservation/ survival under limited resources). Given this, I wonder if a better appreciation of our cognitive limitations, including “bounded ethicality”, would help us be more understanding (forgiving?) of some ethical errors (by self or others) as being only human?

More coffee, less cooperation?!


Reading the paper from this week’s readings on serotonin depletion (Crockett, et al, 2008) as well as earlier reading the paper in Week 2 on “The Neuroscience of Social Decision-Making” (Rilling & Sanfey, 2011), I found myself being intrigued by the role of serotonin and how it may influence (or hinder) pro-social behavior/ social exchanges.

According to at least one online source (LiveStrong – https://www.livestrong.com/article/221617-serotonin-depletion/), while drugs like alcohol, nicotine and marijuana lead to an initial burst in the release of serotonin (though that euphoria also fades quickly), caffeine, on the other hand, lowers serotonin levels as well as decreases the appetite for carbohydrates.

This left me wondering: could there be a relationship between caffeine consumption and uncooperative behavior? At least based on what one has read, caffeine consumption seems to reduce serotonin levels. Further, there seems to be a fair amount of empirical research suggesting that serotonin depletion leads to less cooperation (Crockett, et al, 2008; Rilling & Sanfey, 2011). Putting these two together, would it be a fair speculation/ hypothesis that more caffeine consumption leads to less cooperation?

In fact, one could explore this relationship in combination with another paper from this week’s reading, namely Herrmann, et al, 2008. We could add per capita coffee consumption as an additional variable to the regression brew (pun intended) for some of the cities (Boston, Zurich, Riyadh, etc.) covered in the Herrmann, et al (2008) study, and see how this variable behaves in the regression and whether (or not) it correlates with anti-social punishment.
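As a purely illustrative sketch, the kind of exploratory check described above might look like the following. All the numbers below are invented for illustration; they are not from the Herrmann, et al (2008) dataset or any real consumption statistics, and the actual study uses a far richer regression specification.

```python
# Hypothetical sketch: does per-capita coffee consumption correlate with
# anti-social punishment across cities? All data below are INVENTED purely
# to illustrate the shape of the analysis.
import numpy as np

# Invented per-capita coffee consumption (kg/year) for a handful of cities
coffee = np.array([4.5, 7.9, 0.5, 3.1, 6.2])
# Invented mean anti-social punishment scores for the same cities
punishment = np.array([0.9, 0.6, 2.5, 1.4, 0.8])

# Simple OLS: punishment = b0 + b1 * coffee
X = np.column_stack([np.ones_like(coffee), coffee])
beta, *_ = np.linalg.lstsq(X, punishment, rcond=None)
b0, b1 = beta

# Pearson correlation as a quick sanity check on the sign of the relationship
r = np.corrcoef(coffee, punishment)[0, 1]
print(f"intercept={b0:.2f}, slope={b1:.2f}, r={r:.2f}")
```

With real data one would of course add coffee consumption alongside the study’s existing controls rather than regress on it alone, and a single slope estimate from a handful of cities would at best be suggestive.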

I wonder if there may be any merit/ practical implication in conducting a study of this nature, or if it would just be a “nice to know” academic exercise with little practical value? Your thoughts?

Framing Effects? Bad Lie Detectors Vs. Good Actors?


A broad underlying narrative around heuristics, emotions and our ability to form first impressions seems to be around how all of these are “error-prone”, for example, how our ability to detect lies is prone to mistakes (refer – Kang Lee’s TED Talk).

While there have been a few accounts of how heuristics and our ability to form quick assessments are extremely frugal and efficient (in terms of the output-decision vs. input-information ratio) (Gigerenzer and Gaissmaier, 2011), the more dominant narrative continues to point to the downsides of heuristics, affect and the ability to form first impressions.

It also seems that this narrative depends on how the findings are framed. For example, I wonder if the Kang Lee talk could be framed around how human beings are “great actors” (and thus able to avoid detection), rather than framing the findings as human beings being “bad lie detectors”?

We can certainly see how primordial/ survival/ evolutionary forces may have helped us become “good actors” at hiding some of what we may be thinking or feeling. One can see how this quality can serve as a very useful defense/ safety mechanism in the face of danger (the danger of being “detected” by what may appear as hostile actors, for example, hostile authority figures like parents! :-)).

Further, this ability to hide emotions helps us self-regulate our social interactions. Imagine social interactions where all of us acted like the great Sherlock Holmes (overly-glorified and over-rated, if I may say so). No ability to hide true thoughts and emotions! This inability to hide our true thoughts/ emotions would make us abysmal with our relationships, and in Sherlock’s own words, render us as complete “sociopaths”.

As students, scholars and researchers, I think it is important to recognize how even the most scientific of findings are framed, and how this “framing effect” can lead to a biased understanding of the underlying phenomena.

This also puts me in touch with my own biases vis-a-vis heuristics and emotions. I do realize that given my “strength-based” leanings, I am more likely to see the underlying positives/ strengths rather than the negatives/ weaknesses of heuristics and emotions. This only makes me more cautious about “framing effects” and the use of adjectives (“bad liars”, “error-prone”, “fast and frugal”, etc.) in describing phenomena, whether used by myself or others.

I wonder if it is the adjectives we use to describe phenomena that introduce the bias, the “framing effects”? Your thoughts?

Emotion and Decision Making: Can feelings (affect) be learned?


It is unequivocally acknowledged that “emotions powerfully, predictably and pervasively influence decision making” (Lerner, et al., 2015). Further, despite the fact that some scholars have relegated emotion to a secondary/ subordinate role in decision-making, there continues to be scholarly and empirical support for the powerful and dominant role of emotions like anger, fear, disgust and regret in positively assisting decision making. For example, anger is found to be a great fuel and motive force behind actions against injustice (Solomon, 1993).

At the same time, it is also acknowledged that emotions can lead to biases and dysfunctional behaviors like prejudices and phobias (for example, an irrational fear of flying or darkness, contrary to objective evidence).

Reading the Guitart-Masip, et al, 2014 paper from this week’s readings, which discusses the interaction between valence (affect/ feelings) and action, left me wondering: if Pavlovian action/ response can be trained/ learned, is there a possibility that some of our feelings/ emotions may be learned behavior too?

Of course, one can imagine how most of our emotions are primal and evolutionary in nature. However, one can presume that even evolutionary and genetic processes may entail a certain learning component?

If one were to take the above argument seriously, then it follows that some people have trained (or learned) themselves to feel, for example, disgust on seeing certain objects or environments. While a butcher, it can be argued, would feel no disgust spending umpteen hours hacking and chopping flesh and meat, surrounded by blood and gore, one can see how someone who is trained (or has learned) to be a vegetarian would be filled with disgust if put in the same environment, which may in certain instances lead to violent visceral responses like vomiting/ throwing up at the sight of bloody meat.

Similarly, may there be a possibility that emotions can be unlearned? Surely, we can imagine (and may even know first-hand) persons who have had a fear of heights or flying and have gradually learned to overcome it? I wonder if there is any contemporary research which explores this line of thought, i.e., that “feelings can be learned and unlearned” through slow, deliberate (read, “rational”) practice over a period of time? If that is the case, then claims about the co-existence and simultaneous role of both emotions and deliberate reasoning, in both impulsive and long-drawn decision making, may be further strengthened.