Thursday 5 March 2009

Nice For a Price: esteem as a social incentive

Daily, enormous numbers of people interact in digitally mediated social groups. They usually behave prosocially, that is, they generally play nice with each other, often cooperate and sometimes even collaborate in the creation of a social good. As a result there's a lot of discussion around the sorts of stable social contracts groups can coordinate on; how new tools and designs for getting together impact and are impacted by these possible social contracts; and what sorts of incentives or motivations individuals have to play nice in the first place given that cooperating isn’t necessarily as easy, cheap or materially profitable as being a jerk.

This (way too long) post is all about motivations to join a group and act collectively. It’s the second post in a series (first one) trying to show that the recent focus on the deeply social or other-regarding nature of social and collective action groups online is too one-sided, simplistic and, frankly, ideological. Certainly we’re social animals with frequently benevolent motivations. But that doesn’t mean that we’ve no self-centered, broadly (as opposed to narrowly or materially) rational motivations. Extrinsic incentives, which we scheme to “maximize” in our limited way, abound in social situations. However, they’re mostly non-material things like esteem – the positive evaluation of your actions by the norm-based standards of your group – and status – deferential position in a social group. These are real motivations. Even Yochai Benkler, a genuine web utopianist, mentions them briefly in his Wealth of Networks. He says in passing that they’re part of our real motivational repertoire. But generally these issues are treated as either a dirty secret or beside the point. They’re neither. We need to really look at how these inescapable self-regarding motivations can help us be more prosocial and not just blame them when we’re antisocial.

Since it’s the most completely worked out account of the effects of social functionality on group creation and maintenance, I’ll focus on Clay Shirky’s excellent Here Comes Everybody. Smart as he is, I’m sure he understands the point about the value of broadly instrumental motivations for prosocial behavior. But the hero of his book is the genuine prosociality of humans freed from the limitations of “real world” organizational overhead by digital social functionality. And though he addresses self-serving, antisocial behavior like disinhibition and free-riding, he avoids talk of self-interested, extrinsic motivations for collective action almost entirely, focusing instead on intrinsic motivations such as vanity, self-esteem, true interest in the public good and simple innate prosociality. I agree with most of his conclusions; I just think he left out a huge factor that should inform thinking and design around online sociality.

Two Motivation Problems

There are two big problems of motivation here that are conceptually, but probably not practically, separable. First, there’s the motivation connected to the collective good some group’s trying to bring about, say, a high-quality, UGC encyclopedia like Wikipedia. Second, there’s the motivation to play nice, to make your contribution to the collective good you value in a way that the group approves of, like sticking to citable facts as opposed to stating an opinion when adding to a Wikipedia entry.

Riffing on a very useful distinction made by Clay Shirky in his latest book, we can divide our motivations between the group’s promise, whatever collective good or goal it’s trying to achieve, and the bargain, or the social contract within or through which the goal is to be achieved (he also distinguishes the tools, or the actual functionality the group employs to achieve the promise, but that’s for a different post). For Shirky, if you care about the goal embedded in the promise – e.g. collectively create a quality, UGC encyclopedia – and consider it actually attainable as it’s implicitly “stated,” then you’ll be motivated to contribute. Indeed, Shirky explicitly considers motivation only in regard to the promise: “The promise is the essential piece, the thing that convinces a potential user to become an actual user.”

In practice, it’s not so easy to separate the promise from the bargain. But in theory we can say that the promise is normatively degenerate. That is, you can achieve the promise through a variety of bargains. Although the details of the bargain have a huge impact on the viability of the promise, you could theoretically develop a quality, UGC encyclopedia without the exact same contributor/editor structure or implicit/explicit rule sets operating on Wikipedia. Again, they’re theoretically separable but probably not practically separable; it’s most likely the case that people don’t even know they “have” a goal or value the content of the promise until it’s articulated to them via the specific implicit promise couched in terms of some tentative, operating bargain. That is, the actual “articulation” of the promise in the context of the bargain probably creates people’s desire to reach the collective goal as much as it meets it. Also, bargains tend to be continually re-negotiated during the process of actively pursuing the promise.

Shirky would undoubtedly disagree, but it seems like the bargain impacts overall motivation in addition to the motivation arising from the promise. If you value the promise and think this group’s specific bargain is a decent way to achieve it, then you’ll be motivated to join this specific group or contribute to this particular collective good. But if the social environment in which you can contribute to your genuinely desired social good blows, then you’ll likely have to be a lot more motivated by that good to stick around. Similarly, if you're only moderately motivated by the promise, but the bargain offers something exciting in itself – say, the public adoration of your peers – then that might be enough to get you to contribute when you otherwise wouldn't. Clearly, the bargain impacts overall motivation, positively or negatively, though some initial interest in the collective goal is probably necessary in the first place.

Anyway, one of the big arguments of Shirky’s book is that today’s social functionality lowers the cost of contributing and collaborating to such a degree that people motivated by the same promise or goal – latent groups, to use Shirky’s Mancur Olson-inspired term – can actually get together to do something about it. Even relatively weak goal motivation is no longer blocked from actually resulting in action since the overhead associated with coordination has dropped so dramatically; thanks to social functionality like wikis, blogs, tagging, cell phones, etc., it’s really “easy” to get dispersed people together to focus on some collectively defined good.

However, since the promise is “normatively degenerate,” i.e. can be achieved by means of a number of distinct social contracts or bargains, we still need to consider people’s motivation to comply with a specific bargain given the goal. What would make people choose one Bargain over another? Does the nature of the Bargain do anything to the overall motivation to contribute or cooperate? To answer these questions, we first need to say what a Bargain is. Fudging a little, we can say that Bargains, as little social contracts of sorts, are collections of norms.

Why Comply?

Social norms are the implicit (but they can become explicit like some laws or codes) interaction rules that keep social groups from deteriorating into antisocial free-for-alls. They turn potentially state-of-nature, every-man-for-himself interactions into coordination games. Humans are social animals, but that doesn’t mean we always play nice. If it did, we’d rarely see any groups fall apart. Everybody would play nice by their group’s standards. As it is, though, we see a lot of antisocial, anti-normative behavior in contexts in which prosocial norms would be appropriate, particularly online. This suggests that we have a conditional preference for following norms. If the conditions are met, we play by the norms, otherwise it’s a melee. Obviously, the conditions under which we can generally induce a preference for social norm following on the part of most users should impact our designs for social spaces online and elsewhere.

Just to solidify everything, I’ll borrow (with slight fudging/modification) Cristina Bicchieri’s idea that there are three types of conditions which have to be met for someone to prefer following a norm in a given situation.

  1. Empirical Condition: Do you expect the norm to be observed by most people in this sort of situation?
  2. Normative Condition: Do you believe that most people expect or prefer you to follow it in this sort of situation?
  3. Motivational Conditions: There are several of these and they relate to your reasons for complying given the truth of 1 and 2 above.
  • Fear: Is your compliance solely based on fear of negative sanctions for non-compliance?
  • Esteem: Is your compliance based on positive sanctions like praise and esteem?
  • Reasonableness/Internalization: Is your compliance based on your assessment of the reasonableness of the norm or possibly on a non-reflective, conditioned expectation?

The first two are necessary existence conditions for the norm. If either goes unmet, then even if there's ostensibly a norm in place you won’t follow it, because either you don’t think everybody else will or you don’t feel it’s really expected of you. In any given situation the norm may not be salient, and thus people might not know they should be following it. So designs for spaces in which you want users to coordinate on a cooperation-inducing social contract should include provisions for making norms salient (e.g., user rating of UGC, contribution sorting by quality, etc.). But salience isn’t really what interests me here. Let's look at the Motivational conditions, which are about the reasons for personal compliance provided the Empirical and Normative conditions are met. If you think we just have a basic preference to do as others do regardless of our relationship to them, then consider these supplementary.

Compliance out of fear of negative sanctions, from frowns to finger wagging to expulsion from the group, can be very powerful. Often people play by the rules because of perceived threat of force, even if it’s just social force in the form of conventionalized displays of displeasure (reprimands from admins, poorly rated UGC, banishment, etc.). This is the most widely recognized motivation in the literature. Clearly, it only operates under conditions of at least partial transparency or incomplete anonymity and some sort of publicity or public availability of contributions.

Viewing the norm as reasonable, or maybe even internalizing it, renders compliance largely non-reflective. You don’t act out of fear or positive incentive; you just obey the norm. This motivation doesn’t place any observation conditions on the situation; you’d obey the norm even in totally anonymous interactions.

Esteem-motivated compliance is a little different from the other two. In this case you obey because obeying results in a personal gain or good in addition to whatever motivations you take to be intrinsic (e.g. a general preference for conformity). You get something of value from others – esteem – for acting appropriately or contributing at a high, group-defined standard. The esteem motive is related to the fear motive, particularly if we consider disesteem a negative sanction. And like the fear motive it requires limited anonymity and publicity of contributions.

If esteem is a sought-after item in limited supply, then it can add to the overall motivation one has for contributing to the larger social good defined by the group’s Promise. If complying with a social contract and contributing at a high level relative to the group’s norms can generate esteem, then people already motivated by the group’s goal have additional motivation to work. They have been significantly incentivized beyond the presumably prosocial incentive of the collective good.

Even if the collective good is materially negligible or even costly to the individual, the esteem motivation associated with group norm compliance can incentivize behavior to bring about the collective good (provided overall motivation can be the aggregate of promise-based and bargain-based motivations). Basically, implementing the social contract in such a way that esteem can be accumulated (limited anonymity and institutionalized feedback channels, for example) can make selfish jerks work prosocially. However, it also makes people for whom the collective good is genuinely motivating even more motivated.
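To put that parenthetical in crude, back-of-the-envelope terms (my toy formalization, not Shirky’s or anyone else’s): think of a contributor’s overall motivation as something like

  M_total = M_promise + M_bargain

where M_promise is how much you care about the collective good itself and M_bargain is whatever the social contract offers or costs you in its own right (esteem, status, fear of sanctions, sheer hassle). You contribute when M_total clears the cost of contributing. On that sketch, an esteem-generating bargain raises M_bargain, which can tip over the threshold both the lukewarm and the already-committed.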

Obviously, there is considerable space for trouble here. Unethical, mistaken and outmoded norms can easily be perpetuated by esteem-motivated compliance. Agreed. Our task as designers and theorists in this space is to figure out how these mechanisms work and whether or not they can be reliably used for prosocial ends. That's why I think the bias toward considering only intrinsic and clearly prosocial motives is shortsighted. Esteem and other extrinsic, non-material motivations operate in all social groups. Unless we face our less social, selfish selves and try to understand how they operate, we can't even begin to make design decisions that will allow us to channel this ignoble element in our character toward prosocial – or at least not antisocial – ends.

A Note On Near Anonymity, Limited Publicity and Esteem... skip at will

You might think that the near anonymity and volume of contributions of UGC creations like Wikipedia would destroy the esteem motivation: if next to nobody knows who you are and the volume of contributions mitigates the publicity any specific addition gets, then no esteem accumulates. I don’t think this is the case for several reasons.

For one thing, a study of handwashing habits in public restrooms in New York City showed that the mere presence of an anonymous stranger increased handwashing from 40% to 80% (Munger, K. and S. J. Harris. 1989. “Effects of an Observer on Handwashing in a Public Restroom.” Perceptual and Motor Skills 69(3): 733-734). The anonymity of the esteemer doesn’t seem to matter, and, more importantly, neither does the anonymity of the esteem-seeker. The mere fact that someone – anyone – could judge you in a normatively charged public situation influences behavior regardless of anonymity.

Also, in many esteem situations, relative anonymity and obscurity actually heighten the potential for esteem. Basically, in many situations in which esteem is at issue, the potential esteemer judges not just the action but also the disposition behind it. If calculation is obvious and the desire for recognition glaring, then you’re likely to give less esteem than if the estimable act occurred “for itself,” as it were. The public forum in which esteem is judged and the conditions under which credit is claimed ("I did this" or "I did this for the group") have an impact on this dispositional judgment. Too public and too obviously about you rather than the good, and it might be showing off. But in conditions of limited publicity and partial anonymity your estimable act gives the impression that you’re doing it for its own sake. That is, an estimable act in situations where the probability of being seen and personally esteemed is relatively low (and all the other members know this) earns much greater esteem if it is in fact seen and identified. In this situation, it looks like the act arose from a natural disposition and not from esteem seeking. Near anonymity and limited visibility in some sense optimize esteem, in the words of Geoffrey Brennan and Philip Pettit.
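A rough way to see the Brennan/Pettit point (this is my toy sketch, not their formal treatment): let p be the chance your act is noticed and attributed to you, and e(p) the esteem an observer grants if it is, where e(p) falls as p rises because obvious visibility invites the “showing off” reading. Your expected esteem is then roughly

  p × e(p)

If e(p) drops steeply enough as p grows, this product is small both when p is near zero (nobody ever sees you) and when p is near one (you’re seen but discounted as an esteem-seeker), and it peaks somewhere in between – the near-anonymity, limited-publicity zone.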

Finally, esteem doesn’t have to circle back and add to our personal pool of accumulated esteem for us to feel it in some measure. That is, esteem attaching to your online identity, which nobody knows is really you, has its own reward. Indeed, we don’t always want the esteem we receive in one arena to mingle with (or be contaminated by) the personal esteem (or disesteem) earned in another. It’s a form of esteem management. Additionally, if Kai Spiekermann is right and the esteem others are willing to give often has less to do with what you’ve done than with who you are, anonymity can level the playing field a bit and allow esteem to flow more freely than it would under more transparent conditions.
