Trends end, but most accounts of the uptake of cultural artifacts don’t say much about how they end. They focus on the dynamics of acceptance or consumption, rarely touching on how those very same dynamics can lead to a cultural item being dropped just as quickly as it was picked up. Trends, after all, are only as stable as people’s beliefs about other people’s preferences and expectations.
Which brings us to fads. Fads can be thought of as cultural trends – tastes, styles, fashions, attitudes, jargon, slang, etc. – that peak rapidly and then quickly die out. We’re all familiar with these: a cultural item – consider those annoying rubber wrist bands from a couple of years ago – suddenly shoots up in prevalence, appearing everywhere at once, and then just as suddenly pops back out of sight. So fads are, by definition, trends that end quickly, and speed of uptake is a key signal of them. In this post we’ll look at two different accounts of the relation between speed of adoption and a trend’s end, and at how our evaluation of fads differs from our evaluation of other trends. Finally, we’ll try to figure out what is “symbolically” at stake in the distinction between a trend and a fad and how this makes the latter automatically less worth joining.
First of all, Luis Bettencourt presents a model of the trend life cycle in which agents use a trend's relative speed of adoption as an indicator of viability and value. The faster the speed of adoption relative to other competing items, the more attractive an item is. When the speed slows, as it inevitably will given a finite population, agents begin to abandon the trend once the speed falls below an individually determined critical level. In this model, speed is a positive factor: it indicates the potential value of a trend, a form of “social proof.” But it also inevitably brings about the trend’s end. Because of the way the dynamics are structured, trends that move fast will move even faster, quickly bringing about their own crash. These are clearly fads.
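To make the shape of that life cycle concrete, here’s a minimal sketch – my own toy compartment model with made-up parameters, not Bettencourt’s actual equations – in which uptake is driven by current adopters and abandonment kicks in once the relative adoption speed falls below a critical level:

```python
def trend_lifecycle(population=10_000, steps=300, uptake=3e-5,
                    critical_speed=0.02, dropout_rate=0.15):
    # three compartments: not-yet adopters, current adopters, ex-adopters
    susceptible, adopters, former = population - 10.0, 10.0, 0.0
    history = [adopters]
    for _ in range(steps):
        new = uptake * adopters * susceptible    # uptake driven by current adopters
        speed = new / max(adopters, 1.0)         # adoption speed relative to current size
        lost = dropout_rate * adopters if speed < critical_speed else 0.0
        susceptible -= new
        adopters += new - lost
        former += lost                           # quitters don't come back
        history.append(adopters)
    return history

curve = trend_lifecycle()
peak = max(curve)
print(f"peak: {peak:.0f} adopters at step {curve.index(peak)}, final: {curve[-1]:.0f}")
```

Raising the uptake parameter compresses the whole cycle – steeper ascent, earlier stall, faster crash – which is the faddish signature the model is after.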
Contrary to Bettencourt’s model, Jonah Berger and Gael Le Mens claim [pdf] that a high perceived rate of adoption of an “identity relevant” cultural item decreases our likelihood of adopting or consuming it. Identity relevant cultural items are those that can be thought of or used by potential consumers as a means of communicating some desirable or esteem-generating information about who they are or – more likely – who they’d like others to think they are; identity relevant cultural items are conventionally meaningful signals of group-specific tastes, social affiliation, status, etc. If adoption is too fast, the item might be a fad, and fads are a bad investment in the social identity stakes because, by definition, they can’t sustain popularity.
An important difference between the two views is obviously the latter’s focus on “identity.” In Bettencourt’s model, the faster the better for all items; it’s the fact that velocity is unsustainable in finite populations that leads to collapse. In Berger and Le Mens’s view, beyond a certain velocity threshold, the faster the worse, at least for publicly visible cultural items with identity implications. They argue that it’s our concern for the “symbolic value” cultural items may have for the development and maintenance of social identity that explains the desire to avoid fads. High velocity may decrease an item’s attractiveness because it suggests the item could be just a “flash in the pan,” its popularity fleeting. In other words, faddish cultural items are a bad investment as they don’t maintain symbolic value. So, in both models, speed is information, but in Berger and Le Mens’s, identity concerns decrease the item’s attractiveness (i.e. likelihood we’ll partake) as speed increases.
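The contrast can be caricatured in a few lines. This is my own stylization, not a formula from either paper, and the fad threshold is an invented parameter standing in for “the speed past which the item starts to look like a probable fad”:

```python
def perceived_value(speed, identity_relevant, fad_threshold=0.3):
    if not identity_relevant:
        # Bettencourt-style reading: faster uptake is simply stronger social proof
        return speed
    # Berger & Le Mens-style reading: value tracks speed only until the item
    # starts to look like a flash in the pan, then falls off
    return speed if speed <= fad_threshold else max(2 * fad_threshold - speed, 0.0)

for s in (0.1, 0.3, 0.6):
    print(s, perceived_value(s, identity_relevant=True), perceived_value(s, identity_relevant=False))
```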
Their goal is to show that we judge potential faddishness by velocity and that we avoid fads because they’re viewed negatively, which might be tied to the fear that they don’t return symbolic value. Though they never really get into why we view fads negatively, I think we can say something stronger than that fads are unattractive because they’re bad social identity investments. Indeed, it seems that public association with a fad in an identity relevant domain may ultimately deliver disvalue as opposed to just decreased value. Not only won’t it add to your social identity, in some situations it might actually damage it. It’s often embarrassing or somehow dis-estimable to have been caught in a fad, to have publicly invested in or consumed some short-lived and now unpopular cultural item (if there are any pictures floating around the internet of you earnestly rocking a white Miami Vice blazer and woven loafers, you know what I mean).
So, I think that with a little reflection most will admit that getting publicly caught in a fad is somehow embarrassing or to be avoided, i.e. a disvalue. Can we articulate what it is about faddishness that actually bugs us? Why should the perception of an identity relevant cultural item’s faddishness make it less valuable or even potentially disvaluable? If it’s just that “flashes in the pan” – items that don’t sustain popularity/potential symbolic value – provide limited or no return to social identity on investment, why do we actually feel embarrassed about fad association as opposed to just annoyed at the wasted time? What is symbolically at stake in the distinction between fads and trends?
Autonomy and Constancy
Our social identity is impacted – often mediated – by the cultural items we consume or otherwise associate ourselves with. Cultural items have symbolic value related to what consumption of the item conventionally means or communicates. For example, the shoes you buy do more than just protect your feet or give you extra purchase on slippery sidewalks. They also communicate something about you, your tastes and often your position in a socio-cultural taxonomy (you’re a hipster in Converse or narrow slip-ons; an urban kid in puffy, tricked out Nikes; etc.). They’re public signals with conventional cultural connotations that you can exploit to manage your social identity. It’s these conventional connotations, the realm of taste and tastes, that confer the symbolic value. We manage our social identities partially by managing the set of cultural items we publicly associate ourselves with.
But clearly it’s not the cultural item alone that confers the symbolic value we exploit in managing perceptions of social identity. Perceptions of our motives and the longer term regularity of our social identity also impact our ability to wring symbolic value from a cultural item. For example, I had a friend who rapidly cycled through “personas,” going from punk to skinhead to b-boy to truck driver. With each new identity came a boatload of highly appropriate, conventionally meaningful gear, slang and comportment. But no matter how cool or dead-on they were for the current identity, he could never really wring any value from them. In fact, given the suddenness of each of his transformations, they just seemed like cynical accoutrements, rendering symbolic disvalue and actually damaging his social identity in the eyes of those he presumably most esteemed. The more he desperately tried to construct a “cool” or valuable social identity, the less likely it became that he actually could. Clearly, an item’s perceived appropriateness or continuity with past social identity is important. More generally, perceptions of the motives for associating oneself with cultural items affect the items’ symbolic value (and possibly one’s larger social identity). If the items seem out of character or if the social identity motive is too obvious, we likely won’t be able to garner any actual symbolic value from them no matter what they are.
So, in managing our social identities, we also have to manage perceptions of our management; there are “perceived motive” conditions on the symbolic value we get from some cultural item in an identity relevant domain. More specifically, there are autonomy and constancy conditions, which, if not met, decrease or possibly reverse the symbolic value available from a cultural item. Fads, pretty much by their nature, trash these conditions. Before saying why, let me look a little more closely at the perceived motive conditions, which must be met for an item to confer symbolic value.
Autonomy
Sometimes people try too hard to impress. They obviously speak, dress or comport themselves in what they assume relevant others consider the “cool” way. People like this are often called posers (or poseurs... but when I spell it that way I feel like one). They could be doing the same thing everybody else is doing, but their identity relevant moves are perceived as desperate, “inauthentic,” or even cynical.
As Jon Elster says, “nothing is so unimpressive as behavior designed to impress.” I think this “general axiom” extends to our public association with cultural items as well. Of course, we all recognize that our tastes are shaped by our in-group and larger culture. If not for the conventions and norms against which we evaluate cultural items and social identity, “symbolic value” really wouldn’t exist. But when identity relevant choices appear unduly concerned with others’ perceptions they start to lose value and may even damage social identity. When it appears that you’re dressing or talking a certain way solely out of concern for others’ perceptions of you, you may seem, for example, pretentious, conformist or cynical as opposed to cool. In short, in the West at least, we disvalue identity relevant moves that appear to be completely externally or cynically motivated, while we greatly value those that appear to be internally, “authentically” or autonomously motivated.
Constancy
When it comes to your social identity you don’t want to appear to be trying on personas. This is closely related to autonomy, but conceptually distinct. You can switch your style daily, trying to align yourself with various cultural groups, in which case you’re a poser or social butterfly. But you can also switch daily just because you’re an odd loner, a kook. In this case, you have high autonomy but low constancy. They’re distinct, but either way fickleness when it comes to identity is frowned upon. If you give the impression that you’re actively searching for an identity, it’s unsettling: it feels stagey, shifty and possibly cynical.
Though research suggests that our perceptions and expectations of singular self-identity are sort of illusory – more a product of self-narration and fundamental attribution error than some continuous, thing-like Self – we clearly assume and value the idea of constancy, consistency and continuity when it comes to social identity. Pomo identity theories notwithstanding, fickleness, or publicly searching for some sort of social identity, is often interpreted as dishonest, “inauthentic,” self-defeating, cynical and sometimes even pathological. It’s probably a cultural artifact of the West, but we really like to think of ourselves and others as Selves in the ideal sense. Acting otherwise can turn social identity moves on their head, making them seem like a sham and destroying any potential value.
Why We Avoid Fads
We are socially incentivized to manage our social identity management. We seek to construct social identities that are meaningful for and align with existing socio-cultural groups. But in order to create a valuable social identity, we have to construct it in a way that gives the impression that our choices are the autonomous decisions of a constant, stable, already fully constituted social identity. Failing to give the impression of autonomy and constancy in our decisions decreases the “value” of social identity relevant moves.
Fads are culturally salient; their sudden appearance everywhere gives them high visibility and holds our focus. Because of our (probably cultural) bias toward autonomy and constancy – that is, for evaluating identity moves against the ideal of individual, reflective choice by a singular Self – we interpret these waves of uptake and failure in individual terms. After a fad crashes, our bias pushes us to interpret it negatively as a case of social influence over autonomous decision, as a case of herd mentality. The speed of the descent we interpret as social proof of the emptiness or valuelessness of the fad. But the speed of the ascent we interpret as the dis-estimable actions of non-autonomous, inconstant conformists: it’s the result of unreflective "bandwagoneers," people with no strong Selves embarrassingly misled by social influence.
Provided you want to optimize the potential symbolic value from a cultural item, you had better take into consideration whether or not it’s a fad. But, that’s not the only reason you should avoid them. Publicly associating with fads may damage your larger social identity. That is, association with a fad is interpreted as a failure of the autonomy and constancy conditions and specific failures of general conditions reflect on all of your identity relevant decisions. Publicly joining a fad changes the way others interpret your motives for all social identity relevant decisions; your motives become suspect not just in this case, but to some limited degree in all past and future cases. Provided information is publicly available, association with fads incrementally whittles away at the perceived "authenticity," "autonomy" and, effectively, "value" of your social identity. So, it’s a wise strategy to wait and see. Failing that, use the information at hand to judge the probability that some cultural item is a fad. Keep your eye on the speedometer; velocity of uptake is the most salient indicator of a fad pre-crash.
Thursday, 5 March 2009
Nice For a Price: esteem as a social incentive
Daily, enormous numbers of people interact in digitally mediated social groups. They usually behave prosocially, that is, they generally play nice with each other, often cooperate and sometimes even collaborate in the creation of a social good. As a result there's a lot of discussion around the sorts of stable social contracts groups can coordinate on; how new tools and designs for getting together impact and are impacted by these possible social contracts; and what sorts of incentives or motivations individuals have to play nice in the first place given that cooperating isn’t necessarily as easy, cheap or materially profitable as being a jerk.
This (way too long) post is all about motivations to join a group and act collectively. It’s the second post in a series (first one) trying to show that the recent focus on the deeply social or other-regarding nature of social and collective action groups online is too one-sided, simplistic and, frankly, ideological. Certainly we’re social animals with frequently benevolent motivations. But that doesn’t mean that we’ve no self-centered, broadly (as opposed to narrowly or materially) rational motivations. Extrinsic incentives, which we scheme to “maximize” in our limited way, abound in social situations. However, they’re mostly non-material things like esteem – the positive evaluation of your actions by the norm-based standards of your group – and status – deferential position in a social group. These are real motivations. Even Yochai Benkler, a genuine web utopianist, mentions them briefly in his Wealth of Networks. He says in passing that they’re part of our real motivational repertoire. But generally these issues are treated as either a dirty secret or beside the point. They’re neither. We need to really look at how these inescapable self-regarding motivations can help us be more prosocial and not just blame them when we’re antisocial.
Since it’s the most completely worked out account of the effects of social functionality on group creation and maintenance, I’ll focus on Clay Shirky’s excellent Here Comes Everybody. Smart as he is, I’m sure he understands the point about the value of broadly instrumental motivations for prosocial behavior. But the hero of his book is the genuine prosociality of humans freed from the limitations of “real world” organizational overhead by digital social functionality. And though he addresses self-serving, antisocial behavior like disinhibition and free-riding, he avoids talk of self-interested, extrinsic motivations for collective action almost entirely, focusing instead on intrinsic motivations such as vanity, self-esteem, true interest in the public good and simple innate pro-sociality. I agree with most of his conclusions, I just think he left out a huge factor that should inform thinking and design around online sociality.
Two Motivation Problems
There are two big problems of motivation here that are conceptually, but probably not practically, separable. First, there’s the motivation connected to the collective good some group’s trying to bring about, say, a high-quality, UGC encyclopedia like Wikipedia. Second, there’s the motivation to play nice, to make your contribution to the collective good you value in a way that the group approves of, like sticking to citable facts as opposed to stating an opinion when adding to a Wikipedia entry.
Riffing on a very useful distinction made by Clay Shirky in his latest book, we can divide our motivations between the group’s promise, whatever collective good or goal it’s trying to achieve, and the bargain, or the social contract within or through which the goal is to be achieved (he also distinguishes the tools, or the actual functionality the group employs to achieve the promise, but that’s for a different post). For Shirky, if you care about the goal embedded in the promise – e.g. collectively create a quality, UGC encyclopedia – and consider it actually attainable as it’s implicitly “stated,” then you’ll be motivated to contribute. Indeed, Shirky explicitly considers motivation only in regard to the promise: “The promise is the essential piece, the thing that convinces a potential user to become an actual user.”
In practice, it’s not so easy to separate the promise from the bargain. But in theory we can say that the promise is normatively degenerate. That is, you can achieve the promise through a variety of bargains. Although the details of the bargain have a huge impact on the viability of the promise, you could theoretically develop a quality, UGC encyclopedia without the exact same contributor/editor structure or implicit/explicit rule sets operating on Wikipedia. Again, they’re theoretically separable but probably not practically separable; it’s most likely the case that people don’t even know they “have” a goal or value the content of the promise until it’s articulated to them via the specific implicit promise couched in terms of some tentative, operating bargain. That is, the actual “articulation” of the promise in the context of the bargain probably creates people’s desire to reach the collective goal as much as it meets it. Also, bargains tend to be continually re-negotiated during the process of actively pursuing the promise.
Shirky would undoubtedly disagree, but it seems like the bargain impacts overall motivation in addition to the motivation arising from the promise. If you value the promise and think this group’s specific bargain is a decent way to achieve it, then you’ll be motivated to join this specific group or contribute to this particular collective good. But if the social environment in which you can contribute to your genuinely desired social good blows, then you’ll likely have to be a lot more motivated by that good to stick around. Similarly, if you’re only moderately motivated by the promise, but the bargain offers something exciting in itself – say, the public adoration of your peers – then that might be enough to get you to contribute when you otherwise wouldn’t. Clearly, the bargain impacts overall motivation, positively or negatively, though some initial interest in the collective goal is probably necessary in the first place.
Anyway, one of the big arguments of Shirky’s book is that today’s social functionality lowers the cost of contributing and collaborating to such a degree that people motivated by the same promise or goal – latent groups, to use Shirky’s Mancur Olson-inspired term – can actually get together to do something about it. Even relatively weak goal motivation is no longer blocked from actually resulting in action since the overhead associated with coordination has dropped so dramatically; thanks to social functionality like wikis, blogs, tagging, cell phones, etc., it’s really “easy” to get dispersed people together to focus on some collectively defined good.
However, since the promise is “normatively degenerate,” i.e. can be achieved by means of a number of distinct social contracts or bargains, we still need to consider people’s motivation to comply with a specific bargain given the goal. What would make people choose one Bargain over another? Does the nature of the Bargain do anything to the overall motivation to contribute or cooperate? To answer these, we first need to say what a Bargain is. Fudging a little, we can say that Bargains, as little social contracts of sorts, are collections of norms.
Why Comply?
Social norms are the implicit (but they can become explicit like some laws or codes) interaction rules that keep social groups from deteriorating into antisocial free-for-alls. They turn potentially state-of-nature, every-man-for-himself interactions into coordination games. Humans are social animals, but that doesn’t mean we always play nice. If it did, we’d rarely see any groups fall apart. Everybody would play nice by their group’s standards. As it is, though, we see a lot of antisocial, anti-normative behavior in contexts in which prosocial norms would be appropriate, particularly online. This suggests that we have a conditional preference for following norms. If the conditions are met, we play by the norms, otherwise it’s a melee. Obviously, the conditions under which we can generally induce a preference for social norm following on the part of most users should impact our designs for social spaces online and elsewhere.
Just to solidify everything, I’ll borrow (with slight fudging/modification) Cristina Bicchieri’s idea that there are three types of conditions which have to be met for someone to prefer following a norm in a given situation.
1. Empirical Condition: Do you expect the norm to be observed by most people in this sort of situation?
2. Normative Condition: Do you believe that most people expect or prefer you to follow it in this sort of situation?
3. Motivational Conditions: There are several of these and they relate to your reasons for complying given the truth of 1 and 2 above.
   - Fear: Is your compliance solely based on fear of negative sanctions for non-compliance?
   - Esteem: Is your compliance based on positive sanctions like praise and esteem?
   - Reasonableness/Internalization: Is your compliance based on your assessment of the reasonableness of the norm or possibly on a non-reflective, conditioned expectation?
The first two are necessary existence conditions for the norm. If either is not met, then even if there’s ostensibly a norm in place you won’t follow it, because either you don’t think everybody else will or you feel it’s not really expected of you. In any given situation the norm may not be salient, and thus people might not know they should be following it. So, designs for spaces in which you want users to coordinate on a cooperation-inducing social contract should include provision for making norms salient (e.g., user-rating of UGC, contribution sorting by quality, etc.). But salience isn’t really what interests me here. Let’s look at the Motivational conditions, which are about the reasons for personal compliance provided the Empirical and Normative conditions are met. If you think we just have a basic preference to do as others do regardless of our relationship to them, then consider these supplementary.
Compliance out of fear of negative sanctions, from frowns to finger wagging to expulsion from the group, can be very powerful. Often people play by the rules because of perceived threat of force, even if it’s just social force in the form of conventionalized displays of displeasure (reprimands from admins, poorly rated UGC, banishment, etc.). This is the most widely recognized motivation in the literature. Clearly, it only operates under conditions of at least partial transparency or incomplete anonymity and some sort of publicity or public availability of contributions.
Viewing the norm as reasonable, or maybe even internalizing it, renders compliance largely non-reflective. You don’t act out of fear or positive incentive, you just obey the norm. This motivation doesn’t have any observation conditions on the situation; you’d obey the norm in totally anonymous interactions.
Esteem motivated compliance is a little different from the other two. In this case you obey because obeying results in a personal gain or good in addition to whatever motivations you take to be intrinsic (e.g. a general preference for conformity). You get something of value from others – esteem – for acting appropriately or contributing at a high, group-defined standard. The esteem motive is related to the fear motive, particularly if we consider disesteem a negative sanction. And like the fear motive it requires limited anonymity and publicity of contributions.
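To pin the framework down, here’s a minimal sketch of the conditions as a decision rule. The structure follows the Bicchieri-style conditions listed above, but the field names, numbers and threshold are mine, purely for illustration:

```python
from dataclasses import dataclass

@dataclass
class NormSituation:
    others_will_comply: bool    # Empirical condition
    others_expect_me_to: bool   # Normative condition
    fear_of_sanctions: float    # Motivational inputs, each in [0, 1]
    expected_esteem: float
    internalized: float

def prefers_to_comply(s: NormSituation, motivation_threshold: float = 0.5) -> bool:
    # The first two conditions are necessary: if either fails, no compliance.
    if not (s.others_will_comply and s.others_expect_me_to):
        return False
    # Any sufficiently strong mix of fear, esteem or internalization will do.
    return max(s.fear_of_sanctions, s.expected_esteem, s.internalized) >= motivation_threshold

print(prefers_to_comply(NormSituation(True, True, 0.1, 0.7, 0.2)))   # True: esteem carries it
print(prefers_to_comply(NormSituation(False, True, 0.9, 0.9, 0.9)))  # False: empirical condition fails
```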
If esteem is a sought after item in limited supply, then it can add to the overall motivation one has for contributing to the larger social good defined by the group’s Promise. If complying with a social contract and contributing at a high level relative to the group’s norms can generate esteem, then people already motivated by the group’s goal have additional motivation to work. They have been significantly incentivized beyond the presumably prosocial incentive of the collective good.
Even if the collective good is materially negligible or even costly to the individual, the esteem motivation associated with group norm compliance can incentivize behavior to bring about the collective good (provided overall motivation can be the aggregate of promise-based and bargain-based motivations). Basically, implementing the social contract in such a way that esteem can be accumulated (limited anonymity and institutionalized feedback channels, for example) can make selfish jerks work prosocially. However, it also makes people for whom the collective good is genuinely motivating even more motivated.
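The aggregation claim in parentheses can be made embarrassingly literal. A back-of-the-envelope framing – mine, not Shirky’s – treats overall motivation as what the promise is worth to you, plus whatever esteem the bargain makes available, minus the friction the bargain imposes:

```python
def overall_motivation(promise_value, bargain_cost, expected_esteem, esteem_weight=1.0):
    # total motivation to contribute = value of the collective goal to you
    # + esteem the bargain can pay out - the cost of complying with the bargain
    return promise_value + esteem_weight * expected_esteem - bargain_cost

# A contributor only mildly moved by the collective good can still be pushed
# over the line if the bargain pays out esteem cheaply:
print(overall_motivation(promise_value=0.2, bargain_cost=0.3, expected_esteem=0.4) > 0)
```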
Obviously, there is considerable space for trouble here. Unethical, mistaken and outmoded norms can easily be perpetuated by esteem motivated compliance. Agreed. Our task as designers and theorists in this space is to figure out how these mechanisms work and whether or not they can be reliably used for prosocial ends. That’s why I think the bias toward considering only intrinsic and clearly prosocial motives is shortsighted. Esteem and other extrinsic non-material motivations operate in all social groups. Unless we face our less social, selfish selves and try to understand how they operate, we can’t even begin to make design decisions that will allow us to channel this ignoble element in our character for prosocial – or at least not antisocial – ends.
A Note On Near Anonymity, Limited Publicity and Esteem... skip at will
You might think that the near anonymity and volume of contributions of UGC creations like Wikipedia would destroy the esteem motivation: if next to nobody knows who you are and the volume of contributions mitigates the publicity any specific addition gets, then no esteem accumulates. I don’t think this is the case for several reasons.
For one thing, a study of handwashing habits in public restrooms in New York City showed that the mere presence of an anonymous stranger increased handwashing from 40% to 80%. (Munger, K. and S. J. Harris. 1989. “Effects of an Observer on Handwashing in a Public Restroom.” Perceptual and Motor Skills 69(3): 733-734.) The anonymity of the esteemer doesn’t seem to matter, but most importantly it doesn’t seem to matter that the esteem-seeker is anonymous as well. The mere fact that someone – anyone – could be judging in a normatively charged public situation influences behavior regardless of anonymity.
Also, in many esteem situations, relative anonymity and obscurity actually heighten the potential for esteem. Basically, in many situations in which esteem is at issue, the potential esteemer judges not just the action but also the disposition. If calculation is obvious and the desire for recognition glaring, then you’re likely to give less esteem than if the estimable act occurred “for itself,” as it were. The public forum in which esteem is judged and the conditions under which credit is claimed (“I did this” or “I did this for the group”) have an impact on this dispositional judgment. Too public and too obviously about you and not the good, and it might be showing off. But in conditions of limited publicity and partial anonymity your estimable act gives the impression that you’re doing it for its own sake. That is, an estimable act in situations where the probability of being seen and personally esteemed is relatively low (and all the other members know this) yields much greater esteem if it is in fact seen and identified. In this situation, it looks like the act arose from a natural disposition and not esteem seeking. Near anonymity and limited visibility in some sense optimize esteem, in the words of Geoffrey Brennan and Philip Pettit.
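That trade-off is easy to render as a toy formula. This is my gloss on the Brennan/Pettit point, not their model: the esteem actually granted when an act is observed shrinks as the act looks more calculated, i.e. as the apparent chance of being observed rises, so expected esteem peaks at partial rather than maximal visibility:

```python
def expected_esteem(p_observed, base_esteem=1.0):
    # sincerity discount: the more visible the act, the more calculated it looks
    esteem_if_observed = base_esteem * (1.0 - p_observed)
    return p_observed * esteem_if_observed

for p in (0.1, 0.5, 0.9):
    print(f"visibility {p:.1f} -> expected esteem {expected_esteem(p):.2f}")
```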
Finally, esteem doesn’t have to circle back and add to our personal pool of accumulated esteem for us to feel it in some measure. That is, esteem attaching to your online identity, which nobody knows is really you, has its own reward. Indeed, we don’t always want the esteem we receive in one arena to mingle with (or be contaminated by) the personal esteem (or disesteem) earned in another. It’s a form of esteem management. Additionally, if Kai Spiekermann is right and the esteem others are willing to give often has less to do with what you’ve done than who you are, anonymity can level the playing field a bit and allow esteem to flow more freely than it would under more transparent conditions.
Labels: Economy of Esteem, norms, Shirky, social functionality
Monday, 2 March 2009
Playing Nice For the Wrong Reasons
We were sold the story of being mainly self-interested, mainly rational actors interacting in market places. And the internet has shown that we have all these social, empathetic relationships with deep, authentic motivations that are nothing to do with selling and spending.
Clay Shirky
The Observer, Sunday 15 February 2009
One hears this sentiment a lot these days, particularly from American web pundits. It sounds like a lingering echo of the utopianist strains of early web propaganda; it's a rhetorical move positioning the web as a signal force in a new flourishing of hierarchy-smashing communitarianism against the old alienating and atomizing intellectual myths of “maximization” and “rationality.” The idea's intellectual sources include the recent widely touted corrections (e.g. behavioral economics) of some of the excesses of neo-classical economics and possibly the U.S. sociological tradition stemming from “functionalism.”
Anyway, I agree wholeheartedly that we often cooperate or collaborate on the web out of more or less benevolent – if not truly altruistic – impulses. But it’s also obvious that “deep, authentic motivations” aren’t the only incentives for acting in, or even just joining, groups. Significant “shallow” and “inauthentic” motivations impact social behavior as well. “Ignoble” motivations like esteem accumulation and status achievement – which are both distinct from reputation – significantly impact people’s behavior in social settings. Indeed, much of the functionality on the social web already has an esteem function baked into its primary or legitimate function. As a simple example, posting reviews is as much about displaying expertise or simple likes and dislikes as it is about helping others choose. As a rule, people genuinely like to play nice and help each other. But they’ll play nicer and help more if you also let them compete for group defined goods that are neither “authentic” nor “deep.”
Sticking with Clay Shirky, in his latest book he recognizes that there is a place for some sorts of less than noble rewards online. For example, he considers “vanity” and reputational benefit to be valid motivations for contribution. But the mention of reputation notwithstanding, he focuses on intrinsic, non-material motivations like self-esteem, the desire to produce something good and the need for communion. What he doesn’t talk about are the powerful extrinsic, non-material motivations involving the attention of and positive evaluation by others in your group. You contribute partly for the esteem your contribution can get you. And this holds true even if the esteeming group is minuscule. Indeed, it’s often the case that the smaller the esteeming group, the more valuable the esteem (“selling out” after all is the trading of esteem for popularity).
If we’re honest with ourselves, it seems pretty clear that some of our motivations for social interaction on the web (and elsewhere) are self-interested, operate by some sort of market principles and are neither authentic nor empathetic. We engage in behavior that is intended to “maximize” some in-demand good – esteem – relative to the costs we’re willing to bear. Other things equal, the greater the potential esteem the more cost we’re willing to bear. Esteem seeking definitely isn’t ideally authentic behavior: you can get it only to the extent that it’s not apparent – to yourself or others – that you’re actively seeking it. And although esteem seeking involves empathy – it assumes taking the point of view of others in order to determine the most estimable move – it’s not an “empathetic” motivation in the laudatory sense intended by Shirky. But, of course, this doesn’t mean that we don’t genuinely like helping and hanging out with others. It just means we have both sorts of motivations.
Shirky clearly recognizes this fact since he qualifies the whole thing with "mainly." Most likely, he just wants to make the point that we often act socially for the non-optimizing, genuinely pro-social reasons we say we act. Our actions are generally genuine and not cynical. But this is the tricky part: I don’t think that recognizing the importance of esteem to most contributors automatically commits us to cynicism. Furthermore I don't think that the distinction between Shirky's "good" motivations and my market-like esteem considerations is all that clear and easy to maintain in the first place. Esteem, like self-interest generally, is what Philip Pettit and Geoffrey Brennan call a standby or virtual cause of behavior. It’s not what’s directly sought from your actions, but if esteem wasn’t provided, you’d be less likely to behave that way. It’s a bias that steers us rather than an explicit principle that guides us. Esteem – the positive evaluations of others by the norm-based standards of whatever reference group you’re using – is the emotionally powerful implicit incentive within social groups that maintains conformity while allowing constant competitive evolution. Seeking it isn’t necessarily cynical, rather it's inextricably woven into group-focused behavior.
Why on earth are we so afraid of “rational motivations” that we have to banish them almost completely from talk of group action online? Beats me. Anyway, as someone who has to design interaction spaces online I think it’s dogmatic, maybe even superstitious, to think that esteem motivations are somehow less real or powerful because “inauthentic” and self-serving. After all, in public goods experiments we are shown to be conditionally cooperative, meaning that we cooperate with a self-serving bias. If norms of cooperation or reciprocation aren’t sufficiently salient or aren’t otherwise maintained, we tend to stop playing nice and settle for getting all we can. The desire for esteem and status can actually get people to observe and stabilize cooperative norms for purely self-centered reasons; they’re self-serving incentives for pro-social behavior. We should design accordingly.
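“Conditional cooperation with a self-serving bias” sounds abstract, but it’s nearly a one-liner. A stylized conditional cooperator in a public-goods game (illustrative numbers only, not drawn from any particular experiment) contributes roughly what it expects others to contribute, shaded down by a selfish bias, and nothing at all if it expects defection:

```python
def contribution(expected_others_mean, endowment=20.0, selfish_bias=0.8):
    # match what you think others will put in, minus a self-serving discount
    return max(0.0, min(endowment, selfish_bias * expected_others_mean))

print(contribution(expected_others_mean=10))  # 8.0: roughly matches others, minus the bias
print(contribution(expected_others_mean=0))   # 0.0: nobody else plays nice, neither do you
```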
In the next couple of posts I’ll look more closely at esteem seeking, Shirky’s book and the popular bias against instrumental motivations online. I’ll use the example of the development of the Linux operating system to illustrate the distinction between two different versions of social capital as well as two different takes on the free rider problem. Of course, both of them will utilize the idea of esteem along with Bourdieu’s idea of the “economy of symbolic goods.”
Labels: Bourdieu, Economy of Esteem, Shirky, social functionality, web design