Friday, 4 December 2009

The Hippies vs. The Straights: Information/Knowledge, Internalism/Externalism

Knowledge is of two kinds: we know a subject ourselves, or we know where we can find information upon it.
Samuel Johnson
A few years ago there was a heated folksonomy vs. taxonomy debate between the vague, jargon-spewing web 2.0-ers and the scared, curmudgeonly old-guard Information Architects. Then there was a mess of talk and some patrician lamentation about the death of the expert (e.g. encyclopedias) at the hands of wise crowds (e.g. Wikipedia). And we’re still chattering about the imminent shuttering of hoary old knowledge-disseminating institutions like newspapers and magazines driven to the brink by upstart “open knowledge” aggregators like Twitter and the blogosphere.

Let’s call the two camps in these debates the Hippies – the folksonomizing Tweeters – and the Straights – the taxonomizing encyclopedists. Sort of by definition of “debate” they take opposing sides. But the problem is, they’re usually talking past each other for ideological reasons, which means we’ll never really get any satisfactory answers about the issues. So, though there are legitimate points of difference between these two groups, I’d rather look at a distinction and a faux dichotomy at the heart of the debates they keep having in hopes of getting clear on foundational issues as opposed to the positions taken. The first is the distinction between information and knowledge, the second is the apparent dichotomy between an internalist viewpoint and an externalist one.
  • Information vs. Knowledge: Let’s say that information is data potentially pertinent to your projects or interests (as opposed to noise, which is wholly superfluous). Knowledge is something more than information. It’s information that you believe in some sense; that actually is true or genuinely appropriate or morally/ethically reasonable, etc.; and that arrives via reliable channels. So, knowledge necessarily incorporates our beliefs, desires and projects. But it necessarily includes something else as well. We evaluate items we have solid reason to believe – knowledge, as opposed to just information – against some appropriate "disinterested" measure or guarantor. Information’s being knowledge isn’t just up to you, your beliefs and your interests; it’s up to something about the way the world is, whether physically, socially or culturally.
  • Internalism vs. Externalism: Internalists (or individualists) hold the traditional view that cognition, knowledge, meaning, perception, etc. are necessarily the result of internal, valid, inferences and calculations on some representations wholly inside the head of the thinker, knower, speaker/hearer, perceiver, etc. Externalists generally believe that cognition, knowledge, meaning, perception, etc. extend outside of the head and into the environment (physical, social, cultural, etc.) of the thinker, knower, speaker/hearer, perceiver, etc. Basically the dichotomy comes down to the following question: are the cognitive and epistemic abilities that make us the creatures we are “all in the head” or are they somehow extended “out there” into the social and cultural world?
A lot of the Sturm und Drang surrounding debates on the effects of the web on intelligence, democracy, culture, etc. comes down to differing intuitions and flat-out misunderstandings about these foundational issues. Furthermore, entertaining rhetoric aside, we won’t get satisfying answers to the issues being debated until the Hippies and the Straights stop allowing ideological commitments to drag them to opposite poles of the distinction or rigidify the actually soft "dichotomy."

Informational Markets

Let’s start by stating the obvious: Many of our actions are based on what we take to be knowledge. And when asked to justify our actions, if we have a choice between justificatory facts – pieces of true information that we’re in a position to know – and unsubstantiated justificatory hearsay, we use the former because the latter lacks any real foundation. The former is knowledge: information that we’re justified in believing, meaning that it’s come to us via a reliable channel, and is true or appropriate or reasonable, etc. Knowledge, as distinct from plain old information, is obviously incredibly important.

But unfortunately, it’s also an ideal that doesn’t weather the constraints of reality all that well. As Russell Hardin notes in his new book (which, though not too informative and wrong in equating belief and knowledge, presents an interesting thesis), there’s an economy of constraints on knowledge search: we’ve limited resources of time and attention and our incentives to attain knowledge are determined almost exclusively by our current instrumental needs or desires and our many non-knowledge based prior beliefs. That is, we tend to stop looking for knowledge when we find information that’s just good enough. And we usually find this information through inexpert, simply familiar or comfortable, i.e. not necessarily reliable, channels. What’s more, we really don’t make much of an effort to determine truth or falsity or optimality provided the new information fits with our past presumed knowledge. Why should we, after all? We’re busy, limited beings and if our non-knowledge works well enough or helps us get by, the only incentive to push further seems purely academic.

But this means that in real life we practice little or no epistemic hygiene; most of what we call knowledge is most often just information even though it’s the basis of our actions (including search and recognition criteria for future knowledge) and often serves as public and private “justification”.

Now, the key to the Hippies’ position – for example that the blogosphere could be a viable news source – is that “epistemic hygiene” is best left to market forces. Knowledge will automatically win in an open informational marketplace. Knowledge – the best, the truth – necessarily survives the winnowing effects of the market for information; if there’s a market for knowledge and it’s out there, the best way to ensure that it gets where it’s needed is to let the demand draw it from competing providers. When there’s a robustly discriminatory market for knowledge as distinct from just information, providers are thereby incentivized to provide it.

But the problem is, considering the economics of informational consumption, there’s not much of a market for knowledge as opposed to just information. That is, given the economically constrained drives of real-world infovores, people’s discernment isn’t such that they’ll seek out only knowledge, so they probably won’t exert enough pressure to drive the virtuous, knowledge-elevating market forces in the short term. Over the long term, even the faintest discriminating behavior will probably lead to knowledge-boosting institutions. In other words, it’s unclear whether we have strong enough demand for knowledge, as opposed to just information, to ensure epistemic virtuousness right now.

At this point, the Straights swoop in claiming that epistemic virtuousness must be ensured from on high through the time-honored institutions we’ve already got. But, of course, it’s clearly the case that placing all control in the hands of a small group of “deciders”, anointed moral shepherds to the riffraff, is simply ridiculous. Similarly, the institutions we have weren’t hatched fully formed with all of their knowledge-preserving norms in place and operational. Indeed, all societies and cultures are, ultimately, bottom-up phenomena. Their seeming self-evidence notwithstanding, knowledge institutions (e.g. journalism and its vaunted social norm of objectivity) are the highly historically contingent result of blind, non-intentional coordination over the long term. Dissolution of the old institutions doesn’t mean that new, possibly better ones can’t be coordinated upon. The risk, if the search for new ones veers off an “equilibrium path,” is a period of turmoil before new ones are settled upon. The real issue thus seems to be the cost, depth and length of the possible period of turmoil.

So, the Hippies don’t pay enough attention to the distinction between knowledge and information, particularly as it relates to the attentional economy of knowledge seekers. That is, they think that an open information market will necessarily result in an increase in knowledge. The Straights, on the other hand, misunderstand knowledge as something sacrosanct, existing only because of the (actually contingent) structures through which it currently flows. Thus those structures need to be shielded at all costs. In one direction lies possibly indefinite knowledge-mitigating turmoil but the potential for improved institutions, while in the other lies comfortable stability at the price of repressive stagnation.

Inside Out or Outside In?

The internalism/externalism distinction is at the heart of debates over whether, for example, Google makes us smarter or Facebook actually expands our social abilities. For the internalist, this is just a dumb question. Cognition is in the head and thus Google is simply a tool or resource the use of which must be weighed like any other. It possibly allows us to do things we couldn’t have before, but it doesn’t make us any smarter or dumber than a hammer does.

Externalists, on the other hand, view technologies like Google as more cognitive prostheses than simply tools. They extend our cognitive powers. In this sense, the same sense in which an amputee's artificial arm becomes his or her arm, Google is part of our cognitive apparatus. Just as the ability to do long division doesn’t mean the ability to do it exclusively in the head, without the aid of such external technologies as pencil, paper and the Arabic numeral system, so our knowledge of, say, philosophy may actually be increased if we acquire the skill to find it at will. So Google, in a real sense, makes us smarter insofar as it’s a prosthetic memory bank.

People often balk at the metaphysical implications of saying our minds are somehow extended into the world. So to avoid that quagmire, let’s just say that culture actually augments cognition, that culture impacts, maybe even defines, our characteristically human minds. We, unlike most other critters, can actually use and build on what others have done and then in turn pass it along in detail for others to use and build on. This is similar to psychologist Michael Tomasello’s “Ratcheting,” the idea that humanity’s great cognitive distinctiveness is as much the result of cultural as physical evolution. Fudging a lot, we actually become smarter through the lateral (i.e. within a generation) transmission of skills, abilities and ultimately material artifacts. The stuff of culture, material and conceptual, expands our cognitive abilities.

If we grant just this cultural ratcheting instead of the full blown extended mind thesis, then we can see how one could say that Google, for example, could make us smarter. Ratcheting works on exposure and imitation. Google lays bare the world’s information to a certain extent. The internalist, on the other hand, says it makes us lazy – we should learn and store all of this info in our heads if we want to call it knowledge – and thus, as we know where to find it but never really try to, dumber or at least somehow culpable. Plus, there’s no guaranteeing that Google is “epistemically virtuous.”

Frankly, the first internalist argument doesn’t hold water. Whatever intelligence is, it’s clearly not a matter of facts stored in the head. If their gripe is that Google doesn’t force us to develop the strategies of inquiry and action that may actually impact intelligence, then they may have a point. But, at this historical moment, Google is a viable strategy.

Anyway, though externalism seems to make sense, the issue of epistemic virtuousness still remains. That is, cultural artifacts like Google seem to impact our cognitive abilities, but that doesn’t mean that the impact is automatically for the best. The mechanism or artifact we’re “extending our minds” with may be defective in that it doesn’t necessarily deliver an appreciable amount of knowledge per unit of information. So we’re back to knowledge, but with a significant twist. If technologies like Google actually are cognitive prostheses and it’s possible for us to impact the ratio of knowledge to non-knowledge, then we’ve a significant moral obligation to guarantee that they’re epistemically virtuous. So, the Hippies’ hope for a brighter tomorrow through a cognitively expanded humanity rests on the epistemic virtuousness of the technologies through which we’re expanding our minds and abilities. The Straights, on the other hand, are doubtful about the whole expanded mind thing (it encroaches on their cherished romantic individualism) and are certain we’re not up for the ethical obligations incurred if it is true.

The Empty Middle Ground

The Hippies and the Straights both present arguments that turn on the issues central to these distinctions. But the problem is, their commitments seem to drive them to take one side to the exclusion of the other. The Hippies trip out on the Information and Externalism side while the Straights hole up in the library of Knowledge and Internalism, pouting. And just clarifying the nature of the distinctions doesn’t give us the answer either.

There is, however, a methodological lesson to be learned in picking apart the issues. As we’ve seen, both groups use arguments drawn from these divisions, but both seem to only get half of the story. The criticisms they throw at each other often have value. But the object of critique usually does as well. What this suggests is that just because there’s a distinction or apparent dichotomy, you don’t have to choose one pole and fight against the other. You need to consider the issue as a whole and let that inform you, rather than working from an ideological place that forces your hand and thus opens you to attacks you could have foreseen otherwise.

For example, the Hippies – at least the ones who even countenance the distinction – have too rosy a view of our individual desire for knowledge. They assume we actively seek knowledge and actively discriminate against “non-knowledge” instead of just muddling through with good enough information. The Straights, on the other hand, are sticklers for knowledge and really pessimistic about our ability to find it unguided. The problem is, they’re too pessimistic and take too narrow a view of institutional history. They arbitrarily limit the prospects and vehicles for knowledge to institutions and structures that already exist, closing off potentially valuable new ones.

Similarly, the Straights are often, but not always, internalist. They think that the question of, say, Facebook expanding our real social ability is just silly hyperbole or metaphor madness. Facebook is just a tool that we engage with under the direction of our hermetically sealed, inviolable cognitive apparatus. It doesn’t extend or expand that apparatus. True, Hippies and Straights aren’t necessarily divided on the internalism/externalism issue. For example, when it comes to the “Google makes us smarter” debate, many Hippies say “yes” and Straights usually say “no”, but they can both be somewhat externalist. It’s just that the Hippies think that Google is epistemically virtuous, while the Straights don’t. Anyway, Straights tend to be traditional internalists or individualists. This means they’re a little reluctant to buy the pie-in-the-sky visions of the Hippies, who seem to want externalism to be true out of a progressivist cyborg fantasy of human perfectibility, without considering or shouldering the ethical obligations that would come with its truth.

So what’s the answer? Chances are, the truth is being triangulated by the critiques hurled from either side. Blind faith in market forces isn’t too wise, but neither is stodgy old paternalism. If externalism is true, then we really should be concerned about the epistemic virtuousness of our cognitive prostheses right now. The lesson is that we shouldn’t be forced into taking sides out of ideological considerations, either utopianist or conservative. Instead we need to stop dreaming of panaceas or suspecting decline and start looking at the mechanisms and policies by which we can both allow freedom and ensure knowledge in the short and long run.

Friday, 29 May 2009

Being-on-the-web: Weinberger's "infrastructure of meaning"

Writing about the web is increasingly “post-utopianist,” meaning that it doesn’t expressly argue for the essential goodness of the web; doesn’t assume we’ve no real responsibilities as designers and consumers; and doesn’t assume that people are always nice or positively prosocial in their behavior online. Writers like Cass Sunstein and Clay Shirky are at least trying to get past the breathlessness of the early days of communitarian-turned-capitalist web boosterism to a more realistic – but still hopeful – place. This is great and it’s exactly what was needed.

Dystopianism – the view that the web is essentially for the worse – is clearly the wrong reaction to utopianism. But if you can get past the panicky, paternalist fist-shaking and general curmudgeonliness of a lot of the dystopianist writing, their arguments tend to be of two types: appeals to popular, romantically conservative conceptions of individuality, creativity and culture, or arguments showing that the boosters’ optimism is actually misplaced and the web doesn’t work the way they claim. The latter arguments are actually helpful, showing that, if you’re attempting to draw evaluative conclusions from theoretical arguments, you can’t simply rely on your assumptions about what we all consider positive. For example, you can’t just assume that removing barriers to “publication” is good in and of itself, regardless of whether or not this is an increase in overall “freedom.” Some might not like the idea that there’s no longer a practically imposed institutional filter on publication because it results in enormous problems of choice and veracity. In addition, we’ve no real proof – plus a lot of theoretical objections and some disconfirming experimental data – that the web’s structural “solutions” to the problems of choice and truth actually work.

The other dystopian “arguments,” however, tend to be limited to dismayed hand wringing and unkind caricature about the state of culture, creativity and taste. So ultimately, the dystopianist/utopianist battle boils down to competing intuitions about what we should value and what we’re giving up or gaining by embracing this new and powerful medium. David Weinberger, a brilliant and proud utopianist, casts this battle in political terms as conservative dystopianists versus liberal utopianists. Of course, analogizing to a dichotomy in another domain doesn’t really clear things up. It’s a rhetorical move trying to get you to cast your lot either with the progressive forces of liberalism (yay!) or with the regressive deadweight of conservativism (boo!). But along the way he throws a third type into the mix, realists. He defines realists as the pragmatists in the middle, rational, level-headed and myopically obsessed with facts, data and history, i.e. boring. Supposedly, the realists feel that the web isn’t that different from other media, that the rhetoric on either side is hysterical and needlessly sensational. We just need to step back and think rationally about this new medium.

Weinberger thinks that the realists are valuable but essentially wrong about the web. That is, they’re wrong about the essence of the web, which is totally different and wholly revolutionary. Realists’ calmly rational judgment of its potential and possibilities will only blind us to its true innovative potential in the long run. For example, thinking of, judging or predicting the web’s impact and future in terms of past media may keep us locked in old patterns and thus foreclose potentially valuable new paths. So realists are valuable advisors and functionaries, but they shouldn’t be allowed to steer the ship or even navigate. After all, you’ll never discover new worlds by reading old maps... or something like that.

Anyway, if we define the realist as somebody who feels that the rhetoric on either side is overheated, that the whole debate needs a dose of reality and that the web isn’t really all that different or revolutionary, then I’m clearly not a realist. The web is indeed different in many respects, mainly in its decentralized structure, wickedly low entry cost and sudden ubiquity. I do think we need a dose of reality, but not in the way Weinberger’s realist thinks. Sure, reality is about facts, a claim most utopianists belittle via scare quotes, but these are facts about mechanisms – what structures foster and propagate knowledge, truth and quality and what we can expect from interacting agents, etc. – and not necessarily facts about history, which really are subject to biased “framing narratives.”

Finally, both utopianists and dystopianists agree that the web is revolutionary, but the former consider it a positive revolution, while the latter consider it negative. The realists, in contrast, don’t think it’s revolutionary at all, but rather more of the same only louder. Unlike Weinberger’s realists, I think the web is revolutionary, but I use this word advisedly and without the attached evaluation, good or bad. Normatively evaluating a fact is clearly a case of interpretation: it’s identifying a fact as good or bad according to some evaluative scheme. It’s the interpretation that makes the difference. So what’s Weinberger’s interpretation?

Like Shirky, Weinberger analogizes the web revolution to the socio-cultural impact of the printing press or rather moveable type. Just as the printing press led not only to affordable books but also the dissolution of old social/labor orders and the growth of a literate, educated public, so too the web is leading to a boom in bottom-up social organization, individual creation and the general overthrow of old-guard cultural gatekeepers and entrenched hierarchies.

Now, Weinberger and Shirky never tell us how to define institutions (or norms, conventions, etc.) – the socio-cultural structures overthrown by these revolutions – but to me they’re just self-reinforcing patterns of conditioned preferences and expectations structuring our repeated interactions. They aren’t etched in stone or handed down from on high. Rather, they are slowly coordinated upon by generations of locally interacting humans. Thus they’re contingent structures of interaction and preference, coordinated upon because they suited the circumstances in which they developed. Change the situation or circumstances, and there will be pressure to change the institutions, norms, etc. If the situation changes radically, they will crumble and chaos will ensue, lasting just until new institutions and norms are either coordinated upon or imposed. This is a revolution.
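Just to make that "self-reinforcing pattern" idea concrete, here's a toy simulation of my own devising (nothing from Weinberger or Shirky, and every number is invented): agents repeatedly adopt whichever convention the current majority follows, so a modest early lead hardens into a stable, seemingly self-evident convention.

```python
# Toy sketch of an institution as a self-reinforcing coordination
# equilibrium. One agent per round best-responds to the current
# majority, so an arbitrary early lead snowballs into a stable
# convention. All parameters here are illustrative.

import random

def converge(pop, rounds=1500, seed=0):
    """Each round, one random agent adopts the majority convention."""
    rng = random.Random(seed)
    pop = list(pop)
    for _ in range(rounds):
        i = rng.randrange(len(pop))
        share_a = pop.count("A") / len(pop)
        pop[i] = "A" if share_a > 0.5 else "B"
    return pop

# A modest initial lead for convention "A"...
final = converge(["A"] * 55 + ["B"] * 45)
# ...hardens into near-unanimity.
print(final.count("A"))
```

Change the payoffs mid-stream – say, a new medium suddenly makes the minority convention attractive – and the same dynamic would, after a stretch of mixed play (the "turmoil" above), lock in a different equilibrium.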

But Weinberger in particular – echoing Marshall McLuhan, Walter J. Ong and my old teacher Greg Ulmer among others – likes to point out that media transform our ways of thinking, thus a revolutionary medium will radically change us. I agree, but only insofar as new media destroy old and foster new norms, conventions and institutions of creation and consumption. It’s the old and new norms and institutions that structure our interactions, inform our preferences and cement our expectations. So we agree on a lot, but notice, we’ve yet to see anything in this revolution that would lead us to evaluate it positively, i.e. as a utopian revolution (conservative old dystopianists, on the other hand, started frowning the second institutions felt pressure). Disruption, difference and impact don’t necessarily equal good. So how does Weinberger get from revolution to positive revolution?

Well, I can’t definitively say, but there are hints throughout his writing. Take this passage:
...Access to printed books gave many more people access to knowledge, changed the economics of knowledge, undermined institutions that were premised on knowledge being scarce and difficult to find, altered the nature and role of expertise, and established the idea that knowledge is capable of being chunked into stable topics. These in turn affected our ideas about what it means to be a human and to be human together. But these are exactly the domains within which the Web is bringing change. Indeed, it is altering not just the content of knowledge but our sense of how ideas go together, for the Web is first and foremost about connections.

And in what way is it altering “our sense of how ideas go together?” In his wickedly clever Everything is Miscellaneous Weinberger claims that the web is an “infrastructure of meaning” as opposed to just stodgy old knowledge. He trots out the philosopher, Nazi and all around dour grump Martin Heidegger to explain his notion of meaning. Basically, it comes down to the humanly grounded, intricately woven, real-world web of warm significance that we actually live in daily as opposed to the cold, objectified, brutally subdivided grid of "official" knowledge. Just as printing initiated a revolution that separated knowledge from the lived world and brought us the evils of categorization, specialization and scientism, so the web – with its personalizing “tags” and ability to instantly pair even the most unlikely contents regardless of official taxonomies – is initiating a sort of counter-revolution in which content and knowledge are re-imbued with subtle, non-taxonomic human significance. Thus, the web – particularly the user-enhanced, user-responsive (if jargony) “Web 2.0” – is an “infrastructure of meaning” insofar as the thickening accretion of human metadata on boundlessly linkable content makes it implicitly available for officially unintended but humanly significant purposes.

So, the foundation of his normative claim that the web is essentially for the best seems to be the idea that it’s instituting a new, souped-up version of the old pre-printing press, pre-Enlightenment notion of situated and subtle human – as opposed to "rational" or scientific – knowledge. The web reclaims knowledge from the alienating pretensions of science, reason and “rationality.”

Versions of this idea have been around for a while. As Weinberger mentions, McLuhan argued for the human impact of media, as did the Jesuit scholar Walter J. Ong. Ong’s book Orality and Literacy was expressly devoted to the cognitive, epistemic and human impact of media types. You could interpret Michel Foucault’s claim that knowledge structures are imposed power structures as a version of this idea as well. I even agree with part of Weinberger’s application of it to the web: the web really is revolutionary in the extent to which it puts knowledge at people’s fingertips and allows them to find, add to, connect and forward it at will. And this is a far more human – essentially human – way of interacting with and handling knowledge.

I just don’t agree that the “human” way is necessarily good. It could be great, leading to broader minds and deeper understanding of the world and ourselves. Or it could lead to increasing factionalization, self-absorption and distrust. After all, research suggests that, left to our own devices, people – humans – only seek out and retain confirmation of previously held opinions. So much so that we often ignore the true in favor of the convenient or comfortable. We’re also significantly biased toward things we’re already familiar with. It’s also unfortunately true that our moderate views tend to become more extreme in the sorts of echo-chambers the previous phenomena set up: seeking out confirmation from like-minded people and sources and the discomfort at differing opinions (justified and reinforced by the ready agreement of our like-minded contacts) tends to make our views ever more entrenched, absolute and resilient against contradictory fact.

Just because people can connect content in wickedly exciting but subtle new ways and access highly specialized information in seconds, that doesn’t mean they will be exposed to a breadth of opinion or even – sadly – the truth. The web, because of its native responsiveness to our individual desires, allows each of us to create a cozy cocoon of confirmation and reinforcement.
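To make the cocoon dynamic concrete, here's a toy bounded-confidence simulation, loosely in the spirit of Hegselmann–Krause opinion dynamics (every parameter is invented for illustration): when agents only heed opinions already near their own, the population fragments into internally unanimous, mutually deaf clusters.

```python
# Toy bounded-confidence model of the "cozy cocoon": each agent
# listens only to opinions within `tolerance` of its own, so the
# population splits into a few self-reinforcing echo chambers.
# All parameters are illustrative.

import random

def step(opinions, tolerance):
    """Every agent moves to the mean of the opinions it will hear."""
    new = []
    for x in opinions:
        heard = [y for y in opinions if abs(y - x) <= tolerance]
        new.append(sum(heard) / len(heard))
    return new

random.seed(0)
opinions = [random.random() for _ in range(100)]  # opinions on a 0-1 axis
for _ in range(50):
    opinions = step(opinions, tolerance=0.1)

# Count groups separated by more than the tolerance: the echo chambers.
opinions.sort()
clusters = 1 + sum(1 for a, b in zip(opinions, opinions[1:]) if b - a > 0.1)
print(clusters)
```

The punchline is that no agent ever encounters a disconfirming opinion again; within each cluster, agreement is total and permanent.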

But maybe this isn’t all bad by Weinberger’s lights. Weinberger adores Heidegger’s philosophy. Central to Heidegger’s understanding of meaning is the concept of Being-in-the-world: basically, the idea that all encounters with the world are already infected with our intentions, moods, cultural connotations, etc. and that there is no sense to the traditional notion of a pure object or subject. So meaning is pretty much an inescapable consequence of any encounter with the world. But this also suggests that context – physical, social, cultural and historical – is not only inescapable, but necessary for meaning. Objectivity becomes, literally, the view from nowhere, not just impossible, but unintelligible.

Maybe these cocoons of confirmation – these little webs of shared connotations and self-reinforced absolutist understandings, which I claim are negative aspects of a naturally biased humanity – are really what Heidegger’s beleaguered teacher Edmund Husserl called “lifeworlds:” the necessary and inescapable social, cultural and historical contexts within and through which we experience the world. Maybe so, but the problem is, these lifeworlds are hermetically sealed wholes of historical and cultural prejudice, incommensurable and unassailable. As Heidegger’s most influential student Hans-Georg Gadamer formulated it, prejudice – the historical, social and cultural “situatedness” we’re born into – is essential to Being-in-the-world. Outside of your lifeworld, your cocoon of prejudice, you simply aren’t... in the big metaphysical sense. Thus primordial prejudice – our cocoon of reinforcing ideas ever ready to disregard inconvenient or inconsistent “facts” – is the foundation of meaning in this Heideggerian sense.

Obviously Heidegger had nothing but disdain for the Enlightenment notions of reason, rationality and truth. It’s easy to see why. By his lights, there’s nothing over top of “Being-in-the-world” or “the lifeworld,” no outside facts to adjudicate between the “meanings” grounded in the various prejudice-composed contexts. The “lifeworld” or “Being-in-the-world” is the only ground of significance. Rationality, reason and science, on the other hand, are about seeking a global foundation (possibly in the real world) for the “intersubjectivity” that Heidegger seems to have thought only inheres in shared cultural, social and historical prejudices or contexts.

So maybe we who are stuck on the old-fashioned liberal hope of finding some common, testable ground of meaning, knowledge and intergroup understanding have it terribly, inhumanly wrong. We shouldn’t think of people’s natural drive to willful ignorance and reinforced, non-verifiable absolutism as an unfortunate legacy of our evolutionary past. Nor are they something that we as designers – working in a world that desperately needs people to stop embracing local superstition, prejudice and dangerously out-of-sync norms – have a responsibility to mitigate for the good of humanity. Rather we should just realize that these prejudices are the only foundation of truly human meaning and not pretend that there’s anything outside of them. Maybe this is the way Weinberger intends his “infrastructure of meaning” to be interpreted.

So, back to Weinberger’s utopianism. Remember that utopianism is the idea that the web is essentially good or for the best. Specifically, that its native capacity to allow users to add metadata to content and make subtle, personal connections and relations is fundamentally and wholly positive. I’ve suggested that certain biases in humans – we only like what we know, we only want to be agreed with, agreement makes our prejudices even stronger and we’ll ignore the truth if it violates either of the first two – mitigate the positive prospects. In other words, because of the way we are, the web alone isn’t going to lead us to the promised land. Given that, Weinberger’s rosy optimism seems to make sense only if you choose one of the following two options:
  • Ignore the unfortunate facts about humans’ tendency to avoid disconfirmation and neglect what some would call the truth for cognitive comfort and personal consistency.
  • Or, as his preferred philosophical tradition might recommend, embrace these tendencies as a prerequisite of authentic, human meaning. It’s not a bug. It’s a feature.
You do either one of those and I could see how Weinberger’s web utopianism might work. Personally, I find neither particularly appealing.

Of course, I’ve fudged along the way. Heidegger wrote extensively about “authenticity” as a refusal to unreflectively live the conventional life your peers demand, etc. Also, “Being-in-the-world,” “lifeworlds” and even Gadamer’s prejudice-soaked “horizons” aren’t exactly like the little cocoons of auto-agreement people tend to create around themselves and which the web makes ever easier and more complete. But fudging aside, this doesn’t alter the basic thrust of this popular philosophical tradition’s radical perspectivism. I just wanted to investigate whether this could be what Weinberger has in mind given that he probably knows people aren’t as reasonable as we could hope. Finally, it could be that Weinberger is just trying to say that the web provides wicked cool new ways of getting people to content. Which it does. But I think he’s going for something more.

Wednesday, 27 May 2009

Ha-ha, Ah-ha and Oh-yeah: cultural irony and rediscovery

A lot of the cultural items we consume or partake of – hairstyles, shoes, TV shows, slang, professed values, bands, etc. – can be thought of as socially instrumental. That is, they can have a “symbolic value” over and above their use value, entertainment value or whatever. We often consume them not only because of what they do, but because of what we hope they add to our social identity in the eyes of those we esteem and those we despise, to our in-group and our out-group.

A while ago, I wrote a post illustrating part of this (intuitive? clichéd?) idea. I tried to show how the mutual interactions and reactions of three distinct cultural subgroups – trendsetters, hipsters and regular joes – can drive cultural items through their life cycle. This idea has occurred to a lot of us: many social groups’ preferences (for shoes, bands, styles, slang, etc.) respond – positively or negatively – to other groups’ preferences. Slang and cadence, for example, are often valuable signals and affirmations of group affiliation, so preferences for specific slang terms change rapidly with diffusion outside the group. We illustrated this as interwoven curves along the path from few partaking (o) to most everybody doing it (n).

But so far we haven’t really discussed the later part of a cultural item’s life cycle, the point after c in the graph. Frequently, cultural items just go away and are never heard from again. But sometimes they come back around. In this post I want to look at some late stage possibilities for trends, particularly cultural irony – imbuing cultural items with a different, more "self-aware" symbolic value than they originally had – and rediscovery – rehabilitating older cultural items for current use.

Irony and Rediscovery

First of all, some items never really go through the creep from fringe to mainstream. Agreed. The idea here isn’t to model the essential, inviolable profile of a trend. Rather, it’s just that in most collections of people presented with a cultural choice you can roughly define different subgroups by how their preferences change relative to others’ preferences. For example, within the regular joes there are likely to be different constituencies analogous to trendsetters and hipsters. That is, some regular joes will be a lot like hipsters in their preferences – ready to partake of items not quite fully mainstream. If we restrict n to regular joes, the preference profiles of the different types within this group might look something like our graph.

Anyway, cultural items that never go through the full cycle at the highest, all sub-groups cultural level – that take off in or are specific to one sub-group only – become particularly interesting when we consider irony.

Ironic embrace is cultural consumption that’s generally very aware of the consumed item’s cultural history. This awareness often becomes an explicit part of its new symbolic value. Consider the recent ironic rehabilitation of Jean-Claude Van Damme, Rick Astley and ‘90s pop (Bell Biv DeVoe is suddenly on every hipster playlist). Faint nostalgia notwithstanding, most of the consumption of this stuff (like the ‘70s and ‘80s ironic embrace before it) seems to be ironic.

Grossly simplifying, there are two big possibilities for ironic embrace. The first is when one sub-group appropriates a cultural item from another group after the latter has already abandoned it.

When this happens, we often get what I call ha-ha irony. An example – also illustrating my advanced age – might help. At a “noise” show back in the very early ‘90s (noise became fleetingly cool when “alternative” was mainstreamed by bands like Nirvana and Jane’s Addiction) the headliner’s lead screamer wore a New Kids On The Block t-shirt. At this point, NKOTB’s popularity had dried up even among their teen target. The contrast between a defunct teeny-bop group and the aggressive, self-consciously oppositional posturing of noise music was obviously the ironic point. This is a case of ha-ha irony. It’s just a broad joke or gag and in no way even remotely critical. In fact it even has the prototypical joke structure: an unexpected shift in reference or clash of expectations results in humor.

Obviously, this isn’t a full-blown new trend arising out of an old one. Rather it’s a cultural item that typifies the prefab pop trend previously popular among the mainstream appropriated for new symbolic purposes by a self-consciously opposed sub-group. Bearing this in mind, it could be graphed something like this:

This time we start at c, the point on the original graph where the item peaks for the regular joes, and proceed to n. After n the item starts to become a cultural liability for regular joes and the total population partaking plummets to m, which is much less than n. Now the trendsetters can claim the cultural item for ha-ha ironic purposes. The trendsetters, of course, will start to abandon it if the hipsters pick it up at a’, that is, when the hipsters come to see it as a codified ironic strategy. But this case probably wouldn’t get past a’. (Although, an ironic mini-trend did occur in the early ‘90s when noise acts started appropriating the insipid graphics of those new-agey “Smooth Sounds” whale-song albums.)

The NKOTB case involves an item that never went through the full cycle from trendsetters to regular joes. Or rather, it’s an item that the hipsters at the noise show most likely never invested in. NKOTB more or less started out with the regular joes. My guess is that this is often the case with ha-ha irony: the items that get ironically rehabilitated by one sub-group tend to be yanked off the junk heap of another sub-group. In this case, it was the hipsters using teeny-bop detritus to highlight their aggressively oppositional stance to pop music. It was a joke that everybody – even the regular joes once into NKOTB – would get. As a sort of rule, we could say the greater the item’s one-time value to a sub-group, the greater its potential to be used in a ha-ha ironic way by members of a self-consciously oppositional group.

The second possibility for ironic embrace is when one group ironically appropriates a cultural item from another while the latter group is still into it.

Not surprisingly, we usually get ha-ha irony here, too. Consider some clever hip-kids’ “love” of geeky sci-fi/fantasy conventions, like Dragoncon in Atlanta, Georgia where middle-aged IT professionals (that’s actually unfair... young IT pros dig it too) party all night in DIY Klingon armor. These fringe affairs are really, really popular among die-hard fans and represent for them a market for a very specific sort of symbolic capital. For the hip-kids, on the other hand, it’s a lark, a gag, a chance to ogle the arcane rituals of nerd-communion in their proper environs. The hip-kids’ intended audience – the group from whom they seek recognition of the value of attendance – is their buddies, not the group actually attending the convention. Also, the hip-kids' symbolic value comes from a completely different cultural and symbolic arena than it does for the earnest fan-boys.

It’s sort of like cultural poaching for laughs. Once a few ironic trendsetters start doing it, the very next year will see hipsters joining in. We can graph this second ironic configuration like this:

Under certain circumstances, this sort of value relationship can result in what I call ah-ha irony (as opposed to jokey ha-ha irony). We can illustrate ah-ha irony with a slight alteration of the noise band example. Suppose it had been a Nirvana shirt instead of NKOTB. At the time, Nirvana was wickedly popular and symbolized the mainstreaming tendency that allowed noise bands to arise as an oppositional alternative in the cultural marketplace. Nirvana had gone through the full cycle from trendsetter popularity on the periphery to mainstream pop adoration among the regular joes. Wearing a Nirvana shirt – the incarnation of the new ‘90s pop which many hipster fans viewed as a sort of personal cultural theft – would have been a really critical, really exclusionary (in the sense of in-group/out-group defining) statement that few would have gotten. After all, most of the kids at the show had been – or still were – into Nirvana. That ambiguity of intention is sort of the calling card of “good” or at least powerful irony: it should be sneaky or at least not intelligible to all and have some sort of critical quality.

What seems to distinguish these cases of ha-ha and ah-ha irony is closeness to the cultural item. In the ha-ha irony cases, the kids who were being ironic probably hadn’t been part of any of the groups involved in the item’s trend cycle. They were outsiders who could objectify the cultural item. However, in the ah-ha case, it’s trendsetters using something that most hipsters (and they themselves) had recently invested in as an ironic prop.

It’s probably not anything like a rule, but this specific example of ah-ha irony looks something like this:

The item went through the whole curve; the trendsetters and hipsters had been committed to it at one time. Needless to say, ah-ha irony is really rare (or maybe not and I just don’t get it). It’s usually the province of fine artists, motivated by chronic self-awareness and cultural inferiority complexes, which drive them to theoretical, unaesthetic excesses. I know because I was one... probably still am.

Let’s look finally at “rediscovery,” earnest and ironic. Sometimes cultural items come back from the dead. Sometimes the folks doing the reviving are earnest (the Nick Drake revival about 12 years ago and the garage rock revival about 4-5 years ago). Sometimes they’re ironic (disco’s many revivals and Enoch Light). But most of the time, it’s a mix of both (the ‘80s synth-pop sound, particularly in contemporary French and West Coast alterna-pop) and it’s always with different intentions than when the item was actually culturally current.

This graph, like the irony graphs, starts at c and goes through the crash at n. After n there’s a period of cultural hibernation while all of the groups assume their original relative positions. At some point, the item gets picked up by the self-conscious cultural adventurers (earnest indie rockers for Nick Drake, the gay community for at least a couple of the disco revivals) and the cycle starts again.

So why is rediscovery sometimes earnest and sometimes ironic? Well, I think part of it might have to do with uptake among past cultural groups and the perceived genealogy of contemporary cultural groups. Contemporary groups that understand themselves as having “descended” somehow from traditionally oppositional subcultures often approach items from these “related” subcultures earnestly and items from “unrelated” or mainstream culture ironically. Regular joes, since they’re not quite as culturally sensitized or obsessively self-aware as trendsetters and hipsters, generally shoot for ha-ha irony unless the item has already gotten past b’. In that case, it’s no longer really “rediscovery”: the item has been “contemporized” or brought back into currency. (Regular joes that still dig the music they loved in high school – “it’s not about new or old...Aerosmith just made quality rock, man!” – aren’t rediscovering anything... they’re just frozen in a particular cultural period.)

A Last Note

But this whole graph-y, representational thing I have going skirts one obvious and over-talked point about contemporary culture: it seems to be moving faster. The trend circuit from hip to passé to rediscovery is getting quicker and quicker. So much quicker that the whole concept of rediscovery makes less and less sense every day. Something similar is happening to the idea of mainstream; it doesn’t really seem to have the old, easy to poke at stodginess it used to. Actually, it’s pretty hard to even locate in the first place. Why is this happening?

Just speculating here, but pervasive media probably helps. Modern user-tailored, user-driven media like the web is really good at getting stuff from the fringe to the center, from “hip” to “mainstream,” overnight. Stuff that used to take years to bubble to the surface through old media channels now zips up almost instantly in a process of accelerated mainstreaming that calls into question the whole idea of fringe and center, counterculture and mainstream.

But in the west at least we still seem to highly value the idea of oppositional individualism and the autonomy of our choices, of trendsetters, “mavericks” and nonconformists, out there marching to the beat of a different, etc. A significant number of folks in the west – most I’d say – have internalized this cultural value or ideal. Trendsetters and hipsters probably wouldn’t be our culture’s marketing holy grail otherwise.

You put these together – media that rapidly drain oppositional cultural positions of their “outsider,” “in the know” status and an internalized cultural admiration of the “individualist” or the “nonconformist” – and you get accelerating trend cycles. After all, if cultural items come larded with a symbolic value that is partially determined by the item’s prevalence, and modern media provide a fat but highly user-responsive channel to spread the word, then you’ll have to act quickly to stay relevant. In this environment, uptake and abandonment of trends are going to speed up.

Add to the mix cultural industries like film and fashion, which have, to a certain extent, institutionalized ideas of constant opposition, innovation or nonconformity in their marketing and business models, and things really get moving. Taking just one example, the fashion industry is built on the idea of annual overthrow, of mainstreaming (i.e. making passé) last year’s line so this year’s can supplant it. It’s a business model founded on the idea of the incessantly new. Fashion marketing hinges on – and thus amplifies – the desire to be slightly ahead of the curve, to break with the currently mainstream fashion, to be more distinct and “original” (in acceptably fashionable ways) than your peers. In the present media context of almost instant diffusion and accelerated mainstreaming, their business model of providing “the new” and their marketing model of codifying, amplifying and creating a “need” for “the latest” result in accelerating demand that outstrips their creative capacity. The result: unrepentant cultural recycling at a faster and faster pace.

Tuesday, 7 April 2009

Of Market Analogies and Ultimatum Games: the myth of web utopianism

Web utopianism is the idea that the web is somehow fundamentally or essentially a positive force. It’s not just that the web is more important, socially “impactful” or different than other media. Rather, it’s the claim that the web is, by its nature, for the greater good. Many futurists and web pundits seem to push utopianism as the web’s “brand,” effectively the set of concepts, assumptions, implications and preferences making up its popular conception. But if we buy into this brand, this idea that the web is in some way essentially good, we greatly reduce our responsibilities as designers and users. Once we take the web to be simply good in itself, we no longer really have to consider the potentially bad consequences of our creations or actions, by whatever standard.

Often utopianists frame the web as the ultimate market, where great ideas, unique voices, vital information and compelling products rise to meet the true and internally generated needs and desires of fully autonomous, choice-empowered, creative consumers. From this freedom of access and choice and the ease with which we can create, share and elevate content, they draw positive democratic and communitarian conclusions. As the story goes, the web is about cultural disintermediation on a grand scale: anybody can create the next video craze through YouTube or help build a surprisingly accurate reference work like Wikipedia. Powerful cultural gatekeepers, who for years perpetuated pernicious and self-serving social and informational hierarchies, are suddenly irrelevant. What’s more, nobody will miss them. As it turns out, claim the utopianists, markets are essentially better at arriving at truth, quality and beauty than experts ever could be. And markets coordinate on these desirable ends through the individual, undirected, autonomous choices of consumer-creators. Thus, the utopianists conclude, the web is a technically aided manifestation of Democratic ideals. And this is obviously good.

Perfect Markets Meet Imperfect Web

Unfortunately, this view is wrong in the same way superstitions are wrong: its evaluations and claims are based on shaky assumptions about mechanisms. I want to avoid the cultural politics of the issue and look at some of the utopianists’ assumptions. First of all, the positive evaluation of the web as a market seems to follow only by analogy to a specific sort of competitive market, what’s called a perfectly competitive market. This is an idealization of the conditions under which the price arrived at by interacting consumers and suppliers will match the “true value” of a commodity. The idealized conditions are pretty stringent and numerous. For example, you must have a large number of suppliers; no barriers to entry; everyone gets the same complete information; no one turns a profit; no one advertises or markets; each supplier’s output is pretty much individually negligible to the ultimate price; and each supplier’s output is intersubstitutable for any other’s. If these conditions are met, then prices will reflect the “true” value of the product.

The utopianists’ idea seems to be that the web is like a perfectly competitive market because it appears to meet somewhat analogous conditions.
  • Anyone can contribute and huge numbers actually do.
  • Consumers can access any information at will.
  • They can find exactly what they want regardless of how niche it might be and can costlessly choose between options.
  • They can state their opinions, create whatever they want or share their knowledge truthfully or “authentically” without the biasing influence of social pressure.
  • There are few recognizable extrinsic incentives to contribute, which avoids the skewing of contributions associated with profits.
  • Finally, any one contribution is as good as any other and doesn’t really affect the final information. In other words, no one opinion can skew the final collective result because there are so many and there’s always someone willing to refute anyone.
Obviously, it’s a strained analogy. Prices and evaluations are “true” in very different and hard-to-express ways. Settling true evaluations from masses of tastes and opinions isn’t much like setting prices from masses of tastes and desires. Still, something very close to this analogy is a pretty implicit assumption among many web utopianists (and almost explicit in Wikipedia’s “Jimbo” Wales’s public veneration of F.A. Hayek, the great theorist of prices as aggregators of dispersed knowledge). To many utopianists, the web is simply structurally conducive to settling on true information or elevating true quality in much the same way perfectly competitive markets are conducive to settling on “true” prices.

But the web just doesn’t work this way. Perfectly competitive conditions don’t hold together on the web. There are many reasons the web's not a perfect market, but I’ll just look at one big one: the false assumption that people online aren’t subject to social pressure or influence that might skew the collective result. It’s based on the idea that people’s visible actions aren’t informational signals themselves; that people’s choices follow an ideal of narrowly rational, autonomous reflection. Duncan Watts’s much discussed recent research shows that social influence in cultural markets, e.g. an opinion market like a movie review site, actually leads to radically unpredictable (i.e. quality doesn’t predict success), highly unequal (i.e. huge difference between very famous and slightly famous) distributions for cultural items. In a nutshell, famous things get more famous and this is a contingent, path-dependent process which has very limited correlation with actual quality. In other words, the same items might show very different success or evaluation patterns depending on chance uptake events at earlier stages of the process.
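
Watts’s point about path dependence can be made concrete with a toy simulation – a minimal cumulative-advantage sketch, not Watts’s actual Music Lab protocol. Agents choose among items by mixing a fixed intrinsic quality with the current share of previous adopters; the quality values and the `social_weight` parameter are my illustrative assumptions:

```python
import random

def run_market(qualities, n_agents=2000, social_weight=0.9, seed=None):
    """One 'world': each agent picks an item with probability mixing
    intrinsic quality and the item's current adoption share (social proof)."""
    rng = random.Random(seed)
    counts = [1] * len(qualities)  # seed each item with one early adopter
    for _ in range(n_agents):
        total = sum(counts)
        weights = [(1 - social_weight) * q + social_weight * (c / total)
                   for q, c in zip(qualities, counts)]
        choice = rng.choices(range(len(qualities)), weights=weights)[0]
        counts[choice] += 1
    return counts

qualities = [0.2, 0.4, 0.6, 0.8]  # fixed "true" quality of four items
worlds = [run_market(qualities, seed=s) for s in range(8)]
winners = [max(range(len(w)), key=w.__getitem__) for w in worlds]
print(winners)  # which item becomes the "hit" depends on early, chance adoptions
```

Because early, random adoptions feed back into later choices, re-running the same market can crown different winners: success is path-dependent and only loosely coupled to quality, which is the heart of Watts’s result.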

Clearly, this suggests that the conditions for perfect competition aren’t met in markets with significant signaling, like the web. Markets like this won’t necessarily result in the “best” or "truest" rising to the top automatically because the assumption of autonomous decisions just doesn’t hold. As Clay Shirky has noted, the web makes socialization, communication and coordinated activity "ridiculously easy." I'd say it's more than easy, it's pervasive and inescapable. Stretching the economic metaphor a bit, whenever coordination and communication are cheap, information "cartels" form reflexively and not just when people are positively motivated to alter the competitive landscape to their advantage. People pool and align their opinions (and thus productions) whenever communication is possible, skewing the market. They look to others' actions – not just to their own fact-based judgments of quality – to help them make decisions. If something starts to take off, this is “social proof” of value, an assumed signal of quality, and others follow suit. Seeing what does well, more of the same is produced. The social web is all about such signaling – sharing your actions and choices – and thus it's rife with social influence. It really can’t be a perfectly competitive market. (This effect is similar to, but distinct from, so-called herd behavior and informational cascades, which also make perfect competition difficult.)

Also, the “Democratic ideal” utopianists claim the web promises is the political analog of the perfectly competitive market: the “best” or “truest” result arises when no special interests have undue influence, when tastes, knowledge and needs can aggregate, free from the distorting interests of profit, influence and pooled power. Utopianists point to knowledge aggregators like Wikipedia as prime practical examples of this idea. But Wikipedia is clearly not a perfectly competitive knowledge market or democracy. Rather it’s an oligopoly, or its political analog an oligarchy. A core group of contributors, the editors, has considerably more power and wields considerably more social influence (i.e. their actions are neither negligible nor intersubstitutable) than all other contributors. “Jimbo” Wales’s admiration of Hayek notwithstanding, Wikipedia is not a pure democracy. It’s an oligarchy. And oligarchies do not structurally result in the “truth” the way democracies or perfect markets supposedly do. Wikipedia is successful (to the degree that it is) not because of the nature of wikis or the web, but rather because of the oligarchy’s ability to manage the negative effects of social influence, informational cascades and bad behavior. And this is arguably the result of social influence used for positive ends, not just the structurally positive force of a perfect "knowledge market" or informational democracy (added April 8: the importance of "community" to the success of wikis is made very clearly by Clay Shirky in Here Comes Everybody and by Cass Sunstein in Infotopia).

Ultimatum Games and Two Types of Prosociality

Well, the utopianists respond, the relative success of Wikipedia’s oligopoly/-garchy suggests that maybe the web isn’t structurally good in the sense that truth, originality or the “best” stuff succeeds solely in virtue of the implementation. Maybe the web just helps people’s native goodness, prosociality and urge for reciprocal interaction to flourish. The web makes it easier for folks to coordinate, create and realize their intrinsic desire to be, not just social, but prosocial. So the necessary goodness of the web is its ability to amplify and enable users’ intrinsic prosocial motivations.

However, this weakened, indirect argument – if used for utopianist ends – founders on something like social influence just as the stronger perfect competition argument does. When people are susceptible to social influence – informational cascades, herd behavior, Watts’s social signaling and conformity or esteem effects – bad behavior or lock-in of less than optimal norms often results. After all, there are two distinct ways to understand “prosociality”: as a desire to share, strive and be nice or as a desire to simply do as others expect you to do. Both ways are about deference to your social group. But only the former is automatically positive in the way web utopianists seem to assume. On the latter, if all of your friends are jerks, you think they expect jerkiness of their peers, and you want them to like and esteem you, then you’ll probably be jerky, too. You’re deferring to what you take to be the expectations of those you hang out with.

Against this less than estimable version of “prosociality” the utopianists often claim that research shows we’ve a hard-wired preference for good prosociality, that we’re intrinsically motivated to be prosocial in the positive sense. In particular, they often point to results from Ultimatum games, which purport to show we’ve an evolved preference for flat fairness, what the behavioral economist Herbert Gintis calls “strong reciprocity.” Ultimatum games have two players, Proposer and Responder, and a set sum of “money,” say, $4. The Proposer can offer any sum to the Responder. If the Responder accepts, they both get their cuts, but if he rejects, neither gets anything. Research indicates that Responders often reject what we would consider unfair offers and Proposers often start with close to a 50-50 split. In other words, Responders are willing to sacrifice potential gain if the Proposer is unfair and many Proposers seem to immediately offer the fair split. Supposedly this shows that people have a simple preference for fairness, which the utopianists argue is a hard-wired, positive-prosocial drive.

Obviously, we’re social animals and thus have an evolved knack for some sort of prosociality. But we have to be clear about what’s actually exhibited by the Ultimatum game. Drawing on a literature review and critique by Cristina Bicchieri and some ideas of Ken Binmore, we can see that some Ultimatum game experiments suggest these results may be more an effect of the perception of the situation or context than of a simple preference for fairness. People are conditionally fair depending on their expectation of others’ expectations and often on what they think others will accept given these expectations. It’s not a matter of fairness, per se, as much as it’s a matter of what others expect and thus what you can get away with.
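
To see how “fair-looking” offers can fall out of pure self-interest plus local norms, here’s a toy sketch (my own illustrative model, not Bicchieri’s or Binmore’s): the Responder’s chance of accepting rises steeply around a culturally set rejection threshold, and a payoff-maximizing Proposer simply adapts to that norm:

```python
import math

def acceptance_prob(offer_share, threshold=0.3, steepness=20.0):
    """Chance a norm-sensitive Responder accepts a given share of the pot,
    rising steeply around a culturally set 'fair enough' threshold."""
    return 1.0 / (1.0 + math.exp(-steepness * (offer_share - threshold)))

def best_offer(pot=4.0, threshold=0.3):
    """Offer share that maximizes a purely self-interested Proposer's
    expected payoff against such a Responder."""
    offers = [i / 100 for i in range(101)]
    return max(offers, key=lambda s: (1 - s) * pot * acceptance_prob(s, threshold))

# Strict rejection norms pull offers toward an even split; lax norms let
# the Proposer keep more. Same Proposer, different norms, different "fairness".
print(best_offer(threshold=0.3), best_offer(threshold=0.1))
```

The Proposer’s offers track the local rejection norm, not fairness itself – exactly the “what you can get away with” reading.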

For instance, cross-cultural studies [pdf] of Ultimatum games suggest that people’s rejection rate is culturally determined. In some societies, rejection is very rare regardless of the offer. This suggests that people have a preference for meeting situationally relevant expectations rather than a preference for simple fairness. It’s the norms in play that matter rather than some absolute or hard-wired standard of fairness, and this influences both what Proposers expect to be acceptable and what Responders expect to be offered. Similarly, Ultimatum games with asymmetric information suggest that Proposers are generally more interested in appearing to meet salient fairness norms – thus decreasing the likelihood of rejection – than in actually being fair. Consider the case in which the Proposer knows that the chips used in the experiment are worth 3 times more to him than to the Responder and he knows that the Responder doesn’t know this. If we all simply preferred being fair, as opposed to appearing fair to hedge our bets going into interactions, the Proposer should most often offer the Responder 75% of the chips. That’s the fair-value split in money terms. As it is, Proposers in this scenario offer slightly less than 50% of the chips on average. Obviously, we don’t necessarily just prefer fairness for its own sake, which is what’s assumed by the positive-prosociality idea. Rather we prefer to follow whatever norms we take to be expected by others – the less than estimable prosociality.
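
The arithmetic behind that 75% figure is worth making explicit (a worked example using the 3-to-1 chip valuation from the study):

```python
def fair_chip_offer(proposer_multiplier=3.0):
    """Chip share the Proposer would offer if he truly wanted an equal
    *money* split, given chips are worth `proposer_multiplier` times more
    to him. He keeps share p, where m * p = 1 - p, so p = 1 / (m + 1);
    the Responder gets the rest."""
    return 1.0 - 1.0 / (proposer_multiplier + 1.0)

print(fair_chip_offer())     # 0.75: a genuinely fair Proposer offers 75% of the chips
print(fair_chip_offer(1.0))  # 0.5: with equal valuations, fair means a 50-50 split
```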

So if it’s the case that we’re not so much driven by simple, positive prosociality as we are by the desire to do as our peers expect of us – as the situation-relevant norms suggest we should act – then the prospects for this weakened form of web utopianism aren’t all that great either. That is, if the utopianists’ argument is that the web is essentially good because it allows the intrinsically positive-prosocial motivations of agents to flourish unhindered by the overhead of real-world socialization, then it’s founded on a mistake. Our prosociality isn’t as normatively rosy as they assume. It’s a desire to do as we think others expect of us, which is not necessarily good in the way utopianists need it to be for their argument to work.

No ‘Topias

In this post, though we avoided the political versions of web utopianism, we’ve discussed two of the most interesting non-political strains. They clearly don’t hold water. The web isn’t the wholly positive boon to humanity the utopianists want it to be. But neither is it the great destroyer of culture the panicky, proudly paternalist web dystopianists take it to be (aside: avoid Lee Siegel’s Against the Machine. It’s one of the few books I’ve read that actually deserves to be called a rant or a screed. It’s like the web insulted his mother or something). The web is just a massively influential tool – or fact – with impossible-to-predict social and cultural impact. Some of the impact will be positive, some negative and some both on different time scales. We just don’t know. But utopianism of the sorts considered above assumes that the web just is positive. Taken seriously, this assumption mitigates our responsibility as designers, creators, sponsors and consumers of content and experiences on the web. In reality, we don’t know what the real social and cultural results of our actions will be. But we need to act as if they could be negative so that we feel compelled to strive for – not just expect – the positive in the long term. Doing anything else is irresponsible no matter how nice or progressive it would be to believe the web a simply positive force.

Wednesday, 18 March 2009

Why Does Fad = Bad? Velocity, Autonomy and Constancy

It’s a fact that trends end, but most accounts of the uptake of cultural artifacts don’t really get into the end too much. They focus on the dynamics of acceptance or consumption, not really touching on how these very same dynamics can lead to the cultural item being very quickly dropped. Trends, after all, are only as stable as people’s beliefs about other people’s preferences and expectations.

Which brings us to fads. Fads can be thought of as cultural trends – tastes, styles, fashions, attitudes, jargon, slang, etc. – that peak rapidly and then quickly die out. We’re all familiar with these: a cultural item – consider those annoying rubber wrist bands from a couple of years ago – suddenly shoots up in prevalence, appearing everywhere at once, and then suddenly pops back out of sight. So, fads by definition are trends that end quickly, but speed of uptake is obviously a key signal. In this post we’ll look at two different accounts of the relation of speed to ends and how our evaluation of fads differs from our evaluation of other trends. Finally, we’ll try to figure out what is “symbolically” at stake in the distinction between a trend and a fad and how this makes the latter automatically less worth joining.

First of all, Luis Bettencourt presents a model of the trend life cycle in which agents use a trend's relative speed of adoption as an indicator of viability and value. The faster the speed of adoption relative to other competing items, the more attractive an item is. When the speed slows, as it inevitably will given a finite population, agents will begin to abandon the trend provided the speed falls below an individually determined critical level. In this model, speed is a positive factor as it indicates the potential value, a form of “social proof,” of a trend. But it also inevitably brings about the trend’s end. Because of the way the dynamics are structured, trends that move fast will move even faster, quickly bringing about their own crash. These are clearly fads.
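Bettencourt's actual model is more involved than this summary, but the core feedback loop – adoption speed attracts new adopters, and adopters defect once speed falls below a personal critical level – can be sketched as a toy simulation. Everything here (the linear join probability, the uniform quit thresholds, all parameter values) is my own illustrative assumption, not the published model:

```python
import random

def simulate_trend(pop=1000, steps=60, gain=0.004, seed=42):
    """Toy speed-driven adoption: the probability of joining scales with
    the previous period's adoption speed; adopters abandon the trend once
    speed falls below their personal critical level."""
    rng = random.Random(seed)
    quit_level = [rng.uniform(0.0, 5.0) for _ in range(pop)]  # per-agent critical speed
    adopted = [i < 5 for i in range(pop)]  # a handful of seed adopters
    speed = 5.0                            # initial adoption speed
    history = []
    for _ in range(steps):
        joins = 0
        for i in range(pop):
            if not adopted[i] and rng.random() < min(1.0, gain * speed):
                adopted[i] = True
                joins += 1
        speed = float(joins)  # this period's adoption count is the new signal
        # defection: drop the trend once its speed looks moribund
        for i in range(pop):
            if adopted[i] and speed < quit_level[i]:
                adopted[i] = False
        history.append(sum(adopted))
    return history

curve = simulate_trend()
# rapid rise, then collapse once speed inevitably slows in the finite population
```

Running it produces exactly the fad shape described above: adoption accelerates because speed itself is attractive, the finite population eventually starves the speed signal, and the same agents who piled on then abandon the item almost at once.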

Contrary to Bettencourt’s model, Jonah Berger and Gael Le Mens claim [pdf] that a high perceived rate of adoption of an “identity relevant” cultural item decreases our likelihood of adopting or consuming it. Identity relevant cultural items are those that can be thought of or used by potential consumers as a means of communicating some desirable or esteem-generating information about who they are or – more likely – who they’d like others to think they are; identity relevant cultural items are conventionally meaningful signals of group-specific tastes, social affiliation, status, etc. If adoption is too fast, the item might be a fad, which is a bad investment in the social identity stakes because, by definition, fads can’t sustain popularity.

An important difference between the two views is obviously the latter’s focus on “identity.” In Bettencourt’s model, the faster the better for all items; it’s the fact that velocity is unsustainable in finite populations that leads to collapse. In Berger and Le Mens’s view, beyond a certain velocity threshold, the faster the worse, at least for publicly visible cultural items with identity implications. They argue that it’s our concern for the “symbolic value” cultural items may have for the development and maintenance of social identity that explains the desire to avoid fads. High velocity may decrease an item’s attractiveness because the item could be just a “flash in the pan,” its popularity fleeting. In other words, faddish cultural items are a bad investment as they don’t maintain symbolic value. So, in both models, speed is information, but in Berger and Le Mens’s, identity concerns decrease the item’s attractiveness (i.e. likelihood we’ll partake) as speed increases.
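The contrast can be put crudely as two attractiveness-of-velocity functions. The functional forms below are purely my own illustrative assumptions (neither paper specifies these equations): in the Bettencourt-style view attractiveness rises monotonically with speed, while in the Berger and Le Mens-style view it rises only up to a perceived-faddishness threshold and falls thereafter:

```python
def attractiveness_monotone(speed: float, k: float = 0.1) -> float:
    """Bettencourt-style: faster relative adoption is always more attractive."""
    return k * speed

def attractiveness_identity(speed: float, threshold: float = 10.0,
                            k: float = 0.1, penalty: float = 0.3) -> float:
    """Berger/Le Mens-style: for identity-relevant items, velocity beyond
    a threshold signals faddishness and cuts attractiveness."""
    if speed <= threshold:
        return k * speed
    return k * threshold - penalty * (speed - threshold)
```

Below the threshold the two views agree; above it they pull in opposite directions, which is the whole dispute in miniature.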

Their goal is to show that we judge potential faddishness by velocity and that we avoid fads because they're viewed negatively, which might be tied to the fear that they don't return symbolic value. Though they never really get into why we view fads negatively, I feel that we can say something stronger than that fads are unattractive because they’re bad social identity investments. Indeed, it seems that public association with a fad in an identity relevant domain may ultimately deliver disvalue as opposed to just decreased value. Not only won’t it add to your social identity; in some situations it might actually damage it. It’s often embarrassing or somehow dis-estimable to have been caught in a fad, to have publicly invested in or consumed some short-lived and now unpopular cultural item (if there are any pictures floating around the internet of you earnestly rocking a white Miami Vice blazer and woven loafers, you know what I mean).

So, I think that with a little reflection most will admit that getting publicly caught in a fad is somehow embarrassing or to be avoided, i.e. a disvalue. Can we articulate what it is about faddishness that actually bugs us? Why should the perception of an identity relevant cultural item’s faddishness make it less valuable or even potentially disvaluable? If it’s just that “flashes in the pan” – items that don’t sustain popularity/potential symbolic value – provide limited or no return to social identity on investment, why do we actually feel embarrassed about fad association as opposed to just annoyed at the wasted time? What is symbolically at stake in the distinction between fads and trends?

Autonomy and Constancy

Our social identity is impacted – often mediated – by the cultural items we consume or otherwise associate ourselves with. Cultural items have symbolic value related to what consumption of the item conventionally means or communicates. For example, the shoes you buy do more than just protect your feet or give you extra purchase on slippery sidewalks. They also communicate something about you, your tastes and often your position in a socio-cultural taxonomy (you’re a hipster in Converse or narrow slip-ons; an urban kid in puffy, tricked out Nikes; etc.). They’re public signals with conventional cultural connotations that you can exploit to manage your social identity. It’s these conventional connotations, the realm of taste and tastes, that confer the symbolic value. We manage our social identities partially by managing the set of cultural items we publicly associate ourselves with.

But clearly it’s not the cultural item alone that confers the symbolic value we exploit in managing perceptions of social identity. Perceptions of our motives and the longer term regularity of our social identity also impact our ability to wring symbolic value from a cultural item. For example, I had a friend who rapidly cycled through “personas,” going from punk to skinhead to b-boy to truck driver. With each new identity came a boatload of highly appropriate, conventionally meaningful gear, slang and comportment. But no matter how cool or dead-on they were for the current identity, he could never really wring any value from them. In fact, given the suddenness of each of his transformations, they just seemed like cynical accoutrements, rendering symbolic disvalue and actually damaging his social identity to those he presumably most esteemed. The more he desperately tried to construct a "cool" or valuable social identity, the less likely it became that he actually could. Clearly, the items' perceived appropriateness or continuity with past social identity is important. More generally, perceptions of the motives for associating oneself with cultural items affect the items' symbolic value (and possibly one's larger social identity). If it seems out of character or if the social identity motive is too obvious, we likely won’t be able to garner any actual symbolic value from them no matter what they are.

So, in managing our social identities, we also have to manage perceptions of our management; there are “perceived motive” conditions on the symbolic value we get from some cultural item in an identity relevant domain. More specifically, there are autonomy and constancy conditions, which, if not met, decrease or possibly reverse the symbolic value available from a cultural item. Fads, pretty much by their nature, trash these conditions. Before saying why, let me look a little more closely at the perceived motive conditions, which must be met for an item to confer symbolic value.

Sometimes people try too hard to impress. They obviously speak, dress or comport themselves in what they assume relevant others consider the “cool” way. People like this are often called posers (or poseurs... but when I spell it that way I feel like one). They could be doing the same thing everybody else is doing, but their identity relevant moves are perceived as desperate, “inauthentic,” or even cynical.

As Jon Elster says, “nothing is so unimpressive as behavior designed to impress.” I think this “general axiom” extends to our public association with cultural items as well. Of course, we all recognize that our tastes are shaped by our in-group and larger culture. If not for the conventions and norms against which we evaluate cultural items and social identity, “symbolic value” really wouldn’t exist. But when identity relevant choices appear unduly concerned with others’ perceptions they start to lose value and may even damage social identity. When it appears that you’re dressing or talking a certain way solely out of concern for others’ perceptions of you, you may seem, for example, pretentious, conformist or cynical as opposed to cool. In short, in the West at least, we disvalue identity relevant moves that appear to be completely externally or cynically motivated, while we greatly value those that appear to be internally, “authentically” or autonomously motivated.

When it comes to your social identity, you don’t want to appear to be trying on personas. This is closely related to autonomy, but conceptually distinct. You can switch your style daily, trying to align yourself with various cultural groups, in which case you’re a poser or social butterfly. But you can also switch daily just because you’re an odd loner, a kook. In this case, you’ve high autonomy, but low constancy. They’re distinct, but either way fickleness when it comes to identity is frowned upon. If you give the impression that you’re actively searching for an identity, it’s unsettling: it feels stagey, shifty and possibly cynical.

Though research suggests that our perceptions and expectations of singular self-identity are sort of illusory – more a product of self-narration and fundamental attribution error than some continuous, thing-like Self – we clearly assume and value the idea of constancy, consistency and continuity when it comes to social identity. Pomo identity theories notwithstanding, fickleness, or publicly searching for some sort of social identity, is often interpreted as dishonest, “inauthentic,” self-defeating, cynical and sometimes even pathological. It’s probably a cultural artifact of the West, but we really like to think of ourselves and others as Selves in the ideal sense. Acting otherwise can turn social identity moves on their head, making them seem like a sham and destroying any potential value.

Why We Avoid Fads

We are socially incentivized to manage our social identity management. We seek to construct social identities that are meaningful for and align with existing socio-cultural groups. But in order to create a valuable social identity, we have to construct it in a way that gives the impression that our choices are the autonomous decisions of a constant, stable, already fully constituted social identity. Failing to give the impression of autonomy and constancy in our decisions decreases the “value” of social identity relevant moves.

Fads are culturally salient; their sudden appearance everywhere gives them high visibility and holds our focus. Because of our (probably cultural) bias toward autonomy and constancy – that is, for evaluating identity moves against the ideal of individual, reflective choice by a singular Self – we interpret these waves of uptake and failure in individual terms. After a fad crashes, our bias pushes us to interpret it negatively as a case of social influence over autonomous decision, as a case of herd mentality. The speed of the descent we interpret as social proof of the emptiness or valuelessness of the fad. But the speed of the ascent we interpret as the dis-estimable actions of non-autonomous, inconstant conformists: it’s the result of unreflective "bandwagoneers," people with no strong Selves embarrassingly misled by social influence.

Provided you want to optimize the potential symbolic value from a cultural item, you had better take into consideration whether or not it’s a fad. But, that’s not the only reason you should avoid them. Publicly associating with fads may damage your larger social identity. That is, association with a fad is interpreted as a failure of the autonomy and constancy conditions and specific failures of general conditions reflect on all of your identity relevant decisions. Publicly joining a fad changes the way others interpret your motives for all social identity relevant decisions; your motives become suspect not just in this case, but to some limited degree in all past and future cases. Provided information is publicly available, association with fads incrementally whittles away at the perceived "authenticity," "autonomy" and, effectively, "value" of your social identity. So, it’s a wise strategy to wait and see. Failing that, use the information at hand to judge the probability that some cultural item is a fad. Keep your eye on the speedometer; velocity of uptake is the most salient indicator of a fad pre-crash.

Thursday, 5 March 2009

Nice For a Price: esteem as a social incentive

Daily, enormous numbers of people interact in digitally mediated social groups. They usually behave prosocially, that is, they generally play nice with each other, often cooperate and sometimes even collaborate in the creation of a social good. As a result there's a lot of discussion around the sorts of stable social contracts groups can coordinate on; how new tools and designs for getting together impact and are impacted by these possible social contracts; and what sorts of incentives or motivations individuals have to play nice in the first place given that cooperating isn’t necessarily as easy, cheap or materially profitable as being a jerk.

This (way too long) post is all about motivations to join a group and act collectively. It’s the second post in a series (first one) trying to show that the recent focus on the deeply social or other-regarding nature of social and collective action groups online is too one-sided, simplistic and, frankly, ideological. Certainly we’re social animals with frequently benevolent motivations. But that doesn’t mean that we’ve no self-centered, broadly (as opposed to narrowly or materially) rational motivations. Extrinsic incentives, which we scheme to “maximize” in our limited way, abound in social situations. However, they’re mostly non-material things like esteem – the positive evaluation of your actions by the norm-based standards of your group – and status – deferential position in a social group. These are real motivations. Even Yochai Benkler, a genuine web utopianist, mentions them briefly in his Wealth of Networks. He says in passing that they’re part of our real motivational repertoire. But generally these issues are treated as either a dirty secret or beside the point. They’re neither. We need to really look at how these inescapable self-regarding motivations can help us be more prosocial and not just blame them when we’re antisocial.

Since it’s the most completely worked out account of the effects of social functionality on group creation and maintenance, I’ll focus on Clay Shirky’s excellent Here Comes Everybody. Smart as he is, I’m sure he understands the point about the value of broadly instrumental motivations for prosocial behavior. But the hero of his book is the genuine prosociality of humans freed from the limitations of “real world” organizational overhead by digital social functionality. And though he addresses self-serving, antisocial behavior like disinhibition and free-riding, he avoids talk of self-interested, extrinsic motivations for collective action almost entirely, focusing instead on intrinsic motivations such as vanity, self-esteem, true interest in the public good and simple innate pro-sociality. I agree with most of his conclusions, I just think he left out a huge factor that should inform thinking and design around online sociality.

Two Motivation Problems

There are two big problems of motivation here that are conceptually, but probably not practically, separable. First, there’s the motivation connected to the collective good some group’s trying to bring about, say, a high-quality, UGC encyclopedia like Wikipedia. Second, there’s the motivation to play nice, to make your contribution to the collective good you value in a way that the group approves of, like sticking to citable facts as opposed to stating an opinion when adding to a Wikipedia entry.

Riffing on a very useful distinction made by Clay Shirky in his latest book, we can divide our motivations between the group’s promise, whatever collective good or goal it’s trying to achieve, and the bargain, or the social contract within or through which the goal is to be achieved (he also distinguishes the tools, or the actual functionality the group employs to achieve the promise, but that’s for a different post). For Shirky, if you care about the goal embedded in the promise – e.g. collectively create a quality, UGC encyclopedia – and consider it actually attainable as it’s implicitly “stated,” then you’ll be motivated to contribute. Indeed, Shirky explicitly considers motivation only in regard to the promise: “The promise is the essential piece, the thing that convinces a potential user to become an actual user.”

In practice, it’s not so easy to separate the promise from the bargain. But in theory we can say that the promise is normatively degenerate. That is, you can achieve the promise through a variety of bargains. Although the details of the bargain have a huge impact on the viability of the promise, you could theoretically develop a quality, UGC encyclopedia without the exact same contributor/editor structure or implicit/explicit rule sets operating on Wikipedia. Again, they’re theoretically separable but probably not practically separable; it’s most likely the case that people don’t even know they “have” a goal or value the content of the promise until it’s articulated to them via the specific implicit promise couched in terms of some tentative, operating bargain. That is, the actual “articulation” of the promise in the context of the bargain probably creates people’s desire to reach the collective goal as much as it meets it. Also, bargains tend to be continually re-negotiated during the process of actively pursuing the promise.

Shirky would undoubtedly disagree, but it seems like the bargain impacts overall motivation in addition to the motivation arising from the promise. If you value the promise and think this group’s specific bargain is a decent way to achieve it, then you’ll be motivated to join this specific group or contribute to this particular collective good. But if the social environment in which you can contribute to your genuinely desired social good blows, then you’ll likely have to be a lot more motivated by that good to stick around. Similarly, if you're only moderately motivated by the promise, but the bargain offers something exciting in itself – say, the public adoration of your peers – then that might be enough to get you to contribute when you otherwise wouldn't. Clearly, the bargain impacts overall motivation, positively or negatively, though some initial interest in the collective goal is probably necessary in the first place.

Anyway, one of the big arguments of Shirky’s book is that today’s social functionality lowers the cost of contributing and collaborating to such a degree that people motivated by the same promise or goal – latent groups to use Shirky’s Mancur Olson-inspired term – can actually get together to do something about it. Even relatively weak goal motivation is no longer blocked from actually resulting in action since the overhead associated with coordination has dropped so dramatically; thanks to social functionality like wikis, blogs, tagging, cell phones, etc., it’s really “easy” to get dispersed people together to focus on some collectively defined good.

However, since the promise is “normatively degenerate,” i.e. can be achieved by means of a number of distinct social contracts or bargains, we still need to consider people’s motivation to comply with a specific bargain given the goal. What would make people choose one Bargain over another? Does the nature of the Bargain do anything to the overall motivation to contribute or cooperate? To answer these, we first need to say what a Bargain is. Fudging a little, we can say that Bargains, as little social contracts of sorts, are collections of norms.

Why Comply?

Social norms are the implicit (but they can become explicit like some laws or codes) interaction rules that keep social groups from deteriorating into antisocial free-for-alls. They turn potentially state-of-nature, every-man-for-himself interactions into coordination games. Humans are social animals, but that doesn’t mean we always play nice. If it did, we’d rarely see any groups fall apart. Everybody would play nice by their group’s standards. As it is, though, we see a lot of antisocial, anti-normative behavior in contexts in which prosocial norms would be appropriate, particularly online. This suggests that we have a conditional preference for following norms. If the conditions are met, we play by the norms, otherwise it’s a melee. Obviously, the conditions under which we can generally induce a preference for social norm following on the part of most users should impact our designs for social spaces online and elsewhere.

Just to solidify everything, I’ll borrow (with slight fudging/modification) Cristina Bicchieri’s idea that there are three types of conditions which have to be met for someone to prefer following a norm in a given situation.

  1. Empirical Condition: Do you expect the norm to be observed by most people in this sort of situation?
  2. Normative Condition: Do you believe that most people expect or prefer you to follow it in this sort of situation?
  3. Motivational Conditions: There are several of these and they relate to your reasons for complying given the truth of 1 and 2 above.
  • Fear: Is your compliance solely based on fear of negative sanctions for non-compliance?
  • Esteem: Is your compliance based on positive sanctions like praise and esteem?
  • Reasonableness/Internalization: Is your compliance based on your assessment of the reasonableness of the norm or possibly on a non-reflective, conditioned expectation?

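Fudging further, the three condition types can be encoded as a simple predicate. This is just a schematic restatement of the list above – the field names are mine, and Bicchieri's account is far subtler than a boolean check:

```python
from dataclasses import dataclass

@dataclass
class NormSituation:
    """Bicchieri-style conditions on preferring to follow a norm, simplified."""
    expects_others_to_comply: bool       # 1. empirical condition
    believes_compliance_expected: bool   # 2. normative condition
    fear_of_sanctions: bool = False      # 3a. motivational: fear
    seeks_esteem: bool = False           # 3b. motivational: esteem
    finds_norm_reasonable: bool = False  # 3c. motivational: reasonableness

def prefers_to_follow(s: NormSituation) -> bool:
    """Both existence conditions must hold, plus at least one motivation."""
    existence = s.expects_others_to_comply and s.believes_compliance_expected
    motivation = s.fear_of_sanctions or s.seeks_esteem or s.finds_norm_reasonable
    return existence and motivation
```

The structure makes the design point explicit: a social space can be full of esteem incentives, but if users don't believe others will comply or don't believe compliance is expected of them, the norm simply doesn't exist for them.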
The first two are necessary existence conditions for the norm. If either goes unmet, even if there's ostensibly a norm in place you won’t follow it because either you don’t think everybody else will or you feel it’s not really expected of you. In any given situation the norm may not be salient, and thus people might not know they should be following it. So, designs for spaces in which you want users to coordinate on a cooperation-inducing social contract should include provision for making norms salient (e.g., user-rating of UGC, contribution sorting by quality, etc.). But salience isn’t really what interests me here. Let's look at the Motivational conditions, which are about the reasons for personal compliance provided the Empirical and Normative conditions are met. If you think we just have a basic preference to do as others do regardless of our relationship to them, then consider these supplementary.

Compliance out of fear of negative sanctions, from frowns to finger wagging to expulsion from the group, can be very powerful. Often people play by the rules because of perceived threat of force, even if it’s just social force in the form of conventionalized displays of displeasure (reprimands from admins, poorly rated UGC, banishment, etc.). This is the most widely recognized motivation in the literature. Clearly, it only operates under conditions of at least partial transparency or incomplete anonymity and some sort of publicity or public availability of contributions.

Viewing the norm as reasonable or maybe even internalizing it, renders compliance largely non-reflective. You don’t act out of fear or positive incentive, you just obey the norm. This motivation doesn’t have any observation conditions on the situation; you’d obey the norm in totally anonymous interactions.

Esteem-motivated compliance is a little different from the other two. In this case you obey because obeying results in a personal gain or good in addition to whatever motivations you take to be intrinsic (e.g. general preference for conformity). You get something of value from others – esteem – for acting appropriately or contributing at a high, group-defined standard. The esteem motive is related to the fear motive, particularly if we consider disesteem a negative sanction. And like the fear motive it requires limited anonymity and publicity of contributions.

If esteem is a sought after item in limited supply, then it can add to the overall motivation one has for contributing to the larger social good defined by the group’s Promise. If complying with a social contract and contributing at a high level relative to the group’s norms can generate esteem, then people already motivated by the group’s goal have additional motivation to work. They have been significantly incentivized beyond the presumably prosocial incentive of the collective good.

Even if the collective good is materially negligible or even costly to the individual, the esteem motivation associated with group norm compliance can incentivize behavior to bring about the collective good (provided overall motivation can be the aggregate of promise-based and bargain-based motivations). Basically, implementing the social contract in such a way that esteem can be accumulated (limited anonymity and institutionalized feedback channels, for example) can make selfish jerks work prosocially. However, it also makes people for whom the collective good is genuinely motivating even more motivated.
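If overall motivation really is an aggregate of promise-based and bargain-based components, the point can be put in one line: even a contributor who doesn't value the collective good at all may contribute when the bargain's esteem payoff outweighs the cost. The linear aggregation and the numbers below are, of course, illustrative assumptions rather than a real utility model:

```python
def will_contribute(promise_value: float, esteem_payoff: float,
                    contribution_cost: float) -> bool:
    """Contribute when aggregate motivation (promise + esteem) beats the cost."""
    return promise_value + esteem_payoff > contribution_cost

# A "selfish jerk": no interest in the collective good, but esteem pays.
jerk_contributes = will_contribute(0.0, 2.0, 1.0)
# Someone genuinely motivated by the promise gets even more motivated
# once the bargain adds an esteem channel on top.
```

Note the design corollary: raising the esteem payoff (feedback channels, limited anonymity) moves both the jerks and the genuinely prosocial over the contribution threshold.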

Obviously, there is considerable space for trouble here. Unethical, mistaken and outmoded norms can easily be perpetuated by esteem motivated compliance. Agreed. Our task as designers and theorists in this space is to figure out how these mechanisms work and whether or not they can be reliably used for prosocial ends. That's why I think the bias for consideration of intrinsic and clearly prosocial motives only is shortsighted. Esteem and other extrinsic non-material motivations operate in all social groups. Unless we face our less social, selfish selves and try to understand how they operate, we can't even begin to make design decisions that will allow us to channel this ignoble element in our character for prosocial – or at least not antisocial – ends.

A Note On Near Anonymity, Limited Publicity and Esteem... skip at will

You might think that the near anonymity and volume of contributions of UGC creations like Wikipedia would destroy the esteem motivation: if next to nobody knows who you are and the volume of contributions mitigates the publicity any specific addition gets, then no esteem accumulates. I don’t think this is the case for several reasons.

For one thing, a study of handwashing habits in public restrooms in New York City showed that the mere presence of an anonymous stranger increased handwashing from 40% to 80% (Munger, K. and S. J. Harris. 1989. “Effects of an Observer on Handwashing in a Public Restroom.” Perceptual and Motor Skills 69(3):733-734). The anonymity of the esteemer doesn’t seem to matter, but most importantly it doesn’t seem to matter that the esteem-seeker is anonymous as well. The mere fact that someone – anyone – could be judging, together with the normatively charged public situation, influences behavior regardless of anonymity.

Also, in many esteem situations, relative anonymity and obscurity actually heighten the potential for esteem. Basically, in many situations in which esteem is at issue, the potential esteemer judges not just the action but also the disposition. If calculation is obvious and the desire for recognition glaring, then you’re likely to give less esteem than if the estimable act occurred “for itself,” as it were. The public forum in which esteem is judged and the conditions under which credit is claimed ("I did this" or "I did this for the group") have an impact on this dispositional judgment. Too public and too obviously about you and not the good, and it might be showing off. But in conditions of limited publicity and partial anonymity your estimable act gives the impression that you’re doing it for its own sake. That is, an estimable act in situations where the probability of being seen and personally esteemed is relatively low (and all the other members know this) renders much greater esteem if it is in fact seen and identified. In this situation, it looks like the act arose from a natural disposition and not esteem seeking. Near anonymity and limited visibility in some sense optimize esteem, in the words of Geoffrey Brennan and Philip Pettit.

Finally, esteem doesn’t have to circle back and add to our personal pool of accumulated esteem for us to feel it in some measure. That is, esteem attaching to your online identity, which nobody knows is really you, has its own reward. Indeed, we don’t always want the esteem we receive in one arena to mingle with (or be contaminated by) the personal esteem (or disesteem) earned in another. It’s a form of esteem management. Additionally, if Kai Spiekermann is right and the esteem others are willing to give often has less to do with what you’ve done than who you are, anonymity can level the playing field a bit and allow esteem to flow more freely than it would under more transparent conditions.