Friday, 29 May 2009

Being-on-the-web: Weinberger's "infrastructure of meaning"

Writing about the web is increasingly “post-utopianist,” meaning that it doesn’t expressly argue for the essential goodness of the web; doesn’t assume we’ve no real responsibilities as designers and consumers; and doesn’t assume that people are always nice or positively prosocial in their behavior online. Writers like Cass Sunstein and Clay Shirky are at least trying to get past the breathlessness of the early days of communitarian-turned-capitalist web boosterism to a more realistic – but still hopeful – place. This is great, and it’s exactly what was needed.

Dystopianism – the view that the web is essentially for the worse – is clearly the wrong reaction to utopianism. But if you can get past the panicky, paternalist fist-shaking and general curmudgeonliness of a lot of the dystopianist writing, their arguments tend to be of two types: appeals to popular, romantically conservative conceptions of individuality, creativity and culture, or arguments showing that the boosters’ optimism is misplaced because the web doesn’t work the way they claim. The latter arguments are actually helpful, if only because they show that, when you’re trying to draw evaluative conclusions from theoretical arguments, you can’t simply rely on your assumptions about what we all consider positive. For example, you can’t just assume that removing barriers to “publication” is good in and of itself, regardless of whether or not it amounts to an increase in overall “freedom.” Some might not like the loss of the practically imposed institutional filter on publication precisely because that loss creates enormous problems of choice and verity. In addition, we’ve no real proof – plus a lot of theoretical objections and some disconfirming experimental data – that the web’s structural “solutions” to the problems of choice and truth actually work.

The other dystopian “arguments,” however, tend to be limited to dismayed hand-wringing and unkind caricature about the state of culture, creativity and taste. So ultimately, the dystopianist/utopianist battle boils down to competing intuitions about what we should value and what we’re giving up or gaining by embracing this new and powerful medium. David Weinberger, a brilliant and proud utopianist, casts this battle in political terms as conservative dystopianists versus liberal utopianists. Of course, analogizing to a dichotomy in another domain doesn’t really clear things up. It’s a rhetorical move trying to get you to cast your lot either with the progressive forces of liberalism (yay!) or with the regressive deadweight of conservatism (boo!). But along the way he throws a third type into the mix, realists. He defines realists as the pragmatists in the middle: rational, level-headed and myopically obsessed with facts, data and history, i.e. boring. Supposedly, the realists feel that the web isn’t that different from other media, that the rhetoric on either side is hysterical and needlessly sensational, and that we just need to step back and think rationally about this new medium.

Weinberger thinks that the realists are valuable but essentially wrong about the web. That is, they’re wrong about the essence of the web, which is totally different and wholly revolutionary. The realists’ calmly rational judgment of its possibilities will only blind us to its true innovative potential in the long run. For example, thinking of, judging or predicting the web’s impact and future in terms of past media may keep us locked in old patterns and thus foreclose potentially valuable new paths. So realists are valuable advisors and functionaries, but they shouldn’t be allowed to steer the ship or even navigate. After all, you’ll never discover new worlds by reading old maps... or something like that.

Anyway, if we define the realist as somebody who feels that the rhetoric on either side is overheated, that the whole debate needs a dose of reality and that the web isn’t really all that different or revolutionary, then I’m clearly not a realist. The web is indeed different in many respects, mainly in its decentralized structure, wickedly low entry cost and sudden ubiquity. I do think we need a dose of reality, but not in the way Weinberger’s realist thinks. Sure, reality is about facts, a claim most utopianists belittle via scare quotes, but these are facts about mechanisms – what structures foster and propagate knowledge, truth and quality and what we can expect from interacting agents, etc. – and not necessarily facts about history, which really are subject to biased “framing narratives.”

Finally, both utopianists and dystopianists agree that the web is revolutionary, but the former consider it a positive revolution, while the latter consider it negative. The realists, in contrast, don’t think it’s revolutionary at all, but rather more of the same, only louder. Unlike Weinberger’s realists, I think the web is revolutionary, but I use this word advisedly and without the attached evaluation, good or bad. Normatively evaluating a fact is clearly a case of interpretation: it’s identifying a fact as good or bad according to some evaluative scheme. It’s the interpretation that makes the difference. So what’s Weinberger’s interpretation?

Like Shirky, Weinberger analogizes the web revolution to the socio-cultural impact of the printing press or, rather, movable type. Just as the printing press led not only to affordable books but also to the dissolution of old social/labor orders and the growth of a literate, educated public, so too the web is leading to a boom in bottom-up social organization, individual creation and the general overthrow of old-guard cultural gatekeepers and entrenched hierarchies.

Now, Weinberger and Shirky never tell us how to define institutions (or norms, conventions, etc.) – the socio-cultural structures overthrown by these revolutions – but to me they’re just self-reinforcing patterns of conditioned preferences and expectations structuring our repeated interactions. They aren’t etched in stone or handed down from on high. Rather, they are slowly coordinated upon by generations of locally interacting humans. Thus they’re contingent, coordinated-upon structures of interaction and preference, suited to the circumstances in which they developed. Change the situation or circumstances, and there will be pressure to change the institutions, norms, etc. If the situation changes radically, they will crumble and chaos will ensue, lasting just until new institutions and norms are either coordinated upon or imposed. This is a revolution.
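
A toy sketch makes that picture a little more concrete. This is entirely my own illustration – nothing Weinberger or Shirky propose, and every detail is invented – but it shows what “coordinated upon by locally interacting humans” can amount to: agents who simply copy their neighbors lock the population into self-reinforcing conventions.

```python
# A bare-bones, invented illustration of conventions as self-reinforcing,
# locally coordinated patterns. This is nobody's actual model.
import random

random.seed(42)

N = 60
agents = [random.choice("AB") for _ in range(N)]  # two rival conventions, A and B

def neighbors(i):
    """Each agent only sees its two immediate neighbors on a ring."""
    return [agents[(i - 1) % N], agents[(i + 1) % N]]

for _ in range(2000):
    i = random.randrange(N)
    local = neighbors(i)
    # Adopt whatever the local majority does; a tie leaves the agent as-is.
    if local.count("A") > local.count("B"):
        agents[i] = "A"
    elif local.count("B") > local.count("A"):
        agents[i] = "B"

print("".join(agents))  # contiguous, self-reinforcing runs of A's and B's emerge
```

Nobody hands the pattern down from on high; it congeals out of repeated local interaction, and it only holds as long as the circumstances that produced it hold.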

But Weinberger in particular – echoing Marshall McLuhan, Walter J. Ong and my old teacher Greg Ulmer among others – likes to point out that media transform our ways of thinking, thus a revolutionary medium will radically change us. I agree, but only insofar as new media destroy old and foster new norms, conventions and institutions of creation and consumption. It’s the old and new norms and institutions that structure our interactions, inform our preferences and cement our expectations. So we agree on a lot, but notice, we’ve yet to see anything in this revolution that would lead us to evaluate it positively, i.e. as a utopian revolution (conservative old dystopianists, on the other hand, started frowning the second institutions felt pressure). Disruption, difference and impact don’t necessarily equal good. So how does Weinberger get from revolution to positive revolution?

Well, I can’t definitively say, but there are hints throughout his writing. Take this passage:
...Access to printed books gave many more people access to knowledge, changed the economics of knowledge, undermined institutions that were premised on knowledge being scarce and difficult to find, altered the nature and role of expertise, and established the idea that knowledge is capable of being chunked into stable topics. These in turn affected our ideas about what it means to be a human and to be human together. But these are exactly the domains within which the Web is bringing change. Indeed, it is altering not just the content of knowledge but our sense of how ideas go together, for the Web is first and foremost about connections.

And in what way is it altering “our sense of how ideas go together”? In his wickedly clever Everything is Miscellaneous Weinberger claims that the web is an “infrastructure of meaning” as opposed to just stodgy old knowledge. He trots out the philosopher, Nazi and all-around dour grump Martin Heidegger to explain his notion of meaning. Basically, it comes down to the humanly grounded, intricately woven, real-world web of warm significance that we actually live in daily as opposed to the cold, objectified, brutally subdivided grid of “official” knowledge. Just as printing initiated a revolution that separated knowledge from the lived world and brought us the evils of categorization, specialization and scientism, so the web – with its personalizing “tags” and ability to instantly pair even the most unlikely contents regardless of official taxonomies – is initiating a sort of counter-revolution in which content and knowledge are re-imbued with subtle, non-taxonomic human significance. Thus, the web – particularly the user-enhanced, user-responsive (if jargony) “Web 2.0” – is an “infrastructure of meaning” insofar as the thickening accretion of human metadata on boundlessly linkable content makes it implicitly available for officially unintended but humanly significant purposes.
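
Mechanically, the contrast is easy to picture. Here is a trivial sketch – the items and tags are invented, not Weinberger’s examples – of the difference between filing things in an official taxonomy and letting user tags pair them:

```python
# A trivial, invented example: official categories vs. user-applied tags.
from collections import defaultdict

items = {
    "street-noise field recording": {"category": "Audio/Field Recordings",
                                     "tags": {"nyc", "nostalgia", "summer"}},
    "1977 blackout photo essay":    {"category": "Images/Photojournalism",
                                     "tags": {"nyc", "1970s", "blackout"}},
    "street-cart pretzel recipe":   {"category": "Text/Recipes",
                                     "tags": {"nyc", "street food", "nostalgia"}},
}

# Invert the metadata: every tag points at all the items carrying it.
by_tag = defaultdict(set)
for name, meta in items.items():
    for tag in meta["tags"]:
        by_tag[tag].add(name)

# "nostalgia" pairs a field recording with a recipe -- a connection the
# official taxonomy (Audio/... vs. Text/...) would never make on its own.
print(by_tag["nostalgia"])
```

The taxonomy never relates an audio clip to a recipe; the shared tag does, and that connection carries exactly the kind of personal, situated significance Weinberger has in mind.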

So, the foundation of his normative claim that the web is essentially for the best seems to be the idea that it’s instituting a new, souped-up version of the old pre-printing press, pre-Enlightenment notion of situated and subtle human – as opposed to "rational" or scientific – knowledge. The web reclaims knowledge from the alienating pretensions of science, reason and “rationality.”

Versions of this idea have been around for a while. As Weinberger mentions, McLuhan argued for the human impact of media, as did the Jesuit scholar Walter J. Ong. Ong’s book Orality and Literacy was expressly devoted to the cognitive, epistemic and human impact of media types. You could interpret Michel Foucault’s claim that knowledge structures are imposed power structures as a version of the same idea as well. I even agree with part of Weinberger’s application of it to the web: the web really is revolutionary in the extent to which it puts knowledge at people’s fingertips and allows them to find, add to, connect and forward it at will. And this is a far more human – essentially human – way of interacting with and handling knowledge.

I just don’t agree that the “human” way is necessarily good. It could be great, leading to broader minds and deeper understanding of the world and ourselves. Or it could lead to increasing factionalization, self-absorption and distrust. After all, research suggests that, left to our own devices, people – humans – only seek out and retain confirmation of previously held opinions. So much so that we often ignore the true in favor of the convenient or comfortable. We’re also significantly biased toward things we’re already familiar with. It’s also unfortunately true that moderate views tend to become more extreme in the sorts of echo chambers these tendencies set up: seeking out confirmation from like-minded people and sources, and discomfort at differing opinions (justified and reinforced by the ready agreement of our like-minded contacts), tend to make our views ever more entrenched, absolute and resilient against contradictory fact.

Just because people can connect content in wickedly exciting but subtle new ways and access highly specialized information in seconds, that doesn’t mean they will be exposed to a breadth of opinion or even – sadly – the truth. The web, because of its native responsiveness to our individual desires, allows each of us to create a cozy cocoon of confirmation and reinforcement.
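
A crude sketch of that cocoon-building dynamic – all the numbers are arbitrary and this isn’t anybody’s published model – makes the worry concrete: an agent who filters out disconfirming sources drifts toward the average of whatever’s left, which is more extreme than where it started.

```python
# A toy, invented illustration of confirmation bias as a filter on sources.
import random

random.seed(0)

# Opinions of available sources, spread evenly across an axis from -1 to +1.
sources = [random.uniform(-1, 1) for _ in range(500)]

view = 0.1  # a mildly held opinion on the positive side of the axis

for _ in range(200):
    # Only take in sources that already lean the same way we do.
    agreeable = [s for s in sources if s * view > 0]
    # Nudge the view toward a randomly chosen agreeable source.
    view += 0.05 * (random.choice(agreeable) - view)

print(f"started at 0.10, ended near {view:.2f}")  # drifts out toward ~0.5, not back toward 0
```

Nothing in the loop pushes the agent toward the extreme on purpose; the drift falls out of the filtering alone.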

But maybe this isn’t all bad by Weinberger’s lights. Weinberger adores Heidegger’s philosophy. Central to Heidegger’s understanding of meaning is the concept of Being-in-the-world: basically, the idea that all encounters with the world are already infected with our intentions, moods, cultural connotations, etc., and that there is no sense to the traditional notion of a pure object or subject. So meaning is pretty much an inescapable consequence of any encounter with the world. But this also suggests that context – physical, social, cultural and historical – is not only inescapable, but necessary for meaning. Objectivity becomes, literally, the view from nowhere: not just impossible, but unintelligible.

Maybe these cocoons of confirmation – these little webs of shared connotations and self-reinforced absolutist understandings, which I claim are negative aspects of a naturally biased humanity – are really what Heidegger’s beleaguered teacher Edmund Husserl called “lifeworlds”: the necessary and inescapable social, cultural and historical contexts within and through which we experience the world. Maybe so, but the problem is, these lifeworlds are hermetically sealed wholes of historical and cultural prejudice, incommensurable and unassailable. As Heidegger’s most influential student Hans-Georg Gadamer formulated it, prejudice – the historical, social and cultural “situatedness” we’re born into – is essential to Being-in-the-world. Outside of your lifeworld, your cocoon of prejudice, you simply aren’t... in the big metaphysical sense. Thus primordial prejudice – our cocoon of reinforcing ideas, ever ready to disregard inconvenient or inconsistent “facts” – is the foundation of meaning in this Heideggerian sense.

Obviously Heidegger had nothing but disdain for the Enlightenment notions of reason, rationality and truth. It’s easy to see why. By his lights, there’s nothing over top of “Being-in-the-world” or “the lifeworld,” no outside facts to adjudicate between the “meanings” grounded in the various prejudice-composed contexts. The “lifeworld” or “Being-in-the-world” is the only ground of significance. Rationality, reason and science, on the other hand, are about seeking a global foundation (possibly in the real world) for the “intersubjectivity” that Heidegger seems to have thought only inheres in shared cultural, social and historical prejudices or contexts.

So maybe we who are stuck on the old-fashioned liberal hope of finding some common, testable ground of meaning, knowledge and intergroup understanding have it terribly, inhumanly wrong. We shouldn’t think of people’s natural drive to willful ignorance and reinforced, non-verifiable absolutism as an unfortunate legacy of our evolutionary past. Nor are these tendencies something that we as designers – working in a world that desperately needs people to stop embracing local superstition, prejudice and dangerously out-of-sync norms – have a responsibility to mitigate for the good of humanity. Rather, we should just realize that these prejudices are the only foundation of truly human meaning and not pretend that there’s anything outside of them. Maybe this is the way Weinberger intends his “infrastructure of meaning” to be interpreted.

So, back to Weinberger’s utopianism. Remember that utopianism is the idea that the web is essentially good or for the best. Specifically, that its native capacity to allow users to add metadata to content and make subtle, personal connections and relations is fundamentally and wholly positive. I’ve suggested that certain biases in humans – we only like what we know, we only want to be agreed with, agreement makes our prejudices even stronger and we’ll ignore the truth if it violates either of the first two – temper the positive prospects. In other words, because of the way we are, the web alone isn’t going to lead us to the promised land. Given all that, Weinberger’s rosy optimism seems to make sense only if you choose one of the following two options:
  • Ignore the unfortunate facts about humans’ tendency to avoid disconfirmation and neglect what some would call the truth for cognitive comfort and personal consistency.
  • Or, as his preferred philosophical tradition might recommend, embrace these tendencies as a prerequisite of authentic, human meaning. It’s not a bug. It’s a feature.
You do either one of those and I could see how Weinberger’s web utopianism might work. Personally, I find neither particularly appealing.

Of course, I’ve fudged along the way. Heidegger wrote extensively about “authenticity” as a refusal to unreflectively live the conventional life your peers demand, etc. Also, “Being-in-the-world,” “lifeworlds” and even Gadamer’s prejudice-soaked “horizons” aren’t exactly like the little cocoons of auto-agreement people tend to create around themselves and which the web makes ever easier and more complete. But fudging aside, this doesn’t alter the basic thrust of this popular philosophical tradition’s radical perspectivism. I just wanted to investigate whether this could be what Weinberger has in mind, given that he probably knows people aren’t as reasonable as we might hope. Finally, it could be that Weinberger is just trying to say that the web provides wicked cool new ways of getting people to content. Which it does. But I think he’s going for something more.

Wednesday, 27 May 2009

Ha-ha, Ah-ha and Oh-yeah: cultural irony and rediscovery

A lot of the cultural items we consume or partake of – hairstyles, shoes, TV shows, slang, professed values, bands, etc. – can be thought of as socially instrumental. That is, they can have a “symbolic value” over and above their use value, entertainment value or whatever. We often consume them not only because of what they do, but also because of what we hope they add to our social identity in the eyes of those we esteem and those we despise, to our in-group and our out-group.

A while ago, I wrote a post illustrating part of this (intuitive? clichéd?) idea. I tried to show how the mutual interactions and reactions of three distinct cultural subgroups – trendsetters, hipsters and regular joes – can drive cultural items through their life cycle. This idea has occurred to a lot of us: many social groups’ preferences (for shoes, bands, styles, slang, etc.) respond – positively or negatively – to other groups’ preferences. Slang and cadence, for example, are often valuable signals and affirmations of group affiliation, so preferences for specific slang terms change rapidly with diffusion outside the group. I illustrated this as interwoven curves along the path from few partaking (o) to most everybody doing it (n).

[Graph: interwoven adoption curves for trendsetters, hipsters and regular joes, running from o (few partaking) to n (most everybody partaking).]
But so far we haven’t really discussed the later part of a cultural item’s life cycle, the point after c in the graph. Frequently, cultural items just go away and are never heard from again. But sometimes they come back around. In this post I want to look at some late-stage possibilities for trends, particularly cultural irony (imbuing cultural items with a different, more “self-aware” symbolic value than they originally had) and rediscovery (rehabilitating older cultural items for current use).

Irony and Rediscovery

First of all, some items never really go through the creep from fringe to mainstream. Agreed. The idea here isn’t to model the essential, inviolable profile of a trend. Rather, it’s just that in most collections of people presented with a cultural choice you can roughly define different subgroups by how their preferences change relative to others’ preferences. For example, within the regular joes there are likely to be different constituencies analogous to trendsetters and hipsters. That is, some regular joes will be a lot like hipsters in their preferences – ready to partake of items not quite fully mainstream. If we restrict n to regular joes, the preference profiles of the different types within this group might look something like our graph.
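
For what it’s worth, you can gin up these interwoven curves with a few lines of code. This is purely an illustrative toy – the response rules and every coefficient are invented – but it captures the kind of coupled dynamic the graph is meant to show: trendsetters adopt early and bail as the mainstream piles on, hipsters chase trendsetters, and regular joes lag behind hipsters.

```python
# A purely illustrative toy model of coupled group adoption; every number is made up.
def clamp(x):
    return max(0.0, min(1.0, x))

def step(novelty, trend, hip, joe):
    """One time step. Each group value is the fraction of that group partaking."""
    new_novelty = 0.9 * novelty  # the item's freshness wears off on its own
    # Trendsetters chase novelty and bail as the regular joes pile on.
    new_trend = clamp(trend + 0.25 * novelty * (1 - trend)
                      - 0.6 * joe * trend - 0.1 * trend)
    # Hipsters follow trendsetters but abandon the item as it mainstreams.
    new_hip = clamp(hip + 0.5 * trend * (1 - hip)
                    - 0.5 * joe * hip - 0.05 * hip)
    # Regular joes follow hipsters, with a slow natural decay of their own.
    new_joe = clamp(joe + 0.3 * hip * (1 - joe) - 0.1 * joe)
    return new_novelty, new_trend, new_hip, new_joe

state = (1.0, 0.0, 0.0, 0.0)  # a brand-new item (full novelty), nobody partaking yet (point o)
for t in range(40):
    state = step(*state)
    print(f"t={t:2d}  trendsetters={state[1]:.2f}  hipsters={state[2]:.2f}  joes={state[3]:.2f}")
```

Run it and the three curves rise and fall in staggered order – trendsetters peak first, then hipsters, then the regular joes – before the whole thing fades, which is all the graph above is meant to convey.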

Anyway, cultural items that never go through the full cycle at the highest cultural level (the one spanning all the sub-groups) – items that take off in, or are specific to, one sub-group only – become particularly interesting when we consider irony.

Ironic embrace is cultural consumption that’s generally very aware of the consumed item’s cultural history. This awareness often becomes an explicit part of its new symbolic value. Consider the recent ironic rehabilitation of Jean-Claude Van Damme, Rick Astley and ’90s pop (Bell Biv DeVoe is suddenly on every hipster playlist). Faint nostalgia notwithstanding, most of the consumption of this stuff (like the ironic embrace of ’70s and ’80s material before it) seems to be ironic.

Grossly simplifying, there are two big possibilities for ironic embrace. The first is when one sub-group appropriates a cultural item from another group after the latter has already abandoned it.

When this happens, we often get what I call ha-ha irony. An example – also illustrating my advanced age – might help. At a “noise” show back in the very early ’90s (noise became fleetingly cool when “alternative” was mainstreamed by bands like Nirvana and Jane’s Addiction), the headliner’s lead screamer wore a New Kids On The Block t-shirt. At this point, NKOTB’s popularity had dried up even among their teen target audience. The contrast between a defunct teeny-bop group and the aggressive, self-consciously oppositional posturing of noise music was obviously the ironic point. This is a case of ha-ha irony. It’s just a broad joke or gag and in no way even remotely critical. In fact it even has the prototypical joke structure: an unexpected shift in reference or clash of expectations results in humor.

Obviously, this isn’t a full-blown new trend arising out of an old one. Rather, it’s a cultural item that typifies a prefab pop trend previously popular among the mainstream, appropriated for new symbolic purposes by a self-consciously opposed sub-group. Bearing this in mind, it could be graphed something like this:

[Graph: ha-ha irony – after the item crashes among the regular joes, trendsetters pick it back up for new, ironic symbolic purposes.]

This time we start at c, the point on the original graph where the item peaks for the regular joes, and proceed to n. After n the item starts to become a cultural liability for regular joes, and the total population partaking plummets to m, which is much less than n. Now the trendsetters can claim the cultural item for ha-ha ironic purposes. The trendsetters, of course, will start to abandon it once the hipsters pick it up at a’, that is, when the hipsters come to see it as a codified ironic strategy. But this case probably wouldn’t get past a’. (Although, an ironic mini-trend did occur in the early ’90s when noise acts started appropriating the insipid graphics of those new-agey “Smooth Sounds” whale-song albums.)

The NKOTB case involves an item that never went through the full cycle from trendsetters to regular joes. Or rather, it’s an item that the hipsters at the noise show most likely never invested in. NKOTB more or less started out with the regular joes. My guess is that this is often the case with ha-ha irony: the items that get ironically rehabilitated by one sub-group tend to be yanked off the junk heap of another sub-group. In this case, it was the hipsters using teeny-bop detritus to highlight their aggressively oppositional stance toward pop music. It was a joke that everybody – even the regular joes once into NKOTB – would get. As a sort of rule, we could say the greater the item’s one-time value to a sub-group, the greater its potential to be used in a ha-ha ironic way by members of a self-consciously oppositional group.

The second possibility for ironic embrace is when one group ironically appropriates a cultural item from another while the latter group is still into it.

Not surprisingly, we usually get ha-ha irony here, too. Consider some clever hip-kids’ “love” of geeky sci-fi/fantasy conventions, like Dragoncon in Atlanta, Georgia where middle-aged IT professionals (that’s actually unfair... young IT pros dig it too) party all night in DIY Klingon armor. These fringe affairs are really, really popular among die-hard fans and represent for them a market for a very specific sort of symbolic capital. For the hip-kids, on the other hand, it’s a lark, a gag, a chance to ogle the arcane rituals of nerd-communion in their proper environs. The hip-kids’ intended audience – the group from whom they seek recognition of the value of attendance – is their buddies, not the group actually attending the convention. Also, the hip-kids' symbolic value comes from a completely different cultural and symbolic arena than it does for the earnest fan-boys.

It’s sort of like cultural poaching for laughs. Once a few ironic trendsetters start doing it, the very next year will see hipsters joining in. We can graph this second ironic configuration like this:

[Graph: ironic appropriation of an item another group is still into – trendsetters start it as a lark, and hipsters join in shortly after.]

Under certain circumstances, this sort of value relationship can result in what I call ah-ha irony (as opposed to jokey ha-ha irony). We can illustrate ah-ha irony with a slight alteration of the noise band example. Suppose it had been a Nirvana shirt instead of NKOTB. At the time, Nirvana was wickedly popular and symbolized the mainstreaming tendency that allowed noise bands to arise as an oppositional alternative in the cultural marketplace. Nirvana had gone through the full cycle from trendsetter popularity on the periphery to mainstream pop adoration among the regular joes. Wearing a Nirvana shirt – the incarnation of the new ‘90s pop which many hipster fans viewed as a sort of personal cultural theft – would have been a really critical, really exclusionary (in the sense of in-group/out-group defining) statement that few would have gotten. After all, most of the kids at the show had been – or still were – into Nirvana. That ambiguity of intention is sort of the calling card of “good” or at least powerful irony: it should be sneaky or at least not intelligible to all and have some sort of critical quality.

What seems to distinguish these cases of ha-ha and ah-ha irony is closeness to the cultural item. In the ha-ha irony cases, the kids who were being ironic probably hadn’t been part of any of the groups involved in the item’s trend cycle. They were outsiders who could objectify the cultural item. However, in the ah-ha case, it’s trendsetters using, as an ironic prop, something that most hipsters (and they themselves) had recently invested in.

It’s probably not anything like a rule, but this specific example of ah-ha irony looks something like this:

[Graph: ah-ha irony – the item runs the full cycle, from trendsetters through hipsters to regular joes, before the trendsetters reclaim it ironically.]

The item went through the whole curve; the trendsetters and hipsters had been committed to it at one time. Needless to say, ah-ha irony is really rare (or maybe not and I just don’t get it). It’s usually used solely by fine artists, motivated by chronic self-awareness and cultural inferiority complexes, which drive them to theoretical, unaesthetic excesses. I know because I was one... probably still am.

Let’s look finally at “rediscovery,” earnest and ironic. Sometimes cultural items come back from the dead. Sometimes the folks doing the reviving are earnest (the Nick Drake revival about 12 years ago and the garage rock revival about 4-5 years ago). Sometimes they’re ironic (disco’s many revivals and Enoch Light). But most of the time it’s a mix of both (the ’80s synth-pop sound, particularly in contemporary French and West Coast alterna-pop), and the revival always comes with different intentions than when the item was actually culturally current.

[Graph: rediscovery – after the crash at n and a period of cultural hibernation, the item is picked up again and the cycle restarts.]

This graph, like the irony graphs, starts at c and goes through the crash at n. After n there’s a period of cultural hibernation while all of the groups assume their original relative positions. At some point, the item gets picked up by the self-conscious cultural adventurers (earnest indie rockers for Nick Drake, the gay community for at least a couple of the disco revivals) and the cycle starts again.

So why is rediscovery sometimes earnest and sometimes ironic? Well, I think part of it might have to do with uptake among past cultural groups and the perceived genealogy of contemporary cultural groups. Contemporary groups that understand themselves as having “descended” somehow from traditionally oppositional subcultures often approach items from these “related” subcultures earnestly and items from “unrelated” or mainstream culture ironically. Regular joes, since they’re not quite as culturally sensitized or obsessively self-aware as trendsetters and hipsters, generally shoot for ha-ha irony unless the item has already gotten past b’. In that case, it’s no longer really “rediscovery”: the item has been “contemporized” or brought back into currency. (Regular joes that still dig the music they loved in high school – “it’s not about new or old...Aerosmith just made quality rock, man!” – aren’t rediscovering anything... they’re just frozen in a particular cultural period.)

A Last Note

But this whole graph-y, representational thing I have going skirts one obvious and over-talked point about contemporary culture: it seems to be moving faster. The trend circuit from hip to passé to rediscovery is getting quicker and quicker. So much quicker that the whole concept of rediscovery makes less and less sense every day. Something similar is happening to the idea of the mainstream; it doesn’t really seem to have the old, easy-to-poke-at stodginess it used to. Actually, it’s pretty hard to even locate in the first place. Why is this happening?

Just speculating here, but pervasive media probably helps. Modern user-tailored, user-driven media like the web is really good at getting stuff from the fringe to the center, from “hip” to “mainstream,” overnight. Stuff that used to take years to bubble to the surface through old media channels now zips up almost instantly in a process of accelerated mainstreaming that calls into question the whole idea of fringe and center, counterculture and mainstream.

But in the west at least we still seem to highly value the idea of oppositional individualism and the autonomy of our choices, of trendsetters, “mavericks” and nonconformists, out there marching to the beat of a different, etc. A significant number of folks in the west – most I’d say – have internalized this cultural value or ideal. Trendsetters and hipsters probably wouldn’t be our culture’s marketing holy grail otherwise.

You put these together – media that rapidly drain oppositional cultural positions of their “outsider,” “in the know” status, and an internalized cultural admiration of the “individualist” or the “nonconformist” – and you get accelerating trend cycles. After all, if cultural items come larded with a symbolic value that is partially determined by the item’s prevalence, and modern media provide a fat but highly user-responsive channel to spread the word, then you’ll have to act quickly to stay relevant. In this environment, uptake and abandonment of trends are going to speed up.
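
To put the same intuition in toy form – this is the invented model from the earlier sketch, not data about anything – add a single “media speed” knob that scales how quickly each group reacts to the others, and watch the cycle compress:

```python
# The same invented toy dynamic as the earlier sketch, with one extra knob:
# a "media speed" multiplier on how fast every group reacts to the others.
def clamp(x):
    return max(0.0, min(1.0, x))

def mainstream_peak(speed, steps=200):
    """Return the step at which regular-joe adoption peaks."""
    novelty, trend, hip, joe = 1.0, 0.0, 0.0, 0.0
    best_t, best_joe = 0, 0.0
    for t in range(steps):
        novelty -= speed * 0.1 * novelty
        trend = clamp(trend + speed * (0.25 * novelty * (1 - trend)
                                       - 0.6 * joe * trend - 0.1 * trend))
        hip = clamp(hip + speed * (0.5 * trend * (1 - hip)
                                   - 0.5 * joe * hip - 0.05 * hip))
        joe = clamp(joe + speed * (0.3 * hip * (1 - joe) - 0.1 * joe))
        if joe > best_joe:
            best_t, best_joe = t, joe
    return best_t

for speed in (0.5, 1.0, 2.0):
    print(f"media speed {speed}: mainstream adoption peaks around t={mainstream_peak(speed)}")
```

Faster channels don’t change the shape of the cycle much; they just compress how long any position on it stays tenable – which is more or less the point.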

Add to the mix cultural industries like film and fashion that have, to a certain extent, institutionalized ideas of constant opposition, innovation or nonconformity in their marketing and business models, and things really get moving. To take just one example, the fashion industry is built on the idea of annual overthrow, of mainstreaming (i.e. making passé) last year’s line so this year’s can supplant it. It’s a business model founded on the idea of the incessantly new. Fashion marketing hinges on – and thus amplifies – the desire to be slightly ahead of the curve, to break with the currently mainstream fashion, to be more distinct and “original” (in acceptably fashionable ways) than your peers. In the present media context of almost instant diffusion and accelerated mainstreaming, their business model of providing “the new” and their marketing model of codifying, amplifying and creating a “need” for “the latest” result in accelerating demand that outstrips their creative capacity. The result: unrepentant cultural recycling at a faster and faster pace.