Saturday, December 5, 2009

10 Things to Watch About Comcast-NBCU

Amidst so much other commentary, and still more possible claims about the merger's significance for industry, platform, and user convergences (and, beyond #10 here, synergies), here is a nicely grounded piece from Ben Grossman at Broadcasting & Cable.


"10 Things to Watch About Comcast-NBCU"

First came the deal, now comes the waiting. As the mega-merger between Comcast and NBCU goes through its process, here are 10 things I'm wondering about.

1 Jeff Zucker's fate. Comcast can appreciate a great cable business as much as anyone, and under Zucker, NBC Universal has grown into one. But the broadcast network's fall under his watch has industry insiders buzzing about whether he will make the cut or not.

2 Jay Leno's fate. Will Comcast share NBC's long-term view of the 10 p.m. experiment and Conan at 11:35, or will they use the ownership change as an excuse to move Leno and his 5 million faithful viewers back to 11:35 and let Conan become a free agent? Either way, you have to admire Leno's sucking up the night before the deal was announced. His guest: E!'s Kim Kardashian.

3 Versus/NBC Sports. Amortizing rights fees over the cable side could give Dick Ebersol a new sling of arrows to fire at some major sports acquisitions. While a deal like the NFL on NBC is a money loser (as are all NFL network deals), having a full-time cable outlet, as well as some regional sports nets, opens up a whole new ballgame. Whether or not they “go after ESPN” is not the point; there is plenty of room for both if Comcast can grab a big-time property or two.

4 The Olympics. Does this deal throw a wrench into the conventional wisdom that Disney will easily outbid everyone for the next Olympics package? Zucker says Comcast-NBCU will look at it “if it makes sense.” Fiscally alone it doesn't, as evidenced by the rights fees NBC faced in Beijing and even more so in Vancouver. But it is a major vanity play as well, and if Comcast is serious about getting into sports as it has always planned with Versus, this would be a tough property to lose.

5 The NBC Name. Will it go away? More than one NBC insider has guessed Comcast may want to ditch the name NBC Universal altogether at some point. The guess here is Comcast Entertainment becomes the parent but the NBC network keeps its name.

6 Musical Chairs. Take Jeff Shell, Ted Harbert, Jeff Gaspin, Marc Graboff, Bonnie Hammer and Lauren Zalaznick, to name a few. There's lots of talent in this group. But will there be enough seats for all of them when the regulatory and logistical music stops?

7 Hulu. Last week, the Comcast execs said they could see network programming stay available for free on Hulu, with cable programming living behind an authentication wall to protect cable operators. By the time this deal closes, there will be a pay model for Hulu, so despite what the bigwigs said, it's really not that simple. I'm not sure a Comcast-Hulu marriage will last; at the very least, it should see some Eldrick-Elin-type bumps.

8 Xfinity. Apparently, Comcast is changing the name of OnDemand Online to “Xfinity.” If that doesn't sound like a club in Vegas that guys go to with a stack of $1 bills, I don't know what does. Not that I would know. Anyway, here's hoping the NBC creative types can help Comcast come up with a more appropriate moniker.

9 The Friday Night Lights Model. A while back, Steve Burke told me that if the right opportunity arose, Comcast would look at a similar model to the DirecTV deals, in which the satellite provider gets a first run of a show and picks up a chunk of the production tab, with the second run airing on a network. It'll be interesting to see if Comcast experiments with some reverse-windowing now that it has its own network, or if that model is just dead.

10 Synergies. Obviously, brands like Bravo and Style have crossover to spare, and Joel McHale already traversed the companies with shows on both E! and NBC, but how else will synergies pop up? A Kardashians theme-park ride? Must. Resist. Lewd. Joke.

E-mail comments to ben.grossman@reedbusiness.com

http://mobile.broadcastingcable.com/article/438679-10_Things_to_Watch_About_Comcast_NBCU.php?rssid=20065

Monday, August 17, 2009

Hollywood's Self-Fulfilling Marketing


Two recent pieces astutely cast light on Hollywood's stubborn predilection for mainstream filmmaking. A.O. Scott in the New York Times and James Surowiecki in The New Yorker discuss why mega-franchises like Transformers continue to dominate film production. In part, as Scott observes, this is the annual ritual of questioning summer schlock fare. More tellingly, though, both pieces ask whether there is a failure of nerve by studios in their continuing reliance on action blockbusters aimed at the mythical teen and young adult demographic.

Surowiecki focuses on Kathryn Bigelow's excellent war drama, The Hurt Locker, which received little marketing support for its release in summer blockbuster season. He nails the issue in writing, "Hollywood decided in advance that Americans weren’t going to watch this kind of movie, and then made sure they wouldn’t." This is not just an unwillingness to take a risk on a smaller film (on an admittedly uneasy topic, the Iraq War): it's a failure of imagination from an industry that is supposed to be awash in it.

The global economic recession is fairly invoked as cause for contemporary caution by the media conglomerate-held studios. Yet the conservatism driving production and marketing decisions long predated the current crisis. Amidst a much farther-reaching transformation of media and fragmentation of audiences, the immediate-term thinking seems terribly short-sighted. The blockbuster mentality has been around for decades, guiding most studio operations at least since the late 1970s. At a time when diversification is a watchword for success across other troubled and evolving industries, Hollywood might do well to consider adopting it as a strategy for winning the future.

http://www.nytimes.com/2009/08/09/movies/09scot.html?_r=2&pagewanted=1

http://www.newyorker.com/online/blogs/jamessurowiecki/2009/08/the-hurt-locker-what-is-hollywood-thinking.html

Saturday, August 1, 2009

Social Media and Change in Moldova

Moldova has just held another contested election. The small eastern European country, which made international headlines after elections in April provoked two weeks of anti-government protests (amplified, it was celebrated, by Twitter and e-mail communications), had another very close vote this week that appeared to produce a victory for opposition parties seeking closer ties to Europe. The closeness of the result stems in part from the need of these parties to preserve a fragile coalition. Still, one could see progress in the challenge to the pro-Russian government's authority by a younger generation able to mobilize, in important part, through new technologies.

It is possible to conclude summarily that the Twitter Revolution of April has finally succeeded, albeit after a delay of a hundred days and still only gradually. Yet we need to be cautious. If indeed it has happened, the political shift toward a European-leaning coalition and away from the Russian-supported Communists may well be more a reflection of longer-term generational changes and the continuing drift in former Soviet republics and bloc countries away from communist or socialist rule. Did Twitter accelerate this process in Moldova? Perhaps. Better to say now that events – and communication media – in the spring contributed to an array of compelling trends toward change.

However, those trends are both political and economic, and they finally transcend the familiar East-West reading. On the ground, in the hearts of many Moldovans desperate for greater opportunity and change, they often converge in ways that contradict distant analyses grounded in pitched oppositions between supporters of Russia and of the EU. The Financial Times says Moldovans "want it both ways." Quite right. Rather than being a sign of greed or unreasonableness, though, it is more likely a symptom of wanting and needing to embrace as many possibilities as exist. As the FT piece concluded on Thursday, "Moldova has no interest in choosing between them. It needs them both."

http://www.ft.com/cms/s/0/2f19164a-7d2c-11de-b8ee-00144feabdc0.html

Tuesday, July 28, 2009

Bauhaus: Still Teaching after 90 Years


A fascinating exhibition on the Bauhaus opened last week in Berlin. This innovative school of architecture, design and visual arts was founded 90 years ago at the end of the First World War and was closed in 1933 by the Nazis when they rose to power in Germany. For those 14 years, however, the Bauhaus represented a vibrant interdisciplinary school and community of teachers and practitioners committed at once to re-examining the very roots of Western aesthetics and design and to extending the experimentation and social critique of modernity. The current show at the Martin-Gropius-Bau is the largest exhibition on the Bauhaus in history and comprises more than 1,000 objects. More info at http://www.modell-bauhaus.de .

As a laboratory for exploring artistic, educational, and social issues, the Bauhaus rewards exploration from multiple perspectives. For me, as an educator committed to interdisciplinary teaching and learning, the launching of workshops involving talented students and gifted practitioners and thinkers from different fields (Walter Gropius, Paul Klee, Laszlo Moholy-Nagy, and Wassily Kandinsky among them) is inspiring. That this was accomplished in such a penetrating way at an historical moment of sweeping technological change and social transformation makes it all the more extraordinary. Viewing the show today, as we again confront the changes wrought by technology and a wide-scale reconceptualization of the world, one sees that the Bauhaus continues to provide lessons in how we might pursue, with rigor and openness and imagination, persistent questions about creativity, what it means to be human, and how to relate to the world around us.

Thursday, July 23, 2009

Media Enchantment and the Real World


I recently received a pair of e-mail announcements from The Economist magazine (I’m a happy subscriber to the print edition). The first message indicated that an electronic version of the magazine was now available for the Kindle e-reader. The second was that the latest in the magazine’s ongoing online debate series, on “Israelis and the Arabs,” was now being launched and could be followed on the e-reader.

What was telling for me was how the messages combined media and real world items. Now, media exists in the real world, I know, and a debate about conditions among Israelis and Arabs or anyone else is not the same as the conditions themselves. But those are more abstract quibbles.

The issue here is that amidst our generally justifiable techno-euphoria today, especially regarding social media, the connection of evolving technologies to what’s happening in the actual world is often neglected or at least downplayed. Our very celebration of the speed, variety, mobility, and accessibility of digital media can easily lead to an emphasis on proliferating and interconnecting technologies themselves and only a superficial or fleeting engagement with whatever information they are ostensibly communicating.

In other words, and perhaps unavoidably updating McLuhan, it’s a reminder that while (new) media are themselves an important message we can dwell over, media technologies also (still) communicate about issues that have meaning for flesh-and-blood human beings and consequences on the ground and in actual lives.

Sunday, July 12, 2009

Thanks for the Compliment (about not being simplistically partisan or ideological)

A few words about a direct message I received on Twitter. It made my day. I just signed on a couple weeks ago and still notice and appreciate new followers. Here's what came in earlier:

I can't figure out if you're a liberal or a conservative. But your tweets are interesting.

I was glad to know at least one person finds my tweets interesting, but I was even more pleased to learn my messages didn't betray any simplistic political perspective. I definitely situate myself on one side of that seeming divide, but believe doing so publicly, at least through a one-word label, is counterproductive. I'm convinced that the effect of such simplistic partisan or ideological affiliation has been toxic for our politics over the last two decades (at least). Of many examples, recent events in the New York State legislature, which for weeks was deadlocked in a 31-31 partisan tie, with both Democrats and Republicans wanting to be the majority, come to mind as a ridiculous, adolescent exercise serving no one.


In the echo chamber of contemporary media politics, I've long thought that media and journalistic reports should stop automatically including the party affiliation following a legislator's name (e.g., Peter King (R-NY) or Al Franken (D-MN)). What value is added by those letters? Yes, I recognize that politicians self-identify with parties, rely on them for funding and support, and work in groups or caucuses organized along party lines. With so much information available from so many sources, it's perhaps understandable that having ready hooks on which to hang one's views and build communities of interest is not only good sense but effective strategy.
Perhaps most fundamentally, an R or D, an L or C not only neatly -- too neatly for me -- summarizes a position but re-assures us that we belong to a political tribe. Of course, the price, the loss of genuine nuance and robustness and contrarianness in our social and political discourse, seems much greater.

It's probably naive to think so, but every step that can be taken, by media organizations as well as citizens and social media participants, to acknowledge more fully the complexity of political and social life today should be embraced. So I hope I can keep on being hard to figure out, at least in terms of labels. The best ideas, I'm convinced, are often interesting precisely because they don't simply re-affirm already known positions or platforms but provoke one's thinking beyond.

Thursday, July 9, 2009

Social Media Consulting Du Jour

Great piece today at mediaite.com by Anthony de Rosa about social media consulting (http://www.mediaite.com/online/the-social-media-sommelier). It rightly shines a light on the important and often very lucrative role played by consultants these days as corporations realize the necessity of strong social media connections with customers. The point is that during these transitional days, more traditional corporations without ready audiences on blogs, Twitter, Facebook and the like can rely on individuals who have them. Whatever the origin of their audiences and followers, those individuals can profitably leverage their numbers in consulting.

Two comments. First, de Rosa does open the piece by noting we are in a transitional moment: "New media clout scoring old media dollars." His piece dwells on the example of the Vaynerchuk brothers, one of whom built winelibrary.tv into a vast follower list that he has been able to leverage in social media consulting with corporations from industries far removed from the world of wine. The question of relevance is not posed so directly here, but it might be: how do the new media, and the consultants shaping them, re-make the old brand and message? Do the new voices offer a healthy and overdue wake-up call to old brands and organizations, or will they prove blips in brand development, ultimately irrelevant in the long term?

Even more, I think of a possible cautionary tale from a decade ago in the university world. In the late 1990s, when technology was promising a quantum leap in distance learning, many schools contracted outside vendors to develop the requisite technologies and services. Other schools or consortia formed for-profit start-ups, believing that technology-supported distance learning would be a sure money-maker. In most cases, particularly following some rather public failures in the latter group (think Fathom), universities quickly moved beyond their initial exuberance and have pursued in-house development of distance and e-learning resources. This more measured course has still, in many cases, proven quite ambitious -- consider MIT's open course offerings or Yale's webcasting of classes -- but it depends less and less on outsourcing to individuals who perhaps knew more about fledgling technology than about specific institutional cultures or offerings.

A second issue here relates to the so-called Twitter or social media revolutions claimed for Moldova and, more recently and prominently, for Iran. This is not as much of a stretch as it may first appear. With the same regimes against which the partly Twitter-driven protests were organized still firmly in charge in these countries, we should rightly ask two related questions: what did the revolutions (better: protests) actually achieve? And what role did Twitter play in those protests? Both deserve fuller answers than I'll offer here (for a likeminded skeptical take, see Trevor Butterworth at http://www.ourblook.com/Social-Media/Trevor-Butterworth-on-Social-Media.html ). Briefly, though, my concern is that the questions, while related, remain importantly distinct, and that the latter one be put in context. If Twitter had a "multiplier" effect in Iran or elsewhere, what did it multiply and why? And further, how will that effect persist over time, particularly as regimes themselves upgrade their own understanding of technology and engage in what David Bandurski of the China Media Project called "Control 2.0"?

While I am not at all suggesting an (economic, symbolic, moral) equivalence between corporate control of branding and governmental control of dissent, I do believe the multiplier effect in play in politics globally is also relevant to the current and dynamic role played by social media in corporate branding. We need not only to develop a better, more nuanced and in ways more data-driven understanding of that effect. We should also appreciate the role of social media consulting in fostering and managing that effect. We likewise should acknowledge how fleeting the phenomenon, at least in its current form, might be. It's not only a matter of co-optation and control but of the inevitable integration of this new, exciting and potentially powerful set of technologies into longstanding patterns of social and organizational behavior.

Tuesday, July 7, 2009

Imagining Moldova -- and the First Twitter Revolution (Part 1)

I recently had the opportunity to spend a week in Moldova. I confess I knew little about the place before my plans formed. Most of what I knew (vaguely) derived from the public protests in the capital, Chisinau, as well as the second city, Balti, that occurred last spring and briefly dominated Twitter.

Protesters took to the streets in early April following Parliamentary elections in which the ruling Communist party won roughly 50% of the seats. They picketed the Election Commission Headquarters and then the President’s residence before temporarily occupying both the Parliament building and the President’s office. Organized largely via Twitter calls under the tag “#pman" (for the capital’s main square, “Piata Marii Adunari Nationale”), sizeable public gatherings numbering as many as 15,000 continued daily for more than a week, claiming election fraud and later illegal arrests and the violation of human rights. While the government agreed to a re-count, the election results stood, and the Communist party president and parliamentary majority remained in power.

Wanting to know more, I consulted several Romanian friends, and their advice was simple: the country is poor and stagnant, they responded quickly, but it has great wine and beautiful women. Perusing maps of the region and tourist websites, friends in New York had an even more peremptory assessment: I was heading to an only slightly Europeanized land of Borat -- Kazakhstan with a splash of Romanian charm. Okay. Thanks.

So I sought out more background. There's not a lot out there in terms of books or detailed websites. Wikipedia has a cursory if up-to-date entry. Lonelyplanet.com offered a worthwhile download of pages from a travel guide primarily focused on Romania. The one helpful book available on Amazon was the scholarly if conservatively slanted The Moldovans: Romania, Russia, and the Politics of Culture, by Charles King (2000). (Another, which I ordered but which didn't arrive before my departure, was Steven Henighan's travelogue about a Canadian teaching English in the country, Lost Province: Adventures in a Moldovan Family [2003].)

The broad strokes of what I learned are these. Referred to by some as the poorest country in Europe, with a GDP per person estimated by the IMF at only $2200, Moldova is situated in the far east of the continent, nestled between Romania and Ukraine. The land is arable (to the degree that volumes of its soil were actually shipped to the Soviet Union in past years) but holds few mineral reserves. The geographical position speaks to the complex status of the country’s people, politics, culture, and even language as a meeting ground of east and west, of Romania and Russia, of Europe and Central Asia. As a former Soviet Socialist Republic of the USSR, Moldova retains a strong present-day relation to Russia, not least in the continuing rule of the Communist party. Complicating politics further are two regions with simmering independence movements: Transdniestr, which declared its autonomy shortly after the fall of the Soviet Union and Moldova's own declaration of independence, and Gagauz, an area in the country's south populated by Turkic Orthodox Christians.

Not a bad overview, particularly in the individual strands of historical development. But in pursuing various sources, a serious question arose for me: how does Twitter, or any of the vaunted digital information and communication technologies we enjoy, actually deepen our understanding of the world to which we seem to have much fuller and more rapid access? Part of this concerns Twitter specifically, with its endless stream of brief text and its trending topics that seem to feed on themselves. While many well-researched sources are only a link away from the tweets, there’s little telling how many are accessed or read (or, particularly for the uninitiated, which are genuinely well-researched and which are to be avoided). The result is that Twitter becomes the latest manifestation of a digital source of nearly endless information for which the political (and reading) preferences of the user shape the eventual output.

Put differently, it’s very easy to maintain a thorough familiarity with headlines and the soundbites of political rhetoric, policy, and other debates, but delving beyond that superficial and ephemeral familiarity to a deeper understanding is anything but assured. That seems especially true for geopolitics today, when news cycles and attention economies rely on a dizzying shifting of media focus (yes, trending) from one hot spot or crisis or disaster to another. It is still more an issue given the lack of history that figures into even many of the better accounts of contemporary events. Beyond the disconnected entries offered by Wikipedia and other scattered websites, printed materials and fictional films, the history even of the late twentieth century that unavoidably shapes our lives and world today is increasingly grounded in fragmented digital sources.

I offer all this as prologue to recounting my physical entry to Moldova precisely because my reliance on Twitter and various, mostly web-based accounts of politics and peoples so strongly framed my thinking and expectations of this place about which I knew so little. While similar in ways to what has long been available to travelers in guidebooks, from the nineteenth-century Baedekers onward, the contemporary mediascape has grown both quantitatively and qualitatively different. The digital world is ultimately smaller, infinitely more accessible, and, particularly as one imagines lesser known places like Moldova, conducive to unprecedentedly superficial and partial understandings.


In Part 2, I move from my imagined Moldova to the actual, physical country.

Monday, July 6, 2009

Honda's "Power of Dreams" Online Films

I've been viewing the short films online in Honda's "Power of Dreams" series (http://dreams.honda.com). While very clearly advertisements for Honda, its history and current operations, the films can also be alternately smart and inspiring, brimming with au courant ideas of management, risk-taking and innovation. (One does wonder, of course, how many of the ideas are actually implemented and practiced in the everyday.) Besides the corporate figures, many of the faces and voices are familiar (Deepak Chopra, Danica Patrick) and some refreshingly unexpected (Christopher Guest, Clive Barker). And the "Mobility 2088" installment is simply cool. Viewed collectively, these films aspire to be a cross between the groundbreaking BMWfilms.com series of shorts, The Hire, directed in 2001-2002 by luminaries from John Frankenheimer to John Woo, and more recent online salons, from TED to the Aspen Ideas Festival (http://www.ted.com and http://www.aifestival.org). They finally fail to reach that standard, but they do offer an excellent summary of how a company, particularly in a challenged industry like automobile manufacturing, reflects on -- and, with polished production values, presents -- itself and its vision of the future.

Saturday, July 4, 2009

Andrew Lih, _The Wikipedia Revolution: How a Bunch of Nobodies Created the World’s Greatest Encyclopedia_ (Hyperion, 2009)


Opening this account of the history and current reach of the World Wide Web’s phenomenally successful encyclopedia is a foreword by Wikipedia’s founder, Jimmy Wales. In it, Wales speaks briefly of some of the values guiding the project: individuals doing good, trusting each other, and using old-fashioned standards of clear writing and reliable references. His most important observation, though, building on these other values and this view of beneficent human nature, is that Wikipedia grew as a kind of social software that both fostered and relied upon community.

That basic if imprecise idea guides much of the following account of the early years of technological developments that allowed Wikipedia to emerge. From Linux and Nupedia to WikiWikiWeb and Hypercard, the evolution and linkage of various innovations through the 1990s make for a fascinating read. The individuals responsible at each step in the process, including Wales but also Ward Cunningham, the father of wikis, Larry Sanger, the original Nupedian, and others, are also nicely drawn. Throughout, the imperative to create formative connections both between and for a networked community remains consistent.

In the middle of the book is a 50-page chapter that draws together various central issues but also covers a series of incidents and events, policies, and internal practices. It exemplifies the book’s strength and weakness. On the one hand, it delineates clearly the development and coordination of various technologies into a fully viable site for widespread public participation, production and usage. On the other, the recurrent attempts to make sense of these developments in broader social and cultural terms are frustratingly lacking. That sense-making is not necessarily required in an historical account, of course, but the recurrent suggestion here of metaphors and models to interpret the cultural significance of Wikipedia only highlights the failure.

Subsequent chapters are event-driven, showing how Wikipedia continues to be shaped across languages, in the face of different competitors and a changing web and mediascape, and finally how the project is managing growth. The book concludes with questions about the scaling of the project and the persistence of its originary values of community. Will increasing numbers of participants continue to do good and trust each other? Will the result, the “Wiki-ness” of Wikipedia, endure? And crucially, how should the stewards of the foundation, like Wales, respond to the shift from being like a village where everyone knows each other to “more of a faceless impersonal metropolis” that is “driving the adoption of hard, cold, binding policies” (176)?

As this challenge for the future suggests, the book dwells on what we have come to describe as the wisdom, collaboration and dynamics of crowds. Yet detailing the Wikipedia case hardly settles the matter: did crowds create Wikipedia, or did Wikipedia create the relevant crowds? More intriguingly, the book seems to question the relationship between the individuals who developed Wikipedia and the crowds so regularly invoked by them as responsible for its growth. Are crowds possible, that is, without individuals orchestrating their collaboration?

Lih makes clear that the answer, at least in terms of the history of “the world’s greatest encyclopedia,” is no: remarkable, innovative leaders were as indispensable as the crowds themselves. In his foreword, Wales underscores the socializing power wrought by technology and the World Wide Web. But he doesn’t pursue it, possibly because a fuller explanation would involve him directly in ways that run somewhat counter to the better publicized tenets of community and collaboration. Perhaps the ultimate lesson of Wikipedia’s creation and continuing growth is that celebrating a global community of contributors also requires recognizing the key leaders able to envision the scope and direction of that collaboration.

***

The Wikipedia Revolution also foregrounds another question. Going forward, how will we write – or, more to the point, research – histories of the digital age? The matter of research materials is a major concern: what will be the digital archives of sites and other projects that change and transform themselves so quickly? Again, one answer to this returns us to the issue of individual rather than collective voices. Invaluable to Lih and to us, for example, is Larry Sanger’s 16,000+ word account of the “Early History of Nupedia and Wikipedia” from 2005, available at Slashdot.com. At least for the near future, when such individuals remain alive and available to provide their recollections, they will remain vital resources. Beyond that, particularly as access to and preservation of digital projects fades, the matter becomes murkier.

Friday, July 3, 2009

Ruins of the Second Gilded Age


Amazing photo essay by Edgar Martins from the NYTimes Magazine (July 5, 2009) on what the US real estate boom has left behind. It's a strangely unsettling group of images, both unavoidably nostalgic for pre-bust days of irrational expansion and eerily still (and depopulated) in their uncertain drift toward the future. Gives pause about how far we have yet to go to undo the work of those heady days. Thanks to David G for calling early attention to this.
http://www.nytimes.com/slideshow/2009/07/05/magazine/20090705-gilded-slideshow_index.html

Thursday, July 2, 2009

A Broadband Plan for Whom?

Julius Genachowski was finally confirmed last week as the Chairman of the FCC. Today, he presented "The FCC and Broadband: The Next 230 Days." A bold action plan for expanding broadband across the country? Maybe. Eventually. Right now it looks more like a primer on bureaucracy and abstract project management. (The presentation is available at http://hraunfoss.fcc.gov/edocs_public/attachmatch/DOC-291879A1.pdf)

Two brief thoughts. First, recall the report on global broadband penetration from Strategy Analytics in mid-June. The United States ranked 20th, with 60% household penetration -- just after Estonia and Belgium and just in front of Slovenia. The top three spots went to South Korea, Singapore and the Netherlands with, respectively, 95, 92 and 88% penetration. The report also concluded that U.S. prospects aren't improving: the forecast is that the United States will fall to 23rd by the end of 2009. As if we needed a further reminder that mid-twentieth century American institutions -- the auto industry, medicine, and, here, technology -- are no longer automatically pre-eminent in the world. (http://www.strategyanalytics.com/default.aspx?mod=PressReleaseViewer&a0=4748)

Second, at a time when free-market principles are justifiably being questioned in various industries, the priority of the government program appears to be encouraging private-sector development through billions of dollars in stimulus grants and subsidies. A lot of funding, to be sure, at least for the companies being subsidized, but how coordinated will the resulting development of broadband actually be? One of the mitigating factors in cross-national comparisons of broadband penetration is the size of countries -- expanding technologies in Singapore and the Netherlands is obviously an undertaking of a much smaller order of magnitude than in the U.S. Yet isn't that exactly the reason why there needs to be an overall strategic effort rather than one left to a market that has proven itself dysfunctional and unable to grow in a concerted way in the past? I'm not suggesting an entirely top-down government program. What does seem to make sense, though, is a plan that puts the larger public first and the vaunted entrepreneurs and technology companies, who obviously have heretofore not seen an economic motivation in expanding broadband across the country, second.

On Dying Young: _Public Enemies_ and Michael Jackson

It may be the unavoidable glare of the Michael Jackson media juggernaut, but I saw the new movie about John Dillinger, Public Enemies, and immediately believed it was a fitting release for this exact cultural moment.

Why? The film ends with Dillinger's storied killing by FBI agents outside the Biograph theater in Chicago in 1934. Or very nearly ends. A coda follows in which one of the lawmen responsible for the killing makes a touching visit to the bank robber's love interest to share with her the dying man's whispered last words. That sentimental closing moment underscores how the film wants Public Enemy #1 to be remembered: as an outlaw with a heart of gold, who genuinely loved a woman and sought to escape with her from the midwestern life of crime.

In a way, this is the male, gangster version of the whore with a heart of gold story. But it's also a story that has changed over time. Manhattan Melodrama, the 1934 film starring Clark Gable as a gangster that Dillinger saw that fateful night at the Biograph, was equally a production of its time. Gable dies in the end in the electric chair but the closing is really about his childhood pal, now the DA, played by William Powell, and the woman they both loved, portrayed by Myrna Loy, affirming their marriage and future together. That sort of affirmative Hollywood ending was mandated in productions of the time, particularly those involving gangsters, and at least tempered the sympathies of viewers for criminals and their misdeeds.

In the current film, director Michael Mann has built our contemporary Dillinger to be a legend -- or rather a larger legend than he was. Dillinger is played by Johnny Depp with an angular cool, yet Public Enemies offers little insight into his motivations for the string of action sequences that constitutes it. His lawman nemesis, G-Man Melvin Purvis, is similarly undifferentiated as played by the increasingly ubiquitous Christian Bale. (This lack of dimension becomes all the more conspicuous in a closing title, where we learn Purvis not only quit the FBI a year after Dillinger's demise but then killed himself some two and a half decades later.) That lack of character dimension leaves only the action, but it also allows the broad, even archetypal contours of the outlaw story to be foregrounded.

Outlaws can occupy a special social status between the people and the law or legal institutions and authority. Allowing everyday citizens to keep their money while taking the bank's funds during a robbery is only the most obvious way this status is presented. The recognition of the power of a nascent national media by the manipulative FBI director, J. Edgar Hoover, makes clear how those claiming the legitimacy of the state or police must use the press to battle with so-called outlaws for the public's hearts and minds as much as with tommy guns. Especially in hard economic times, when the political and economic system is under duress, that battle for public confidence and the outlawry it facilitates is vital.

If the enduring fascination with Dillinger and his status as a Public Enemy is only burnished by the new film, viewers in July 2009 may exit the theater thinking here was a charismatic outlaw who died too soon at the hands of a legal but not altogether moral order. The coda with his tearful lover is crucial because it emphasizes that he died too soon. For the two of them but also for us.

Dying young has a long history in Anglo-America, from Housman's poetic athlete to the more layered celebrity deaths of the last half-century. Marilyn Monroe, James Dean, Jim Morrison, Jimi Hendrix, and Elvis are among the performers whose early deaths are still cause for commemoration. More to the point, these deaths have enabled our individual and collective memories of these performers to remain fixed on their youth -- its beauty and its rebelliousness. The most common comparison made with Michael Jackson is Elvis, which is appropriate both for the rarefied cultural heights they occupied and because they were ultimately not so young when they died (Elvis 42, MJ 50).

Focusing on the past youth of the aging or dead is ultimately an act of wish-fulfillment for present-day onlookers seeking to arrest or even deny the passage of time. Amplified by the echo chamber of popular culture, such an act can also become an important affirmation by the public not only of its existence but its own vitality. That such affirmations so often turn on perceptions of beauty and rebelliousness, of the creative grace and outlawry of those who are gone, is being evidenced yet again today.

Saturday, June 27, 2009

Michael Jackson and Media Time


Amidst the wall-to-wall coverage of Michael Jackson’s death have been frequent observations about his transformative and ongoing influence. A regular assertion has been that Jackson broke the color barrier at the fledgling – and initially all-white -- MTV in the early 1980s. Another observation, both self-evident and self-fulfilling when communicated by global media news outlets today, concerns the performer’s longstanding worldwide popularity during the crucial globalizing years of the 1980s and 1990s.

Less thoughtful attention has been devoted to why this historical significance matters. Breaking down racial barriers and crossing international borders are important, of course. Commentators from Eric Lott to George Lipsitz have written about the centrality of African-American performance and creativity to American culture that, in turn, have gone global with the proliferation of American cultural forms and products. Likewise, our efforts today to make sense of changing technologies and intensified worldwide cultural connections can only benefit from fuller understanding of what occurred a generation ago as new technologies, driven by satellites and cable television, and corporate consolidations enabled a heretofore unprecedented wave of media globalization.

One of the great anxieties of the time involved the homogenization of media content that would inevitably occur around the world. Homogenization here was a code word for Americanization and typically was feared as a parallel on the content side to the industrial consolidation that was taking place in the growth of media conglomerates. While such concerns have persisted, many critiques since have developed more nuanced readings of the media landscape that focus on the complex interplay between global and local forces. Scholars like David Morley, Kevin Robins and Annette Sreberny have adeptly sought to reconceptualize the geographies of media emergent since the 1980s.

Corresponding to this recasting of media space should be a rethinking of media time. The surplus of digital media content, from 24-hour news to nearly limitless audio and video internet downloads, has produced a landscape that is marked by at least three temporal elements: media impressions are constant (we have continuous input from multiple platforms), ephemeral (what crisis is CNN covering today) and less anchored in time (think TiVo). As importantly, understanding media over time – that is, media history – at least from the 1980s until today turns perhaps most dramatically on a fragmentation of attention and consumption across an increasing number of channels and platforms. Michael Jackson, we have been told, was among that last generation of figures to dominate cultural experience before media fragmented (consider John Rash’s piece about the passing of Ed McMahon, Farrah Fawcett and Michael at adage.com: http://adage.com/mediaworks/article?article_id=137601 ).

Yet the coverage of Jackson’s death across channels and platforms today suggests that that fragmentation may not be so complete. Nor do I believe that the recent all-MJ, all-the-time coverage is simply an acknowledgment of a towering figure who pre-dated the diffusion of media. Rather, what the widespread, cross-channel coverage suggests is an interplay between the admittedly fragmented media worlds we variously occupy and participate in and what can still emerge as a more unified, event-driven media landscape. Most often those events are personal tragedies – think Jackson or Princess Diana – or political occurrences, like the Iranian election and its aftermath, but they can also be more benign, like the annual American secular holiday, the Super Bowl. As events, they are indeed fleeting. Much like the interaction between global and local in media space, though, these occasional unifying events balance the ongoing, everyday dispersion of attention and consumption in media time. Their interaction and balance give shape to media time and, in the process, to our sense of shared experience and community in the here and now.

Saturday, June 20, 2009

A further take on Pirates and Privateers

Obviously my extended take on piracy, drawn in contrast to aspects of terrorism like anti-capitalism and mediated visibility, is only one of various possible approaches to the subject. In a fascinating piece of historical sociology, Bryan Mabee of the University of London focuses on piracy in terms of historical changes in the contexts of war and violence.

"Historical accounts of private violence in international relations are often rather under-theorized and under-contextualized. Overall, private violence historically needs to be seen in the context of the relationship between state-building, political economy and violence, rather than through the narrative of states gradually monopolizing violence. Pirates and privateers in late-seventeenth and early-eighteenth century Europe were embedded in a broader political economy of violence which needed and actively promoted 'private' violence in a broader pursuit of power. As such, the de-legitimatization of piracy and privateering were the consequence of a number of interlinked political economic trends, such as the development of public protection of merchant shipping (through the growth of centralized navies), the move away from trade monopolies to inter-imperial trade, and the development of capitalism and industrialism. Present forms of private violence also need to be seen as part of a broader historical dynamic of war, violence and political economy."

Abstract for "Pirates, privateers and the political economy of private violence," _Global Change, Peace & Security_, Volume 21, Issue 2 (June 2009), pp. 139-152.

Terrorists versus Pirates


Introducing a book published four years ago, I acknowledged the quandary of defining terrorists and terrorism. Not only is one man’s terrorist another’s freedom fighter; taking a longer view of which actions or groups should be understood under the label of terrorism and which should not tends to cast as much light on those doing the labeling as on those committing the acts. As an analyst, I was thus left with two poor options: plunging into a definition that would seem tendentious and emphasize certain aspects of action or ideology, or sidestepping such emphasis and thereby failing to provide a useful critical frame for the subsequent discussion.

There are patterns, though, that have emerged over time and multiple studies of different violent actions labeled as terrorism. For the sake of orienting my readers, I condensed these to five:
1. The deliberate deployment, or threat of deployment, of violent action against persons or property;
2. The production of anxiety and fear, and the disruption of social routine, by this action;
3. The pursuit of this action by individuals, sub-state groups, or states motivated by criminal, political, or religious reasons including the desire to demonstrate their power;
4. The intimidation of, or impact on, individuals who are neither directly involved in the violent action nor the primary targets of the actors’ motivation; and,
5. The often clandestine or semi-clandestine nature of the action and responsible actors.

While a useful starting point, what’s largely left out of this list is the importance of anti-capitalist violence to understandings of terrorism, at least in Europe and the U.S. since the late 19th century. From the Haymarket bombings and Italian anarchism to Bolshevism to the Red Army Fraction and ultimately Al Qaeda’s bombing of the World Trade Center, the violent actions mounted against property, especially private property, and the capitalist system it represents offer a continuing if quite varied thread linking so-called terrorists. As with any broad categorization of terrorist groups, of course, there are exceptions; the point is that violence or its threat against property and the capitalist system organized around and protected by nation-states has been a mainstay of the actions labeled as terrorism over the last century and a half.

The other aspect of terrorism worth highlighting is more familiar: the media. My contention has been that the production of anxiety or fear and the intimidation of individuals distant from direct involvement in violent action require mediated communication of information about that action to be effective. In other words, for a violent event to be terrorism, that is, qualitatively different from just another violent event (as traumatic as that may be), it requires a certain framing, communication and interpretation of the event for those who did not experience it firsthand. Depending on one’s political leanings, then, parallels might be seen between a capitalist media system organized around and protected by nation-states and the resulting labeling of actions or events as terrorism. Put differently, as I did in Terrorism, Media, Liberation, media communication heightens the visibility of violent actions even as the violent groups themselves seek, at least most of the time, to remain invisible.

These two highlighted issues, anti-capitalism and mediated visibility, seem relevant not only for groups designated as terrorists. Both capitalism and media also raise questions about that other spectre of non-state group violence, pirates. Piracy, of course, is millennia old. Indeed, its arguable golden age came in the seventeenth and eighteenth centuries, when capitalism was emerging – along with nation-states and global trading systems.

Today, piracy has evolved along several tracks. The Somali pirates in recent headlines have primarily seized ships in the Indian Ocean and held them, and their kidnapped crews or passengers or cargo, for ransom. Elsewhere, in the waters around Southeast Asia, for example, the physical theft of cargo is more common. In both instances, the legal language around piracy entails violence and, tellingly, the use of “war-like acts” of violence or criminality – in other words, acts reserved in the inter-national system for nation-states. The status of “non-state actors” and their violation of the right to employ violence ordinarily the monopoly of states is as critical for the legal characterization of pirates as for terrorists.

(As a sidenote, the fuzzy status of airplane hijacking is intriguing here: there would seem to be a parallel with the seizure of ships at sea, for which the term piracy is typically invoked, yet that same term is rarely if ever used for actions against airplanes. This might have to do with the apparent distinction between financial and political motivation for such action. The French use the term pirate de l’air for the hijacker of a plane, and pirate de la route for the hijacker of a truck or bus.)

Many contemporary pirates are adept at the use of modern technology. But the more compelling connection between technology and piracy involves the illegal infringement of copyright or theft of copyrighted material. Digitalization has enabled the ready manipulation and circulation of such materials, with the internet, in particular, allowing unauthorized sharing. Probably the most famous example of this process was the appearance in the late 1990s of Napster and the illegal downloading and proliferation of audio files in mp3 format. Despite the successful shutdown of Napster by the Recording Industry Association of America in 2001, the episode altered the basic business model – and dominance by a handful of longtime corporate players – in the music industry. In the decade or so since, the film industry has feared similar downloads of its productions, causing, as some see it, a delay in the implementation of widespread internet- or broadband-delivered digital film.

Like piracy on the seas, the piracy of copyrighted material has a longer history. The latter phenomenon dates at least to the early 17th century – again, the time when capitalism and the private property underlying it were assuming what are for us recognizably modern forms. The linkages between these notions of private property and intellectual property remain underdeveloped, however. Moreover, as my friend Martin Roberts has put it to me, both kinds of property speak to the emergence of commodities created and regulated not only by capitalism but also by the legal orders of nation-states.

Piracy of both kinds also relies on minimizing the visibility of its perpetrators. Terrorism, recall, depends for its success on the ever-increasing visibility of events in order to influence those without immediate proximity to them. Pirates, on the other hand, not only employ stealth and invisibility in order to carry out their actions but generally seek advantage by denying visibility or awareness to those beyond immediate events. Anthropologist Carolyn Nordstrom and others have referred to the enormous scale of the “hidden economy” existing outside or on the margins of the official capitalist economy, and many groups identified as pirates are directly involved in its operation. The role of media in increasing or denying visibility thus becomes crucial for the continuing success of pirates.

The juxtaposition of terrorism and piracy turns on this different relation to visibility. Even more important, though, is their shared if still different relationship to capitalism and the nation-state system underlying it. Terrorism, at least of the Al Qaeda strain, is, in its attack on and critique of capitalism, in some ways a double for that ideology. Piracy is different. As an extreme, violent version of the ever-growing pursuit of self-interest and private control of capital, piracy is finally guided by the same logics of political economy, however exaggerated. It is a shadow of capitalism. In this age of increasing circulation of global capital – not to mention of global media – the renewed prominence of that shadow should be unsurprising.

Saturday, May 30, 2009

Are We not Men? _Terminator Salvation_


*****SPOILER ALERT: The following comments take off from revelations made toward the end of the film that answer questions about character and narrative posed throughout. Readers not wishing to know these before viewing the film should not read further.*****

Near the end of Terminator Salvation, Marcus Wright (Sam Worthington), the major new character the film introduces to the franchise, has his role explained to him (and us) by a computer. He is not only a cyborg, a revelation already made dramatically in the course of the narrative, but one whose purpose was to lure Connor to SkyNet, where the resistance leader could be killed. This purpose is explained by a computer simulation of Dr. Serena Kogan, played by Helena Bonham Carter, who had fifteen years earlier persuaded Wright to donate his very human body for medical research. Wright’s enraged reaction includes tearing out the SkyNet control chip implanted in his neck and smashing the computer video display. He then comes to the aid of Connor, who is being assailed by the next-generation, Schwarzenegger-faced Model 101 Terminator. After dispatching the machine and fleeing the headquarters, Wright offers his human heart to the mortally wounded Connor so that the resistance leader can fight on.

Wright is arguably the central character in Terminator Salvation. (Bale’s stilted acting as John Connor makes this an easier claim to defend, though that’s not my point.) The film begins with Wright on death row in 2003, signing over his soon-to-be-executed body for research to Dr. Kogan. His appearance in 2018, the post-apocalyptic moment of principal action in the film, is not immediately explained, and his movement through the narrative parallels, while sometimes crossing, that of John Connor up to the scenes at SkyNet headquarters where the revelations about him and the final conflicts with Connor and the new Model 101 occur. The film’s poster shows only him and Christian Bale’s John Connor, the grown-up resistance fighter whom viewers have come to know over the three earlier films and the current TV series. Reportedly, Bale had even initially been approached by the director, McG, to play the Wright character before convincing the filmmaker that he should be Connor.

That Wright is a cyborg comes as only a minor surprise midway through watching the film. What interests me more, and is finally more telling, is the later revelation that his very purpose was programmed by SkyNet. This explanation is neat and plausible. It makes sense within the narrative and allows for the subsequent closure achieved through Wright’s rescue of Connor from the Terminator and his donating his heart for transplant. It also echoes other films – one that comes to mind is the 2004 remake of The Manchurian Candidate, in which Bennett Marco, Denzel Washington’s character, who serves throughout the film as the viewer’s surrogate and seeming pursuer of villains, is himself revealed in the end to be the brainwashed assassin.

It also assures that the film remains summer schlock rather than edgier sci-fi (or, to borrow the evocative name of a club in the first film, tech noir) fare.

Why? In the Terminator franchise universe, machines are bad and humans good (some stupid, some mean, most victims, but all, as a species, good in the face of SkyNet’s genocidal evil). After Wright is revealed as a cyborg, the question remaining about him, who we’ve been assured in the prologue was a human, is how he became a cyborg and for what purpose. The first possibility, that SkyNet made him for the subterfuge, is the one laid out in the film. The second possibility, the one not taken, has Wright as a cyborg made by men for the purpose of aiding the resistance. This would be akin to the human reprogramming of the Model 101 Terminator sent back to help Sarah and John Connor against the T-1000 in the second film.

Such remaking of a human as part-machine by other humans would also be importantly different from the simple reprogramming of a machine. It would complicate the clear boundaries maintained in the franchise between men and machines. Summer Glau’s female Terminator in the TV series, Terminator: The Sarah Connor Chronicles, treads on similar ground, occasionally expressing curiosity about human emotion and feeling, particularly toward the teenage John Connor. The Model 101 Terminator in the initial sequel, Terminator 2: Judgment Day, also evinces some arguably human traits despite explaining, at the end of that film, that he can never cry. If the boundary was pushed in the 1991 production, it may be the larger-than-life Schwarzeneggerian exception that proves the franchise rule. On today's film screen, especially, no such blurring occurs.

Even more importantly, the mechanizing of humans by humans avoids any suggestion that men are striving to become more machine-like in ways that parallel how machines, in the continuing evolution of Terminator models, seek to become more like men (at least in appearance). Terminator Salvation therefore avoids any hint of convergence of man and machine, at least from the human side. As a result, the film sidesteps the knottier and ultimately more provocative questions posed by more thoughtful science fiction, preferring to fill the current production with the pyrotechnics of future war.

Blade Runner, particularly in the variations offered through its multiple versions, is the consummate example of a film that plumbs the depths of what convergence of man and machine might mean. It is also probably an unfair basis for comparison. Yet other productions demonstrate how multi-dimensional characters can complicate otherwise clear-cut oppositions between man and machine. Consider, in the first Matrix film (1999) and setting aside the sophistication of the guiding conceit of the matrix itself, Cypher, the character played by Joe Pantoliano, who betrays his fellow rebels in hopes of returning to the painless ignorance of simulated reality. Again, this is perhaps another unfair comparison.

My point nevertheless is that reworking even relatively minor elements could add layers to character and narrative. For a summer action film from an extraordinarily profitable franchise, that may well be a non-issue. But with two more Terminator films in a projected new trilogy set after Judgment Day, that kind of richness could only be for the good.

Sunday, February 22, 2009

_Wall-E's_ Debt to D.W. Griffith


To describe the gentle movements of Wall-E’s eponymous robot hero, critics have regularly invoked the poetic physicality of Charlie Chaplin and, occasionally, Buster Keaton. This seems particularly apt in the film’s opening thirty minutes, when the robot’s trash-compacting movements around a desolate Earth are synchronized with music and prove expressive to the point that dialogue is unnecessary. The parallel continues to be relevant as Wall-E leaves Earth in romantic pursuit of Eve, a vegetation-seeking robot, calling to mind Chaplin’s City Lights, in which the Little Tramp falls for a blind flower girl. That masterwork, despite appearing in 1931 after the movie sound era began, was a silent production dependent on its characters’ visual expression of emotion.

If the comparisons to silent film virtuosi resonate in celebrating the gestural subtleties of Pixar’s animation, they lack historical depth. No critics I have read have pursued the parallels they draw with Chaplin or Keaton in contemporary reviews. To a certain degree, that’s fine: the point of mentioning these past giants is to celebrate a technological updating of the timeless capacity of cinema to convey characters’ feeling and emotion through their visual movement alone.

History is important to Wall-E, though. This is true not only when considering the film’s comments, offered from seven hundred years in the future, about our present destruction of the environment. The past also operates more complexly in the ways the film itself tells its story. But neither Chaplin nor Keaton is the key to understanding this history. The key figure here is David Wark Griffith.

D.W. Griffith is best known for having directed The Birth of a Nation, the seminal 1915 film that was at once a momentous step forward for narrative filmmaking and a vile racist account of the Civil War and Reconstruction whose heroes were the Ku Klux Klan. Yet despite this unpardonable offense in visualizing invidious racial politics, Griffith was also a father, indeed a founder, of narrative filmmaking as we have inherited it. He was a master at consolidating the advances wrought by others and creating his own innovations in visualizing more and more complex stories on screen. The use of parallel editing to build suspense while simultaneously tracking separate dramatic developments and the visual development of complex psychological characters are but two of the legacies of the hundreds of films, mostly shorts, made by Griffith.

While shaping film narrative in the early, formative years of the twentieth century, Griffith was nevertheless an unalterably nineteenth-century man. He had been born in Kentucky in 1875, the son of a former Confederate army officer who had been a hero in the Civil War. Another way to conceive of this connection is to recall that fewer years separated the appearance of The Birth of a Nation from the events it recounted than separate us in 2009 from the end of World War II. While much else has changed in renderings of the past and the shaping of collective memory, of course, the point is that such a defining event still held great sway in the popular imagination. Perhaps more generally important was the sentimental cast of mind on which Griffith relied for constructing coherent stories about the world and especially the historical past.

Returning to Wall-E, the film appears to contain a direct if fleeting nod to Griffith. Midway through, after discovering soil left by our intrepid robotic hero on the Axiom spaceship, the previously inactive human captain requests a computer lecture about the soil and planting. Various images flash before his eyes, and ours, but the very first is recognizable from its place in film history. It shows a single man, bag slung over one shoulder, walking slowly and spreading seeds through a field. The image is drawn from A Corner in Wheat, a 1909 short film made by Griffith.

The short was adapted from Frank Norris’s 1902 novel, The Pit. The titular “corner” is the control that one mogul seeks over the world’s wheat market, and the film dwells on the contrast, developed through skillful editing, between the profligate lives of financial speculators and the sufferings of the poor who cannot afford bread when wheat prices are artificially inflated. A cautionary tale for the turn-of-the-century Progressive Era, the film critiques modern urban excess in favor of a more equitable agrarian past. Pictorially, the closing pastoral image of the film that appears in Wall-E is itself a reproduction of Millet’s 1850 painting, The Sower, which idealized a peasant farmer.

Griffith was nostalgic here in the truest sense of that word: he yearned for an imagined past, particularly a past home, that never was. Nostalgia is a timeless impulse, of course, and it is fair to observe that stories tinged with nostalgia are, if not universal, largely unbound by historical era or place. Cinema has always been about nostalgia – some film philosophers claim that the medium’s defining condition is the celebration of the continuity of the world through viewing actors and experiences captured on celluloid (or, now, disk) that we know existed in the past. As a late nineteenth and early twentieth-century invention, however, cinema has most consistently trafficked in nostalgia for a pre-modern, pre-urban, pre-mechanical past that can be juxtaposed with the viewer’s present.

Besides the happy ending to Wall-E and Eve’s romance, the story in the Pixar production concludes with the hopeful return to Earth – to “home,” as the captain repeatedly says – of the previously inactive human occupants of the Axiom. The planet has become habitable again, they believe, based on the successful growth of a single green plant. They excitedly elect to pursue their future by returning hopefully to their past, seeking to re-create home in a world that they can now only imagine, or have imaged for them by technology.

If Wall-E is primarily a sentimental romance in the tradition of City Lights, its background story of societal change extends the tradition of A Corner in Wheat to address the collective need of humans to get back in touch – here, literally – with the physicality of their surroundings, notably the Earth itself. That latter story, also told a century ago by Griffith, remains a quintessentially modern one: our ongoing quest for newer and better technologies must be balanced by an uncertain fascination with the consequences of their use that drives us to embrace the imagined visions and values of the past. With its combination of breathtaking animation and adroit storytelling, Pixar offers, to many, the best of a new generation of technological filmmaking. Yet even as the medium continues to evolve into the twenty-first century, and even to tell stories about the twenty-eighth, its latest production suggests that cinema remains squarely rooted in its own imagined past of nostalgic returns and hopeful new beginnings.

Friday, February 13, 2009

_What Would Google Do?_ PowerPoint


I heard Jeff Jarvis give a brief talk last night about What Would Google Do? The book deserves full reading and consideration -- there's much in it both to admire and critique -- but here's a provocative if skeletal summary presentation of some of its ideas.

http://www.slideshare.net/jeffjarvis/wwgd-the-powerpoint?type=presentation

Monday, February 9, 2009

A Noble Terrorist?: Thoughts inspired by _Seven Days to Noon_


If Alfred Hitchcock had made The Naked City in London, and substituted Cold War atomic politics for domestic criminality, the film would have been something like this....

Directed in Britain in 1950 by John and Roy Boulting, Seven Days to Noon tells the story of the pursuit of a scientist who has threatened to explode an atomic bomb unless his government agrees to renounce atomic weaponry. Led by a Superintendent Folland of Scotland Yard’s Special Branch, the investigation follows the scientist, Professor Willingdon, from his rural research center to London. Over the seven days stipulated by Willingdon in his ultimatum letter, the film traverses London as his pursuers slowly track him down. The city, eventually evacuated as a precaution, ends up appearing surreally empty. On the seventh day, the police find the scientist, kill him in a moment of confusion, and disarm the bomb.

Like many thrillers, the narrative both follows the Scotland Yard investigation of the plot and functions as an investigation itself. Of what? Visually, of postwar London. Shot on location, at times in overtly documentary style, and occasionally incorporating stock footage of city life and police activity, the film dwells on the diversity of the city’s places and inhabitants. This is conspicuously highlighted in the unorthodox opening credits, the words of which move swiftly across the screen from right to left over a virtual travelogue of the city. If the film’s first third is taken up with the presentation of plot and the initiation of the police investigator’s search for the would-be bomber, the second third is given over to the city as a lead character – both the space controlled by the police who search it thoroughly and an unending array of places in which the would-be bomber can hide. We consequently see hotels, barbershops, bars, gambling halls, and rooming houses; that is, the heart of the people’s city.

In a way, this evokes a grand tradition of using the film camera to penetrate and illuminate the ordinarily unseen, marginal spaces of the modern city. Think of German films of the Weimar period, as Tom Gunning has noted so incisively in writing about Dr. Mabuse, Der Spieler, in which the contest between good and evil, or at least official order and criminality, is fought in part over the control of vision of the urban world. Which parts of the city are illuminated and made visible and which are left in darkness and invisible to official eyes and the film camera? To be sure, Seven Days to Noon is not a film on par with Dr. Mabuse. Yet part of what’s fascinating in the Boultings’ production is how exactly it reorients some of the central ideas represented so profoundly by Lang and others in the long cinematic tradition of representing the modern city.

Instead of critiquing the often arbitrary distinction between official and under-worlds by showing their similar motivations and values, Seven Days to Noon dwells on the urban spaces of ordinary people, including, yes, less flattering sites like dance halls and bars. That these spaces enjoy pride of place, and warrant neither parallel with nor justification by the physical monuments of official London, suggests that what is ultimately at stake in the atomic age is not states or government buildings so much as people and everyday spaces and lives. While hardly audacious, such humanism is refreshing for a genre, the thriller, more often characterized by superficial action, tinny political motivations or fashionable pessimism.

The final third of the film dramatizes the evacuation of the city ordered by the Prime Minister. It is here that the film’s historical moment of production warrants comment. Made in 1950, Seven Days to Noon seeks to evoke the spirit of solidarity and grand purpose of the recent wartime past as still relevant to the new threats of the atomic age. This is most clearly evinced in the willing transport of the citizenry out of town, which almost surely would have appeared as a reassuring reminder to Londoners that they remained capable of responding successfully to the new atomic threat just as they had the earlier one of Nazi attacks. The more visible outcome in the film is a stunningly empty city (imagine a precursor, made a half-century earlier, to the arresting opening visuals of Danny Boyle’s 28 Days Later).

Again mindful of how the city had so recently been a battleground, the images of uninhabited but mostly rebuilt and intact buildings also serve as a kind of postwar tribute to the resolute British spirit. Ultimately, though, the resulting vision of empty streets suggests a kind of stand-off in the struggle for visual control of the city between the police and the bomb-wielding professor: while the emptiness literally results from a state-administered evacuation, the necessity for that action has been driven by Willingdon. Even more, the scenes are strangely haunting in their suggestion of the potential consequences of the scientist’s threat fulfilled.

It is here that we come to the question of terrorism. In the era of mass media, terrorism can be understood as an attempt, for political ends, to control information, narratives, images, knowledge and feeling through the intimidation and fear of individuals remote from the physical acts being threatened. As I’ve argued elsewhere, such a formulation recognizes the importance and complicated operation of media for communicating distant experience. More interestingly, this model suggests that, if judged by the same standards, mainstream media makers might fall into the same category of provocateurs and intimidators. Consider Hitchcock placing a ticking bomb on a loaded city bus: on-screen, the act drives the narrative of a film about the evil done by those disrupting public order (the film is Sabotage, from 1936); off-screen, it manipulates the emotions of audience members safely on the other side of the screen, whose viewing of the commercial film tends to set them in a particular political position. So is Hitchcock a terrorist? Or is the Eastern European villain, Verloc (played by Oscar Homolka)?

One answer is, of course not: neither threatens real people with actual violence. Yet once we recall how central to our conception of terrorism is the intimidation, through media, of people at a distance from the threatened violence, the answer grows murkier. To label an individual a terrorist, or an act terrorism, indeed requires acknowledgment of the role and reach of media. It also demands a sensitivity to the political character of words and behaviors. We tend not to think of mainstream films or other commercial media as political in the same way as the overt propaganda of some zealots or groups. For some, though, commercializing or commodifying media and entertainment is entirely political – socializing and pacifying viewers, feeding the maw of consumers, serving as an opiate for the masses.

Where this potentially leads is to a questioning of one’s perspective and, especially, the consistent privileging of some perspectives over others. How do we approach commercial films, for instance, or even news? As somehow politically neutral or as manifestations of a specific political and economic structure? As means, moreover, for a particular shaping and packaging of information and an emphasis on some images and narratives over others? These questions should not be understood as leading to the conclusion that all perspectives are somehow equivalent, morally or otherwise. We need to be able to differentiate perspectives and politics and the media practices communicating them without that kind of reductionism.

The same logic of privileging, of making visible versus keeping invisible, pervades individual narratives and the actions and motivations of the individuals occupying them. How do we compare the actions of Superintendent Folland and Professor Willingdon? More fundamentally, how do we make sense of the contrasting assumptions about atomic weapons held by the British government in Seven Days to Noon (in the characters of Folland and especially the fictional Prime Minister) and by the scientist? That the film doesn’t delve into these different understandings and motivations is partly a function of its being a thriller more interested in the chase itself than in the background rationale. Yet the very absence of sustained elaboration of the reasons for Willingdon’s actions, of his presumably humanist convictions and beliefs, is illustrative of how media communications rely on partial or fragmentary or even negligible accounts of why individuals behave politically in the way they do.

A frequent critique of commercial media treatments of so-called terrorism is that the details of the perpetrator’s motivations or politics are left undeveloped or written off in broad strokes as irrational, villainous, savage, or simply evil. That approach may heighten dramatic conflict, especially if the conflict is cast as good versus evil, but it ignores the reality that even perpetrators of ghastly violence are people with pasts and thoughts and feelings that presumably have contributed to their complex decision to take extreme actions. Sadly, the neglect of this sociological and psychological complexity is often colored by racism or xenophobia or other biases based in cultural difference.

In the Boulting brothers’ film, Professor Willingdon is a fascinating test case because he embodies the very values that the state itself seems also most to represent or care to defend: he is a loving father and husband, a devoted civil servant, a well-educated producer of knowledge for the greater good. If he is a genius, he’s hardly an evil one. In fact, his motivation in issuing his ultimatum might appear as an expression of the most basic of liberal humanist values, the preservation of self and society. Though he finally comes off as absent-minded and, as a research scientist, naïve about the geopolitical realities of the world, his might be seen as a virtuous, even noble reaction to an increasingly uncertain world.

To speak in such terms of Willingdon’s nobility requires a certain perspective on individual virtue, political action and humanist values. One might imagine, for example, a more militant position in which the professor’s thinking fails utterly to account for the Cold War threats to the survival of Britain and its inhabitants. The humanist values he seems to promote are secondary, even inconsequential, if the society does not adopt a policy of realpolitik and arm itself to counter the enemy’s atomic build-up. Naivete trumps nobility in that view.

The larger point is that characterizing intentions as noble, virtuous, or otherwise marked by good principles is largely left to the eye of the beholder. Like the idea of terrorism itself, which has been elastically deployed by many governments over more than two centuries to describe all manner of enemies and villains (recall how Nelson Mandela’s African National Congress was once on the U.S. terrorist list), nobility means different things from different perspectives. At the least, most if not all individuals labeled by others as terrorists have motivations that they themselves consider good or noble. They want to remake society according to their own vision, do God’s work as they see it, destroy the world in order to save it. From outside that perspective, such intentions may seem irrational or nonsensical, but internally they cohere and accord meaning to destructive behavior.

Again, this is not to defend violent action or justify any random vision for employing violence or its threat to change the world. Bloodshed needs to be condemned roundly. However, as the issues surrounding Professor Willingdon’s ultimatum make plain, dismissive labeling of destructive behavior or its threat as simplistically irrational or hateful or evil is neither accurate nor useful. More helpful is a cultivated sensitivity to the complexity of motivations driving these actions and the play of perspectives shaping mediated communications about them. While not a great film, Seven Days to Noon presses us to acknowledge that multiple perspectives in media productions necessarily shape the way we approach both good or noble intentions and the political use or threatened use of violence. Oftentimes, as the film demonstrates in its memorable depictions of postwar London, those perspectives turn on which spaces or persons or experiences are made visible and which are kept invisible -- that is, how some perspectives are privileged over and at the expense of others.

Tuesday, February 3, 2009

Remaking the FCC?

The current economic crisis has driven ongoing comparisons between the present day and the Great Depression of the late 1920s and 1930s. Besides Lincoln, FDR has been the leader most often cited as a possible model for President Obama as he faces today's many economic and domestic challenges. Roosevelt’s first hundred days, the fifteen major pieces of legislation he signed during them, and the effective creation of the modern U.S. government as we have come to know it make such parallels instructive if also cautionary tales for our time.

Amidst all the commentary and critique, and in an era of globalizing media and technology, it’s perhaps telling that the Federal Communications Commission (FCC) has not been included in most debates about the legacy of that earlier era. To be sure, the FCC was not part of the institutional broadside launched by Roosevelt against the economic collapse. The commission was established by the Communications Act of 1934, which built, in turn, on many of the provisions of the Radio Act of 1927. Then, as now, radio and communications media more generally were not readily conceived to fall within the government’s purview for supporting social and economic well-being. Media in America have tended, instead, to be understood as a realm of free speech outside of government control.

In practice, that has meant media are left less to the people and more to corporations. As media historian Robert McChesney has persuasively argued, government regulation of radio emerged at a time of public fascination with the medium and foreclosed an immense range of public and political uses of the new technology in favor of consolidated corporate interests. In the process, Congress effectively gave over the “public airwaves” to commercial broadcasters like General Electric, with the FCC providing oversight. (McChesney’s Telecommunications, Mass Media, and the Battle for the Control of U.S. Broadcasting, 1928-1935 is an exceptionally well-researched and revealing account.) It was a defining moment for the convergence of free speech and free markets.

Like many of his other appointees, Julius Genachowski, Obama’s choice to head the FCC, has drawn praise. A former Harvard Law classmate of the President’s, Genachowski served as a legal adviser to the FCC in the 1990s before working for various dot-coms (like expedia.com and hotel.com) and serving as a board member for major media companies (including General Electric and USA Networks). He later counseled Obama during his campaign on media and communications issues. Expectations are that Genachowski will shift the FCC’s priority away from telecommunications providers and toward some combination of enabling technological innovation and supporting increased media user access and possibly rights. One possibility, as The Economist and others have reported, is the creation of subsidies for the promotion of high-speed broadband, particularly wireless broadband.

That’s encouraging news after eight years of FCC myopia, focused on loosening cross-media ownership restrictions and the moral micro-oversight of broadcasting. Unlike the former chairman, Kevin Martin, who was a lobbyist and eager at every turn to enable industries to expand freely, Genachowski will likely push back against the media consolidation enabled by his Bush-era predecessors and encourage diversity in media ownership. The effects should be both in the public interest and ultimately valuable to the marketplace.

Yet one wonders what more might be possible were Genachowski, with Obama’s and Congress’s support, to re-conceive of the FCC as the lead agency in a coordinated effort to upgrade our media and communications landscape. If, as the President has rightly said, our transportation infrastructure needs serious improvement and renewal, what about our digital infrastructure? Again, both regulatory precedent and the public value of unrestricted free speech make such wider-ranging reform unlikely. Even more compelling is the fear of free marketers that their opportunity will be usurped by an expanded government role. These are undoubtedly important concerns. But it is also necessary to remember that the digital revolution through which we’re living will have historical consequences even greater than the economic downturn we’re currently suffering. What might be worth considering in response is a fuller partnership between government and industry that would coordinate media and technological development in more efficient and concerted ways. Especially considering its history at the nexus of government, commercial and citizen concerns over the most appropriate operation of communications in American society, the FCC would be a very good place for that conversation to start.

Thursday, January 29, 2009

Ari Adut, _On Scandal: Moral Disturbances in Society, Politics, and Art_ (Cambridge UP, 2008) [brief review]


Scandal is moral conflict made public. Among history’s most famous examples, casting light not only on the provocation of homosexuality but also on the shifting boundaries of public and private life in Victorian England, was the 1895 trial of Oscar Wilde. Other cases discussed in this new book by sociologist Ari Adut include the American Presidency, reaching from the mid-1800s through Watergate, and the judicial investigations of high-ranking French officials in the 1980s. These events revealed, in various though clearly shared ways, how the changing status of elites and political office was publicly negotiated. Disruptive of everyday life, profane in its violation of accepted standards of behavior and expression, scandal entails a reckoning with a society’s guiding values.

Crucially, whether based in an actual, apparent, or alleged transgression, the scandalous episode is sustained by publicity. Even more, besides having consequences for the individual artists or politicians around whom the conflict swirls, scandal reveals through widespread contestation how society is organized in a given place and time and which politics best define it. On Scandal thus seeks both to develop its thesis by tracking the circumstances around individual cases and to explore their deeper import (sometimes realized in the moment, other times not) for morality and public life.

Probably the consummate (and most familiar) example here is the Monica Lewinsky scandal of 1998, which led to the impeachment of a U.S. President. A volatile mix of sexual wrongdoing and Constitutional crisis, the scandal’s eruption forced attention on the moral ambiguities of the nation’s cultural politics and the degradation of political authority. Moreover, at a time of expanding digital media, the episode foregrounded questions about the appropriate politicization of the personal and the personalization of the political.

What finally distinguishes the events recounted here from a wider litany of social controversies (think, recently, of Bush v. Gore or the Madoff pyramid scheme) is public provocation grounded in private attitudes or behaviors. Often this means a preoccupation with sexuality or nudity, or with the matter of who in society is able to indulge them; it is the moral stakes for the individuals constituting the public that lend scandals their weight. That assertion helps Adut offer a more cohesive and manageable thesis, but it ultimately prevents his model from making fuller sense of the surplus of apparently scandalous - if certainly publicly provocative and morally contentious - events generated and amplified through the echo chambers of contemporary media.