Tuesday, July 28, 2009

Bauhaus: Still Teaching after 90 Years


A fascinating exhibition opened last week in Berlin on the Bauhaus. This innovative school of architecture, design and visual arts was founded 90 years ago at the end of the First World War and was closed in 1933 by the Nazis when they rose to power in Germany. For those 14 years, however, the Bauhaus represented a vibrant interdisciplinary school and community of teachers and practitioners committed at once to re-examining the very roots of Western aesthetics and design concerns and to extending the experimentation and social critique of modernity. The current exhibition at the Martin-Gropius-Bau is the largest exhibition on the Bauhaus in history and comprises more than 1000 objects. More info at http://www.modell-bauhaus.de .

As a laboratory for exploring artistic, educational, and social issues, the Bauhaus rewards exploration from multiple perspectives. For me, as an educator committed to interdisciplinary teaching and learning, the launching of workshops involving talented students and gifted practitioners and thinkers from different fields (Walter Gropius, Paul Klee, Laszlo Moholy-Nagy, Wassily Kandinsky among them) is inspiring. That this was accomplished in such a penetrating way at an historical moment of sweeping technological change and social transformation makes it all the more extraordinary. Viewing the show today, as we again confront the changes wrought by technology and a wide-scale reconceptualization of the world, I was reminded that the Bauhaus continues to provide lessons in how we might pursue, with rigor and openness and imagination, persistent questions about creativity, what it means to be human, and how to relate to the world around us.

Thursday, July 23, 2009

Media Enchantment and the Real World


I recently received a pair of e-mail announcements from The Economist magazine (I’m a happy subscriber to the print edition). The first message indicated that an electronic version of the magazine was now available for the Kindle e-reader. The second was that the latest in the magazine’s ongoing online debate series, on “Israelis and the Arabs,” was now being launched and could be followed on the e-reader.

What was telling for me was how the messages combined media and real world items. Now, media exists in the real world, I know, and a debate about conditions among Israelis and Arabs or anyone else is not the same as the conditions themselves. But those are more abstract quibbles.

The issue here is that amidst our generally justifiable techno-euphoria today, especially regarding social media, the connection of evolving technologies to what’s happening in the actual world is often neglected or at least downplayed. Our very celebration of the speed, variety, mobility, and accessibility of digital media can easily lead to an emphasis on proliferating and interconnecting technologies themselves and only a superficial or fleeting engagement with whatever information they are ostensibly communicating.

In other words, and perhaps unavoidably updating McLuhan, it’s a reminder that while (new) media are themselves an important message we can dwell over, media technologies also (still) communicate about issues that have meaning for flesh-and-blood human beings and consequences on the ground and in actual lives.

Sunday, July 12, 2009

Thanks for the Compliment (about not being simplistically partisan or ideological)

A few words about a direct message I received on Twitter. It made my day. I just signed on a couple weeks ago and still notice and appreciate new followers. Here's what came in earlier:

I can't figure out if you're a liberal or a conservative. But your tweets are interesting.

I was glad to know at least one person finds my tweets interesting, but was even more pleased to learn my messages didn't betray any simplistic political perspective. I definitely situate myself on one side of that seeming divide, but believe doing so publicly, at least through a one-word label, is counterproductive. I'm convinced that the effect of such simplistic partisan or ideological affiliation has been toxic for our politics over the last two decades (at least). Of many examples, recent events in the New York State Senate, which for weeks was deadlocked in a 31-31 partisan tie, with both Democrats and Republicans wanting to be the majority, come to mind as a ridiculous, adolescent exercise serving no one.


In the echo chamber of contemporary media politics, I've long thought that media and journalistic reports should stop automatically including the party affiliation following a legislator's name (e.g., Peter King (NY-R) or Al Franken (Minnesota-D)). What value is added by those letters? Yes, I recognize that politicians self-identify with parties, rely on them for funding and support, and work in groups or caucuses organized along party lines. With so much information available from so many sources, it's perhaps understandable that having ready hooks on which to hang one's views and build communities of interest makes not only good sense but effective strategy.
Perhaps most fundamentally, an R or D, an L or C not only neatly -- too neatly for me -- summarizes our views but also re-assures us that we belong to a political tribe. Of course, the price, the loss of genuine nuance and robustness and contrarianness in our social and political discourse, seems much greater.

It's probably naive to think so, but every step that can be taken, by media organizations as well as citizens and social media participants, to acknowledge more fully the complexity of political and social life today should be embraced. So I hope I can keep on being hard to figure out, at least in terms of labels. The best ideas, I'm convinced, are often interesting precisely because they don't simply re-affirm already known positions or platforms but provoke one's thinking beyond.

Thursday, July 9, 2009

Social Media Consulting Du Jour

Great piece today at mediaite.com by Anthony de Rosa about social media consulting (http://www.mediaite.com/online/the-social-media-sommelier). It rightly shines a light on the important and often very lucrative role played by consultants these days as corporations realize the necessity of strong social media connections with customers. The point is that during these transitional days, more traditional corporations without ready audiences through blogs, Twitter, Facebook and the like can rely on individuals who have them. Whatever the origin of their audiences and followers, those individuals can profitably leverage those numbers in consulting.

Two comments. First, de Rosa does open the piece by noting we are in a transitional moment: "New media clout scoring old media dollars." His piece dwells on the example of the Vaynerchuk brothers, one of whom used winelibrary.tv to build a vast follower list that he's been able to leverage in social media consulting with corporations from industries far removed from the world of wine. The question of relevance is not directly posed here, but it might be: how do the new media, and the consultants shaping them, re-make the old brand and message? Do the new voices offer a healthy and overdue wake-up call to old brands and organizations, or will they prove blips in brand development, ultimately irrelevant in the long term?

Even more, I think of a possible cautionary tale from a decade ago in the university world. In the late 1990s, when technology was promising a quantum leap in distance learning, many schools contracted outside vendors to develop the requisite technologies and services. Other schools or consortia formed for-profit start-ups, believing that technology-supported distance learning would be a sure money-maker. In most cases, particularly following some rather public failures in the latter group (think Fathom), universities quickly moved beyond their initial exuberance and have pursued in-house development of distance and e-learning resources. This more measured course has still, in many cases, proven quite ambitious -- consider MIT's open course offerings or Yale's webcasting of classes -- but it depends less and less on outsourcing to individuals who perhaps knew more about fledgling technology than about specific institutional cultures or offerings.

A second issue here relates to the so-called Twitter or social media revolutions claimed for Moldova and, more recently and prominently, for Iran. This is not as much of a stretch as it may first appear. With the same regimes against which the partly Twitter-driven protests were organized still firmly in charge in these countries, we should rightly ask two related questions: what did the revolutions (better: protests) actually achieve? And what role did Twitter play in those protests? Both deserve fuller answers than I'll offer here (for a like-minded skeptical take, see Trevor Butterworth at http://www.ourblook.com/Social-Media/Trevor-Butterworth-on-Social-Media.html ). Briefly, though, my concern is that the questions, while related, remain importantly distinct and that the latter be put in context. If Twitter had a "multiplier" effect in Iran or elsewhere, what did it multiply and why? And further, how will that effect persist over time, particularly as regimes themselves upgrade their own understanding of technology and engage in what David Bandurski of the China Media Project called "Control 2.0"?

While I am not at all suggesting an (economic, symbolic, moral) equivalence between corporate control of branding and governmental control of dissent, I do believe the multiplier effect in play in politics globally is also relevant to the current and dynamic role played by social media in corporate branding. We need not only to develop a better, more nuanced, and in some ways more data-driven understanding of that effect. We should also appreciate the role of social media consulting in fostering and managing that effect. We should likewise acknowledge how fleeting the phenomenon, at least in its current form, might be. It's not only a matter of co-optation and control but of the inevitable integration of this new, exciting and potentially powerful set of technologies into longstanding patterns of social and organizational behavior.

Tuesday, July 7, 2009

Imagining Moldova -- and the First Twitter Revolution (Part 1)

I recently had the opportunity to spend a week in Moldova. I confess I knew little about the place before my plans formed. Most of what I knew (vaguely) derived from the public protests in the capital, Chisinau, as well as the second city, Balti, that occurred last spring and briefly dominated Twitter.

Protesters took to the streets in early April following Parliamentary elections in which the ruling Communist party won roughly 50% of the seats. They picketed the Election Commission Headquarters and then the President's residence before temporarily occupying both the Parliament building and the President's office. Organized largely via Twitter calls under the tag "#pman" (for the capital's main square, "Piata Marii Adunari Nationale"), sizeable public gatherings numbering as many as 15,000 continued daily for more than a week, claiming election fraud and, later, illegal arrests and human rights violations. While the government agreed to a re-count, the election results stood and the Communist party president and parliamentary majority remained in power.

Wanting to know more, I consulted with several Romanian friends and their advice was simpler: the country is poor and stagnant, they responded quickly, but it has great wine and beautiful women. Perusing maps of the region and tourist websites, friends in New York had an even more peremptory assessment: I was heading to an only slightly Europeanized land of Borat -- Kazakhstan with a splash of Romanian charm. Okay. Thanks.

So I sought out more background. There's not a lot out there in terms of books or detailed websites. Wikipedia has a cursory if up-to-date entry. Lonelyplanet.com offered a worthwhile download of pages from a travel guide primarily focused on Romania. The one helpful book available on Amazon was the scholarly if conservatively slanted The Moldovans: Romania, Russia, and the Politics of Culture, by Charles King (2000). (Another, which I ordered but which didn't arrive before my departure, was Steven Henighan's travelogue about a Canadian teaching English in the country, Lost Province: Adventures in a Moldovan Family [2003].)

The broad strokes of what I learned are these. Referred to by some as the poorest country in Europe, with a GDP per person estimated by the IMF at only $2200, Moldova is situated to the far east of the continent, nestled between Romania and Ukraine. The land is arable (to the degree that volumes of its soil were actually shipped to the Soviet Union in past years) but holds few mineral reserves. The geographical position speaks to the complex status of the country's people, politics, culture, and even language as a meeting ground of east and west, of Romania and Russia, of Europe and Central Asia. A former Soviet Socialist Republic, Moldova retains a strong present-day relation to Russia, not least in the continuing rule of the Communist party. Complicating politics further are two regions with simmering independence movements: Transdniestr, which declared its autonomy shortly after the fall of the Soviet Union and Moldova's own declaration of independence, and Gagauzia, an area in the country's south populated by Turkic Orthodox Christians.

Not a bad overview, particularly in the individual strands of historical development. But in pursuing various sources, a serious question arose for me: how do Twitter or any of the vaunted digital information and communication technologies we enjoy actually deepen our understanding of the world to which we seem to have much fuller and more rapid access? Part of this concerns Twitter specifically, with its endless stream of brief informational texts and its trending topics that seem to feed on themselves. While many well-researched sources are only a link away from the tweets, there's little telling how many are accessed or read (or, particularly for the uninitiated, which are genuinely well-researched and which are to be avoided). The result is that Twitter becomes the latest manifestation of a digital source of nearly endless information, in which the political (and reading) preferences of the user shape the eventual output.

Put differently, it's very easy to maintain a thorough familiarity with headlines and the soundbites of political rhetoric, policy, and other debates, but delving beyond that superficial and ephemeral familiarity to a deeper understanding is anything but assured. That seems especially true for geopolitics today, when news cycles and attention economies rely on a dizzying shifting of media focus (yes, trending) from one hot spot or crisis or disaster to another. The problem is compounded by how little history figures into even many of the better accounts of contemporary events. Beyond the disconnected entries offered by Wikipedia and other scattered websites, printed materials, and fictional films, the history even of the late twentieth century that unavoidably shapes our lives and world today is increasingly grounded in fragmented digital sources.

I offer all this as prologue to recounting my physical entry to Moldova precisely because my reliance on Twitter and various, mostly web-based accounts of politics and peoples so strongly framed my thinking and expectations of this place about which I knew so little. While similar in ways to what has long been available to travelers in guidebooks, from the nineteenth-century Baedekers onward, the contemporary mediascape has grown both quantitatively and qualitatively different. The digital world is ultimately smaller, infinitely more accessible, and, particularly as one imagines lesser-known places like Moldova, conducive to unprecedentedly superficial and partial understandings.


In Part 2, I move from my imagined Moldova to the actual, physical country.

Monday, July 6, 2009

Honda's "Power of Dreams" Online Films

I've been viewing the short films online at Honda's "Power of Dreams" series (http://dreams.honda.com). While very clearly advertisements for Honda, its history and current operations, the films can also be by turns smart and inspiring, brimming with au courant ideas of management, risk-taking and innovation. (One does wonder, of course, how many of the ideas are actually implemented and practiced in the everyday.) Besides the corporate figures, many of the faces and voices are familiar (Deepak Chopra, Danica Patrick) and some refreshingly unexpected (Christopher Guest, Clive Barker). And the "Mobility 2088" installment is simply cool. Viewed collectively, these films aspire to be a cross between the groundbreaking BMWfilms.com series of shorts, The Hire, directed by luminaries from John Frankenheimer to John Woo in 2001-2002, and more recent online salons, from TED to the Aspen Ideas Festival (http://www.ted.com and http://www.aifestival.org). They ultimately fail to reach that standard, but do offer an excellent summary of how a company, particularly in a challenged industry like automobile manufacturing, reflects on -- and, with polished production values, presents -- itself and its vision of the future.

Saturday, July 4, 2009

Andrew Lih, _The Wikipedia Revolution: How a Bunch of Nobodies Created the World’s Greatest Encyclopedia_ (Hyperion, 2009)


Opening this account of the history and current reach of the World Wide Web's phenomenally successful encyclopedia is a foreword by Wikipedia's founder, Jimmy Wales. In it, Wales speaks briefly of some of the values guiding the project: individuals doing good, trusting each other, and using old-fashioned standards of clear writing and reliable references. His most important observation, though, building on these values and this view of beneficent human nature, is that Wikipedia grew as a kind of social software that both fostered and relied upon community.

That basic if imprecise idea guides much of the following account of the early years of technological developments that allowed Wikipedia to emerge. From Linux and Nupedia to WikiWikiWeb and HyperCard, the evolution and linkage of various innovations through the 1990s makes for a fascinating read. The individuals responsible at each step in the process, including Wales but also Ward Cunningham, the father of the wiki, Larry Sanger, the original Nupedian, and others, are also nicely drawn. Throughout, the imperative to create formative connections both between and for a networked community remains consistent.

In the middle of the book is a 50-page chapter that draws together various central issues but also covers a series of incidents and events, policies, and internal practices. It exemplifies the book's strength and weakness. On the one hand, it delineates clearly the development and coordination of various technologies into a fully viable site for widespread public participation, production, and usage. On the other, the recurrent attempts to make sense of these developments in broader social and cultural terms are frustratingly lacking. That sense-making is not necessarily required in an historical account, of course, but the repeated suggestion here of metaphors and models to interpret the cultural significance of Wikipedia only highlights the failure.

Subsequent chapters are event-driven, showing how Wikipedia continues to be shaped, across languages, in the face of different competitors and a changing web and mediascape, and finally how the project is managing growth. The book concludes with questions about the scaling of the project and the persistence of its originary values of community. Will increasing numbers of participants continue to do good and trust each other? Will the result, the "Wiki-ness" of Wikipedia, endure? And crucially, how should the stewards of the foundation, like Wales, respond to the shift from being like a village where everyone knows each other to "more of a faceless impersonal metropolis" that is "driving the adoption of hard, cold, binding policies" (176)?

As this challenge for the future suggests, the book dwells on the ideas we have come to describe as the wisdom, collaboration, and dynamics of crowds. Yet detailing the Wikipedia case hardly settles the matter: did crowds create Wikipedia or did Wikipedia create the relevant crowds? More intriguingly, the book seems to question the relationship between the individuals who developed Wikipedia and the crowds so regularly invoked by them as responsible for its growth. Are crowds possible, that is, without individuals orchestrating their collaboration?

Lih makes clear that the answer, at least in terms of the history of "the world's greatest encyclopedia," is no: remarkable, innovative leaders were as indispensable as the crowds themselves. In his Foreword, Wales underscores the socializing power wrought by technology and the World Wide Web. But he doesn't pursue it, possibly because a fuller explanation would involve him directly in ways that run somewhat counter to better publicized tenets of community and collaboration. Perhaps the ultimate lesson of Wikipedia's creation and continuing growth is that celebrating its global community of contributors also requires recognizing the key leaders able to envision the scope and direction of that collaboration.

***

The Wikipedia Revolution also foregrounds another question. Going forward, how will we write – or, more to the point, research – histories of the digital age? The matter of research materials is a major concern: what will be the digital archives of sites and other projects that change and transform themselves so quickly? Again, one answer to this returns us to the issue of individual rather than collective voices. Invaluable to Lih and to us, for example, is Larry Sanger’s 16,000+ word account of the “Early History of Nupedia and Wikipedia” from 2005, available at Slashdot.com. At least for the near future, when such individuals remain alive and available to provide their recollections, they will remain vital resources. Beyond that, particularly as access to and preservation of digital projects fades, the matter becomes murkier.

Friday, July 3, 2009

Ruins of the Second Gilded Age


Amazing photo essay by Edgar Martins from the NYTimes Magazine (July 5, 2009) on what the US real estate boom has left behind. It's a strangely unsettling group of images, both unavoidably nostalgic for pre-bust days of irrational expansion and eerily still (and depopulated) in their uncertain drift toward the future. Gives pause about how far we have yet to go to undo the work of those heady days. Thanks to David G for calling early attention to this.
http://www.nytimes.com/slideshow/2009/07/05/magazine/20090705-gilded-slideshow_index.html

Thursday, July 2, 2009

A Broadband Plan for Whom?

Julius Genachowski was finally confirmed last week as the Chairman of the FCC. Today he presented "The FCC and Broadband: The Next 230 Days." A bold action plan for expanding broadband across the country? Maybe. Eventually. Right now it looks more like a primer on bureaucracy and abstract project management. (The presentation is available at http://hraunfoss.fcc.gov/edocs_public/attachmatch/DOC-291879A1.pdf)

Two brief thoughts. First, recall the report on global broadband penetration from Strategy Analytics in mid-June. The United States ranked 20th, with 60% household penetration -- just after Estonia and Belgium and just in front of Slovenia. The top three spots went to South Korea, Singapore and the Netherlands with, respectively, 95, 92 and 88% penetration. The report also concluded that U.S. prospects aren't improving: the forecast is that the United States will fall to 23rd by the end of 2009. As if we needed a further reminder that mid-twentieth century American institutions -- the auto industry, medicine, here, technology -- are no longer automatically pre-eminent in the world. (http://www.strategyanalytics.com/default.aspx?mod=PressReleaseViewer&a0=4748)

Second, at a time when free-market principles are justifiably being questioned in various industries, the priority of the government program appears to be encouraging private-sector development through billions of dollars in stimulus grants and subsidies. A lot of funding, to be sure, at least for the companies being subsidized, but how coordinated will the resulting development of broadband actually be? One of the mitigating factors in cross-national comparisons of broadband penetration is the size of countries -- expanding technologies in Singapore and the Netherlands is obviously a challenge of a much smaller order of magnitude than in the U.S. Yet isn't that exactly the reason why there needs to be an overall strategic effort rather than one that's left to a market that has proven itself dysfunctional and unable to grow in a concerted way in the past? I'm not suggesting an entirely top-down government program. What does seem to make sense, though, is a plan that puts the larger public first and the vaunted entrepreneurs and technology companies, who obviously have heretofore not seen an economic motivation in expanding broadband across the country, second.

On Dying Young: _Public Enemies_ and Michael Jackson

It may be the unavoidable glare of the Michael Jackson media juggernaut, but I saw the new movie about John Dillinger, Public Enemies, and immediately believed it was a fitting release for this exact cultural moment.

Why? The film ends with Dillinger's storied killing by FBI agents outside the Biograph theater in Chicago in 1934. Or very nearly ends. A coda follows in which one of the lawmen responsible for the killing makes a touching visit to the bank robber's love interest to share with her the dying man's whispered last words. That sentimental closing moment underscores how the film presents the Public Enemy #1 to be remembered: as an outlaw with a heart of gold, who genuinely loved a woman and sought to escape with her from the midwestern life of crime.

In a way, this is the male, gangster version of the whore with a heart of gold story. But it's also a story that has changed over time. Manhattan Melodrama, the 1934 film starring Clark Gable as a gangster that Dillinger saw that fateful night at the Biograph, was equally a production of its time. Gable dies in the end in the electric chair but the closing is really about his childhood pal, now the DA, played by William Powell, and the woman they both loved, portrayed by Myrna Loy, affirming their marriage and future together. That sort of affirmative Hollywood ending was mandated in productions of the time, particularly those involving gangsters, and at least tempered the sympathies of viewers for criminals and their misdeeds.

In the current film, director Michael Mann has built our contemporary Dillinger to be a legend -- or rather a larger legend than he was. Though Dillinger is played by Johnny Depp with an angular cool, Public Enemies offers little insight into his motivations for the string of action sequences that constitutes it. His lawman nemesis, G-Man Melvin Purvis, is similarly undifferentiated as played by the increasingly ubiquitous Christian Bale. (This lack of dimension becomes all the more conspicuous in a closing title, where we learn Purvis not only quit the FBI a year after Dillinger's demise but then killed himself some two and a half decades later.) That lack of character dimension leaves the action to carry the film but also allows the broad, even archetypal contours of the outlaw story to be foregrounded.

Outlaws can occupy a special social status between the people and the law or legal institutions and authority. Allowing everyday citizens to keep their money while taking the bank's funds during a robbery is only the most obvious way this status is presented. The recognition of the power of a nascent national media by the manipulative FBI director, J. Edgar Hoover, makes clear how those claiming the legitimacy of the state or police must use the press to battle with so-called outlaws for the public's hearts and minds as much as with tommy guns. Especially in hard economic times, when the political and economic system is under duress, that battle for public confidence and the outlawry it facilitates is vital.

If the enduring fascination with Dillinger and his status as a Public Enemy is only burnished by the new film, viewers in July 2009 may exit the theater thinking here was a charismatic outlaw who died too soon at the hands of a legal but not altogether moral order. The coda with his tearful lover is crucial because it emphasizes that he died too soon. For the two of them but also for us.

Dying young has a long history in Anglo-America, from Housman's poetic athlete to the more layered celebrity deaths of the last half-century. Marilyn Monroe, James Dean, Jim Morrison, Jimi Hendrix, and Elvis are among the performers whose early deaths are still cause for commemoration. More to the point, these deaths have enabled our individual and collective memories of these performers to remain fixed on their youth -- its beauty and its rebelliousness. The most common comparison made with Michael Jackson is Elvis, which is appropriate both for the rarefied cultural heights they occupied and because they were ultimately not so young when they died (Elvis at 42, MJ at 50).

Focusing on the past youth of the aging or dead is ultimately an act of wish-fulfillment for present-day onlookers seeking to arrest or even deny the passage of time. Amplified by the echo chamber of popular culture, such an act can also become an important affirmation by the public not only of its existence but its own vitality. That such affirmations so often turn on perceptions of beauty and rebelliousness, of the creative grace and outlawry of those who are gone, is being evidenced yet again today.