Wednesday, March 19, 2008

BONUS POST: "Ad hocracy"

I'm breaking up my series on newspapers to float an idea that I'm hoping to present at a workshop next month.

It comes from this basic observation: we've already got the free tools to construct ad hoc breaking news networks around discrete events. Blogs act as a reverse-order chronology of evolving events; Twitter gives us two-way communication across platforms, plus a means to capture text via SMS from "dumb" phones and push those messages out to the Web; Utterz lets us send audio clips from our cell phones straight to the blog; and Flickr gives us a way to organize and "theme" photos coming in live from multiple users.

The fun part? Figuring out how to configure the parts and administer the results so that you find the Goldilocks Zone between immediacy and chaos.

For the record, I experimented with something similar to this in 2006. The idea of creating ad hoc news networks arose from my thought that I'd have set up Storm Watch differently if I'd been a Twitter user.

I'm posting the original concept here so that everyone can bang on it and come up with better solutions. Have at it.

PART 1: SET UP AND OUTREACH
STEP 1: Administrator sets up a breaking news blog on a standard blogging platform that can handle multiple authors.

STEP 2: Administrator publishes the URL of the new blog and encourages members of the network to bookmark it. The blog will remain absolutely dark until it is needed.

STEP 3: Administrator creates a Twitter account for the new blog and encourages people within the network to Follow it.

STEP 4: People who are known and trusted by the administrator (reporters, bloggers, Twitterers, photographers, videographers, frequent commenters on other sites, etc.) are invited to create author accounts/Utterz accounts that will feed into the blog when it goes live.

STEP 5: Administrator adds a Twitter widget to the blog to display recent Tweets. (?)

PART 2: WAKING UP THE NETWORK
STEP 1: The administrator/network wrangler hears about a breaking news event (a plane crash, a storm, a riot, a blogger conference, etc.) and wakes up the network via a variety of means: Tweets, cell phone alerts, e-mail, blog posts, etc.

STEP 2: Eyewitnesses begin filing reports in a variety of ways: Tweets to the @breakingnewsnet account; Utterz calls that go straight to the blog; photos uploaded to individual Flickr accounts but tagged to create an event-specific Flickrstream.
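
A quick sketch of how the Flickr piece might be wired up on the administrator's end -- pulling everything carrying the agreed-upon event tag into one list. It assumes Flickr's public photo feed (the parameters and field names should be checked against Flickr's documentation) and a made-up event tag; it's an illustration, not a finished tool.

    import json
    import urllib.parse
    import urllib.request

    # Made-up event tag the network agrees on when the blog wakes up.
    EVENT_TAG = "breakingnewsnet-stormwatch"

    # Flickr's public photo feed; the format/nojsoncallback parameters and the
    # "items" field below are assumptions to verify against Flickr's docs.
    FEED_URL = "https://www.flickr.com/services/feeds/photos_public.gne"

    def fetch_event_photos(tag):
        """Return (title, link) pairs for recent public photos carrying the tag."""
        query = urllib.parse.urlencode({
            "tags": tag,
            "format": "json",
            "nojsoncallback": "1",  # plain JSON rather than a JSONP wrapper
        })
        with urllib.request.urlopen(FEED_URL + "?" + query) as response:
            feed = json.load(response)
        return [(item["title"], item["link"]) for item in feed.get("items", [])]

    for title, link in fetch_event_photos(EVENT_TAG):
        print(title + ": " + link)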

STEP 3: Eyewitnesses who are live-blogging to their own sites can share their posts with the breaking news blog by sending links in multiple ways.

PART 3: MANAGING INFORMATION
STEP 1: The administrator should focus on trying to manage the flow of information across platforms in real time. In particular, the administrator should look for reports from non-trusted sources that could be misleading, erroneous or threatening.

STEP 2: If the interface between Twitter and the blog is manual (i.e., the administrator is copying and pasting Tweets as new posts), then information flows are managed by deciding which reports to republish on the blog. If the connection is scripted (@breakingnewsblog Tweets appear automatically on the blog, or in the blog publishing queue), then the administrator has to decide whether some reports should be pulled pending verification.

STEP 3: The administrator(s) should attempt to develop as clear a picture of events as possible, posting updates that write-through the various incremental reports from the field.

STEP 4: Witnesses get feedback, questions and requests for clarification from the administrators and other users. Follow-up reports attempt to address those questions and requests.

PART 4: POWERING DOWN
STEP 1: As the situation moves from real-time development and into the more traditional news cycles, the administrator notifies the network that the breaking news blog is shutting back down.

STEP 2: Administrators capture and save the Twitterstream, tag photos and video, and formally conclude the blog coverage. Commenting continues as long as needed.
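
For the capture-and-save step, something as simple as this would do -- append every report to a dated archive file. The fetch_event_stream() function is just a stub, since the real call depends on whichever Twitter feed or API the network ends up using.

    import json
    from datetime import date
    from pathlib import Path

    def fetch_event_stream():
        """Stub: return the reports collected during the event (placeholder data)."""
        return [
            {"author": "trusted_witness",
             "text": "Power restored on the east side.",
             "time": "2008-03-19T14:02"},
        ]

    def archive_stream(event_slug):
        """Append the event's reports to a dated archive, one JSON object per line."""
        archive = Path(event_slug + "-" + date.today().isoformat() + ".jsonl")
        with archive.open("a", encoding="utf-8") as handle:
            for report in fetch_event_stream():
                handle.write(json.dumps(report) + "\n")
        return archive

    print("Saved to", archive_stream("stormwatch"))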

STEP 3: Administrators review inputs and extend "trusted source" accounts to network participants who performed as reliable observers. Sources that perform poorly could have their "trusted source" status rescinded.

STEP 4: New account requests and Twitter follows are processed and prepared in anticipation of the next breaking news event, which will wake up the blog and the network all over again.

DISCUSSION:
SCRIPTED VS. HUMAN: First, is there a way to capture specific Tweets (my example: @breakingnewsblog) and script their posting to the blog? Second: If there is, should we? My gut tells me that the Twitterstream would be the raw material: Anyone could Follow it and see the incoming reports in real time. The blog, on the other hand, could be a step up in terms of making sense of the reports. What's the best way to moderate this?
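
Here's roughly what I have in mind for the scripted option, sketched in Python. The tweet-fetching function is a stub (the real call depends on whichever Twitter API or feed you use), the blog posting goes through the MetaWeblog XML-RPC interface that many blogging platforms expose (check whether yours does), and the endpoint, credentials and account names are all placeholders.

    import xmlrpc.client

    # Placeholders -- swap in the breaking news blog's XML-RPC endpoint and login.
    BLOG_XMLRPC_URL = "https://breakingnewsblog.example.com/xmlrpc.php"
    BLOG_ID = "1"
    USERNAME = "administrator"
    PASSWORD = "secret"

    def fetch_mentions():
        """Stub: return recent Tweets aimed at the breaking news account.

        The real implementation would poll whichever Twitter API or feed the
        network relies on; that call is an assumption, so it's left out here.
        """
        return [
            {"author": "trusted_witness", "text": "Power is out across the east side."},
            {"author": "unknown_user", "text": "Heard the bridge collapsed?!"},
        ]

    def approve(tweet, trusted_authors):
        """Human-in-the-loop gate: auto-approve trusted sources, hold the rest."""
        if tweet["author"] in trusted_authors:
            return True
        answer = input("Publish report from @" + tweet["author"] + "? [y/N] ")
        return answer.strip().lower() == "y"

    def publish(tweet):
        """Push an approved report to the blog via the MetaWeblog API."""
        server = xmlrpc.client.ServerProxy(BLOG_XMLRPC_URL)
        post = {
            "title": "Field report from @" + tweet["author"],
            "description": tweet["text"],
        }
        server.metaWeblog.newPost(BLOG_ID, USERNAME, PASSWORD, post, True)

    trusted = {"trusted_witness"}
    for tweet in fetch_mentions():
        if approve(tweet, trusted):
            publish(tweet)

The point of the sketch: the Twitterstream stays raw, and the only thing the script adds is a gate where a human (or a whitelist) decides what graduates to the blog.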

BEST USES: Should this be done using the system I described or something different? Would hashtags work better? Should Facebook be used instead of, or in addition to, these tools?

WHICH BLOGGING PLATFORM? Each blogging platform has its quirks. Which one would work best for an ad hoc news network?

IS MORE BETTER? This blog could handle everything from organizing live responses to a scheduled political debate to receiving reports from a surprise earthquake, but obviously each event would have a different network of contributors. My thought is that you want to make it as easy as possible for people to contribute (and cell phone to Twitter is pretty easy). But am I right? And will the administrator be able to keep up if the size of the reporting audience gets too big too fast?

IS THIS ONE IDEA OR SEVERAL? A version of this created for covering tech conferences might look different than a version built to cover breaking news in a geographic area. Or maybe you'd do this within a very defined community (graphic novel fans, infectious disease experts) to handle particular types of information.

WHAT ABOUT ETHICS? Your average professional organization would be scared to try something like this because of fears that people might use bad language or make libelous claims against individuals or post hoaxes or upload photos that invade privacy, etc. Well, aren't those legitimate concerns? How could we address them in an ad hoc, cooperative breaking news network?

That's what I'm thinking about. I'd love to hear your thoughts...

Sunday, March 16, 2008

MEDIA REVOLUTION: Why The Tower Must Fall

From a general perspective, it’s probably fair to say that the public history of new media began five years ago this month with the invasion of Iraq. Technorati was tracking fewer than half a million blogs in March 2003, but 24/7 coverage of the war meant cable news needed things to talk about, and there were the bloggers – this strange new species of pundit – always talking, talking, talking.

I became a blogger in March 2003, part of a surge in self-expression that exceeded 4 million participants on the eve of the November 2004 elections. To mainstream media in that campaign year, “new media” mostly meant blogs, and their reaction was generally dismissive. Sure, bloggers might find a niche in the chattering-classes ecosystem, the reasoning went, but they were bottom-feeders sucking up leftover information produced by the reporting professionals.

In the public mind, therefore, blogging more or less began with the war, then became significant – for better or worse – in September 2004, when bloggers played a leading role in casting doubt on a CBS News story about President George W. Bush’s service record in the Texas Air National Guard.

So why talk about the nascent blogosphere of almost four years ago?

Because at this moment of decision and tumult in mass-media news, far too many of our business leaders and newsroom decision-makers appear to be stuck there. And since most failed to explore even those outdated possibilities (part of a general tendency to reject as irrelevant anything they do not control or understand), far too many of our industry's executives continue to misread the dramatic cultural and economic changes now reshaping the markets for our products.

The consequences? Executives sink hefty budgets into e-mail marketing schemes – even as young people abandon e-mail in favor of communication via social sites. Consultants skim big fees pitching “Web 2.0” business concepts that look promising based on analysis of traffic and function, yet fail instantly because they begin with no understanding of the culture of online communities.

Is it any surprise that so many of our flagship institutions are foundering? From tagging to RSS to social filtering, mainstream media has failed to keep up with the ever-leapfrogging development of new tools because its leaders remain not only ignorant of the Web, but suspicious of its very legitimacy.

For a simple example, look at the average newspaper website video section. Can you find the embed code for any of the videos they host?

I went through a list of the Top 100 newspaper websites tonight, checking every eighth one to see if it offered an embed code. Most required me to watch a commercial before letting me see the video I’d chosen (also a mistake… to see the RIGHT way to put an ad on a video, watch TPM.com’s VERACIFIER videos), but only one offered me an embed code -- and that was the San Jose Mercury News, the home paper for Silicon Valley.
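
If you want to repeat the spot check yourself, the whole exercise scripts down to a few lines: fetch a paper's video page and look for the markup that usually signals a shareable embed code. The URLs below are placeholders and the string matching is deliberately crude -- a sketch, not a crawler.

    import urllib.request

    # Placeholder video-page URLs -- substitute the papers you want to check.
    VIDEO_PAGES = [
        "https://www.example-metro-daily.com/video/",
        "https://www.example-gazette.com/multimedia/",
    ]

    # Markup that typically signals a shareable embed code on a video page.
    EMBED_HINTS = ("<embed", "<object", "<iframe")

    def offers_embed_code(url):
        """Crude check: does the page source contain embed-style markup?"""
        request = urllib.request.Request(url, headers={"User-Agent": "embed-check/0.1"})
        with urllib.request.urlopen(request, timeout=10) as response:
            html = response.read().decode("utf-8", errors="replace").lower()
        return any(hint in html for hint in EMBED_HINTS)

    for page in VIDEO_PAGES:
        try:
            verdict = "yes" if offers_embed_code(page) else "no"
        except OSError as error:
            verdict = "error (" + str(error) + ")"
        print(page + ": embed markup found? " + verdict)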

Why? YouTube has been putting embed codes on its videos since 2005, and it’s not exactly advanced technology. Allowing people to place your video on their own sites is a great way to increase the number of plays on a potentially popular video, and it doesn’t cost you a thing.

I suspect there are four reasons:

  1. Top editors have never heard of embed codes;
  2. They don’t understand how they work;
  3. They MISUNDERSTAND how they work (“You mean other people will get to take our video and we wouldn’t have any control over how they use it? What’s in it for us, other than a lawsuit?”); and
  4. They just don’t think it matters, since the only people who care about embed codes are bloggers, and bloggers are a bunch of whack-job wannabes.

The next post in this series is going to talk about the necessary next steps for newspapers – not online, but in print. Before we move on to that subject, I wanted to cast this thought out to the world:

The biggest problem faced by the newspaper industry isn’t a competition problem, or a revenue problem, or a technology problem, or even a quality problem. It’s a culture problem. The average metro newspaper is a monopolized commodity, and after decades of bottom-line corporate control, all vestiges of independent thought have been selectively bred out of its cultural DNA.

Newspaper leaders are willing to make almost any change… so long as certain things remain untouched. And those untouchable items -- double-digit profits, secretive editorial boards, black-box news judgment -- are the very first things that healthy companies will have to address.

Newspaper monopolies are not worth saving. They are The Tower, and they must fall before something new can begin to grow in its place.

Tuesday, March 11, 2008

An old idea has a new future

Back in the late 1990s, at roughly the same time as the advent of the ill-fated :CueCat, my employer invested in a failed technology called "GoCode." Both of these devices had the same goal: to connect print readers easily to the Web. :CueCat was a bigger flop because it had a bigger footprint, but I'm sticking with the GoCode scanner system because I got up close and personal with it.

Here's how it was supposed to work: Newspapers would take the URLs for content related to individual stories and enter them into an encoder. The encoder was supposed to produce a tiny bar code that could be attached to the end of printed newspaper stories. Newspaper readers, in turn, were supposed to be able to take their free "GoCode wands," drag them over the tiny bar codes, and delight in being transported to related content on the Web.

Got it? Four items: 1. Printed content; 2. Web content; 3. A printed bar code for the Web content URL; 4. A bar code scanner that was supposed to connect print to the Web via a personal computer.
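
For perspective on how small the encoder half of that pipeline is today, here's a sketch using the third-party qrcode package (with Pillow) as a stand-in for GoCode's proprietary symbology, turning a story's related-content URL into a printable icon. The URL and file name are placeholders.

    # A stand-in for the GoCode encoder: turn a related-content URL into a
    # printable 2-D code. Assumes the third-party qrcode and Pillow packages
    # (pip install qrcode pillow); GoCode's actual symbology was proprietary.
    import qrcode

    STORY_URL = "https://www.example-paper.com/budget-database"  # placeholder

    icon = qrcode.make(STORY_URL)   # build the symbol from the URL
    icon.save("story_code.png")     # drop the image into the page layout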

GoCode, like :CueCat, was a dismal failure. I could see it was doomed the first time we received training in how to use the encoder: Instead of giving editors the ability to link to valuable related content (full texts of speeches, budget databases, etc.), what they delivered came with a menu of useless content that we really couldn't edit or expand. Then the 100 "test" wands never got distributed. Our company grabbed some industry headlines for our innovative attempt at modernity, but GoCode was DOA. We never even bothered to announce its demise.

Why did GoCode/:CueCat fail?
  1. Failed to understand how people use the Web;
  2. Required that you read sitting at your computer;
  3. Required special hardware;
  4. Required special generating hardware/software;
  5. Required user-end software;
  6. Cheap device never achieved scale;
  7. Device was pre-USB 2.0 and required special pin-porting;
  8. Media companies that didn't understand the Web were sold on these technologies by huckster companies that were long on promises and short on delivery;
  9. The timing was all wrong (dot-com bust);
  10. Didn't emphasize content quality.
After these disasters, the print publishing industry has been uninterested in anything similar. Consequently, even the act of printing meaningful URLs has been daunting. I taught the people at my newspaper to use www.tinyurl.com, and it's also true that some Web content providers have gotten somewhat better about generating logical (and shorter) URLs. But let's face it: once you get past the root domain name, people start tuning you out if they've got to type in the address.

Which is why it's time for a bright technologist and a smart media company to start developing the next generation of URL scanner. Because the technology that made this idea workable has been on the market since June.

The iPhone.

Connecting print to Web -- mobile
The cellphone -- not the PC -- is the proper device for taking print readers from page to pixel, and the development of a cell phone with an imaging device AND a functional Web display is the next-to-last hardware development task required to make this idea work.

The final step: Add a laser/infrared scanner capability to all smart phones.

So here's how it would work:
  1. All print publications automate the creation of URL bar codes, so that any URL mentioned in the pub is accompanied by a tiny bar-code icon;
  2. Smart phone users who want to see Web content mentioned in print wave their phone over the icon;
  3. Scanner on the back of the phone decodes the URL, opens the phone's browser and displays the content.
Other uses should be obvious: All display and classified advertisements would come with an appropriate icon; printed maps and directions would have icons for related attractions, restaurants, etc.; business cards would display the bar code. And so on.
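
The decoding step in item 3 is equally simple on the software side. Here's a sketch using the third-party pyzbar and Pillow packages against a saved camera snapshot -- on a phone the same logic would run against the live camera frame, and the file name here is a placeholder.

    # Sketch of the phone-side decode step: read the printed icon from a camera
    # snapshot, pull out the URL, and hand it to the browser. Assumes the
    # third-party pyzbar and Pillow packages; the file name is a placeholder.
    import webbrowser

    from PIL import Image
    from pyzbar.pyzbar import decode

    snapshot = Image.open("page_snapshot.png")  # camera frame containing the icon
    symbols = decode(snapshot)                  # find and decode any codes in the frame

    if symbols:
        url = symbols[0].data.decode("utf-8")   # the encoded URL as text
        webbrowser.open(url)                    # a phone would hand this to its browser
    else:
        print("No code found in the snapshot.")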

Could you add a scanner to your computer? Sure. It's a simple USB device. But that's not the killer app: People who are mobile need simple, quick access to the Web. And in 10 years we're ALL going to be mobile. All the time.

But what's the business model?
I was never clear how we were supposed to make money off GoCode. But I know how to make money off what I'm describing.

Advertising and licensing.

The advertising stream: So maybe before this system displays your page it flashes up a targeted advertisement -- it's topical, because you could code keywords into the icon, and it's local, because the cell phone carrier can tell where you're transmitting from based on which tower has picked up your outgoing signal.

So that's a premium ad spot: Local and targeted, with algorithms that serve up well-suited commercial info.
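
One way to carry those keywords: pack them into the icon's payload alongside the URL, and let the reader software strip them out before it loads the page. Here's a sketch of one possible payload format -- the format itself is just an illustration, not any kind of standard.

    from urllib.parse import parse_qs, urlencode

    def build_payload(url, keywords):
        """Pack the target URL plus ad-targeting keyword hints into one scannable string."""
        return url + "#" + urlencode({"kw": ",".join(keywords)})

    def read_payload(payload):
        """Split a scanned payload back into the URL and its keyword hints."""
        url, _, fragment = payload.partition("#")
        hints = parse_qs(fragment).get("kw", [""])[0]
        return url, [k for k in hints.split(",") if k]

    payload = build_payload("https://www.example-paper.com/dining",
                            ["restaurants", "downtown"])
    print(read_payload(payload))
    # ('https://www.example-paper.com/dining', ['restaurants', 'downtown'])

That way the ad server gets both the location (from the carrier) and the topic (from the keywords) before it serves anything.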

Could the user get a version of this with extra features and no ad support? Absolutely: It's a freemium feature in the making.

And finally, licensing: If I develop this tech and offer it for free to users, how hard will it be for me to charge all print publishers a small licensing fee for using the software and/or hardware we give them to handle the encoding?

The Web is based on clicking. If we had to type every URL, the Web wouldn't be the Web. A cellphone that scans and displays URLs extends that power from the traditional PC to the extended network.

It makes sense for cell users. It makes sense for publishers. And it has an easy business model.

Anybody want to pick this up?