
Pipes Dreams

First off, let me just say that I think Yahoo! Pipes is very cool and that it has the potential to be an important building block for the next phase of the web (see "A More Personalized Internet?" for an overview). It's the logical next step for the ecosystem made possible by a standard content interchange format: the "feed". Feeds first allowed a loose coupling between content publishers and content consumers, letting each evolve separately. Then FeedBurner came along and showed that this loose coupling also enabled value-add middleware that respected, and in some cases even strengthened, the "content contract" between producers and clients. Pipes takes the next step and does a very cool thing: it allows external parties to construct content workflows and, most importantly, gives them a sharable URL. FeedBurner and Pipes actually complement each other very well, and I've been having a lot of fun over the past week demonstrating that.

There are some very interesting directions that Pipes can take as it evolves, and I'll be curious to see what Yahoo! does with it. One of the first things I wanted to do when I started working with Pipes was construct and share new modules. I hope that's something they would consider exposing, because man would that be tight! From personal experience, though, I know it's probably not going to happen -- it's really hard to lock down any kind of code that has to execute in your process space, so that's probably out. But maybe they could just expand the existing "Fetch" module so that it could POST the current state of the stream to an external URL that I host on a server somewhere, and I could return the transformed content. Wrap that up in a sub-pipe that expects additional user inputs as the config parameters ... something like that could work.
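To make that concrete, here's a rough sketch of what such an external transform service might look like on my end: a little HTTP endpoint that accepts the POSTed stream and hands back a transformed version. Everything here -- the handler, the port, the toy transform itself -- is my invention for illustration, not anything Pipes actually exposes today.

```python
# Hypothetical external transform service for an expanded "Fetch" module.
# Pipes would POST the current stream state here; we return transformed XML.
from http.server import BaseHTTPRequestHandler, HTTPServer
import xml.etree.ElementTree as ET

def transform(feed_xml: str) -> str:
    """Toy transform: prefix every title element in the feed."""
    root = ET.fromstring(feed_xml)
    for title in root.iter("title"):
        title.text = "[piped] " + (title.text or "")
    return ET.tostring(root, encoding="unicode")

class TransformHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length).decode("utf-8")
        out = transform(body).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/rss+xml")
        self.send_header("Content-Length", str(len(out)))
        self.end_headers()
        self.wfile.write(out)

# To run the service:
#   HTTPServer(("", 8080), TransformHandler).serve_forever()
```

The point is just that the contract stays dead simple: stream in, stream out, all over plain HTTP, so Yahoo! never has to run my code in their process space.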

Which brings me to the meat of this post: wonderful things could happen if you marry Pipes to the Atom Publishing Protocol (APP). What if the pipe output, rather than just being XML that spills on the floor when the URL is requested, could instead be hooked up to a module that speaks APP? Now you've got a really cool content routing mechanism. The "Fetch" module already handles the input end of things, but being able to channel the output to a different destination could open up some amazing possibilities.
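As a thought experiment, here's roughly what an APP-speaking output module would have to do with each item coming out of a pipe: wrap it as an Atom entry and POST it to a collection. The item shape, the helper names, and the collection URL are all assumptions on my part, but the protocol steps are just plain APP.

```python
# Sketch of an APP-speaking output module: route each pipe output item
# into an Atom Publishing Protocol collection. The item dict shape and
# the collection URL are illustrative assumptions.
import urllib.request
import xml.etree.ElementTree as ET

ATOM_NS = "http://www.w3.org/2005/Atom"

def item_to_atom_entry(item: dict) -> bytes:
    """Build a minimal Atom entry document from one pipe output item."""
    ET.register_namespace("", ATOM_NS)
    entry = ET.Element(f"{{{ATOM_NS}}}entry")
    ET.SubElement(entry, f"{{{ATOM_NS}}}title").text = item["title"]
    content = ET.SubElement(entry, f"{{{ATOM_NS}}}content")
    content.set("type", "text")
    content.text = item.get("description", "")
    return ET.tostring(entry, encoding="utf-8")

def publish(item: dict, collection_url: str) -> int:
    """POST the entry to an APP collection; 201 Created means success."""
    req = urllib.request.Request(
        collection_url,
        data=item_to_atom_entry(item),
        headers={"Content-Type": "application/atom+xml;type=entry"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status
```

Since APP is just HTTP POSTs of Atom entries, any blog or content store that speaks it becomes a valid destination for a pipe -- that's the routing mechanism.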

One detail to be worked out is the triggering mechanism for the workflow. Currently, a request to the resultant URL is the trigger that tells the workflow to execute. This is how FeedBurner works as well -- there's no master cronjob ticking away and retrieving all the source feeds every 30 minutes. Instead, when a request for the burned version of the feed comes in and the source feed is stale (i.e., it hasn't been checked in the last 30 minutes), FeedBurner goes and refreshes the source feed. That way, you don't waste cycles updating dormant feeds. Pipes works the same way.
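That refresh policy is simple enough to sketch. To be clear, this is just my illustration of the idea, not FeedBurner's actual code; the cache shape, the helper names, and the 30-minute window are taken from the description above.

```python
# On-demand refresh: no background cron, just a staleness check at
# request time. fetch_source is a stand-in for retrieving the source feed.
import time

STALE_AFTER = 30 * 60  # seconds; the 30-minute window described above

class FeedCache:
    def __init__(self, fetch_source):
        self.fetch_source = fetch_source  # callable returning a fresh feed body
        self.body = None
        self.fetched_at = 0.0

    def get(self, now=None):
        """Return the cached feed, refetching only if it has gone stale."""
        now = time.time() if now is None else now
        if self.body is None or now - self.fetched_at > STALE_AFTER:
            self.body = self.fetch_source()
            self.fetched_at = now
        return self.body
```

A dormant feed that nobody requests never triggers a fetch at all, which is the whole point.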

So, if there isn't a request URL, how would you "run" the workflow? The most appropriate thing would probably be something like a ping mechanism: if the pipe is pinged and the content has been modified since the previous run, you run the pipe. That could work.
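Here's one way that check could work, sketched with a content hash to decide "modified since the previous run" -- PingTrigger and run_pipe are placeholder names of mine, not anything Pipes provides.

```python
# Ping-driven trigger: run the pipe on ping, but only when the source
# content has actually changed since the last run.
import hashlib

class PingTrigger:
    def __init__(self, run_pipe):
        self.run_pipe = run_pipe      # callable that executes the pipe
        self.last_digest = None

    def ping(self, content: bytes) -> bool:
        """Return True if the pipe ran, False if the content was unchanged."""
        digest = hashlib.sha1(content).hexdigest()
        if digest == self.last_digest:
            return False              # unchanged since the previous run
        self.last_digest = digest
        self.run_pipe(content)
        return True
```

An HTTP conditional GET (If-Modified-Since / 304) would do the same job more cheaply when the source server supports it; the hash is just the fallback that always works.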

In the end, if you take the promise of Pipes, the potential of Google Base, and add some of the stuff that you'll see from FeedBurner in the next few months, you'll have some wicked tools to start rewiring the next version of the web. I think it's going to be quite a trip.
