Feeds as a platform

Feed aggregator developers currently struggle with multiple data formats, unclean sources, HTTP status handling, and many other complex problems that take time away from actually developing their applications. New feed platforms are emerging that let developers leverage the existing work of larger teams that already sweat the small stuff every day, freeing new aggregator developers to focus on building experiences on top of feed data that thrill users and deliver requested features faster and more easily than ever before. Microsoft, Google, and NewsGator are three examples of established companies exposing their feed architecture to the outside world to encourage new applications.

The feed aggregator ecosystem currently faces issues that reduce the adoption of feed reading technologies for new and expert users alike:

  • Many agents hit the same feeds at regular intervals. As the popularity of feed aggregators grows, will sites be able to handle every subscriber hitting a feed every half hour?
  • Payloads are getting bigger. Audio, photos, and video are becoming increasingly popular data formats accompanying a feed. The number of automatic downloads initiated by aggregators could discourage rich content sharing due to high bandwidth bills for each publisher.
  • Subscription lists and item status need to be repopulated across multiple applications and devices. We interact with feeds on our home computer, work computer, cell phone, and possibly an online aggregator. We don’t want to enter each feed again for every location or waste time on items we have already read.
  • Too much data, not enough time. Every new feed we add to a subscription list increases the worry that we will not be able to keep up with the additional attention load.
  • Feed subscription and delivery to the appropriate application are complicated. Modern web browsers currently display feeds using an XSLT skin on top of the raw data, which is a good first step, but users want to easily track an information source over time and know they have successfully completed their task.
  • Feeds are published in multiple XML formats (RDF, RSS, Atom, etc.), often contain invalid XML markup, and frequently fail to conform to their respective feed specifications (a tolerant parsing sketch follows this list).

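As a rough illustration of the grunt work a feed platform or parsing library can absorb, here is a minimal sketch using the open-source Python library feedparser (an assumption for illustration only; it is not part of any of the platforms discussed here). It accepts RDF, RSS, and Atom alike and flags malformed XML instead of giving up:

```python
# Minimal sketch: parse a feed of any common format with feedparser.
# "https://example.com/feed" is a hypothetical URL used only for illustration.
import feedparser

parsed = feedparser.parse("https://example.com/feed")

# feedparser normalizes RDF, RSS 0.9x/2.0, and Atom into one object model and
# sets the "bozo" flag instead of failing outright when the XML is not well formed.
if parsed.bozo:
    print("Feed is malformed:", parsed.bozo_exception)

for entry in parsed.entries:
    # Titles and links look the same to the application regardless of source format.
    print(entry.get("title", "(no title)"), entry.get("link", ""))
```
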
But wait, there’s more! Developers also have to worry about expected best practices for the benefit of users and feed producers.

  • Proper handling of namespaced elements such as Dublin Core, iTunes, Media RSS, and many others.
  • HTTP best practices such as conditional GETs based on Last-Modified and ETag values, handling of permanent and temporary redirects, and recognition of permanently deleted feeds (see the sketch after this list).
  • Secure feed handling. Some feeds are specific to one user or a group of users and employ HTTP authentication and other techniques to restrict access.
  • Import and export of feed lists using OPML or other common formats.
  • Data backup and selective sharing with others.
  • Searching the content of your feeds.
  • Searching and/or browsing a list of feeds to help populate your aggregator with new content.

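To make the conditional GET item above concrete, here is a hedged sketch, again using feedparser as a stand-in for whatever library or platform handles your HTTP plumbing (the URL and caching strategy are assumptions for illustration):

```python
# Hedged sketch of HTTP best practices: conditional GETs, redirects, gone feeds.
import feedparser

url = "https://example.com/feed"   # hypothetical feed URL
etag, modified = None, None        # in a real aggregator, cached from the last poll

result = feedparser.parse(url, etag=etag, modified=modified)
status = result.get("status")

if status == 304:
    print("Not modified since the last poll; nothing new to download.")
elif status == 301:
    url = result.href              # permanent redirect: poll the new location next time
elif status == 410:
    print("Feed is permanently gone; stop polling it.")
else:
    # Remember the validators so the next poll can be conditional.
    etag = result.get("etag")
    modified = result.get("modified")
    print(len(result.entries), "entries fetched")
```
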
Developers of new aggregator applications need to take all of the above into account and either build it themselves or hope a few existing code libraries can help them along the way. Luckily, the whole process is becoming easier as feed platforms enter the market to handle these complex issues and expose the underlying feed data as local objects.

Over the next few posts I will introduce feed platforms from Microsoft, Google, and NewsGator to illustrate the new toolset available to developers of feed applications, one that can help connect any new application with millions of existing users across multiple platforms.

2 replies on “Feeds as a platform”

  1. Great post, Niall!

    You’re right on the mark. The future is about being an aggregator of content rather than developing all the content from the ground up.

    At iVideoBlast.com, we’ve created some of our own video content, but we’ve aggregated most of it from various feed sources and compiled it into an easy-to-use interface.

    For example, our library of over 400 video podcasts is 100% feed driven. The positive is the dynamic nature of adding content versus developing it all yourself. The downside, as you mentioned, is dealing with different feed coding styles and, of course, the occasional “unavailability” of a feed.

    And don’t even get me started on the various iTunes feed formats being used…

    Oh well, that’s the life of a developer, right?

    Keep the great ideas and concepts flowing.

    Scott

  2. “Payloads are getting bigger. Audio, photos, and video are becoming increasingly popular data formats accompanying a feed. The number of automatic downloads initiated by aggregators could discourage rich content sharing due to high bandwidth bills for each publisher.”

    Take a look at the link extensions draft for Atom, specifically the no-follow attribute for links. Through proper collaboration, feed readers and feed publishers can work together to address the potential bandwidth consumption problems. For instance, in a podcast feed, the most recent entry or entries can be marked for automatic download, while older entries could be marked as “only download on demand”.
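
    Roughly, an aggregator could approximate that policy on its own today with something like this sketch (hypothetical URL and thresholds, and an aggregator-side guess rather than the draft attribute itself):

    ```python
    # Rough sketch: download enclosures automatically only for the newest entries,
    # leave older ones for on-demand download. Names and limits are hypothetical.
    import urllib.request

    import feedparser

    AUTO_DOWNLOAD_NEWEST = 2  # fetch media for the two most recent entries only

    feed = feedparser.parse("https://example.com/podcast.xml")  # hypothetical URL

    for index, entry in enumerate(feed.entries):
        for enclosure in entry.get("enclosures", []):
            if index < AUTO_DOWNLOAD_NEWEST:
                # Newest entries: grab the media right away.
                urllib.request.urlretrieve(enclosure.href, enclosure.href.split("/")[-1])
            else:
                # Older entries: note the link and download only when the user asks.
                print("On demand:", enclosure.href)
    ```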
