Amazon customer images

Amazon allows customers to share their own images of a product to help buyers understand product features and uses. In the digital camera category, customer images have become a high-profile way to show off your work and be evaluated by other photography enthusiasts. Take a look at the customer images page for the Nikon D70 for a good example, or view the top images in the camera and photo category. It’s an online portfolio hosted by Amazon.

My picks for the Feedster developers contest

The Feedster developers contest is over and developers are waiting for the winners to be announced. You can view all of the entries on the Feedster submission form.

What do you think about the contest submissions automatically being posted on Feedster for all to see? Good idea or bad idea?

I went through the entire list of submitted Feedster hacks, removed the garbage entries seeking free linky love, and was able to put together my own list of winners while we wait for the official results.

Best use of Feedster in a standalone RSS aggregator

NewsFire RSS search using Feedster

David Watanabe added Feedster search to NewsFire, a popular RSS aggregator for Mac OS X. It even searches as you type!

Best use of Feedster with a publishing engine

Timothy Appnel wrote a plugin for Movable Type to search your weblog’s feed or a collection of feeds using Feedster and display the results on your Movable Type site.

Best Firefox extension using Feedster

News Story Expander screenshot

Adrian Holovaty wrote a Firefox extension to overlay Feedster links on The New York Times and The Washington Post web pages.

Technorati Users Group meeting December 21

Come drink beer and learn more about the Technorati API next Tuesday, December 21, at 7 p.m. at 21st Amendment in San Francisco. I will introduce Technorati API calls, demonstrate some existing applications built using the APIs, and lead you through some sample code using XPath and Java.

A few Technorati employees will stop by to provide updates on the developer program and share the latest company news. If you are interested in learning more about live search, web services, or corporate intelligence, this event is for you! The Technorati developer contest ends December 31, so this will be a good opportunity to develop or fine-tune your ideas to compete for over $3,250 in prizes. I will raffle off two books for those in attendance: a copy of Michael Kay’s XPath 2.0 Programmer’s Reference and a copy of Dan Gillmor’s We the Media.
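To give a feel for the sample-code portion of the evening, here is a rough sketch of the kind of thing we will walk through: pulling inbound links out of a Technorati cosmos response with XPath-style queries. The sketch is in Python rather than Java for brevity, the element names reflect my reading of the API’s XML responses, and the sample response itself is invented for illustration.

```python
# Sketch: walk a Technorati cosmos XML response and list the weblogs
# linking to a URL. The sample response below is invented; real
# responses contain one <item> per inbound link.
import xml.etree.ElementTree as ET

sample = """<tapi version="1.0">
  <document>
    <result>
      <url>http://example.com</url>
      <inboundlinks>42</inboundlinks>
    </result>
    <item>
      <weblog>
        <name>A Linking Weblog</name>
        <url>http://linker.example.org</url>
      </weblog>
      <nearestpermalink>http://linker.example.org/2004/12/post</nearestpermalink>
    </item>
  </document>
</tapi>"""

tree = ET.fromstring(sample)
# ElementTree supports a limited XPath subset; these queries stay inside it.
for item in tree.findall(".//item"):
    name = item.findtext("weblog/name")
    link = item.findtext("nearestpermalink")
    print(name, "->", link)
```

The same `.//item` and `weblog/name` paths carry over almost directly to a Java XPath evaluator, which is what we will use at the meeting.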

21st Amendment is located at 563 2nd Street in San Francisco. It features free wireless internet, good food, and house-made beers. On Tuesday you get to keep your pint glass! We will be upstairs in the mezzanine loft. Street parking is usually not an issue.

I will try to set up an iSight so you can listen and watch from anywhere in the world. Please join #technorati on irc.freenode.net next Tuesday night if you would like to participate in the meeting remotely via IRC.

If you plan to attend please leave a comment below or contact me so I can get a good idea of headcount.

Technorati Users Group: Tuesday, December 21, 2004, 7:00 to 8:30 p.m. PST, 21st Amendment, 563 2nd Street, San Francisco, CA

Search wars

Charles Ferguson has a lengthy analysis in the January issue of MIT Technology Review of how the search wars between Google and Microsoft might play out over the next few years. Will Microsoft crush Google like it crushed Netscape? Ferguson bets on open standards and APIs as Google’s saving grace, creating lock-in through an ecosystem of tools and services.

Winning architectures are proprietary and difficult to clone, but they are also externally “open”—that is, they provide publicly accessible interfaces upon which a wide variety of applications can be constructed by independent vendors and users. In this way, an architecture reaches all markets, and also creates “lock-in”—meaning that users become captive to it, unable to switch to rival systems without great pain and expense.

More companies need to realize the power of the network and the ability of outside vendors to increase demand for services while creating products the provider does not have the time or imagination to invent.

Peter Norvig, the company’s director of search quality, told Technology Review, “We’ve had the API project for a few years now. Historically, it’s not been that important: it’s had one person, sometimes none. But we do think that this will be one important way to create additional search functions. Our mission is to make information available, and to that end we will create a search ecology. We know we need to provide a way for third parties to work with us. You’ll see us release APIs as they are needed.”

Wow. Only one person at Google responsible for supporting APIs?

Google should first create APIs for Web search services and make sure they become the industry standard. It should do everything it can to achieve that end—including, if necessary, merging with Yahoo. Second, it should spread those standards and APIs, through some combination of technology licensing, alliances, and software products, over all of the major server software platforms, in order to cover the dark Web and the enterprise market. Third, Google should develop services, software, and standards for search functions on platforms that Microsoft does not control, such as the new consumer devices. Fourth, it must use PC software like Google Desktop to its advantage: the program should be a beachhead on the desktop, integrated with Google’s broader architecture, APIs, and services. And finally, Google shouldn’t compete with Microsoft in browsers, except for developing toolbars based upon public APIs. Remember Netscape.

Microsoft can bundle search with the next version of Windows Server just as it delivered SharePoint with Windows Server 2003. Google needs to make a play for this pure software space instead of relying on the hardware bundle. I think desktop search will continue to be controlled by Microsoft, and Google will be confined to a market share similar to Firefox’s unless Google has a major distinguishing desktop search offering such as exclusive content.

Tiger adds default RSS reader option

The next version of the Macintosh operating system, Mac OS X 10.4, code-named Tiger, defines a default RSS reader at the system level. Preferences in the current early start kit allow a user to define the time between feed updates, the color annotation of a new article, and when to remove stored items. Hopefully this means aggregators will be able to share a common feed storage location.

Update: MacNet took the screenshots down at the request of Apple Legal. The preference pane showed drop-down boxes to select the default RSS reader, choose an update interval, highlight new items, and set the length of time to store items.

Value of idle time

I just finished reading “Quitting the paint factory,” an article by Mark Slouka in the November 2004 issue of Harper’s Magazine. Mark looks at the history of the American worker, the pursuit of money over the value of time and mind, and questions what we value. Mark talks about the value of idle time, how we now spend money to have busy leisure time, and he shares the stories of literary figures struggling with some of the same questions about life. (via BoingBoing)

What it says, crudely enough, is that in order to be successful, we must not only work but work continuously; what it assumes is that time is inversely proportional to wealth: our time demands will increase the harder we work and the more successful we become. It’s an organic thing; a law, almost. Fish gotta swim and birds gotta fly, you gotta work like a dog till you die.

PubSub LinkRank detail

PubSub has had LinkRanks for a while, but yesterday BoingBoing linked to a detail page not listed in PubSub’s site navigation. PubSub displays a line chart of incoming links to the specified domain over the last thirty days and a list of inbound links from the past ten days. There was only one double listing in my results: a TypePad weblog with a separate domain. The chart is rendered in Flash, making copying and pasting much more difficult. PubSub should learn from Alexa and make their charts easy to embed in other content. Think PowerPoint slides.

PubSub is experimenting with NewsML for entry tracking using the OASIS published subjects specification.

The line chart displays rank on the Y-axis and time on the X-axis; neither axis is labeled. Comparing PubSub results to Technorati results for the nine most recent links, there are six domains missed by Technorati and two missed by PubSub. One of the domains listed by PubSub referenced me 11 days ago, beyond the stated range. I cannot verify the reference from one of the domains listed by PubSub, ssucu.com, as it has no home page and no pages indexed by Google.

Overall, a good reminder that I need to play with PubSub more, especially if they capture del.icio.us references (through spliced FeedBurner feeds, I presume?).

Technorati API first long look

I spent some time today looking at the Technorati API and coding part of a personal tracking application. The API server was spotty throughout the day, making testing difficult. I cleaned up some of the wiki documentation, stored my own copies of API responses, and used my own servers to pull the data.

I put together a demo application using JavaScript.

Some things I noticed:

  • A cosmos query never returns rssurl or atomurl elements even though the data is stored. Use the weblog URL as a parameter in a bloginfo query to pull this data.
  • The lang element in bloginfo is undocumented: 31411 for English, 1065 for Japanese; not sure what else.
  • Technorati picks up a lot of duplicate references. A weblog entry will often be counted twice: once for the home page and once for the individual entry page.
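The bloginfo workaround in the first bullet can be sketched as a small parser over the API’s XML response. This is a sketch, not the code from my demo application (which was JavaScript): the element names match my reading of the wiki documentation, and the sample response is invented.

```python
# Sketch: parse a Technorati bloginfo XML response to recover the
# rssurl and atomurl elements that a cosmos query never returns.
# The sample response below is invented for illustration.
import xml.etree.ElementTree as ET

def parse_bloginfo(xml_text):
    """Extract weblog metadata from a bloginfo API response."""
    weblog = ET.fromstring(xml_text).find(".//weblog")
    return {
        "name": weblog.findtext("name"),
        "url": weblog.findtext("url"),
        "rssurl": weblog.findtext("rssurl"),
        "atomurl": weblog.findtext("atomurl"),
        "lang": weblog.findtext("lang"),  # 31411 = English, 1065 = Japanese
    }

sample = """<tapi version="1.0">
  <document>
    <result>
      <weblog>
        <name>Example Weblog</name>
        <url>http://example.com</url>
        <rssurl>http://example.com/index.xml</rssurl>
        <atomurl>http://example.com/atom.xml</atomurl>
        <lang>31411</lang>
      </weblog>
    </result>
  </document>
</tapi>"""

info = parse_bloginfo(sample)
print(info["rssurl"])  # http://example.com/index.xml
```

In practice you would fetch the response from the API server with your own key and cache a copy locally, as I did, since the server was spotty all day.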