The expectations of computer design and interaction are changing. On September 9 I gave a talk at PARC to the ACM's local human-computer interaction group, bringing a developer's perspective to expanding the possibilities of efficient design. The presentation is titled "Better Design Through Code" and is available for download below. What information about our web visitors is available with every request? How can site authors better tap into the full spectrum of run-time information about each visitor, their interests, and their computing capabilities?
Analyzing full request information
An incoming HTTP request contains much more information than a host and path, yet we typically pay no attention to the additional request data and simply respond with a single resource. The requesting browser identifies itself and communicates its capabilities: its user's preferred language, computing hardware, operating system, version information, network location, and much more. Web applications can tune into the data contained in these requests and adapt content for specific usage groups.
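As a small sketch of the idea, the function below ranks the languages in an Accept-Language request header by their quality values so an application can serve each visitor's preferred language first. The function name and the sample header are illustrative, not from the presentation.

```javascript
// Parse an Accept-Language header such as "en-US,en;q=0.8,fr;q=0.5"
// into a list of { lang, q } pairs sorted by descending quality value.
function parseAcceptLanguage(header) {
  return header.split(',')
    .map(function (part) {
      var pieces = part.trim().split(';q=');
      // A language with no explicit q value defaults to 1.0.
      return { lang: pieces[0], q: pieces[1] ? parseFloat(pieces[1]) : 1.0 };
    })
    .sort(function (a, b) { return b.q - a.q; });
}

var prefs = parseAcceptLanguage('en-US,en;q=0.8,fr;q=0.5');
console.log(prefs[0].lang); // "en-US"
```

A server would read the header from the live request and fall back to a default locale when no listed language is supported.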
A mobile client such as the iPhone may have different usage expectations than a desktop browser or an HDTV display. Past solutions have sent our visitors to entirely new domains — e.g. iphone.domain.tld, wii.domain.tld — specific to their device or desired experience. We can instead build content negotiation directly into our applications, serving the best content experience to unique use cases at the same URL. Our servers listen to an incoming request, analyze the capabilities of the client, and match the client with the best available content experience for the given path.
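One minimal way to sketch this negotiation step is a dispatcher that inspects the User-Agent string and picks a content template for the same URL. The patterns and template names here are hypothetical; real-world user-agent matching warrants a maintained detection library rather than two regular expressions.

```javascript
// Choose a content template for the requesting device at a single URL.
// Hypothetical sketch: the substrings tested and templates returned are
// illustrative only.
function selectTemplate(userAgent) {
  if (/iPhone/.test(userAgent)) return 'mobile';
  if (/Wii/.test(userAgent)) return 'tv';
  return 'desktop';
}

console.log(selectTemplate('Mozilla/5.0 (iPhone; U; CPU like Mac OS X)')); // "mobile"
console.log(selectTemplate('Mozilla/5.0 (Windows NT 5.1)'));              // "desktop"
```

The same path then serves every device, with the server varying only the rendered experience.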
Large data sets present navigation challenges that may be best filtered by visitor location. An IP address and its registered parents provide basic location information with varied confidence levels by country, region, or city. A quick lookup of an IP address might identify your visitor as a Comcast Cable customer or a Yahoo! employee, letting you alter your page content accordingly. Homepages of global corporations should not have to ask for user input to select a home country or region. A national retailer can show a map of local store locations before prompting for more detailed user input. An e-commerce website can automatically fill in form fields for each visitor, reducing total checkout time.
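The lookup itself can be sketched as a prefix match against a local table. The addresses below come from RFC 5737 documentation ranges and the organization names are invented; a production system would query a registered-owner or geolocation database rather than a hand-built list.

```javascript
// Hypothetical lookup table keyed by IP prefix. Real applications would
// consult a geolocation/whois database with proper CIDR matching.
var ipTable = [
  { prefix: '192.0.2.',    org: 'Example Cable',   region: 'US-West' },
  { prefix: '198.51.100.', org: 'Example Telecom', region: 'US-East' }
];

// Return the first table entry whose prefix matches, or null on a miss.
function lookupVisitor(ip) {
  for (var i = 0; i < ipTable.length; i++) {
    if (ip.indexOf(ipTable[i].prefix) === 0) return ipTable[i];
  }
  return null;
}

console.log(lookupVisitor('192.0.2.44').org); // "Example Cable"
```

A miss simply falls through to the site's default, location-neutral content.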
Detecting installed software
Software installed on our computers leaves browser-addressable footprints in the form of MIME types and URL schemes meant to connect our browsers, webpage embeds, or downloaded files with the appropriate installed application. A Quicken file is automatically passed along to the Quicken program for interpretation. Google's Picasa photo software leaves a detection footprint used by Google and others. We can detect the video capabilities of a client by examining installed applications such as Windows Media Player, QuickTime, or Flash for codec availability. We can even search for installed hardware such as an occasionally tethered GPS unit or portable music player.
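A minimal sketch of MIME-based detection: scan a navigator.mimeTypes-style list for a type string registered by an installed application. In the browser you would pass window.navigator.mimeTypes; the mocked list below stands in for it so the sketch runs anywhere, and the type strings shown are common examples rather than an exhaustive test.

```javascript
// Return true when the given MIME type appears in a mimeTypes-like list
// (an array or array-like collection of objects with a .type property).
function hasMimeType(mimeTypes, type) {
  for (var i = 0; i < mimeTypes.length; i++) {
    if (mimeTypes[i].type === type) return true;
  }
  return false;
}

// Mocked stand-in for window.navigator.mimeTypes.
var mockedMimeTypes = [
  { type: 'video/quicktime' },
  { type: 'application/x-shockwave-flash' }
];

console.log(hasMimeType(mockedMimeTypes, 'video/quicktime')); // true
```

Each positive match hints at an installed player or plug-in, letting the page choose a media format the client can actually play.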
Browser history tests
The final part of my presentation focused on identifying the favorite websites and web services of a visiting user to improve site content. It is possible to insert links into a webpage for comparison against the current visitor's browser history. Site owners can test URL sets and adapt their web content based on these identified services. We can present different page enhancements to a Facebook user or a MySpace user, test our audience for future development targets, and only show content we determine to have a high probability of converting.
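The first half of that technique can be sketched as a function that builds the hidden test links. In the browser, a stylesheet would give :visited links a distinctive color and a script would read each link's computed color to infer whether its URL appears in the visitor's history; that browser-only step is noted in comments, and the URL list and class name here are illustrative.

```javascript
// Build the hidden anchor markup for a CSS :visited history test.
// In the page, a rule like `.history-test:visited { color: #f00; }` plus a
// getComputedStyle() check on each anchor completes the detection.
function buildHistoryTestLinks(urls) {
  return urls.map(function (url) {
    return '<a class="history-test" href="' + url + '">' + url + '</a>';
  }).join('\n');
}

var markup = buildHistoryTestLinks([
  'http://www.facebook.com/',
  'http://www.myspace.com/'
]);
console.log(markup);
```

The URL set is the interesting design decision: each candidate service you care about costs one test link on the page.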
Every time a web page loads we throw out potentially useful data. With just a little effort we can thrill our users with custom, adaptive experiences based on their unique computing and personality profiles for increased engagement and conversions. This presentation outlines some of the reasonably easy methods of customization available to site owners seeking more intelligent methods of visitor interaction through smart server- and client-side applications.