Google Chrome 13 released to stable channel

Google released Chrome 13 into its stable channel this morning with over 5,200 revisions, including Instant Pages. Google may now preload your pages and their associated resources from the search results page in supporting browsers such as Chrome 13 and later. If your webpages do not already differentiate between attended and unattended pageviews using the Page Visibility API for site analytics (and other functions that assume live eyeballs and the opportunity to interact with page elements), your pageview numbers are now likely inflated.

WebKit and Chrome prerendering

Google search result pages now trigger a prefetch and prerender of top search result links in an effort to make navigating search results as easy as changing channels on your television. If Google’s search algorithms determine there is a significant probability of user click-through on a particular result, they will instruct supporting browsers to preload the entire destination page, including images, JavaScript, advertisements, and analytics. The page requests happen in the browser, looking like a regular webpage view with the exception of a few JavaScript variables addressable from your site’s JavaScript. Support for preloading webpages and all related assets is a feature of WebKit-based browsers released after May 2011, including Chrome 13.

  1. What’s happening
  2. How to add code to accommodate
  3. Summary

What’s happening

Any webpage may now ask supporting browsers to “prerender” another webpage to prime the browser cache and improve perceived load times. If Chrome discovers a <link rel="prerender"> element in your document’s <head> it might trigger an unattended pageview.
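A minimal sketch of the triggering markup, with next.html standing in as a placeholder URL for whatever page you expect the visitor to load next:

```html
<!-- ask supporting browsers to fetch and render a likely next page -->
<link rel="prerender" href="next.html" />
```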

A few use cases currently short-circuit the browser’s attempt to prerender the page:

  • Content requiring browser plugins such as Adobe Flash
  • <audio> or <video> content

Tracking eyeballs, not machines

So how do you track when a page is loaded by what looks like a regular user but is really a background process of the browser? The prefetch features of WebKit also support the in-progress Page Visibility API. JavaScript running on your pages can check the current visibility of the document and attach a new event listener to take action if that state changes.

var isVisible = false;
function visibilityFunction() {
  isVisible = ( document.webkitVisibilityState === "visible" );
}
if ( document.webkitVisibilityState === undefined || document.webkitVisibilityState === "visible" ) {
  isVisible = true; // no API support, or the page is already in view
} else {
  document.addEventListener( "webkitvisibilitychange", visibilityFunction );
}

You may view a full example using the jQuery JavaScript framework. You may test your page’s prerender behavior on a page created by the Chromium project.

A webpage has three visibility states: hidden, prerender, and visible. A tab opened behind the current tab is hidden. A page requested by the browser but not yet shown in the browser UI is prerender. Standard navigation and viewing is visible. Browsers not yet supporting the Page Visibility API return an undefined state when your script attempts to access the property. If you are tracking a visitor’s first interaction with a page you are probably interested in the visible state. Inside your visibility function you will want to check the visibility state and call your functions of interest.
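The dispatch inside such a visibility function can be sketched as follows; trackPageview is a hypothetical stand-in for your own analytics call, and the state string is passed in as a parameter so the logic works outside a browser:

```javascript
// A sketch of visibility-aware tracking; trackPageview is a hypothetical
// stand-in for your own analytics call, not a real library function.
function handleVisibility( state, trackPageview ) {
  if ( state === undefined || state === "visible" ) {
    trackPageview(); // live eyeballs: count the view now
    return true;
  }
  return false; // "hidden" or "prerender": wait for a visibility change
}
```

In the browser you would call handleVisibility( document.webkitVisibilityState, yourTracker ) on load and again inside your webkitvisibilitychange listener.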


What looks like a real pageview from a modern browser might be a browser downloading your page resources in the background before possibly being presented to an actual visitor. Websites that care about separating eyeballs from machines should add new JavaScript to their pages to create awareness of the current loading state. This new prerender behavior is enabled by default in Chrome 13 and will become more common as WebKit-derived browsers or competitors enable similar functionality to improve perceived page load times.

SSL statistics from Chrome and Googlebot

The Google Chrome team released new statistics and implementation details on their proposed “False Start” abbreviated TLS handshake. Google claims the new handshake, introduced in version 9 of the Chrome browser in February, shaves an average of 120 milliseconds from a typical four-flight TLS handshake by accepting application data before both sides have communicated a “Finished” status. Chromium and its descendants such as Chrome can signal their acceptance of the abbreviated handshake protocol in the initial request for compatibility with 99.6% of known websites in the Google search index serving pages via the https scheme. Chromium flags incompatible sites in a blacklist text file bundled with the browser.

False Start is another example of the Chrome team speeding up the web by questioning existing protocols, introducing new ideas for how the Web could work enabled via a browser flag and possibly a server configuration, and defining a compatibility corpus based on the Web observed by Googlebot. SPDY is another good example. I like it.

Mike Belshe, an engineer on the Chrome team, has also been posting about unnecessarily long SSL certificate chains on large websites and the path to a short SSL chain, including the competitive advantage of long-established certificate issuers. These are good references for the SSL/TLS behaviors behind the scenes that may be slowing down your websites and causing trust issues on mobile clients.

Powered by WordPress

This blog is now powered by WordPress multisite. Automattic, a major contributor to the WordPress project in employee hours and hosting, has been a client of mine since the summer of 2009. I co-organized the first WordCamp with project founder Matt Mullenweg in 2006, about a year after photoshopping some of the first WordPress apparel during the Webzine conference. I have co-hosted WordCamp San Francisco with Matt for the last two years. I submitted my first patch in early 2005 to improve Atom feed templates. Yet through all of my involvement with WordPress my main blogs have stayed on Movable Type. I finally patched pieces of WordPress core that were bothering me, waited for multisite to stabilize, ported over my theme, and wrote a few plugins for feature parity. This blog was previously powered by Movable Type, Blogger Pro, Blojsom, Radio Userland, First Class, hand-edited HTML, and HyperCard.

A publishing engine should serve its primary purpose: publishing the message you would like to send out into the world. Over the years the number of templates generated with each new published message has grown to include desktop HTML, mobile HTML, web feeds such as Atom and RSS, sitemaps for search engine discovery, and notifications delivered to search engines, feed readers, and social sites with any content change. I am interested in implementing web publishing best practices and general community engagement on my blog; moving away from Perl-powered Movable Type to PHP-powered WordPress feeds my urge to tinker, experiment, and improve my site’s relationship with the broader web. Millions of websites powered by WordPress will be able to take advantage of the core patches and plugins I use on my own site. I’m also excited WordPress is finally moving to PHP 5.2 and MySQL 5, which should help speed up the software and collapse the many conditionals baked into the software.

If you are a WordPress core developer you can follow my patches on my WordPress profile. I will be contributing more patches before the WordPress 3.2 code freeze.

The many flavors of H.264 video

H.264 is not a single video codec; it is a family of codecs with some shared shortcuts grouped into 17 sets of profiles and 16 levels of constraints. Video creators and playback software share a mutual understanding of these shortcuts, which are often accelerated by specialized chipsets. This post examines a few of the many flavors of H.264 video and their application in mobile, desktop, and Flash Player environments.

Ogg Theora 1.1 macroblocks example

A compressed video is a series of shortcuts shared between a video creator and a viewer. A series of pictures, 30 per second in most capture devices, is analyzed and compared, collapsing a group of pictures into a single photograph plus the variances between pictures before or after it in the series. All lossy video codecs examine a series of pictures and look for pieces that can be thrown out and replaced with shortcuts to recreate video quality with less stored data. Specialized decoders in our playback software, often assisted by chips specially programmed to quickly execute these shortcuts, decompress video with these specialized instruction sets. Shortcuts can be patented, leading to some of the intellectual property concerns around H.264, VP8, and Theora video as video playback and encoding targets are increasingly integrated with web browsers implementing support for native HTML5 <video>.
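The keyframe-plus-differences idea can be illustrated with a toy sketch (this is not a real codec, just the shape of the shortcut): store one frame whole, then record only the pixels that changed.

```javascript
// Toy illustration of frame differencing: keep the first frame whole,
// store later frames as a map of changed pixel index -> new value.
function diffFrames( prev, next ) {
  var delta = {};
  for ( var i = 0; i < next.length; i++ ) {
    if ( next[ i ] !== prev[ i ] ) {
      delta[ i ] = next[ i ]; // only the changed pixels are stored
    }
  }
  return delta;
}
function applyDelta( prev, delta ) {
  var next = prev.slice(); // start from the reference frame
  for ( var i in delta ) {
    next[ i ] = delta[ i ];
  }
  return next;
}
```

Real codecs work on macroblocks with motion compensation rather than raw pixels, but the payoff is the same: a small delta replaces a full picture.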

  1. H.264 flavors
  2. The Apple effect
  3. Flash Player for mobile
  4. WebM and VP8
  5. Summary

H.264 flavors

H.264 is not a single video codec; it is a family of codecs with some shared shortcuts grouped into 17 sets of profiles and 16 levels of constraints. Decoding software, often backed by chips specially wired for video tasks (such as NVIDIA’s PureVideo), fills a storage buffer and tries to compute video frames more quickly than those frames are requested by the player. High-complexity profiles and levels offer the highest quality video in the smallest file size but require a larger file buffer and computational horsepower to quickly decompress a video. High complexity works well in an overpowered desktop environment but videos must be adjusted for simplified, battery-sipping use cases such as a mobile phone.

Feature                               Baseline  Main  High
Flexible macroblock ordering (FMO)    Yes       No    No
Arbitrary slice ordering (ASO)        Yes       No    No
Redundant slices (RS)                 Yes       No    No
B slices                              No        Yes   Yes
Interlaced coding (PicAFF, MBAFF)     No        Yes   Yes
CABAC entropy coding                  No        Yes   Yes
8×8 vs. 4×4 transform adaptivity      No        No    Yes
Quantization scaling matrices         No        No    Yes
Separate Cb and Cr QP control         No        No    Yes
Monochrome (4:0:0)                    No        No    Yes

Videos are encoded with specific playback targets in mind based on maximum compatibility. The iPhone 3GS supports H.264 Baseline Level 3.0. The iPhone 4 and iPad support H.264 Main Profile Level 3.1. The latest netbooks with NVIDIA ION and PureVideo HD support H.264 High Profile Level 4.1. A video optimized for desktop, notebook, or netbook playback encoded using H.264 High Profile Level 4.1 will not play back on an iPhone.
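The profile and level an MP4 file targets can be read straight from its RFC 4281 style codecs string (avc1.PPCCLL, where PP is the profile number and LL the level, both in hexadecimal). A rough sketch of the mapping, as my own helper rather than part of any encoder:

```javascript
// Decode an "avc1.PPCCLL" codec string into a human-readable H.264
// profile and level, e.g. "avc1.42E01E" -> "Baseline profile level 3.0".
function describeAvc1( codec ) {
  var hex = codec.split( '.' )[ 1 ]; // e.g. "64001F"
  var profiles = { 66: 'Baseline', 77: 'Main', 88: 'Extended', 100: 'High' };
  var profile = profiles[ parseInt( hex.slice( 0, 2 ), 16 ) ] || 'Unknown';
  var level = parseInt( hex.slice( 4, 6 ), 16 ) / 10; // 0x1F = 31 -> 3.1
  return profile + ' profile level ' + level.toFixed( 1 );
}
```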

The Apple effect

Adobe has repeatedly said that Apple mobile devices cannot access “the full web” because 75% of video on the web is in Flash. What they don’t say is that almost all this video is also available in a more modern format, H.264, and viewable on iPhones, iPods and iPads.

Steve Jobs, April 2010

Adobe’s Flash Player added support for H.264 video decoding in August 2007 with its Flash Player 9 Update 3 Beta 2 (9.0.115) release. Websites previously packaged a video file (a Flash video container, or FLV, with an On2 VP6 or Sorenson video track) into a single Flash file for distribution and playback. The launch of H.264 support in Flash decoupled the video player and the video file, loading videos over the network when a viewer initiates playback (a much lighter payload for embeds such as YouTube). Video websites can directly expose MP4 downloads to iTunes, the QuickTime browser plugin, or search engines for download and indexing.

Decoupling the Flash video viewer from the underlying video provides direct access but does not necessarily deliver video “viewable on iPhones, iPods and iPads.” Video publishers need to dumb down their video for Apple’s low-power devices (and Flash mobile), or a video will be viewable but not playable.

YouTube exposes multiple video resolutions on its website. Each video resolution uses a slightly different version of H.264 but none of these videos delivered to desktop web browsers are compatible with an iPhone 3GS and its Baseline profile requirement. Let’s take a look at the underlying videos exposed in the default Flash version of YouTube for the latest weekly address from the White House.

Exposed YouTube web formats

  • MP4, High profile level 4.1
  • FLV, Main profile level 3.1
  • FLV, Main profile level 3.0

The H.264 videos used by YouTube for default video playback on web browsers are not compatible with portable Apple devices not built off an A4 processor. YouTube is creating special video files for iOS and other mobile devices.

Flash Player for mobile

On June 22 Adobe released Flash Player 10.1 for mobile, its first full Flash player written for ARM instruction set architectures. Flash for mobile does not solve the video playback problem. Flash can draw a player area and display a preview image of the video in place of a failed plugin icon. Video playback ultimately depends on the hardware decoder horsepower behind the scenes and its ability to deliver video frames and synchronized audio to your mobile device’s screen at the intended playback rate, within the constraints of the small file buffer and memory available on mobile devices. Flash for mobile renders a player and its interaction elements; video for mobile still relies on simpler sets of shortcuts targeting hardware-accelerated features and the computing resources available on mobile.

WebM and VP8

Google introduced the WebM file format on May 19 with a container based on Matroska, a VP8 video track, and Vorbis audio. Google released any patent rights it may assert over VP8 and released the source code for libvpx, a reference encoder and decoder, with 17 test vectors for implementers. The popular FFmpeg project, used by many web publishers for encoding and by Google Chrome for decoding, quickly added native VP8 support in late June. FFmpeg’s VP8 implementation was able to reuse many of the video encoder and decoder shortcuts already used by H.264, opening VP8 to hardware-accelerated playback by chipsets optimized for H.264 shortcuts. If your encoder, decoder, and hardware already pay into the H.264 patent licensing pool run by MPEG-LA, the shared, patent-asserted shortcuts present in VP8 can be a good thing. If you were hoping for a freedom-loving replacement for Theora, VP8 may not be clear of patent assertions (but Mozilla seems to like it).


Web developers are excited about H.264 video and the rise of browser-native playback through HTML5 <video> markup. H.264 is a family of standards, each with its own set of shortcuts shared between a video publishing tool and a video player. The excitement over mobile video has overlooked the intricacies of H.264 profiles and levels detailed by RFC 4281 and the changing landscape of hardware-accelerated video on mobile. Video publishers should be aware of playback differences between playback devices and either choose a lowest common denominator or specifically target the quality and file size of an intended playback device.

HTML5 video markup, compatibility and playback

The emerging HTML5 specification lifts video playback out of the generic <object> element and into the specialized <video> handler. Explicit markup for audio and video elevates moving pictures to the same native rendering status as the <img> markup we are used to, but with more fine-grained detail about underlying formats and compression available before loading. In this post I will dive into implementation details of HTML5 video based on currently available consuming agents and outline some of the nuances of preparing media for playback.

  1. Inside the video element
    1. Browser workflow
    2. JavaScript-based workflow
  2. Implementation nuances
  3. Player UIs
  4. HTML5 video and Flash
  5. Summary

Inside the video element

The video element is the top-level element of a cascading element set designed to handle graceful degradation across a wide array of HTML rendering engines. If a web browser or other consuming agent unpacks the DOM and does not understand what you have described it should process child elements until something makes sense or it reaches the end of your element tree.

<video width="480" height="320" id="video" poster="video_frame.jpg" controls="true" autobuffer="true">
  <source src="video_high.mp4" type="video/mp4; codecs=&quot;avc1.64001E, mp4a.40.2&quot;" />
  <source src="video_base.mp4" type="video/mp4; codecs=&quot;avc1.42E01E, mp4a.40.2&quot;" />
  <source src="video.ogv" type="video/ogg; codecs=&quot;theora, vorbis&quot;" />
  <object id="flashvideo" width="480" height="320" classid="clsid:D27CDB6E-AE6D-11cf-96B8-444553540000" codebase=",0,115,0" standby="Loading your video...">
    <param name="movie" value="video-player.swf" />
    <param name="quality" value="best" />
    <param name="allowfullscreen" value="true" />
    <param name="loop" value="false" />
    <param name="flashvars" value="movie=video_high.mp4" />
    <!--[if !IE]>-->
    <object type="application/x-shockwave-flash" width="480" height="320" data="video-player.swf" standby="Loading your video...">
      <param name="quality" value="best" />
      <param name="loop" value="false" />
      <param name="allowfullscreen" value="true" />
      <param name="flashvars" value="movie=video_high.mp4" />
      <img alt="animated GIF" src="video_animated.gif" width="480" height="320" />
      <p class="robots-nocontent">We tried to show you a video but your browser does not support native video playback and does not have a copy of <a rel="nofollow" href="">Adobe Flash</a> installed. Please upgrade your browser and plugins.</p>
    </object>
    <!--<![endif]-->
  </object>
</video>

Look complicated? It is! The static markup above describes six possible video interactions with the web browser. Three different source videos are described in HTML5 markup: an MP4 file container with an H.264 video track using the High profile Level 3 and low-complexity AAC audio (suitable for desktops); an MP4 file container with an H.264 video track using the Baseline profile and low-complexity AAC audio (suitable for mobile phones); and an Ogg file container with a Theora video track and a Vorbis audio track. I will dive deeper into file format and codec nuances in a separate post. If the HTML5 video markup fails, or none of the three specified source videos are compatible with the consuming agent, the markup falls back to double-baked markup for the Flash Player plugin. If HTML5 video fails and Flash embedding fails, the markup includes simple information about the video and an animated GIF preview.

Behind the scenes the web browser is converting your markup string into its own set of mapped elements, passing off to the appropriate handler, and adjusting page layout based on its new discoveries.

Browser workflow

  1. Read the markup string.
  2. Build an element tree.
  3. Find a <video> element.
  4. I know how to process a <video> element. Map defined attributes.
    1. Found width and height attributes. Prepare the page layout for new content.
    2. The controls attribute is present and I know how to process the attribute. The publisher would like to use the default playback UI built-in to my video handler.
    3. Found a src attribute. Try to load the referenced resource. Similar handling to an <img> src.
    4. Found an autobuffer attribute and I know how to process the attribute. Start buffering the movie resource before the viewer initiates playback.
    5. Found a poster attribute and I know how to process such an attribute. The publisher would like to show a poster frame image inside the video object dimensions before the viewer initiates playback.
    6. The src attribute is either undefined, unavailable, or incompatible. Continue parsing child elements for a better content match.
      1. Found a source element and I know how to process such an element.
        1. The type attribute value references an Internet media type I recognize and support for Internet video. It’s possible I might be able to read the file format and unpack the video container after download.
          1. A codecs parameter is specified within the type attribute, defining the video codec and audio codec needed to decode the container’s video and audio tracks respectively.
        2. The src attribute exists. Queue the referenced resource for network loading after the viewer initiates playback, or immediately if autobuffer was specified in the video element.
      2. No suitable source element found. Continue searching.
  5. Found an <object> element with an object handler specified using the classid attribute. My name is most likely Trident/IE.
    1. The classid attribute value matches a plugin installed on the viewer’s computer: Adobe Flash Player.
    2. The version of Flash Player currently installed on the viewer’s computer is less than the minimum specified value in the codebase attribute.
      1. Attempt to download and install a new Flash Player ActiveX control at or above version 9.0.115 “MovieStar.” The specified Flash Player version is capable of handling a MP4 video container with an H.264 video track and AAC audio track.
      2. Stop processing the video object; reload later.
    3. A <param> element exists with a name attribute of movie and a resource location declared in the value attribute.
    4. Display text specified in the object’s standby attribute value while I attempt to load the Adobe Flash browser plugin and its SWF file interpreter. Pass the specified param element key-value pairs into the Flash interpreter as well as the FlashVars query parameter describing dynamic values interpreted by the SWF at runtime.
  6. Either I ignore conditional comment blocks targeting Trident/IE, or such a conditional evaluates as true.
  7. Found an object element with an object handler specified using the type attribute.
    1. The type attribute specifies an Internet media type connected to a known plugin registered in the plugin system (most likely NPAPI).
    2. The data attribute exists and specifies a valid resource.
    3. Display text specified in the object’s standby attribute value while I attempt to load the Adobe Flash browser plugin and its SWF file interpreter. Pass the specified param element key-value pairs into the Flash interpreter as well as the FlashVars query parameter describing dynamic values interpreted by the SWF at runtime.
  8. No acceptable video player found. Display an animated GIF preview of the movie. Let the viewer know they are missing out on the full content experience.
  9. Acceptable movie found and queued.
    1. Attempt to progressively download or stream the specified video element.
      1. Does the Content-Type returned by the server match our expected value(s)?
      2. Does the server accept downloading individual pieces of a file at a time (Accept-Ranges)?
      3. Did the resource return a X-Content-Duration header specifying expected playback length in seconds?
    2. Send downloaded video pieces to the video decoder for decompression.
    3. Initiate a playback buffer.
    4. Fire events related to the final loaded stage of the process.

Yes, I have oversimplified.
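The source-cycling step in the workflow above can be sketched as a loop over <source> candidates; canPlayType is passed in as a function so the sketch runs without a browser:

```javascript
// Walk <source> candidates in document order and take the first type
// the consuming agent claims it can play.
function pickSource( sources, canPlayType ) {
  for ( var i = 0; i < sources.length; i++ ) {
    if ( canPlayType( sources[ i ].type ) !== '' ) {
      return sources[ i ].src; // "maybe" or "probably" both count here
    }
  }
  return null; // no suitable source; fall through to <object> markup
}
```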

JavaScript-based workflow

It is possible to test playback capabilities of the browser and its related plugins through JavaScript (if JavaScript is available on the page of course). If you are considering supporting HTML5 video at some point in the future but are curious how many of your visitors could support the new playback method you could track analytic events today to influence your product roll-out months down the road.

Video element support

Test the current consuming agent’s support for the <video> element by declaring a new DOM object and evaluating the browser’s default handlers. If the created DOM object contains functions present in a default HTMLVideoElement or HTMLMediaElement interface we know the consuming agent applied special handling to our video element declaration and likely supports HTML5 video.
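A minimal sketch of that detection, with the document object passed in as a parameter (doc) so the check is testable outside a browser:

```javascript
// Create a <video> element and look for an HTMLMediaElement method on
// the result; a generic HTMLUnknownElement will not have canPlayType.
function supportsVideo( doc ) {
  var v = doc.createElement( 'video' );
  return typeof v.canPlayType === 'function';
}
```

In a real page you would simply call supportsVideo( document ).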


Individual codec support

Testing support for the video element is only the first step. We also need to check playback support for the specific video and audio codecs used in our source videos. The canPlayType method returns the likelihood a given file container, video codec and audio codec are supported by the consuming agent.

// wrap the test in a function so the early return is valid
function hasMp4Support() {
  var v = document.createElement('video');
  var supported = v.canPlayType('video/mp4; codecs="avc1.58A01E, mp4a.40.2"');
  return ( supported === 'probably' );
}

Detect Flash

Flash Player 9.0.115 and above is required to play MP4 file containers with H.264 video and AAC audio. The Flash Player detection kit provides client-side detection libraries and automatic upgrade capability for site visitors not already using the latest version of Flash.

Check for an ActiveXObject of ShockwaveFlash.ShockwaveFlash.10 or ShockwaveFlash.ShockwaveFlash.9 and compare the full version string.

In an NPAPI plugin environment check the navigator.mimeTypes array for the key “application/x-shockwave-flash,” verify the associated plugin is enabled, and parse the version number from the plugin’s description string.
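The version parse can be sketched as a small helper; flashVersion is my own hypothetical name, not part of Adobe's detection kit:

```javascript
// Pull a [major, minor, revision] triple out of an NPAPI plugin
// description string such as "Shockwave Flash 10.0 r45".
function flashVersion( description ) {
  var match = /(\d+)\.(\d+)(?:\s*r(\d+))?/.exec( description || '' );
  if ( !match ) { return null; }
  return [ +match[ 1 ], +match[ 2 ], +( match[ 3 ] || 0 ) ];
}
```

In the browser the description string comes from navigator.mimeTypes["application/x-shockwave-flash"].enabledPlugin.description.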

DOM insertion

Once your script has determined the best available video playback method you can insert the appropriate markup using a subcomponent of the markup used above.

Implementation nuances

In the static markup method of describing content, the consuming agent cycles through possible <source> elements one at a time in search of a suitable match. In my testing on mobile WebKit (iPhone OS) this test cycle removes the poster frame image described in the <video> element and places a broken video image inside the element dimensions instead. If a later <source> element matches, a generic playback image is added to the element. Source element cycling is the new flash of unstyled content for the HTML5 video world.

The dynamic insertion method relies on the canPlayType method and its return values of “probably” or “maybe.” Maybe is not good enough for my needs if I have a Flash fallback option, but if you are in a constrained playback environment such as low-power mobile devices then acting on a response of “maybe” is better than nothing. Just be sure to send along some alternate HTML as a failure fallback.

Player UIs

Each web browser supporting HTML5 video uses its own backing software to power the video playback experience. Chromium and Google Chrome use a specially patched version of FFmpeg. QtWebKit uses Phonon. Layer platform-specific video acceleration, UI, and handling on top and you will see a variety of final UIs across browsers and platforms. Including the controls attribute in your <video> element is the quickest path to launch but you give up control over interactions.

If a web browser supports HTML5 video it almost certainly supports native vector graphics as well. It’s possible to craft your own UI with supported JavaScript methods triggering play, pause, and final frame handling in the native video handler.

HTML5 video and Flash

Flash is the dominant method of video playback on the web today. Native browser support of HTML5 video and business excitement to reach low-power devices such as the iPhone provide compelling reasons to offer content using HTML5 video markup. Flash has supported progressively loading MP4 files with H.264 video and AAC audio since 2007. Flash Player 10.1, expected in the next few months, speeds up playback with fewer resources thanks to specialized GPU handling and more efficient code. HTML5 video and Flash playback solutions will need to co-exist for maximum reach (that’s the reason you are using Flash in the first place).

Playback is only one component of the total video experience. You will need to develop analytics and advertising capabilities to match or exceed your current Flash experience. Advertisers don’t publish interactive advertisements in <canvas>. The high-CPM pre-roll and post-roll video advertisements we see today are based on a Flash ecosystem built up over the years. HTML5 video and your money maker of choice will need to find a way to co-exist (banner and text advertisements still work well) and drive your development budget. I expect better JavaScript libraries from the open-source community, and advertising networks solving some of the problem, in the near future, just as a suite of XHR handlers popped up once Ajax started to take off.


HTML5 video has arrived and is deployed across a wide enough user base for sites and developers to stand up and pay attention. File support and markup varies by browser and there is currently no native support in Internet Explorer. Developers are excited to take advantage of the performance gains of native video handlers and reach new audiences in the smartphone market. If you are thinking of implementing HTML5 video in the future it’s possible to start measuring your audience’s playback compatibility today so you at least know your deploy targets.

Google search referer changes

Google will roll out a change to its search results pages later this week designed to better capture outbound clicks. Google search result pages will link to a gateway URL before delivering the visitor to his final destination. These gateway URLs will replace search result URLs exposed via the Referer HTTP header. Google announced the new gateway page on its Google Analytics blog, giving webmasters a few days to prepare for the change.

What is changing?

The Referer path for Google search results will change from /search to /url. It is still not clear which URL parameters from the search page will be passed through the gateway. The search term, q, is still preserved inside the sample URL provided by the Google Analytics blog.


Scripts, plugins, and helpers relying on a set Referer path for content highlighting or targeting will need to adjust their code as Google’s change spreads throughout their data centers worldwide.
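An adjusted referer check can be sketched as follows. This accepts both the old /search path and the new /url gateway; it assumes the q parameter survives the gateway, as in Google's sample URL, and uses the URL API for brevity:

```javascript
// Extract the search term from a Google search referer, accepting both
// the old /search path and the new /url gateway path.
function googleSearchTerm( referer ) {
  var url = new URL( referer );
  if ( url.hostname.indexOf( 'google.' ) === -1 ) { return null; }
  if ( url.pathname !== '/search' && url.pathname !== '/url' ) { return null; }
  return url.searchParams.get( 'q' ); // null if q was not passed through
}
```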

Why the change?

Google is likely making this change to better track search actions and shield URL parameters from sites downstream. Gateway URLs dependably capture click data and reformat the information passed along to external sites.

Search engines evaluate customer satisfaction based partly on outbound click behavior. Searchers who consistently click on the third search result may be sending Google a signal about that content’s authority for a search term and therefore influencing the ranking algorithms. Traditionally such an action would be measured with a JavaScript onclick event added to the link to pass a signal back to the search engine’s servers before taking the searcher to his destination. JavaScript tracking does not work on all clients, including clients accessing search results with JavaScript turned off (e.g. through Google’s APIs or a feature phone).

The search result page includes detailed information needed by Google to deliver the best possible result. A search might include a location from a GPS sensor, social context drawn from a group or custom search engine parameter, or other sources of questionable exposure. Google will only expose a few relevant parameters in URLs included in a web browser’s Referer headers.


The way your website interprets traffic from one of its top providers will change later this week. You will need to adjust scripts and check for updates to analytics software where appropriate. If you notice a huge drop in measured search referrals from Google don’t panic. Just make sure you are measuring the correct actions.

Facebook’s photo storage rewrite

This week Facebook will complete its roll-out of a new photo storage system designed to reduce the social network’s reliance on expensive proprietary solutions from NetApp and Akamai. The new large blob storage system, named Haystack, is a custom-built file system solution for the over 850 million photos uploaded to the site each month (500 GB per day!). Jason Sobel, a former NetApp engineer, led Facebook’s effort to design a more cost-effective and high-performance storage system for their unique needs. Robert Johnson, Facebook’s Director of Engineering, mentioned the new storage system rollout in a Computerworld interview last week. Most of what we know about Haystack comes from a Stanford ACM presentation by Jason Sobel in June 2008. Haystack will allow Facebook to operate its massive photo archive from commodity hardware while reducing its dependence on CDNs in the United States.

The old Facebook system

Facebook photo storage architecture 2008

Facebook has two main types of photo storage: profile photos and photo libraries. Members upload photos to Facebook and treat the transaction as a digital archive with very few deletions and intermittent reads. Profile photos are a per-member representation stored in multiple viewing sizes (150px, 75px, etc). The past Facebook system relied heavily on CDNs from Akamai and Limelight to protect its origin servers from a barrage of expensive requests and improve latency.

Facebook profile photo access is accelerated by Cachr, an image server powered by evhttp with a memcached backing store. Cachr protects the file system from new requests for heavily-accessed files.
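
Cachr’s role amounts to a cache-aside read path: check memcached first, fall back to the file system on a miss, and populate the cache for subsequent requests. A rough sketch, with a plain dict standing in for memcached and a hypothetical disk loader:

```python
cache = {}          # stand-in for memcached
disk_reads = []     # track file-system hits, for illustration only

def load_from_disk(photo_key):
    disk_reads.append(photo_key)
    return b"...jpeg bytes for " + photo_key.encode()

def get_photo(photo_key):
    """Serve a photo, touching the file system only on a cache miss."""
    data = cache.get(photo_key)
    if data is None:
        data = load_from_disk(photo_key)
        cache[photo_key] = data   # heavily-accessed files stay hot
    return data

get_photo("profile_1234_150px")   # miss: reads from disk
get_photo("profile_1234_150px")   # hit: served from cache
```

The second request never reaches the file system, which is exactly the protection Cachr provides for heavily-accessed profile photos.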

The old photo storage system relied on a file handle cache placed in front of NetApp to quickly translate file name requests into an inode mapping. When a Facebook member deletes a photo its index entry is removed but the file still exists within the backing file system. Facebook photos’ file handle cache is powered by lighttpd with a memcached storage layer to reduce load on the NetApp filers.

No need for POSIX

Facebook photographs are viewable by anyone in the world aware of the full asset URL. Each URL contains a profile ID, photo asset ID, requested size, and a magic hash to protect against brute-force access attempts.
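
Facebook has not published how the magic hash is computed; a common construction for this kind of URL protection is an HMAC over the stable parts of the path, verified before serving the file. A hypothetical sketch (the key, truncation length, and URL layout are all assumptions):

```python
import hmac
import hashlib

SECRET = b"server-side secret"   # hypothetical signing key, never exposed

def magic_hash(profile_id, asset_id, size):
    """Sign the stable URL components; truncated hex digest for brevity."""
    msg = f"{profile_id}/{asset_id}/{size}".encode()
    return hmac.new(SECRET, msg, hashlib.sha256).hexdigest()[:16]

def photo_url(profile_id, asset_id, size):
    h = magic_hash(profile_id, asset_id, size)
    return f"/photos/{profile_id}/{asset_id}_{size}_{h}.jpg"

def is_valid(profile_id, asset_id, size, h):
    """Reject brute-force guesses: the supplied hash must match exactly."""
    return hmac.compare_digest(h, magic_hash(profile_id, asset_id, size))
```

Without the secret, an attacker cannot enumerate photo IDs and sizes to discover valid URLs.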


Traditional file systems follow the POSIX standard, which dictates the metadata and access methods for each file. These file systems are designed for access control and accountability within a shared system. An Internet photo store whose files are written once, rarely deleted, and readable by the world has little need for such overhead. A POSIX-compliant inode must specifically contain:

  • File length
  • Device ID
  • Storage block pointers
  • File owner
  • Group owner
  • Access rights on each assignment: read, write, execute
  • Change time
  • Modification time
  • Last access time
  • Reference counts

Only the top three POSIX requirements matter to a system such as Facebook’s. Its servers care where a file is located and its total length but have little use for file owners, access rights, timestamps, or the possibility of linked references. The additional overhead of POSIX-compliant metadata storage and lookup on NetApp Filers led to 3 disk I/O operations for each photo read. Facebook simply needed a fast blob store but was stuck inside a file system.
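
You can see the mandated metadata on any POSIX system with a stat call; for a read-mostly photo store, most of these fields are dead weight. For example:

```python
import os
import tempfile

# Create a throwaway file to stand in for a stored photo.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"fake photo bytes")
    path = f.name

st = os.stat(path)
useful = {"length": st.st_size, "device": st.st_dev}    # all a blob store needs
overhead = {"owner": st.st_uid, "group": st.st_gid,     # what POSIX stores anyway
            "mode": st.st_mode, "links": st.st_nlink,
            "atime": st.st_atime, "mtime": st.st_mtime,
            "ctime": st.st_ctime}
os.unlink(path)
```

Every one of those overhead fields must be read off disk before the photo bytes can be located, which is where the extra I/O operations come from.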

Haystack file storage

Facebook haystack photo storage system

Haystack stores photo data inside 10 GB buckets with 1 MB of metadata for every GB stored. Metadata is guaranteed to be memory-resident, leading to only one disk seek for each photo. Haystack servers are built from commodity servers and disks assembled by Facebook to reduce costs associated with proprietary systems.

The Haystack index stores metadata about the one needle it needs to find within the Haystack. Incoming requests for a given photo asset are interpreted as before, but now contain a direct reference to the storage offset containing the appropriate data.
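
The idea can be sketched as an in-memory index mapping a photo key to a byte offset and length inside one large bucket file, so a read costs exactly one seek. A toy illustration (not Facebook’s actual on-disk format), with an in-memory buffer standing in for the bucket file:

```python
import io

bucket = io.BytesIO()   # stands in for a 10 GB haystack bucket file
index = {}              # photo key -> (offset, length), kept entirely in RAM

def append_photo(key, data):
    """Writes are append-only; the index records where each needle landed."""
    offset = bucket.seek(0, io.SEEK_END)
    bucket.write(data)
    index[key] = (offset, len(data))

def read_photo(key):
    """One index lookup in memory, then a single seek + read on disk."""
    offset, length = index[key]
    bucket.seek(offset)
    return bucket.read(length)

append_photo("a", b"first photo")
append_photo("b", b"second photo")
```

Deletion under this scheme is simply removing the index entry, matching the behavior described above for the old system: the bytes linger in the bucket until compaction.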

Cachr remains a first line-of-defense to Haystack lookups, quickly processing requests and loading images from memcached where appropriate. Haystack provides a fast and reliable file backing for these specialized requests.

Reduced CDN costs

The high performance of Haystack combined with new data center presence on the east and west coasts of the United States reduces Facebook’s reliance on costly CDNs. Facebook does not currently have the points of presence to match a specialist such as Akamai, but speed-of-light latency plus fast file access should be low enough to cut CDN use in regions where Facebook already has data center assets. In markets such as Asia, where Facebook has no foreseeable physical presence, it can partner with specialized CDN operators to boost access times.


Facebook has invested in its own large blob storage solution to replace expensive proprietary offerings from NetApp and others. The new server structure should reduce Facebook’s total cost per photo for both storage and delivery moving forward.

Big companies don’t always listen to the growing needs of application specialists such as Facebook. Yet you can always hire away their engineering talent to build you a new custom solution in-house, which is what Facebook has done.

Facebook has hinted at releasing more details about Haystack later this month, which may include an open-source roadmap.

Update April 30, 2009: Facebook officially announced Haystack and further details.

Facebook’s growing infrastructure spend

Facebook logo

On Thursday BusinessWeek reported Facebook is seeking new financing for its data center operation growth in 2009. Facebook continues to add new members and their associated content at an extremely fast pace, with most new growth coming from international markets. Facebook needs to expand its abilities to serve these markets by bolstering current infrastructure offerings and cutting latency to its members through new international points of presence. In this post I will take a deeper look at Facebook’s current computing infrastructure and related expenses and examine likely new areas of investment in 2009.

Facebook members

Facebook currently has over 160 million members in its top 30 markets. Facebook enjoys a 24% market penetration across all 30 countries, including complete domination in Chile and Turkey, where 76% and 66% of all Internet users are members of Facebook. Facebook member numbers are taken from; total Internet users for each country is as reported by the CIA World Factbook.

Country         Facebook members  Internet users  Penetration
United States   54,739,960        223,000,000     24.55%
United Kingdom  17,308,040        40,200,000      43.05%
Hong Kong       1,584,600         3,961,000       40.01%
South Africa    1,390,440         5,100,000       27.26%
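
The penetration figures are simply members divided by Internet users; for the United States, for example:

```python
members = 54_739_960
internet_users = 223_000_000

penetration = members / internet_users * 100
print(f"{penetration:.2f}%")   # 24.55%
```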

Facebook’s current infrastructure serves North America with minimal latency. Future expansion into Europe and southeast Asia is likely as Facebook tries to grow its international audience.

Data centers

Facebook data center map March 2009

Facebook currently operates out of four data centers in the United States: three on the west coast and one on the east coast. Facebook leases at least 45,000 square feet of data center space.

Switch & Data’s PAIX at 529 Bryant Street in Palo Alto is just around the corner from the Facebook offices and a long-time home to Facebook servers. It’s unclear how much of the 100,000 square-foot, liquid-cooled data center is currently occupied by Facebook.

Facebook has been with Terremark’s NAP West in Santa Clara since November 2005. Facebook originally leased 10,000 square feet but may have grown larger over the years. Facebook is still listed as a Terremark customer in Santa Clara but the company might be consolidating its operations into its new local data center.

Facebook geo-distributed its web operations in 2008 with DuPont Fabros’ ACC4 in Ashburn, Virginia. Facebook leased 10,000 square feet in 2007 and occupied the space in 2008 after extensive reworking of the Facebook backend. Facebook shares ACC4 with MySpace, Google, and other competitors.

In January 2009 Facebook moved into its first exclusive data center, Digital Realty Trust’s 1201 Comstock Street in Santa Clara. The 24,000 square feet of data center space operates at a PUE of 1.35, a respectable mark against the 1.22 reported for Google and Microsoft facilities. Facebook leases the center from Digital Realty as its sole occupant.

Facebook is rumored to be adding an additional 20,000 square feet of data center space in Ashburn, Virginia in DuPont Fabros’ ACC5. Facebook is expected to move into ACC5 in September 2009 and place new servers online by the end of the year.

Facebook recently announced an international headquarters in Dublin, Ireland that will include “operations support” across Europe, the Middle East, and Africa. A European data center is a likely expansion point for Facebook as they try to solidify their European offerings.

Server loans

Facebook debt

Facebook paid for part of its infrastructure expansion through specialized debt financing from TriplePoint Capital. Facebook drew down $30 million in 2007 followed by another $60 million in 2008. BusinessWeek reports Facebook is currently trying to secure as much as $100 million in debt financing for its next round of growth.

Debt financing against physical assets such as servers and office buildings offers lower rates than a traditional venture capital round. Facebook’s server expenditures have a recoverable resale value mapped over a depreciating lifespan, unlike direct and unrecoverable payments to employees and service providers. Lenders such as TriplePoint are a specialized type of real estate investor, a market carrying huge risk premiums at the moment. Facebook’s $100 million request is bigger than TriplePoint’s typical investments, placing the new expansion beyond that firm’s scope during a time of real-estate investment turmoil. Facebook needed to look to other financing operations for bigger infrastructure loans, an expected move for the growing company.

Facebook spent $68 million on Rackable servers in 2007 and early 2008, likely as a result of their Virginia data center build-out. Facebook is also rumored to be a large consumer of premium-priced proprietary hardware: NetApp storage appliances and Force10 networking gear.

Facebook’s debt financing agreement with TriplePoint Capital expired a few months ago, leading the company to seek new sources of financing for its new Santa Clara data center and other expansion plans. Facebook is in discussions with Bank of America for additional loans against this capital expenditure according to BusinessWeek.

How many servers?

Facebook had over 10,000 servers as of August 2008 according to Wall Street Journal coverage of a presentation by Jonathan Heiliger, Facebook’s VP of Technical Operations. Facebook signed an infrastructure solutions agreement with Intel in July 2008 to optimally deploy “thousands” of servers based on Intel Xeon 5400 4-core processors in the next year.

  • ~800 memcached servers supplying over 28 TB of memory.
  • ~600 storage servers with 8 CPUs and 4 TB of storage per server. That’s 4,800 cores and about 2.4 PB of raw storage!
  • More than 850 million photos and 7 million videos added to the data store each month. That’s a lot of Filers.

Facebook uses Akamai and other CDN providers to serve static content to visitors around the world. It’s an expensive service offering not covered by Facebook’s server debt financing.


Facebook faces difficult infrastructure challenges as the company tries to keep up with explosive growth around the world. Current shocks in the real estate investment market have made property financing difficult for all companies, including Facebook. New infrastructure moves from Facebook coming online this year should lower total operating costs per server thanks to new efficiencies in the cost of power and a decline in leasing price per square foot as Facebook buys in bulk. I expect new deals with foreign governments such as Ireland will lead to new expansion by Facebook heavily influenced by the ex-Googlers on staff who have paved this path before.

Facebook is a privately-held company, offering limited insights into its expenses and other operations. The company seems to be repricing its server debt financing each year and has just crossed into the capital lending realm of big banks not easily able to take big risks in their property portfolios at the moment.

Create enhanced results on Yahoo! and Facebook with Share markup

Yahoo! announced support for enhanced search results last week based on Facebook Share and RDFa markup. Website owners can add a few meta tags to their pages to boost click-throughs from a more visual Yahoo! Search result and ease the process of sharing a link on Facebook at the same time. In this post I will cover the major categories of enhanced share types — audio, images, video, news, blogs, games, documents, and multimedia — and walk through how site owners can stand out on shareable platforms. Yahoo! and Facebook are just the first two platforms to collaborate on this effort. Expect more announcements from a variety of activity stream and URL share providers in the future.

Why add special markup?

Yahoo! Enhanced Search example

Shared pages and search results contain, at a minimum, a title and a description extracted from your site’s <head>. You can enhance your search and share capabilities by explicitly specifying a thumbnail image or playback data you would like included directly alongside mentions of your pages.

<meta name="title" content="Apple iPhone 3G" />
<meta name="description" content="iPhone 3G combines three products in one..." />
<link rel="image_src" type="image/jpeg" href="" />

Thumbnails appear alongside Yahoo! search results in a 54 pixel high by 98 pixel wide image window. Thumbnails might be overlaid with a call to action for supported mediums and their embedded players.
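
Publishers supplying their own thumbnail can pre-compute how an arbitrary image scales into that 98 by 54 pixel window; a simple proportional-fit calculation (assumed behavior only — Yahoo! has not documented its exact cropping rules):

```python
def fit_dimensions(width, height, max_w=98, max_h=54):
    """Scale an image down proportionally to fit the thumbnail window.

    Images already inside the window are left at their original size.
    """
    scale = min(max_w / width, max_h / height, 1.0)
    return round(width * scale), round(height * scale)

fit_dimensions(512, 296)   # a 512x296 video frame scales to 93x54
```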

Supported mediums

The Facebook Share API supports annotating a page as audio, video, generic multimedia, an image, blog content, or news content. Publishers simply add a single line to their pages to help Facebook classify the type of page shared by their users and apply the appropriate specialized handling.

<meta name="medium" content="blog" />
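
On the consuming side, a sharing platform just scans the document head for that tag. A minimal sketch using Python’s standard-library HTML parser:

```python
from html.parser import HTMLParser

class MetaSniffer(HTMLParser):
    """Collect <meta name="..." content="..."> pairs from a page head."""
    def __init__(self):
        super().__init__()
        self.meta = {}

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            d = dict(attrs)
            if "name" in d and "content" in d:
                self.meta[d["name"]] = d["content"]

sniffer = MetaSniffer()
sniffer.feed('<head><meta name="medium" content="blog" />'
             '<meta name="title" content="Example" /></head>')
```

A platform would then branch on `sniffer.meta["medium"]` to apply the specialized handling for audio, video, blog, or news content.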

Yahoo! Search currently supports enhanced search results for video, games, and documents (all wrapped in a Flash player). Playback SWF files must be white-listed by Yahoo! before they are supported for inline playback (Facebook also white-lists). Yahoo! currently supports the following websites’ embedded Flash players:

  • Video: Hulu, YouTube, Yahoo! Video, Metacafe
  • Documents: Scribd, SlideShare

Inline audio

Facebook supports track title, artist name, album name, and a direct link to the audio through its share interface. Facebook users should be able to play back MP3s and other popular audio formats from their social news feed.

<meta name="medium" content="audio" />
<meta name="title" content="Pearl Jam - Black" />
<meta name="description" content="Sheets of empty canvas, untouched sheets of clay..." />
<link rel="image_src" type="image/jpeg" href="" title="Ten Album cover" />
<link rel="audio_src" type="audio/mpeg" href="" />
<meta name="audio_type" content="audio/mpeg" />
<meta name="audio_title" content="Black" />
<meta name="audio_artist" content="Pearl Jam" />
<meta name="audio_album" content="Ten" />

Inline video

Video sites should specify their embedded player and its desired height and width on each video page. White-listed players will be played back directly in a social news feed.

<meta name="medium" content="video" />
<meta name="title" content="The Daily Show with Jon Stewart: Thu, Mar 12, 2009" />
<meta name="description" content="Jon Stewart and CNBC's Jim Cramer go face to face in the studio." />
<link rel="image_src" type="image/jpeg" href="" />
<link rel="video_src" type="application/x-shockwave-flash" href="" />
<meta name="video_width" content="512" />
<meta name="video_height" content="296" />
<meta name="video_type" content="application/x-shockwave-flash" />


Search results pages and social news feeds provide new sources of referrals for connected sites. A few extra meta tags help your pages stand out any time a machine or a human decides to share your content with others. Adding a thumbnail to a page, something as simple as a user profile picture in a social context, is an easy step that helps your content stand out from the crowd.

Publishers supporting embedded Flash players should consider the impact of off-site inline playback on their business. If you’ve already embraced spreading your SWFs far and wide you should jump on the white-list queue early to gain a competitive advantage through partners such as Yahoo! and Facebook.