The wonders of fb:ref and iRules: Serving pages from Facebook’s cache

I want to introduce (or remind everyone of) the fb:ref markup tag, and show how, through the use of iRules on our BIG-IPs, we can offload the serving of common pages to Facebook's cache.

Importantly, this happens without having to hit or rewrite your application, and it is implemented on really fast, robust, edge "application switches".


At Joyent we exclusively use active-passive pairs of F5 BIG-IPs for all of our load balancing and application switching. Besides full packet inspection and the ability to handle many nodes and pools of nodes, BIG-IPs have a great facility called iRules. iRules let you control and manipulate basically everything coming in and out of the network.


The fb:ref markup tag is described on the Facebook Developers Wiki:

Fetches and renders FBML from a given ref source – either a ref string “handle” you’ve created using fbml.setRefHandle or a URL that serves FBML. You can use this ref to publish identical FBML to a large number of user profiles and subsequently update those profiles, without having to republish FBML on behalf of each user (that is, using profile.setFBML for each user).

And the exact benefits and process are described by wwall in a forum post:

There are at least 2 main benefits to fb:ref.
1) You only have to make 1 api call to update multiple user profiles. (and it's cached on fb's servers)
2) It replaces the existing fb:ref content with your new version (so you can keep the rest of the profile the same)

There are 2 versions of fb:ref: with one you use a handle, and with the other a url. They are set up differently.

1) Register/create handle, with content using setRefHandle()
2) call setFBML(), using your FBML that includes a tag referencing your handle
3) update the content of the handle using setRefHandle()
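As a sketch, the two flavors of the tag might look like this in a profile's FBML (the handle name and URL here are made-up examples, not from the original post):

```html
<!-- Handle flavor: "popular_list" is a hypothetical ref handle
     previously created with fbml.setRefHandle -->
<fb:ref handle="popular_list"/>

<!-- URL flavor: Facebook fetches and caches the FBML served at this URL -->
<fb:ref url="http://example.com/fbml/popular_list"/>
```

Either way, updating the ref source updates every profile that embeds the tag, without calling profile.setFBML per user.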

The idea

You have an application where a respectable percentage of traffic is the same page or pages, served over and over and over again, and those pages either don't change that often or change on some regular time interval.

Typical approaches to keep your application processes from being consumed serving these pages are to write them out as static pages and serve those (which still requires some running through the application), or to front-end your application with a cache like Varnish (which requires both having the Varnish infrastructure in place and expiring the cached pages upon update).

The idea with fb:ref is that you can asynchronously (i.e. no user is waiting right now for that page to come up) push those pages up to Facebook, where the FBML is processed and then cached by Facebook.

This is good for both the application developer and Facebook. Facebook can take its time processing the FBML in those pages, and can then guarantee that the page comes up fast along with the rest of Facebook.

Implementing it with an iRule

iRules are written in Tcl, and we're going to use the very powerful HTTP::respond command. HTTP::respond takes the response code, the keyword content (saying "hey, this is body content"), and then the actual content:

when HTTP_REQUEST {
    if { [HTTP::uri] contains "/popular_something/list" } {
        HTTP::respond 200 content "<fb:ref handle='[HTTP::uri]'/>"
    } else {
        pool facebook.application_server_pool
    }
}

HTTP::uri is used here, rather than something like HTTP::path, because we want this single rule to handle /popular_something/list, /popular_something/list/1, and /popular_something/list/4182.

We initially write an iRule for each URI so they can be independently toggled on and off (and this toggling is instant), but they can be further collapsed into single rules with regular expressions and ||.
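A collapsed rule might look like the following sketch. The URI patterns and pool name are hypothetical, and matches_regex is the iRule operator for regular-expression matching:

```tcl
when HTTP_REQUEST {
    # Hypothetical: one rule covering several popular list URIs at once
    if { [HTTP::uri] matches_regex {^/(popular_foo|popular_bar)/list} } {
        # Serve the fb:ref tag directly from the BIG-IP; Facebook
        # fills in the cached FBML registered under this handle
        HTTP::respond 200 content "<fb:ref handle='[HTTP::uri]'/>"
    } else {
        # Everything else goes on to the application servers
        pool facebook.application_server_pool
    }
}
```

The per-URI rules trade a little duplication for the ability to flip individual pages in and out of the cache instantly, which is why we start there.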

The result

This example is for a Facebook application doing 30+ million requests a day, where 80+% of those requests are for the same 5 pages.

The application developers can load up Facebook on their own schedule, and once an HTTP request comes in for one of these URIs, the body content is instantly returned from what is a network device, rather than hitting the application servers.

Not bad.
