At Risk: An Online Service is Sometimes Neither Online Nor Serviceable
The downside of relying upon all these online services is that, well, they’re online services [1]. The sad truth of the internet is that these things disappear all the time, often with little or no notice. Some notable examples:
- Geocities: Once the 500-pound gorilla of internet traffic, Yahoo! shut down Geocities in 2009. ArchiveTeam saved as much of it as possible and made it publicly available on the Internet Archive.[2]
- Google: Many of its services, including the Maps API for Flash. The general Maps API, once freely available, now costs $4 per 1,000 visits.
- Friendster: At one point it was a contender in the race against Facebook; then this past May it announced it would be deleting all user data.
- Lala: Apple purchased then shut down this online music service, some say in preparation for its own online service (probably what became iCloud).
- Del.icio.us: It has been on the Yahoo! chopping block. It wasn’t shut down but was instead purchased by the founders of YouTube.
These are just a few of the innumerable internet services which have either seen their final sunset or been very close to it. Each of these companies had users who relied upon their services, users who were left high and dry when the last staff member left the building. And in the end their reliance just didn’t matter. The lights went out and the service disappeared.
This is bad enough for your Average Joe User, but what if your own company relies upon someone else’s API? What if you were one of the companies whose offering depended upon the Google Maps API for Flash, only to have it end-of-lifed? Or what if the Amazon cloud suddenly became unreachable? How would your company fare?
In this interconnected world of ours, we are all stronger through the cooperation afforded by public APIs and online services of this kind. That does not mean, however, that we can afford to be blind to the risks we take by relying on them. As CTOs, architects, managers, and programmers we all need to consider these scenarios and prepare for them in order to minimize the impact on our users. Downtime happens to everyone at some point and is excusable, but failure to prepare is not. It’s just stupid.
When designing your project or service, build in failsafes so that an external API or online service going dark doesn’t become a crisis. Fail gracefully; don’t flame out. It could be as simple as disabling certain pages (or even the entire service, if necessary) when an API can’t be reached. That is a far better experience than leaving your users staring at a 404 or 500 status page…
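To make that concrete, here is a minimal sketch (in Python) of the “fail gracefully” idea: the call to an external API is wrapped so that a timeout, network error, or bad response quietly disables one widget instead of blowing up the whole page. The endpoint, function names, and markup below are hypothetical placeholders, not any particular provider’s API.

```python
# Minimal sketch: degrade gracefully when an external API is unreachable.
# "maps.example.com" and the render helpers are hypothetical; adapt to your stack.
import requests

MAPS_API = "https://maps.example.com/v1/geocode"  # hypothetical external service


def fetch_geocode(address, timeout=2.0):
    """Return geocode data from the external API, or None if it can't be reached."""
    try:
        resp = requests.get(MAPS_API, params={"q": address}, timeout=timeout)
        resp.raise_for_status()
        return resp.json()
    except requests.exceptions.RequestException:
        # Connection error, timeout, or non-2xx status: degrade, don't explode.
        return None


def render_map_section(address):
    """Render the map widget, or a polite placeholder when the API is down."""
    data = fetch_geocode(address)
    if data is None:
        return "<p>Maps are temporarily unavailable. Everything else still works.</p>"
    return f"<div class='map' data-lat='{data['lat']}' data-lng='{data['lng']}'></div>"
```

The key design choice is that the rest of the page never sees the failure: the wrapper returns a sentinel (or a placeholder fragment) and the outage stays contained to the one feature that actually depends on the third party.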
When researching this article, I came across the post API Half-lives by Gabriel Weinberg of DuckDuckGo. He has a great list of questions for a team to ask itself when considering which APIs and services to use in their projects. Go give it a look, then sit down with your team and consider whether you’ve put yourselves at risk and, if so, what you can do about it.
[2] ArchiveTeam’s raison d’être is swooping in to save at-risk data. They do great work and could use your help preserving our fragile digital heritage. Check out their wiki or hop on their IRC channel and say howdy. They’re all rogues with hearts of gold who’ll welcome you with open arms.