Algorithm Analysis for Service Designers

The scalability of a service describes how expensive it is to operate as it grows. The more users access the service, the more expensive it becomes—but how much more expensive? The answer to that question is the service’s scalability. As we design services, we want them to be as scalable as possible, so that their growth produces revenue that rises out of proportion to the increase in expenses.

It is possible—in fact, quite easy—to design the opposite kind of service, where growth causes expenses to rise faster than revenue, and which therefore “hits a wall” and can no longer grow profitably.

How do we design services that scale?

Computer scientists have long had a tool that is helpful in answering this question. It’s called algorithm analysis. Don’t let the word “algorithm” scare you. This tool can help any service designer, technical or not, think through how the decisions you make about your service affect how its costs will grow as you gain more users (or devices, or sessions).

Here’s a layman’s explanation of algorithm analysis, followed by a practical guide to how to use it for better design.

The Concept

Algorithm analysis is a process in which you determine how rapidly an output will grow as some input grows. For example, it can help you to think through how the number of devices using your service (an output) will change as the number of users subscribing to your service (an input) grows. Or, it can help you think through how your costs will grow (an output) as your number of users grows.
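
If it helps to see that idea as something you could actually compute, here is a minimal sketch in Python. The relationship it encodes (an assumed average of 1.5 devices per subscribing user) is invented purely for illustration; the point is only that the output is a function of the input, and grows as the input grows.

```python
# Hypothetical relationship between an input (subscribers) and an output
# (devices using the service). The 1.5 devices-per-user average is an
# assumption made up for illustration.
def estimated_devices(subscribers: int) -> int:
    return round(subscribers * 1.5)

# As the input grows, the output grows with it.
for subscribers in (1_000, 10_000, 100_000):
    print(f"{subscribers:>7,} subscribers -> {estimated_devices(subscribers):>7,} devices")
```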

The result of an algorithm analysis is a function, usually expressed in “Big-O Notation,” like this:

O(n) = n²

Don’t panic. This equation is simply asserting that the output will square as the input increases.

The input—the number of users of your service, for example—is symbolized by n.

Don’t worry about the part that says O(n)—it’s just saying, “Hey, we’re doing algorithm analysis.”

The business end of the equation is n², which asserts that the output (your costs, for example) will square as your user base grows.

This is the first thing to understand about algorithm analysis: the result of your analysis is a function that starts with O(n) = … and ends with some mathematical expression concerning n.

Here’s the next thing to understand. It turns out that we don’t, at least when we’re first designing a system, need to know precisely how the output will grow as the input grows. An approximation is good enough. For example, if somebody tells you that the costs will grow like this:

O(n) = 2.4n

and somebody else asserts that the costs will grow like this:

O(n) = 4.5n

…it actually doesn’t matter who’s right. Both of these answers are “close enough” for making important decisions about the essential structure of your service design. Later you can optimize the precise per-user cost of the service, but when designing the service structure, the approximate growth curve is what matters.
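
To make that concrete, here is a small sketch comparing the two estimates (the 2.4 and 4.5 per-user figures are the hypothetical ones from above) as the user base grows tenfold and then a hundredfold. Whichever constant turns out to be right, the cost scales the same way: a hundred times the users means roughly a hundred times the cost.

```python
# Two competing linear cost estimates that differ only in their constant.
def cost_estimate_a(n): return 2.4 * n
def cost_estimate_b(n): return 4.5 * n

for n in (1_000, 10_000, 100_000):
    print(f"{n:>7,} users: estimate A = {cost_estimate_a(n):>11,.0f}, "
          f"estimate B = {cost_estimate_b(n):>11,.0f}")

# Both curves are linear: growing from 1,000 to 100,000 users multiplies the
# cost by 100 in either case. The constant changes the size of the bill;
# the shape of the curve changes the structure of the business.
```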

Consequently, there are actually only a few commonly-encountered answers to an algorithm analysis question. The most useful ones for our purposes are just these three.

  • O(n) = 1
  • O(n) = n
  • O(n) = n²

The first one says, “No matter how many users you have, your costs remain about the same.” This is your ideal case.

The second says, “Your costs will grow more or less in proportion to your users.” This is a linear curve, and it’s pretty normal.

The third says, “Your costs will grow as the square of the number of your users.” This is bad, and your service is probably going to fail. And why would this happen—why would a service suffer n² growth? Most likely because your service costs increase in proportion to relationships between users. If there are n users, and each of them has an expensive (for you) relationship to many other (n) users, then your costs are going to explode as you gain even just a few more users.

For computer scientists, there are several other valid answers to an O(n) analysis, and quite a bit more nuance in how to interpret and respond to them. For the purpose of service design, these three give us a few general guideposts that will help us design for better scalability.
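
Here is a minimal sketch of what those three guideposts look like side by side. The cost unit is arbitrary (think of it as dollars per whatever your expensive resource is); only the shapes of the three curves matter.

```python
# Three archetypal cost curves, in an arbitrary cost unit.
def flat_cost(n):      return 1       # O(n) = 1:  cost independent of users
def linear_cost(n):    return n       # O(n) = n:  cost proportional to users
def quadratic_cost(n): return n * n   # O(n) = n²: cost grows as the square

for n in (100, 1_000, 10_000):
    print(f"{n:>6,} users: flat = {flat_cost(n):>11,}  "
          f"linear = {linear_cost(n):>11,}  quadratic = {quadratic_cost(n):>11,}")
```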

How to Use It

Let’s say you’re designing a service—a tandem-bike ride-sharing service, say—and you need to make sure the service is scalable. How do you use algorithm analysis to do that?

The key question actually centers on that little variable, n. The main benefit you’ll get from doing algorithm analysis is deciding which input most causes the output to rise. Or, in other words: what exactly is it that drives increased cost? Here are some examples of features you could design, along with the input that drives costs for each one.

  • the ability to register as a rider or driver (pedaler?): n is the number of riders and drivers.
  • the ability to book a ride: n is the number of rides.
  • the ability to view a map of all your past rides: n is the number of waypoints in your routes.
  • the ability to leave feedback for a pedaler: n is the number of comments left, or possibly the number of views of those comments, depending on which is more expensive given the usage.

In each of these examples, the actual algorithm analysis is O(n) = n, the linear curve. What is changing from example to example is the identity of n, the thing being counted, which is driving the costs.

Consider, for example, the ability to view a map of all your past rides. This seems like an innocent-enough feature, and nice to have. But it’s liable to be expensive, because a single ride could have hundreds or thousands of waypoints. If you have 100,000 users, each taking 10 rides per year, with just 100 waypoints per ride, you need to store, recall, and display upon demand 100 million waypoints across your system. Naturally, that’s not impossible; depending on your server and database investments, it’s not even necessarily difficult. But it is much more expensive than storing zero waypoints per ride per user. And that expense will grow in proportion to how many users and rides you gain as your service grows, and even to how long those rides are.
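
The arithmetic behind that estimate is easy to sketch. The user, ride, and waypoint figures are the ones from the example above; the per-waypoint storage size is an added assumption, just to give a rough feel for the data involved.

```python
# Back-of-the-envelope estimate for the "map of all your past rides" feature.
users = 100_000
rides_per_user_per_year = 10
waypoints_per_ride = 100   # a modest route; a long ride could have far more

waypoints_per_year = users * rides_per_user_per_year * waypoints_per_ride
print(f"{waypoints_per_year:,} waypoints to store per year")   # 100,000,000

# Assume roughly 50 bytes per stored waypoint (coordinates, timestamp,
# ride id) to get a feel for annual storage growth.
bytes_per_waypoint = 50
gigabytes_per_year = waypoints_per_year * bytes_per_waypoint / 1e9
print(f"~{gigabytes_per_year:.0f} GB of new waypoint data per year")
```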

Is this feature worth that ever-increasing cost? Maybe it is, maybe it isn’t: but you need to recognize the question and answer it carefully.

Here’s an alternative feature, not as valuable but far cheaper: the ability to view a map of all your past rides showing only the start and end point. If we show only the start and end of each ride, then the number of waypoints per ride is a constant number: two. That is, the relationship from rides (input) to waypoints (output) is f(n) = 2, which is a Big-O approximate relationship of O(n) = 1. This relatively subtle change in the specification of the feature has a massive implication for its costs: data gathering, transmission, storage, recall, and display. If users adore the detailed plot of where they went, waypoint-by-waypoint, maybe it’s worth paying for. But if users are happy to see just the start and end point of each ride, this is a far cheaper way to give it to them.
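
A quick sketch of the difference between the two versions of the feature, using the same assumed figures as before:

```python
# Same ride volume as in the earlier estimate.
users = 100_000
rides_per_year = users * 10

full_route_waypoints = rides_per_year * 100    # ~100 waypoints per ride (assumed)
endpoints_only_waypoints = rides_per_year * 2  # just the start and end point

print(f"Full routes:    {full_route_waypoints:,} waypoints per year")
print(f"Endpoints only: {endpoints_only_waypoints:,} waypoints per year")

# Both versions still grow linearly with rides, but per ride the second
# version stores a constant two waypoints, making it about 50x cheaper here.
```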

Those were some examples, but what did we just do? What should we do in future to determine the general cost of features that we propose for our service?

  1. Analyze what elements the feature calls for (users, rides, waypoints…)
  2. Analyze how those elements grow as the number of users grows.
  3. Analyze which element grows most dramatically for each new user you gain.
  4. Determine whether there is a way to reduce, or eliminate, the growth of that element, possibly through a subtle (to users) redesign of the feature (see the sketch after this list).
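
As a worked example of that checklist, here is a sketch applying the four steps to the ride-map feature from earlier; the comments correspond to the steps, and the figures are the same assumed ones.

```python
# Step 1: the elements the feature calls for are users, rides, and waypoints.
users, rides_per_user, waypoints_per_ride = 100_000, 10, 100

# Step 2: how those elements grow as users grow. Rides and waypoints
# both grow in proportion to the number of users.
rides = users * rides_per_user
waypoints = rides * waypoints_per_ride

# Step 3: the element that grows most dramatically per new user is
# waypoints: about 1,000 per user per year with these figures.
print("waypoints per new user per year:", rides_per_user * waypoints_per_ride)

# Step 4: reduce that growth. Showing only start and end points turns
# ~1,000 waypoints per user per year into 20.
print("with endpoints only:", rides_per_user * 2)
```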

This article hasn’t made a computer scientist of you quite yet. This is a bit of a bastardization of the concept of algorithm analysis, which is much deeper and more rigorous in its field. But you now have a tool that will help you think about what’s growing in your service and how it grows, so that you can make sure the costs of your service grow as slowly as possible as your user base grows.