APIs are the building blocks of client-server communication, exchanging information through a request-response pattern. In any distributed system, it becomes immensely important to build APIs that are robust and remain highly available even in the face of network issues. This article summarizes a few good practices that help in developing robust, highly available APIs.
In the face of a network failure, an API must provide a consistent response once the system comes back up. One of the most common issues in the distributed systems world is a failure on either the client or server side that leads to a retry of an API operation. In such scenarios, APIs should be built to be idempotent, meaning that no matter how many times you call the API with an identical request, the response remains the same, or in better words, the effect of the request on the server is the same as if only a single request had been made.
This can be achieved by using idempotency keys. During client-server communication, the client generates a unique key to identify a request and sends it to the server. If the client retries the same request after a failure, the server can reply with the previously cached result if it has already seen that idempotency key.
Exponential Backoff Retry
During a failure, a client can retry the request a few times until it gets a response back. Usually, failures such as intermittent network issues are gone by the next retry. But if the server is facing a more serious issue that leads to a longer downtime, retrying continuously only worsens the problem. Hence, clients should follow an exponential backoff algorithm: wait for an initial interval after the first failure, then increase the wait time exponentially on each subsequent failure. Even with exponential backoff, multiple clients can end up retrying around the same time and again add load to the server. This can be avoided by adding a random jitter to each wait, which spaces out the requests reaching the server.
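A sketch of the retry loop, assuming the operation signals transient failure by raising an exception (here `ConnectionError` stands in for whatever your client library raises). The "full jitter" variant shown, drawing the wait uniformly between zero and the exponential cap, is one common choice among several:

```python
import random
import time

def call_with_backoff(operation, max_attempts=5, base_delay=0.5, cap=30.0):
    """Retry `operation`, doubling the wait each attempt, plus random jitter."""
    for attempt in range(max_attempts):
        try:
            return operation()
        except ConnectionError:
            if attempt == max_attempts - 1:
                raise  # out of attempts; surface the failure to the caller
            # Exponential backoff: base * 2^attempt, capped, with full jitter
            # so simultaneous clients don't retry in lockstep.
            delay = random.uniform(0, min(cap, base_delay * (2 ** attempt)))
            time.sleep(delay)

# Simulated flaky operation: fails twice, then succeeds.
attempts = {"count": 0}
def flaky():
    attempts["count"] += 1
    if attempts["count"] < 3:
        raise ConnectionError("simulated network blip")
    return "ok"

result = call_with_backoff(flaky, base_delay=0.01)
```

The cap matters in practice: without it, a long outage would drive the wait into minutes or hours.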
Rate Limiting APIs
Due to traffic spikes, there are times when API request volume increases suddenly, leading to response timeouts or, even worse, service outages. You can certainly increase the capacity of your infrastructure to account for user growth, but beyond a certain limit it's advisable to guard your APIs against unexpected traffic bursts. Applying sensible rate limits to every user's account can prevent such large-scale degradation by controlling the amount of traffic sent to your APIs (typically measured in requests per second).
There are various types of rate limiters, and the right one depends on the kind of traffic an API serves. One of the most common approaches is to limit users by request count: analyze the traffic patterns to your API before and during a spike, and use that data to cap every user at a certain number of requests per second.
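One classic per-user limiter of this kind is the token bucket, sketched below. The class is illustrative: the rate and capacity are arbitrary, and a real deployment would keep one bucket per account in shared storage rather than in process memory.

```python
import time

class TokenBucket:
    """Allow `rate` requests per second on average, bursting up to `capacity`."""

    def __init__(self, rate, capacity):
        self.rate = rate            # tokens added per second
        self.capacity = capacity    # maximum burst size
        self.tokens = capacity      # start with a full bucket
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill tokens in proportion to the time elapsed since the last call.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        # Caller should respond with HTTP 429 Too Many Requests.
        return False

bucket = TokenBucket(rate=1.0, capacity=2)
results = [bucket.allow(), bucket.allow(), bucket.allow()]  # burst of three
```

The first two calls fit within the burst capacity; the third is rejected until tokens refill.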
An API is a contract between its developers and the users who rely on it to fetch data. Hence, it becomes super important to make any new changes backward compatible. One way to achieve this is API versioning. Versioning lets users switch to a newer set of API changes at their own pace. Although versioning costs the developers the effort of maintaining the old versions alongside enhancing the newer ones with features, it is one of the proven ways to let users upgrade to the latest version whenever they want.
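Path-based versioning is one common way this plays out. The toy routing table below is hypothetical (the endpoint names, handlers, and the breaking change, splitting a `name` field in two, are all invented for illustration), but it shows why both handlers must coexist:

```python
# v2 introduces a breaking change (the `name` field is split); v1 is kept
# so existing clients keep working until they choose to migrate.
def get_user_v1(user_id):
    return {"id": user_id, "name": "Ada Lovelace"}

def get_user_v2(user_id):
    first, _, last = "Ada Lovelace".partition(" ")
    return {"id": user_id, "first_name": first, "last_name": last}

ROUTES = {
    "/v1/users": get_user_v1,
    "/v2/users": get_user_v2,
}

def dispatch(path, user_id):
    """Route a request to the handler for the version encoded in the path."""
    return ROUTES[path](user_id)
```

Alternatives such as a version header or a query parameter work the same way; the key point is that old clients never see a changed response shape.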
Sometimes the response from the server can be huge, increasing latency and thereby degrading the performance of the API. In order to handle such responses gracefully, APIs should return batched responses or, in other words, paginate the response.
Pagination can be achieved by using some kind of marker to indicate that another batch of results is associated with a request. Each response contains one batch of results plus an identifier that can be used to fetch the next batch.
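A cursor-style sketch of that marker scheme, assuming a simple list standing in for the backing store and an integer offset standing in for what would usually be an opaque, encoded cursor:

```python
ITEMS = list(range(100))  # stand-in for rows in a database

def list_items(cursor=0, limit=25):
    """Return one page of results plus a cursor for the next page (None at end)."""
    page = ITEMS[cursor:cursor + limit]
    next_cursor = cursor + limit if cursor + limit < len(ITEMS) else None
    return {"items": page, "next_cursor": next_cursor}

# Client side: keep following next_cursor until the server returns None.
collected, cursor = [], 0
while cursor is not None:
    resp = list_items(cursor)
    collected.extend(resp["items"])
    cursor = resp["next_cursor"]
```

Real APIs usually make the cursor opaque (for example, base64-encoding a last-seen key) so clients cannot construct or tamper with it.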
Users of an API can sometimes try to update the same resource simultaneously. The technique for letting such concurrent transactions succeed without stepping over each other is called optimistic concurrency control. It can be implemented using a version number on the resource.
For example, suppose two clients, A and B, simultaneously try to update a resource R, and A successfully updates R and writes it to the database. B's request should then get a concurrency error, informing B that the version of R has changed and that B's copy of R is stale. B must first fetch the latest version of R and then perform its update on that.
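The A-and-B scenario above can be sketched with a version check on write. The dictionary store and the `update` helper are illustrative stand-ins for a database with a version (or `ETag`) column:

```python
class ConcurrencyError(Exception):
    """Raised when a write is based on a stale version of the resource."""

# Resource R stored with a version number alongside its data.
store = {"R": {"version": 1, "data": "original"}}

def update(resource_id, expected_version, new_data):
    """Apply the write only if the caller has seen the latest version."""
    record = store[resource_id]
    if record["version"] != expected_version:
        raise ConcurrencyError("stale version; re-read the resource and retry")
    record["data"] = new_data
    record["version"] += 1

# A and B both read R at version 1. A writes first; B's write is rejected.
update("R", 1, "A's change")        # succeeds, version becomes 2
try:
    update("R", 1, "B's change")    # fails: B's copy is stale
except ConcurrencyError:
    latest = store["R"]["version"]  # B re-reads R (now version 2)...
    update("R", latest, "B's change")  # ...and retries against it
```

Because neither client takes a lock, readers are never blocked; conflicts are simply detected at write time.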
Any API call uses HTTP (Hypertext Transfer Protocol) to transfer data over the network. HTTP traffic can be read by anyone; hence, it becomes super important to secure the communication. TLS (Transport Layer Security), previously known as SSL (Secure Sockets Layer), secures these communications over the network by encrypting the request and response. HTTPS, which is HTTP over TLS, is now the widely adopted way to communicate securely.
HTTPS should be used for all API requests, especially the ones that deal with sensitive data. Hosting providers are usually able to issue an SSL/TLS certificate; otherwise, there are free certificate authorities such as Let's Encrypt that can be used. The certificate lets the client validate that it is really talking to your server before sending any requests.
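On the client side, the main thing to get right is to keep certificate verification on and refuse to fall back to plain HTTP. A small sketch using Python's standard library (the URL in the usage example is hypothetical):

```python
import ssl
import urllib.request

# A default SSL context verifies the server's certificate chain and hostname,
# which is exactly what an API client handling sensitive data needs.
context = ssl.create_default_context()

def fetch(url):
    """Fetch an HTTPS URL with certificate verification enabled."""
    if not url.startswith("https://"):
        # Never silently downgrade API traffic to unencrypted HTTP.
        raise ValueError("refusing to send API traffic over plain HTTP")
    return urllib.request.urlopen(url, context=context)

# Plain-HTTP URLs are rejected outright (hypothetical endpoint).
try:
    fetch("http://api.example.com/users")
    rejected = False
except ValueError:
    rejected = True
```

The common mistake is disabling verification (`CERT_NONE`) to silence certificate errors during development and then shipping that, which removes exactly the protection TLS was meant to provide.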
These are some of the most crucial patterns to keep in mind when building any API. Understanding these concepts thoroughly will save time debugging and help you avoid large-scale issues. Skipping them might work in the short run, but as usage of the API increases, these patterns will prove really useful in avoiding bottlenecks and surprise failures.
If you like the post, share and subscribe to the newsletter to stay up to date with tech/product musings.
(The contents of this blog are of my personal opinion and/or self-reading a bunch of articles and in no way influenced by my employer.)