How we manage our first 1M-requests-per-day API
This post was originally published on medium.com/@eichgi/how-we-manage-our-first-.. on my old blog.
Hi fellows, today I want to tell you how we manage an API that receives around 1M requests per day.
- Backend: a Laravel app
- Storage: 1 MySQL server; 1 MySQL cluster composed of 3 nodes (1 for management and 2 for data); 1 Redis server
- Networking: 1 domain and several subdomains, used for testing and for connections between servers
So far the service has only been unreachable a couple of times. At the beginning it was hard to tell whether our server's capacity was a good match for the traffic and operations it was handling. That need led us to:
Lesson #1: Metrics, more metrics, then metrics again.
We set up a basic agent to monitor traffic and the performance of the main server, which gave us a picture of the server's status at every moment of the day. We noticed that traffic spikes at night! For the rest of the day, everything runs smoothly.
Lesson #2: Divide and conquer
We started by analyzing which data our users consumed most. We wanted to move the MySQL server off the main server, but that alone doesn't guarantee faster responses; it just means the main server no longer runs Apache + MySQL by itself at the same time. So we had to learn how to set up a MySQL cluster, because it would give us data replication and, more importantly, data stored in memory rather than on disk, which makes I/O operations faster.
I won't lie: setting up a MySQL cluster is hard as hell if you have never done a task like this before, but when you are trying to learn and go beyond yourself, knowing you can do it is all the motivation you need to keep going. In the end we finished with 1 server and 1 cluster. By this time the Android app had become faster, we were gaining more users, and the number of processes was increasing steadily, but we knew that sooner or later scaling up this way would become repetitive and more expensive. That led us to look for better mechanisms, and to the next lesson.
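For reference, the topology described above (1 management node and 2 data nodes) maps to a MySQL NDB Cluster config.ini roughly like this minimal sketch; the 10.0.0.x addresses and the DataMemory size are assumptions for illustration, not our real values:

```ini
# config.ini on the management node (hypothetical 10.0.0.x addresses)
[ndbd default]
NoOfReplicas=2          ; each table fragment is replicated on both data nodes
DataMemory=2G           ; in-memory storage for table data

[ndb_mgmd]
HostName=10.0.0.1       ; management node

[ndbd]
HostName=10.0.0.2       ; data node 1

[ndbd]
HostName=10.0.0.3       ; data node 2

[mysqld]
HostName=10.0.0.1       ; SQL node the Laravel app connects to
```

Tables must be created with ENGINE=NDBCLUSTER for the cluster to store and replicate them; anything left on InnoDB stays local to the SQL node.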
Lesson #3: Cache here and there
The data we stored in our cluster was the data users consumed most, so that was the data we needed to cache. But another variable comes in here: time. For how long should my data stay cached before being refreshed? That depends entirely on the kind of app you're building, but caching always helps: the longer your data remains cached, the less work your server does querying for it. It's one of the most widely used mechanisms for saving resources. We chose Redis for this and we haven't struggled with it so far.
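In Laravel this pattern is short to express with the cache facade; here is a minimal sketch, assuming a hypothetical ProductController, a hypothetical `products.popular` cache key, and a 10-minute TTL (on recent Laravel versions `Cache::remember` takes the TTL in seconds):

```php
<?php
// Hypothetical controller method: cache the heaviest query for 10 minutes.
// Cache::remember() only runs the closure on a cache miss; on a hit it
// returns the stored value straight from Redis without touching MySQL.

use Illuminate\Support\Facades\Cache;
use Illuminate\Support\Facades\DB;

class ProductController extends Controller
{
    public function index()
    {
        $products = Cache::remember('products.popular', 600, function () {
            // Executed only when 'products.popular' is absent or expired.
            return DB::table('products')
                ->orderByDesc('views')
                ->limit(50)
                ->get();
        });

        return response()->json($products);
    }
}
```

With CACHE_DRIVER=redis in the .env file, the cached payload lives in Redis, so every app node behind the load balancer shares the same cache.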
We also applied the same idea to our Android app: we set up local caching for fixed periods of time, so no requests were executed at all, saving the server work, because our data isn't updated as fast as, say, a social network's. The server could receive more requests, but serving all of them wouldn't be efficient.
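Client-side caching like this can also be driven from the server with standard HTTP headers; here is a sketch of a Laravel route that tells any well-behaved HTTP client (OkHttp on Android, for instance) it may reuse the response locally for an hour. The route path and the max-age value are assumptions:

```php
<?php
// Hypothetical route: the Cache-Control header lets the client serve the
// cached body locally, without hitting the server again, until max-age
// expires.

use Illuminate\Support\Facades\Route;

Route::get('/api/catalog', function () {
    return response()
        ->json(['items' => [/* ... */]])
        ->header('Cache-Control', 'public, max-age=3600');
});
```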
Lesson #4: Use services that let you replicate the whole setup faster, aka Laravel Forge
At the moment we are setting everything up again with Laravel Forge. The first try, so to speak, taught us a lot about scaling an app vertically and horizontally. Now that we have some concepts and practice, we want to improve our stack by adding more layers of security, a neater configuration, and better documentation. It's amazing to learn new things like this and keep improving with tools that let you move at a faster pace.
Any advice or suggestions are appreciated; please comment if you have something to share with us!
Here is a list of the resources that helped us learn and improve our project:
- courses.serversforhackers.com/scaling-laravel
- laracasts.com/series/learn-laravel-forge
2021 update: this configuration has helped us handle 3M daily requests smoothly. As you grow, you will also pick up tweaks for Nginx, load balancers, and nodes.
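A load balancer in front of several app nodes can be as simple as an Nginx upstream block; here is a minimal sketch with hypothetical node addresses and domain, not our actual configuration:

```nginx
# Hypothetical /etc/nginx/conf.d/api.conf
upstream laravel_nodes {
    least_conn;                 # send each request to the least busy node
    server 10.0.0.11:80;        # app node 1
    server 10.0.0.12:80;        # app node 2
}

server {
    listen 80;
    server_name api.example.com;

    location / {
        proxy_pass http://laravel_nodes;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

Adding a node then becomes one more `server` line in the upstream block plus a Forge-provisioned box, which is exactly the kind of repeatable step Lesson #4 is about.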