One of the important aspects of server-side development is keeping servers in a healthy state and never allowing overloading operations to occur, as they can crash the servers.
In this article, I'll explain "Health Checks" and "Overload Protection". I'll also present solutions for some common problems and show how to implement them.
Imagine you have 3 Node servers balanced by an Nginx server. The load is divided equally, so if you have 600 users, every server handles 200 clients. But dividing the load equally among the servers doesn't mean you're protected from overload, because the amount of work can differ for every client. For example, for user_1 you may need to read three files, but for user_2 you may need to read nine (3 times more). Depending on the user's request, processing can be far more complex, and that is the real problem you need to analyze and solve. In this case, if the workloads differ and requests keep being balanced to the same server, which is already overloaded, that server will probably crash.
"Health Checks" and "Overload Protection" exist for problems like this.
The load balancer sends a request every n (e.g. 5 or 10) seconds to the server to understand whether the server is able to handle more requests. If yes, the server is marked as UP and continues to receive requests from the balancer; otherwise, the server is marked as DOWN, and the balancer will not send any requests to it until a later health check request succeeds and the server is marked UP again.
This process is called a Health Check.
The request can be a simple HTTP (e.g. GET), Socket, or TCP request.
When the server receives a health check request, you can run some checks to understand whether the server can handle more requests, and then the server needs to respond. Sending status 200 means everything is fine and the server can handle more requests. Otherwise, you can send status 503 SERVICE UNAVAILABLE, which means the server can't handle more requests.
Example of Health Checks
Unfortunately, open source Nginx doesn't natively support active health checks. For that, you need to install a module called nginx_upstream_check_module (not distributed with the Nginx source).
ngx_http_healthcheck_module — sends a request to the servers, and if they respond with HTTP 200 (plus an optional response body), they are marked good. Otherwise, they are marked bad.
But I don't want to make this complicated.
Therefore, we can use an alternative load balancer to Nginx — HAProxy.
See the installation part here (you only need the "Installing HAProxy" part).
I won't explain all of HAProxy, because that would take too long, and our goal is not to master HAProxy. We need to understand how to build a simple health checking process, so I'll explain only the essential parts.
Here is a simple server which has two routes: one route for health checking and the other one for us.
Run it using the command PORT=8000 node server_1.js.
In the browser, and also in the console, you can see the PID number, which shows the process id the Node.js server is running in, so you can tell which node received your request after balancing through HAProxy.
And here is the configuration for HAProxy.
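A minimal haproxy.cfg along these lines might look like this. The backend name (trackers), the ports, and the rise/fall values follow the text; the /health check path is an assumption:

```
global
    daemon

defaults
    mode http
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend http_front
    bind *:3000
    default_backend trackers

backend trackers
    balance roundrobin
    # /health is an assumed path; without httpchk, HAProxy only tests the TCP connection
    option httpchk GET /health
    server server_1 127.0.0.1:8000 check inter 5s rise 2 fall 1
    server server_2 127.0.0.1:8001 check inter 5s rise 2 fall 1
```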
Set up the haproxy.cfg file and run the HAProxy service.
Here you can see how to add the configuration file and start the service.
As you can see, we create a frontend bound to port 3000 and a backend, trackers, with two servers on ports 8000 and 8001, balancing in roundrobin fashion.
- rise (count): the number of consecutive valid health checks before considering the server as UP. The default value is 2.
- fall (count): the number of consecutive invalid health checks before considering the server as DOWN. The default value is 3.
Make sure that server_1 is running.
Start the HAProxy service.
Now you can see health check requests arriving at server_1. When the server responds to two consecutive requests with status 200, HAProxy marks this server UP and starts balancing traffic to it. Before that, the HAProxy endpoint (http://localhost:3000) is unavailable (try to open it before two consecutive health checks have passed). After two successful responses, you can see the result in the browser (http://localhost:3000). For now, all the requests go to server_1, because server_2 (:8001) is not running.
Before running server_2, let's look at the code and understand it.
This server responds with status 200; after 20 seconds, it starts responding with status 503. I think everything here is simple and easy to understand.
Let's get to the most exciting part.
Now run server_2 using the command PORT=8001 node server_2.js.
When two health check logs have passed on both servers, you can open the browser (http://localhost:3000) and see how load balancing works (refresh several times); the PID will be different.
After 20 seconds, when server_2 starts responding to health checks with status 503, after the first 503 response (as in the config, we have fall 1) HAProxy will mark the server DOWN and stop balancing requests to server_2, so the whole load goes to server_1.
HAProxy retries health check requests every 5 seconds, and upon receiving two consecutive 200 responses (as in the config, we have rise 2), it marks the server UP and again balances requests to server_2.
To check whether the server is overloaded and to protect against overloads, you need to monitor some metrics. Exactly what to check also depends on your code and what it does, but here are generic metrics that are essential to watch.
- Event Loop delay
- Used Heap Memory
- Total Resident Set Size
A good npm package for this is overload-protection, which checks these three metrics. Another good package is event-loop-lag, which reports event loop delay.
Using the overload-protection package, you can specify limits beyond which your server will stop handling requests. When a configured limit is exceeded, the package automatically responds with 503 SERVICE UNAVAILABLE.
The package works with the http, express, restify, and koa packages.
But if your load balancer sends socket requests for health checking and you want to handle them that way, then you need to use another package or build one yourself.
In this article, I've explained how Health Checks work in HAProxy and how you can protect your server from overloads. Every server should have at least a health check implementation, which is essential for distributed systems.
Thanks, feel free to ask any questions or tweet me @nairihar.
See also my article about "Graceful shutdown in NodeJS":
https://medium.com/@nairihar/graceful-shutdown-in-nodejs-2f8f59d1c357
Other good resources:
NGINX HTTP Health Checks:
https://docs.nginx.com/nginx/admin-guide/load-balancer/http-health-check/
Performing Health Checks in HAProxy:
https://www.haproxy.com/documentation/aloha/7-0/traffic-management/lb-layer7/health-checks/
Using Kubernetes Health Checks:
https://www.haproxy.com/documentation/aloha/7-0/traffic-management/lb-layer7/health-checks/