Before putting a new website into production, especially one intended to handle a large number of visitors, it is recommended to run a load test.
A load test validates the sizing of the website's hosting: it helps determine whether shared or dedicated hosting will be necessary, and it also verifies that the website is well designed.
The different types of load tests
There are 2 types of load tests.
1 - Simple load tests
This consists of sending a high number of requests to the same page in order to measure its response time.
This test can easily be performed with command-line tools. Simply pick one or more reference pages, those expected to generate the most traffic, and test them one by one with different levels of simultaneous requests.
Free tools exist for this, the best known of which is certainly Apache Bench (ab). Here we will see how to perform this test with the loadtest tool, developed in NodeJS.
2 - Scenario-based load tests
The second type of test, the load test proper, consists of defining one or more navigation scenarios and then running them simultaneously, in order to determine the number of requests that the hosting is able to support.
Install and use LoadTest
loadtest runs a load test on the selected HTTP or WebSockets URL. The API allows easy integration into your own tests.
Installation
Install globally as root:
# npm install -g loadtest
On Ubuntu or Mac OS X systems, install using sudo:
$ sudo npm install -g loadtest
To access the API, simply add the loadtest
package to your package.json:
{
...
"devDependencies": {
"loadtest": "*"
},
...
}
Compatibility
Version 5 and later require at least Node.js v10:
- Node.js v10 or newer: ^5.0.0
- Node.js v8 or later: 4.x.y
- Node.js v6 or earlier: ^3.1.0
Why use it
loadtest allows you to configure and fine-tune requests to simulate real-world loads.
Basic Usage
Run as a script to test the loading of a URL:
$ loadtest [-n requests] [-c concurrency] [-k] URL
The URL can be "http://", "https://" or "ws://". Set the maximum number of requests with -n, and the desired level of concurrency with the -c parameter. Use keep-alive connections with -k whenever it makes sense, which should be always except when you are testing opening and closing connections.
Single-dash parameters (e.g. -n) are designed to be compatible with Apache ab, except that here you can also add the parameters after the URL.
To get online help, run loadtest without parameters:
$ loadtest
Usage do's
The basic set of options is designed to be compatible with Apache ab. But while ab can only set a concurrency level and lets the server adapt to it, loadtest allows you to set a rate of requests per second with the --rps option. Example:
loadtest -c 10 --rps 200 http://mysite.com/
This command sends exactly 200 requests per second with a concurrency of 10, so you can see how your server copes with a sustained rps. Even if ab
reported a rate of 200 rps, you will be surprised how a constant rate of requests per second affects performance: it is no longer the requests that adjust to the server, but the server that has to adjust to the requests! Rps rates usually drop dramatically, by at least 20~25% (in our example, from 200 to 150 rps), but the resulting figure is much more robust.
loadtest
is also quite extensible. Using the provided API, it is very easy to integrate loadtest into your own package and run programmatic load tests. loadtest makes it very easy to run load tests as part of system tests, before deploying a new version of your software. The results include average response times and percentiles, so you can abort the deployment if, for example, 99% of requests do not finish in 10 ms or less.
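As a sketch of that kind of deployment gate, the function below takes a result object in the format loadtest reports (percentiles, totalErrors; see the Results section further down) and decides pass/fail against thresholds. deploymentGate and the threshold values are our own illustration, not part of the loadtest API.

```javascript
// Hypothetical deployment gate built on a loadtest result object.
// The result shape (percentiles, totalErrors) matches the documented
// "Results" format; the thresholds are example values.
function deploymentGate(result, limits) {
  limits = limits || { maxP99Ms: 10, maxErrors: 0 };
  const failures = [];
  const p99 = result.percentiles['99'];
  if (p99 > limits.maxP99Ms) {
    failures.push('p99 latency ' + p99 + ' ms exceeds ' + limits.maxP99Ms + ' ms');
  }
  if (result.totalErrors > limits.maxErrors) {
    failures.push(result.totalErrors + ' errors exceed the allowed ' + limits.maxErrors);
  }
  return { pass: failures.length === 0, failures: failures };
}

// A sample result in the documented format:
const sampleResult = {
  totalRequests: 1000,
  percentiles: { '50': 7, '90': 10, '95': 11, '99': 15 },
  totalErrors: 3,
};

console.log(deploymentGate(sampleResult));
// { pass: false, failures: [ 'p99 latency 15 ms exceeds 10 ms', '3 errors exceed the allowed 0' ] }
```

In a system test you would call this on the result handed to your loadTest() callback and fail the build when pass is false.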
Usage don'ts
loadtest
saturates a single processor quite quickly. Do not use loadtest
if the Node.js process is above 100% usage in top,
which happens approximately when your load exceeds 1000~4000 rps. (You can measure the practical limits of loadtest
on your specific test machines by running it against a simple Apache or nginx process and seeing when it reaches 100% CPU.)
There are better tools for this use case:
- Apache ab has great performance, but it is also limited by a single CPU. Its practical limit is somewhere around ~40 krps.
- weighttp is also ab-compatible and is supposed to be very fast (the author has not personally used it).
- wrk is multithreaded and suitable when multiple CPUs are needed or available. It may, however, need to be built from source, and its interface is not ab-compatible.
Regular Usage
The following settings are compatible with Apache ab.
-n requests
The number of requests to send.
Note: the total number of requests sent may be larger than this parameter if there is a concurrency parameter; loadtest will report only the first n.
-c concurrency
loadtest will create a number of clients; this parameter determines the number. Requests from these clients will arrive simultaneously at the server.
Note: Requests are not sent in parallel (from different processes), but concurrently (a second request can be sent before the first has received a response).
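The distinction can be sketched in plain Node.js: the snippet below (our own illustration, with a simulated request standing in for HTTP) starts all requests before any response has arrived, so they are all in flight at the same time within one process.

```javascript
// Simulated demonstration of "concurrent, not parallel":
// all requests are started before any response is awaited.
let inFlight = 0;
let maxInFlight = 0;

function simulatedRequest() {
  inFlight++; // a new request goes out before earlier ones have answered
  maxInFlight = Math.max(maxInFlight, inFlight);
  return new Promise(function (resolve) {
    setTimeout(function () { inFlight--; resolve(); }, 10);
  });
}

function runConcurrent(concurrency) {
  const requests = [];
  for (let i = 0; i < concurrency; i++) {
    requests.push(simulatedRequest());
  }
  return Promise.all(requests).then(function () { return maxInFlight; });
}

runConcurrent(10).then(function (max) {
  console.log('requests in flight at once:', max);
});
```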
-t timelimit
The maximum number of seconds to wait until responses stop coming back.
Note: this is different from Apache ab,
which stops sending requests after the given number of seconds.
-k or --keepalive
Open connections in keep-alive mode: use the 'Connection: Keep-alive' header instead of 'Connection: Close'.
Note: uses agentkeepalive, which performs better than the default Node.js agent.
-C cookie-name=value
Send a cookie with the request. The name=value pair is then sent to the server. This parameter can be repeated as many times as necessary.
-H header:value
Send a custom header with the request. The header:value line is then sent to the server. This parameter can be repeated as many times as necessary. Example:
$ loadtest -H user-agent:test/0.4 ...
Note: if not present, loadtest will add a few headers on its own: the "host" header parsed from the URL, a custom user agent "loadtest/" plus the version (e.g. loadtest/1.1.0), and an accept header for "*/*".
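To make that note concrete, here is a hand-rebuilt illustration of those defaults. defaultHeaders is our own helper, not part of loadtest, and the version string is an example:

```javascript
// Rebuilds the default headers described above: host parsed from the URL,
// a loadtest/<version> user agent, and an accept header for */*.
function defaultHeaders(url, version) {
  const parsed = new URL(url); // Node.js v10+ exposes URL globally
  return {
    host: parsed.host,
    'user-agent': 'loadtest/' + (version || '1.1.0'),
    accept: '*/*',
  };
}

console.log(defaultHeaders('http://localhost:7357/'));
// { host: 'localhost:7357', 'user-agent': 'loadtest/1.1.0', accept: '*/*' }
```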
Note: when the same header is sent multiple times, only the last value is taken into account. If you want to send multiple values with a header, separate them with semicolons:
$ loadtest -H accept:text/plain;text/html ...
Note: If you need to add a header with spaces, be sure to surround the header and value with quotation marks:
$ loadtest -H "Authorization: Basic xxx =="
-T content-type
Sets the MIME content type for POST data. Default: text/plain.
-P POST-body
Send the character string as the body of the POST. For example: -P '{"key": "a9acf03f"}'
-A PATCH-body
Send the character string as the body of the PATCH. For example: -A '{"key": "a9acf03f"}'
-m method
Sets the HTTP method to use. Accepted: GET, POST, PUT, DELETE, PATCH (upper or lower case). The default value is GET. Example: -m POST
--data data
Send data with the request. It does not support the GET method. For example: --data '{"username": "test", "password": "test"}' -T 'application/x-www-form-urlencoded' -m POST
Note: it must be used together with -m and -T 'application/x-www-form-urlencoded'.
-p POST-file
Send the data contained in the given file as the body of the POST. Don't forget to set -T to the right content type.
If POST-file has the .js extension, it will be require()d. It must be a valid node module and it must export a single function, which is invoked with an automatically generated request ID to provide the body of each request. This is useful if you want to generate request bodies dynamically and vary them for each request.
Example:
module.exports = function (requestId) {
  // this object will be serialized to JSON and sent in the body of the request
  return {
    key: 'value',
    requestId: requestId
  };
};
-u PUT-file
Send the data contained in the given file as a PUT request. Remember to set -T to the correct content type.
If PUT-file has the .js extension, it will be require()d. It must be a valid node module and it must export a single function, which is invoked with an automatically generated request ID to provide the body of each request. This is useful if you want to generate request bodies dynamically and vary them for each request. For an example of such a function, see -p above.
-a PATCH-file
Send the data contained in the given file as a PATCH request. Don't forget to set -T to the correct content type.
If PATCH-file has the .js extension, it will be require()d. It must be a valid node module and it must export a single function, which is invoked with an automatically generated request ID to provide the body of each request. This is useful if you want to generate request bodies dynamically and vary them for each request. For an example of such a function, see -p above.
-r
Recover from errors. Always on: the load test does not stop on errors. After the tests finish, if there were any errors, a report with all the error codes is displayed.
-s
The TLS/SSL method to use (e.g. TLSv1_method).
Example:
$ loadtest -n 1000 -s TLSv1_method https://www.example.com
-V
Show the version number and exit.
Advanced usage
The following settings are not compatible with Apache ab.
--rps requestsPerSecond
Controls the number of requests per second that are sent. Can be fractional, for example --rps 0.5
sends a request every two seconds.
Note: concurrency does not affect the final number of requests per second, since the rps is shared among all clients. For example:
loadtest -c 10 --rps 10
will send a total of 10 rps to the given URL, from 10 different clients (each client sending 1 request per second).
Warning: if the concurrency is too low, there may not be enough clients to send the full rps; adjust it with -c if necessary.
Note: --rps is not supported for websockets.
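The warning about low concurrency can be quantified with a rule of thumb (our own, not part of loadtest): one client that waits m milliseconds per response can send at most 1000/m requests per second, so the minimum concurrency for a target rate follows from Little's law.

```javascript
// Rule-of-thumb estimate of the minimum concurrency (-c) needed to sustain
// a target rps when each request takes about meanLatencyMs to complete.
function minConcurrency(targetRps, meanLatencyMs) {
  const perClientRps = 1000 / meanLatencyMs; // best case for a single client
  return Math.ceil(targetRps / perClientRps);
}

// 1000 rps against a server answering in 50 ms needs at least 50 clients:
console.log(minConcurrency(1000, 50)); // 50
```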
--timeout milliseconds
The timeout for each request generated in milliseconds. Setting this to 0 disables timeout (default).
-R requestGeneratorModule.js
Use a custom request generator function from an external file.
An example of a request generator module might look like this:
module.exports = function (params, options, client, callback) {
  generateMessageAsync(function (message) {
    if (message) {
      options.headers['Content-Length'] = message.length;
      options.headers['Content-Type'] = 'application/x-www-form-urlencoded';
    }
    request = client(options, callback);
    if (message) {
      request.write(message);
    }
    return request;
  });
};
See sample/request-generator.js for sample code that includes a body (or sample/request-generator.ts for ES6/TypeScript).
--agent (deprecated)
Opens connections in keep-alive mode.
Note: instead of using the default agent, this option is now an alias for -k.
--quiet
Do not display messages.
--debug
View debugging messages.
--insecure
Allow invalid and self-signed certificates on https.
--cert path/to/cert.pem
Sets the certificate to be used by the http client. Must be used with --key.
--key path/to/key.pem
Sets the key to be used by the http client. Must be used with --cert.
Server
loadtest includes a test server. To run it:
$ testserver-loadtest [--delay ms] [error 5xx] [percent yy] [port]
This command will display the number of requests received per second, the latency of responses, and the headers of selected requests.
The server returns a short 'OK' text for each request, so latency measurements should not include any request-processing time.
If no port is specified, the default port 7357 is used. The optional delay instructs the server to wait the specified number of milliseconds before responding to each request, in order to simulate a busy server. You can also simulate errors on a given percentage of requests.
Complete example
Now let's see how to measure the performance of the test server.
First of all, we install loadtest
globally:
$ sudo npm install -g loadtest
Now we start the test server:
$ testserver-loadtest
Listening on port 7357
In another console window, we run a load test against it for 20 seconds with a concurrency of 10 (only relevant results are shown):
$ loadtest http://localhost:7357/ -t 20 -c 10
...
Requests: 9589, requests per second: 1915, mean latency: 10 ms
Requests: 16375, requests per second: 1359, mean latency: 10 ms
Requests: 16375, requests per second: 0, mean latency: 0 ms
...
Completed requests: 16376
Requests per second: 368
Total time: 44.503181166000005 s
Percentage of the requests served within a certain time
50% 4 ms
90% 5 ms
95% 6 ms
99% 14 ms
100% 35997 ms (longest request)
The results were quite erratic, with some requests taking up to 36 seconds; this suggests that Node.js is queueing some requests for a long time and responding to them irregularly. Now let's try a fixed rate of 1000 rps:
$ loadtest http://localhost:7357/ -t 20 -c 10 --rps 1000
...
Requests: 4551, requests per second: 910, mean latency: 0 ms
Requests: 9546, requests per second: 1000, mean latency: 0 ms
Requests: 14549, requests per second: 1000, mean latency: 20 ms
...
Percentage of the requests served within a certain time
50% 1 ms
90% 2 ms
95% 8 ms
99% 133 ms
100% 1246 ms (longest request)
Again, the results are erratic. In fact, if we let the test run for 50 seconds we start to see errors:
$ loadtest http://localhost:7357/ -t 50 -c 10 --rps 1000
...
Requests: 29212, requests per second: 496, mean latency: 14500 ms
Errors: 426, accumulated errors: 428, 1.5% of total requests
Let's lower the rate to 500 rps:
$ loadtest http://localhost:7357/ -t 20 -c 10 --rps 500
...
Requests: 0, requests per second: 0, mean latency: 0 ms
Requests: 2258, requests per second: 452, mean latency: 0 ms
Requests: 4757, requests per second: 500, mean latency: 0 ms
Requests: 7258, requests per second: 500, mean latency: 0 ms
Requests: 9757, requests per second: 500, mean latency: 0 ms
...
Requests per second: 500
Completed requests: 9758
Total errors: 0
Total time: 20.002735398000002 s
Requests per second: 488
Total time: 20.002735398000002 s
Percentage of the requests served within a certain time
50% 1 ms
90% 1 ms
95% 1 ms
99% 14 ms
100% 148 ms (longest request)
Much better: we observe a sustained rate of 500 rps most of the time, 488 rps on average, and 99% of requests answered within 14 ms.
We now know that our server can accept 500 rps without problems. Not bad for a naive single-process Node.js server... We could refine further to find out at what point between 500 and 1000 rps our server breaks down.
But let's look instead at how to improve the results. An obvious candidate is to add keep-alive to the requests so that a new connection does not have to be created for each request. The results (with the same test server) are impressive:
$ loadtest http://localhost:7357/ -t 20 -c 10 -k
...
Requests per second: 4099
Percentage of the requests served within a certain time
50% 2 ms
90% 3 ms
95% 3 ms
99% 10 ms
100% 25 ms (longest request)
Now we're talking! The sustained rate also climbs to 2 krps:
$ loadtest http://localhost:7357/ -t 20 -c 10 --keepalive --rps 2000
...
Requests per second: 1950
Percentage of the requests served within a certain time
50% 1 ms
90% 2 ms
95% 2 ms
99% 7 ms
100% 20 ms (longest request)
Not bad at all: a sustained 2 krps on a single core. However, if you try to push it further, to 3 krps, it will fail miserably.
API
loadtest
is not limited to running on the command line; it can be controlled using an API, allowing you to load test your application in your own tests.
Invoke Load Test
To run a load test, simply call the exported loadTest() function with a set of options and an optional callback:
const loadtest = require('loadtest');
const options = {
  url: 'http://localhost:8000',
  maxRequests: 1000,
};
loadtest.loadTest(options, function (error, result) {
  if (error) {
    return console.error('Got an error: %s', error);
  }
  console.log('Tests run successfully');
});
The callback function (error, result)
will be invoked when the max number of requests is reached, or when the max number of seconds has elapsed.
Beware: if there are no maxRequests
and no maxSeconds
, then tests will run forever and will not call the callback.
Options
All options except url are, as the name suggests, optional.
url
The URL to invoke. Mandatory.
concurrency
How many clients to start in parallel.
maxRequests
A maximum number of requests; when it is reached, the test ends.
Note: the actual number of requests sent may be greater if there is a concurrency level; loadtest will report only up to the maximum number of requests.
maxSeconds
Maximum number of seconds to run tests.
Note: after the specified number of seconds, loadtest will stop sending requests, but may continue to receive responses afterwards.
timeout
Timeout for each request generated in milliseconds. A value of 0 disables the timeout (default).
cookies
An array of cookies to send. Each cookie must be a string of the form name=value.
headers
A map of headers. Each header must be an entry in the map, with the value given as a string. If you want to have multiple values for a header, write a single value separated by semicolons, like this:
{ accept: "text/plain;text/html" }
Note: When using the API, the "host" header is not inferred from the URL but must be sent explicitly.
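As an illustration of the semicolon convention, this is how such an entry expands into individual values. splitHeaderValues is our own helper, not part of the loadtest API:

```javascript
// Expands semicolon-separated header values into arrays of values.
function splitHeaderValues(headers) {
  const expanded = {};
  for (const name of Object.keys(headers)) {
    expanded[name] = headers[name].split(';').map(function (v) { return v.trim(); });
  }
  return expanded;
}

console.log(splitHeaderValues({ accept: 'text/plain;text/html' }));
// { accept: [ 'text/plain', 'text/html' ] }
```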
method
The method to use: POST, PUT. Default: GET.
body
The content to be sent in the message body, for POST or PUT requests. Can be a character string or an object (which will be converted to JSON).
contentType
The MIME type to use for the message body. The default content type is text/plain.
requestsPerSecond
How many requests each client will send per second.
requestGenerator
Custom request generator function.
An example of a request generator function might look like this:
function (params, options, client, callback) {
  generateMessageAsync(function (message) {
    request = client(options, callback);
    if (message) {
      options.headers['Content-Length'] = message.length;
      options.headers['Content-Type'] = 'application/x-www-form-urlencoded';
      request.write(message);
    }
    request.end();
  });
}
agentKeepAlive
Use an agent with 'Connection: Keep-alive'.
Note: uses agentkeepalive, which performs better than the default Node.js agent.
quiet
Do not display any messages.
indexParam
The given string will be replaced in the final URL by a unique index. E.g.: if the URL is http://test.com/value
and indexParam = value,
then the URL will be:
- http://test.com/1
- http://test.com/2
- ...
- the body will also be substituted: body: {userid: id_value} will become body: {userid: id_1}
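The substitution can be sketched as follows. applyIndexParam is our own re-implementation for illustration; loadtest performs the equivalent internally:

```javascript
// Replaces every occurrence of `param` in the URL and the body with
// the request index, mimicking the indexParam behavior described above.
function applyIndexParam(url, body, param, index) {
  const substitute = function (text) {
    return text.split(param).join(String(index));
  };
  return {
    url: substitute(url),
    body: JSON.parse(substitute(JSON.stringify(body))),
  };
}

console.log(applyIndexParam('http://test.com/value', { userid: 'id_value' }, 'value', 1));
// { url: 'http://test.com/1', body: { userid: 'id_1' } }
```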
indexParamCallback
A function executed to replace the value identified via indexParam with a custom generated value.
E.g. if the URL is http://test.com/value
and indexParam = value
and
indexParamCallback: function customCallBack() {
  return Math.floor(Math.random() * 10); // returns a random integer from 0 to 9
}
then the URL could be:
- http://test.com/1 (randomly generated integer 1)
- http://test.com/5 (randomly generated integer 5)
- http://test.com/6 (randomly generated integer 6)
- http://test.com/8 (randomly generated integer 8)
- ...
- the body will also be substituted: body: {userid: id_value} will become body: {userid: id_<callback value>}
insecure
Allow invalid and self-signed certificates over https.
secureProtocol
The TLS/SSL method to use (for example, TLSv1_method).
Example:
const loadtest = require('loadtest');
const options = {
  url: 'https://www.example.com',
  maxRequests: 100,
  secureProtocol: 'TLSv1_method'
};
loadtest.loadTest(options, function (error) {
  if (error) {
    return console.error('Got an error: %s', error);
  }
  console.log('Tests run successfully');
});
statusCallback
This function is executed after each request completes. It provides immediate access to test results while the test batch is still running, and can be used for more detailed custom logging or to build your own spreadsheet or statistical analysis of the results.
The result and error passed to this callback are in the same format as those passed to the final callback.
In addition, the following three properties are added to the result object:
- requestElapsed: time in milliseconds it took to complete this individual request.
- requestIndex: 0-based index of this particular request in the sequence of all requests to be performed.
- instanceIndex: the loadtest(...) instance index. This is useful if you call loadTest() more than once.
You will need to check whether error is populated in order to determine which object to check for these properties.
Example:
const loadtest = require('loadtest');
function statusCallback(error, result, latency) {
  console.log('Current latency %j, result %j, error %j', latency, result, error);
  console.log('----');
  console.log('Request elapsed milliseconds:', result.requestElapsed);
  console.log('Request index:', result.requestIndex);
  console.log('Request loadtest() instance index:', result.instanceIndex);
}
const options = {
  url: 'http://localhost:8000',
  maxRequests: 1000,
  statusCallback: statusCallback
};
loadtest.loadTest(options, function (error) {
  if (error) {
    return console.error('Got an error: %s', error);
  }
  console.log('Tests run successfully');
});
Warning: the statusCallback format changed in version 2.0.0. Previously it was statusCallback(latency, result, error); it was modified to comply with the usual Node.js convention.
contentInspector
A function executed after each request, before its status is added to the final statistics.
This can be used when you want to mark a result with a 200 HTTP status code as a failure or error.
The result object passed to this callback function has the same fields as the result object passed to statusCallback. customError can be added to mark the result as a failure or error; customErrorCode will be reported in the final statistics, in addition to the HTTP status code.
Example:
function contentInspector(result) {
  if (result.statusCode == 200) {
    const body = JSON.parse(result.body)
    // how to examine the body depends on the content that the service returns
    if (body.status.err_code !== 0) {
      result.customError = body.status.err_code + ' ' + body.status.msg
      result.customErrorCode = body.status.err_code
    }
  }
},
Results
The latency results passed to your callback at the end of the load test contain a complete set of data, including: average latency, number of errors, and percentiles. An example follows:
{
totalRequests: 1000,
percentiles: {
'50': 7,
'90': 10,
'95': 11,
'99': 15
},
rps: 2824,
totalTimeSeconds: 0.354108,
meanLatencyMs: 7.72,
maxLatencyMs: 20,
totalErrors: 3,
errorCodes: {
'0': 1,
'500': 2
}
}
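A quick consistency check on such a result object (summarize is our own helper, not part of loadtest): totalErrors should equal the sum of the errorCodes counts, and the error rate follows from totalRequests.

```javascript
// Derives the error rate and cross-checks errorCodes against totalErrors
// for a result object in the format shown above.
function summarize(result) {
  let counted = 0;
  for (const code of Object.keys(result.errorCodes)) {
    counted += result.errorCodes[code];
  }
  return {
    errorRate: result.totalErrors / result.totalRequests,
    consistent: counted === result.totalErrors,
  };
}

const sample = { totalRequests: 1000, totalErrors: 3, errorCodes: { '0': 1, '500': 2 } };
console.log(summarize(sample)); // { errorRate: 0.003, consistent: true }
```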
The second parameter contains information about the current request:
{
host: 'localhost',
path: '/',
method: 'GET',
statusCode: 200,
body: '<body>Hi</body>',
headers: [...]
}
Start the test server
To start the test server, use the exported startServer() function with a set of options and an optional callback:
const testserver = require('testserver');
const server = testserver.startServer({ port: 8000 });
This function returns an HTTP server that can be close()d when it is no longer needed.
The following options are available.
port
Optional port to use for the server.
Note: the default port is 7357, since port 80 requires special privileges.
delay
Wait the specified number of milliseconds before answering each request.
error
Returns an HTTP error code.
percent
Returns an HTTP error code only for the given % of requests. If no error code was specified, the default value is 500.
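How such a percentage option can work is easy to sketch. shouldFail is our own illustration, not the test server's actual code:

```javascript
// Decides whether a given request should receive an error response,
// failing roughly `percent` % of all requests.
function shouldFail(percent) {
  return Math.random() * 100 < percent;
}

// percent 0 never fails, percent 100 always fails:
console.log(shouldFail(0), shouldFail(100)); // false true
```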
Configuration file
You can put configuration options in a file named .loadtestrc in your working directory, or in a file whose name is specified in the loadtest entry of your package.json. The options in the file are used only when they are not specified on the command line.
The expected structure of the file is as follows:
{
"delay": "Delay the response for the given milliseconds",
"error": "Return an HTTP error code",
"percent": "Return an error (default 500) only for some % of requests",
"maxRequests": "Number of requests to perform",
"concurrency": "Number of concurrent requests",
"maxSeconds": "Max time in seconds to wait for responses",
"timeout": "Timeout for each request in milliseconds",
"method": "HTTP method to use",
"contentType": "MIME type for the body",
"body": "Data to send",
"file": "Send the contents of the file",
"cookies": {
"key": "value"
},
"headers": {
"key": "value"
},
"secureProtocol": "TLS/SSL secure protocol method to use",
"insecure": "Allow self-signed certificates over https",
"cert": "The client certificate to use",
"key": "The client key to use",
"requestGenerator": "JS module with a custom request generator function",
"recover": "Do not exit on socket receive errors (default)",
"agentKeepAlive": "Use a keep-alive http agent",
"proxy": "Use a proxy for requests",
"requestsPerSecond": "Specify the requests per second for each client",
"indexParam": "Replace the value of given arg with an index in the URL"
}
For more information about the actual name of the configuration file, read the confinode user manual. In the list of supported file types, please note that only synchronous loaders can be used with loadtest.
Complete example
The file lib/integration.js shows a complete example, which is also a full integration test: it starts the server, sends 1,000 requests, waits for the callback, and closes the server.
Translated from: https://www.npmjs.com/package/loadtest