Scale Up REST API Functional Tests to Performance Tests in Python

Peter Xie
5 min read · Aug 9, 2019


Imagine you have written REST API functional tests in Python. How can you scale them up into performance tests?

I have previously written an article about how to create REST API functional tests using Python. In this article, I will explain how you can reuse those functional tests and scale them up into performance tests, using the Python modules requests, threading, and queue.

Let’s use the same Flask mock service endpoint I used for the functional tests, but add a time.sleep(0.2) to simulate a network delay of 0.2 seconds.
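Here is a minimal sketch of that mock service. The /json route, the response body, and the 0.2-second sleep come from this article; the rest is standard Flask boilerplate.

# flask_mock_simple_service.py - mock service with a simulated network delay
import time

from flask import Flask, jsonify

app = Flask(__name__)

@app.route('/json')
def json_endpoint():
    time.sleep(0.2)  # simulate a 0.2-second network delay
    return jsonify(code=1, message='Hello, World!')

if __name__ == '__main__':
    app.run()  # serves on http://127.0.0.1:5000 by default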

Save the code as a file, e.g. flask_mock_simple_service.py, and run it with python flask_mock_simple_service.py.

Now you can access this service by typing http://127.0.0.1:5000/json in a browser, or by running the functional test code below with pytest -sv test_mock_service.py. Either way, you will get the response content {"code": 1, "message": "Hello, World!"}.

Functional test
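A minimal sketch of the functional test, assuming the endpoint and response body above (the file and function names are illustrative):

# test_mock_service.py - run with: pytest -sv test_mock_service.py
import requests

def test_mock_service():
    resp = requests.get('http://127.0.0.1:5000/json')
    assert resp.status_code == 200      # verify the HTTP status
    assert resp.json()['code'] == 1     # verify the "code" value in the body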

Let’s see how to convert the functional tests into performance tests.

Step 1: Modify Existing Functional Tests

To create performance tests from existing functional tests, we first need to modify the test functions a bit to suit performance testing. Let’s copy the existing test_mock_service.py to a new file perf_test_mock_service.py and modify it for performance tests.

In the pytest functional tests, we use assert to verify the response, e.g. the status_code and the “code” value in the body content. For performance tests, these assertions need to be converted into checks (some people use the term validate), so that a single failed request won’t stop the whole performance test. If a check fails, we still mark the request as failed in the return values.

The second change is that we need to return the response time, i.e. the time elapsed between sending the request and receiving the response. This is as easy as returning resp.elapsed.total_seconds(), e.g., 0.210752.
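Putting the two changes together, the modified test function could look like the sketch below. The check-and-return style and resp.elapsed.total_seconds() follow the text above; the 'pass'/'fail'/'exception' status values are illustrative names.

# perf_test_mock_service.py - assertions converted into checks
import requests

def test_mock_service():
    """Return (status, response_time); status is 'pass', 'fail' or 'exception'."""
    try:
        resp = requests.get('http://127.0.0.1:5000/json')
    except Exception:
        return 'exception', None  # a network error must not stop the whole test
    status = 'pass'
    if resp.status_code != 200:          # check instead of assert
        status = 'fail'
    elif resp.json().get('code') != 1:   # validate the body content
        status = 'fail'
    return status, resp.elapsed.total_seconds()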

Step 2: Loop Test Function

We need to create a loop test function so that it continuously sends requests. As you can see from the code below, it simply loops one or more API functional tests with a wait time, and stops once the loop count (default: infinite) is reached. A queue variable is used to store the results so we can calculate performance stats later. Note that Queue is thread safe.
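A sketch of the loop function, assuming the test function from Step 1 is defined in (or imported into) the same file; the 0.1-second wait time is an illustrative default.

# perf_test_mock_service_v1_loop.py - loop the test and queue the results
import queue
import time

results = queue.Queue()  # thread-safe store for (status, response_time) results

def loop_test(loop_times=float('inf'), wait_time=0.1):
    looped = 0
    while looped < loop_times:
        status, response_time = test_mock_service()
        results.put((status, response_time))
        print('Test passed.' if status == 'pass' else 'Test failed.')
        time.sleep(wait_time)  # wait between consecutive requests
        looped += 1

if __name__ == '__main__':
    loop_test(loop_times=3)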

Run the test code and you will see the result as below:

python perf_test_mock_service_v1_loop.py
Test passed.
Test passed.
Test passed.

Step 3: Start Concurrent Users

This is the most important step: create and start concurrent threads to simulate concurrent users. To add one thread, just create a Thread object and give it a function (loop_test here) to run, plus the function arguments (loop_times here), if any. Then start the thread with the start() method, and call the join() method to wait for the thread to finish before proceeding in the main thread.

Note: The Thread parameter daemon=True tells spawned threads to exit if the main thread exits.
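A sketch of the main thread, following the description above (the print formats mirror the output below; variable names such as concurrent_users are illustrative):

# perf_test_mock_service_v2_concurrent.py - simulate concurrent users
import threading
import time

concurrent_users = 2
loop_times = 3

if __name__ == '__main__':
    workers = []
    start_time = time.time()
    print(f'Tests started at {start_time}.')
    for _ in range(concurrent_users):
        # daemon=True: the spawned thread exits if the main thread exits
        thread = threading.Thread(target=loop_test,
                                  kwargs={'loop_times': loop_times},
                                  daemon=True)
        thread.start()
        workers.append(thread)
    for thread in workers:
        thread.join()  # wait for each thread to finish
    end_time = time.time()
    print(f'Tests ended at {end_time}.')
    print(f'Total test time: {end_time - start_time} seconds.')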

For instance, we start 2 threads and each thread loops 3 times. Run the test code and you will see the result as below:

python perf_test_mock_service_v2_concurrent.py
Tests started at 1565252504.3480494.
Test passed.
Test passed.
Test passed.
Test passed.
Test passed.
Test passed.
Tests ended at 1565252505.0206523.
Total test time: 0.6726028919219971 seconds.

Step 4: Performance Statistics

Now that we can run concurrent performance tests, we can add code to calculate performance metrics.

  • Time per Request (TPR): measure the min, max and mean (avg) values using resp.elapsed.total_seconds() for all passed requests.
  • Requests per Second (RPS): measure the mean value by dividing the total number of passed requests by the total test time.

A stats() function is added for this purpose, and we simply call it at the end of the main thread. As you can see from the code below, we take the test results from the queue until it is empty or the current queue size is reached, and measure TPR and RPS as well as the total failed, exception and passed requests.
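A sketch of stats(), assuming the queue of (status, response_time) results filled by loop_test above; passing the total test time in as a parameter is a simplification for this sketch.

def stats(total_time):
    pass_count = fail_count = exception_count = 0
    response_times = []
    for _ in range(results.qsize()):  # drain up to the current queue size
        status, response_time = results.get()
        if status == 'pass':
            pass_count += 1
            response_times.append(response_time)
        elif status == 'fail':
            fail_count += 1
        else:
            exception_count += 1
    print(f'Pass: {pass_count}, Fail: {fail_count}, Exception: {exception_count}')
    if response_times:
        print(f'Time per Request (s) - min: {min(response_times):.6f}, '
              f'max: {max(response_times):.6f}, '
              f'mean: {sum(response_times) / len(response_times):.6f}')
        print(f'Requests per Second: {pass_count / total_time:.2f}')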

Then call the stats() function at the end of main, as below.
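Continuing the sketch above:

# At the end of main, after all worker threads have joined:
stats(end_time - start_time)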

Example output for 2 threads and 5 loop times is as follows:

Looks good, doesn’t it?

Step 5: Test Timer

Normally we also want to control the duration of performance tests by time, so we stop the test when either the loop count is reached or time is up.

First, we need to create a global Event (event_time_up = threading.Event()) to notify loop_test when time is up.

Second, we create a function (set_event_time_up) to set the event.

Finally, we create a Timer (timer = threading.Timer(test_time, set_event_time_up)) and start it after the performance tests are started. The timer will wait for test_time and then call set_event_time_up. Note that we also need to cancel the timer if loop_times is reached before the timer fires.
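A sketch of the timer wiring, using the names from this article together with the loop_test sketch above:

import threading

event_time_up = threading.Event()  # global flag: set when time is up

def set_event_time_up():
    event_time_up.set()

# Inside loop_test, the loop condition becomes:
#     while looped < loop_times and not event_time_up.is_set():

test_time = 5  # time in seconds
timer = threading.Timer(test_time, set_event_time_up)
timer.start()  # start right after the worker threads are started

# ... after all worker threads have joined:
if not event_time_up.is_set():
    timer.cancel()  # loop_times was reached first, so stop the timer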

Example output with the settings below, when the test time is reached first:

concurrent_users = 2
loop_times = 100
test_time = 5 # time in seconds

Put It All Together

I’ve added this performance test script to my Python REST API test framework, which now covers both functional and performance tests. The script includes everything mentioned above and more, such as printing stats continuously at a set interval, e.g., every 5 minutes.

Why Not Coroutines

We have implemented the performance tests using threading. There are two reasons why I don’t recommend using coroutines such as asyncio in this case.

  1. Coroutines are complicated and tricky to use. You may measure the response time incorrectly if you don’t fully understand coroutines. See my post for examples. Whether asyncio is good or not is also a debatable topic, as discussed in this post.

  2. The beautiful requests package we used in the functional tests is not coroutine based, so you cannot reuse the same test functions if you want to use coroutines for performance tests.

However, if you really want to use coroutines instead of threading for performance tests, I would recommend using a professionally written package like Locust.
