REST API Test Framework for Humans by Plain Text Files

Peter Xie
5 min read · Apr 24, 2022

This is part of the REST API Testing in Python series.

In part 1 of the series, I showed that it is easy to write REST API tests in Python. However, if you are writing a test framework for a team where some people are not comfortable coding, or you just want to make life easier for everyone, the approach I introduce in this post may be a solution for you.

TLDR: The framework reads plain text input files to send REST API requests, converts each JSON response to INI file format and saves it as an output file, then compares the output files with expected files, with exceptions defined in ignore files. To create new cases, simply create more input files; to create expected files, simply examine the actual output files and copy them over as expected files once they pass.


Note: This approach could be implemented in any language, though I will show it in Python.

Project Structure

The project structure is as follows.

tree
├── inputs
│   ├── test_case_01
│   │   ├── request_01.ignore
│   │   ├── request_01.txt
│   │   ├── request_02.ignore
│   │   └── request_02.txt
│   └── test_case_02
│       └── request_01.txt
├── outputs
│   ├── test_case_01
│   │   ├── response_01.txt
│   │   └── response_02.txt
│   └── test_case_02
│       └── response_01.txt
├── expects
├── diff
└── Scripts
    └── test_rest_api.py
  • inputs
    API request content and ignore fields for comparison
  • outputs
    Received responses converted to INI format
  • expects
    Same as passed outputs
  • diff
    Output differences compared to the expected files
  • Scripts
    The main Python test script

Parse Input Files

A sample request input file is as follows.
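(The example below is illustrative; the URL and body are my own, not from the original repo. Any file in this shape works.)

```text
POST http://httpbin.org/post

Content-Type: application/json
Accept: application/json

{"name": {"firstname": "Peter", "secondname": "Xie"}, "scores": [100, 99], "age": 30}
```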

There are 3 parts separated by an empty line:

  • Part 1: Request method + URL
  • Part 2: Headers
  • Part 3: Body

Yes, a REST API request is just these 3 parts, simple as that.

The following is the code to parse request input files. It uses a regular expression (the re module) to split the content. Note that headers and body are optional for simple GET requests.

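A minimal sketch of parse_test_input, assuming the three-part format described above (the exact splitting details are my reconstruction, not necessarily the author's original code):

```python
import re


def parse_test_input(filename):
    """Parse a request input file into (method, url, headers, body).

    The file holds up to three parts separated by blank lines:
    "<METHOD> <URL>", optional "Name: value" header lines, and an optional body.
    """
    with open(filename, encoding="utf-8") as f:
        content = f.read().strip()
    # Split on blank lines into at most three parts.
    parts = re.split(r"\n\s*\n", content, maxsplit=2)
    method, url = parts[0].strip().split(maxsplit=1)
    headers = {}
    if len(parts) > 1:
        for line in parts[1].splitlines():
            name, _, value = line.partition(":")
            if value:
                headers[name.strip()] = value.strip()
    body = parts[2].strip() if len(parts) > 2 else None
    return method.upper(), url, headers, body
```

A GET-only input file with just the first line returns empty headers and a None body.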

Send Requests

Once we have parsed the input, we need to send the request and collect the response. It is basically the same as part 1 of the series using the requests package, except that we call the generic request method instead of post or get, since the method type is defined in the input file. We then convert the response into a dictionary (dict), since it is a JSON response.

import requests
method, url, headers, body = parse_test_input(request_file_path)
resp = requests.request(method, url, headers=headers, data=body)
resp = resp.json() # convert to dict

Dict to INI

It is tricky to compare dictionary variables or JSON files, so we convert dict response to INI format, a simple key = value fashion, for comparison with expected outputs.

Sample Dict Input:

{
    "name": {
        "firstname": "Peter",
        "secondname": "Xie"
    },
    "scores": [100, 99],
    "age": 30
}

Sample INI Output:

age = 30
name.firstname = Peter
name.secondname = Xie
scores[0] = 100
scores[1] = 99

As you can see above, the elements are sorted and broken down to the bottom level (i.e. a single value) of the dictionary.

The code is as follows. To recursively break the dict, and any lists inside it, down to a key1.key2[i] = value fashion, we define an inner function iterate_dict. One trick: if a value spans multiple lines, we convert it to one line using repr, a printable representation of the value.

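A sketch of dict_to_ini along those lines (the signature and the optional output-file argument are my assumptions):

```python
def dict_to_ini(data, out_file=None):
    """Flatten a (possibly nested) dict into sorted "key = value" lines."""
    lines = []

    def iterate_dict(obj, prefix):
        # Recurse into dicts and lists, building dotted/indexed key paths.
        if isinstance(obj, dict):
            for key, value in obj.items():
                iterate_dict(value, f"{prefix}.{key}" if prefix else str(key))
        elif isinstance(obj, list):
            for i, value in enumerate(obj):
                iterate_dict(value, f"{prefix}[{i}]")
        else:
            value = str(obj)
            if "\n" in value:
                # Collapse multi-line values to one printable line.
                value = repr(value)
            lines.append(f"{prefix} = {value}")

    iterate_dict(data, "")
    lines.sort()
    text = "\n".join(lines)
    if out_file:
        with open(out_file, "w", encoding="utf-8") as f:
            f.write(text + "\n")
    return text
```

Running it on the sample dict above produces the sorted INI output shown earlier.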

Compare Output

With the INI output files, one can easily compare the actual outputs with the expected outputs using tools like the Linux diff command or Beyond Compare (whole-folder comparison). That's the beauty of this framework. Even a manual tester can run the tests and compare the outputs without looking into the test logs or code (who dares to say there is no bug in test code?).

However, we want to compare it programmatically and make the framework complete.

INI to Simple Dict

First, we read the INI output file and convert it to a simple dictionary (note this is different from the original nested dictionary). It is just a flat collection of key-value pairs.

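A possible sketch of the conversion (the function name ini_to_dict and string-typed values are my assumptions):

```python
def ini_to_dict(ini_file):
    """Read an INI-style output file back into a flat {key: value} dict.

    Values stay as strings; nesting is already encoded in the dotted keys.
    """
    result = {}
    with open(ini_file, encoding="utf-8") as f:
        for line in f:
            line = line.rstrip("\n")
            if not line.strip():
                continue
            key, _, value = line.partition(" = ")
            result[key] = value
    return result
```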

Diff Simple Dict

Then we compare the simple dict variables of the actual and expected files and write the difference into a file in the diff folder. We follow the syntax of the Linux diff command output as follows.

  • If missing in actual, output: - key1 = value1
  • If additional in actual, i.e. missing in expect: + key2 = value2
  • If different, output both:
    - key3 = value3
    + key3 = value4
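Here is one way diff_simple_dict could look, following that output syntax (the ignore handling and the optional diff-file argument are my assumptions):

```python
def diff_simple_dict(expect, actual, ignore=(), diff_file=None):
    """Compare two flat dicts and return diff lines in Linux diff style."""
    diff_lines = []
    for key in sorted(set(expect) | set(actual)):
        if key in ignore:
            continue
        if key not in actual:
            diff_lines.append(f"- {key} = {expect[key]}")  # missing in actual
        elif key not in expect:
            diff_lines.append(f"+ {key} = {actual[key]}")  # additional in actual
        elif expect[key] != actual[key]:
            diff_lines.append(f"- {key} = {expect[key]}")  # value differs
            diff_lines.append(f"+ {key} = {actual[key]}")
    if diff_file and diff_lines:
        with open(diff_file, "w", encoding="utf-8") as f:
            f.write("\n".join(diff_lines) + "\n")
    return diff_lines
```

An empty return value means the actual output matches the expectation, ignores aside.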

Ignore

You might have noticed that the diff_simple_dict code above takes an ignore list argument. It is used to ignore fields like timestamps or dynamic IDs. The ignore files are named <request_id>.ignore, e.g. request_01.ignore, and sit alongside the request files in the input test case folders. A sample ignore file, listing the key names to ignore, looks like this:

name.secondname
result.timestamp

The code is quite straightforward.

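A sketch of parse_ignore_file, assuming a missing ignore file simply means nothing is ignored:

```python
import os


def parse_ignore_file(ignore_file):
    """Return the key names listed in an ignore file, one per line.

    Not every request has an ignore file, so a missing file means no exceptions.
    """
    if not os.path.isfile(ignore_file):
        return []
    with open(ignore_file, encoding="utf-8") as f:
        return [line.strip() for line in f if line.strip()]
```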

Loop by Pytest Parametrize

The last piece is to loop over the input folder and send requests test case by test case. We will use pytest's parametrize marker here, but one could use other test frameworks as well.

First, we need to get the list of test case folder names. This is run once at the beginning of the test script.

Then we loop over all the test cases with the @pytest.mark.parametrize("testcase_folder", test_case_list) decorator; the test function test_by_input_output_text takes testcase_folder as an argument, e.g. inputs/test_case_01.
Note that there can be more than one request per test case, so we loop inside the test case folder and send the requests one by one.

For each request, we do the following steps with all the ingredients prepared above.

  • Parse request input
  • Send request
  • Convert response to INI format
  • Parse ignore files
  • Compare actual outputs (outputs) with expected outputs (expects) excluding ignores
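Those steps can be sketched as a parametrized test like the one below. It assumes the helper functions from the previous sections (I use the names parse_test_input, dict_to_ini, ini_to_dict, parse_ignore_file and diff_simple_dict) are defined in the same module; the folder handling is my reconstruction, not necessarily the repo's exact code.

```python
import glob
import os

import pytest
import requests

# Collected once when the test script loads: each sub-folder of inputs/ is one case.
test_case_list = sorted(os.listdir("inputs")) if os.path.isdir("inputs") else []


@pytest.mark.parametrize("testcase_folder", test_case_list)
def test_by_input_output_text(testcase_folder):
    input_dir = os.path.join("inputs", testcase_folder)
    output_dir = os.path.join("outputs", testcase_folder)
    expect_dir = os.path.join("expects", testcase_folder)
    diff_dir = os.path.join("diff", testcase_folder)
    os.makedirs(output_dir, exist_ok=True)
    os.makedirs(diff_dir, exist_ok=True)
    # A test case may hold several requests; send them one by one.
    for request_file in sorted(glob.glob(os.path.join(input_dir, "request_*.txt"))):
        # request_01.txt -> "01", used to pair request and response files.
        seq = os.path.basename(request_file)[len("request_"):-len(".txt")]
        method, url, headers, body = parse_test_input(request_file)
        resp = requests.request(method, url, headers=headers, data=body)
        output_file = os.path.join(output_dir, f"response_{seq}.txt")
        dict_to_ini(resp.json(), output_file)
        ignore = parse_ignore_file(request_file[:-len(".txt")] + ".ignore")
        actual = ini_to_dict(output_file)
        expect = ini_to_dict(os.path.join(expect_dir, f"response_{seq}.txt"))
        diff = diff_simple_dict(
            expect, actual, ignore, os.path.join(diff_dir, f"response_{seq}.txt")
        )
        assert not diff, f"{testcase_folder}/response_{seq}.txt differs from expected"
```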

Put it All Together

Putting all the above code together, you get the full test script _test_by_input_output_text_full_simplified.py. You can try it with my sample input files by cloning the repo as follows, or prepare your own inputs.

Run tests with sample inputs
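For example (the repo URL and folder name are placeholders, not reproduced here; only the pytest invocation with -k input comes from this article):

```shell
git clone <repo-url>            # the series repo
cd <repo-folder>
pip install requests pytest     # dependencies used by the framework
pytest Scripts/test_rest_api.py -k input
```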

Note: test_rest_api.py is the full test framework for the series, and -k input filters to only the test_by_input_output_text test function, which is basically identical to _test_by_input_output_text_full_simplified.py.

Expect Files

The first time, run without the expects folder, examine the actual output files manually, and copy the whole outputs folder as the expects folder if they pass. If you run the full repo version, you can check the response in JSON format in the log files as well. If the APIs change later, say a new field is added to a response, you can just re-run the tests and copy the new outputs as expects. This is another beauty of this framework: it is easy to maintain.

Further Work

A common practice in modern REST API design is to pass a token in the headers for authentication. The token can be obtained from a get-token API or from the app's admin interface (normally a web page). If it comes from a get-token API, we call it once at the beginning of the tests and cache it for the following test APIs.
Since the token is not static and is normally valid for only a few hours, whether it comes from an API or the admin interface, we should not put it in the input files. Instead, we add the token to the headers before sending the requests in the main test function. The following code assumes bearer authentication is used.

method, url, headers, body = parse_test_input(request_file_path)
headers['Authorization'] = 'Bearer ' + 'your token'
resp = requests.request(method, url, headers=headers, data=body)

Thanks for reading to the end.
