This repository has been archived by the owner on May 31, 2022. It is now read-only.
This is a tough topic, and it is obviously very hard to generate semantic test cases automatically. I see two possibilities:
An optional flag with which the user can specify that they are following a specific set of best practices (e.g. JSON:API). When generating the test cases, we can then assume that the parameters follow a specific naming scheme, e.g. `filter[city]=Seattle`. We could then check the response schema for that endpoint and see whether an attribute named `city` exists. If so, we could add an additional assertion that checks whether all objects returned from the API have the value `Seattle` for the attribute `city`. This will most likely also produce some faulty test cases (e.g. a `filter[preset]=abc` parameter that, when used, applies a bunch of filter presets to the query). If we implement this functionality, we definitely also need to support deletion of test cases (Support deletion of test cases #7).
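The generated assertion could boil down to a small check like the following sketch. It assumes a JSON:API-shaped response (resources under `data`, each carrying an `attributes` object); the function name and parameters are illustrative, not part of any existing tool:

```python
import json


def check_filter_applied(response_body: str, attribute: str, value: str) -> bool:
    """Return True iff every resource in a JSON:API-style response body
    has attributes[attribute] == value (vacuously True for empty results).

    Hypothetical helper sketching the generated assertion, assuming the
    response follows the JSON:API convention of {"data": [{"attributes": {...}}]}.
    """
    data = json.loads(response_body).get("data", [])
    return all(
        obj.get("attributes", {}).get(attribute) == value
        for obj in data
    )
```

A test case generated for `filter[city]=Seattle` would then assert `check_filter_applied(body, "city", "Seattle")` on the actual response. The `filter[preset]=abc` case illustrates why this heuristic misfires: there is no `preset` attribute to compare against, so the generated assertion is meaningless and would need to be deleted.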
What we could also do is help the user create some regression tests covering the semantic functionality of the API (even though this clearly goes against test-driven development, it might still be helpful in projects with lots of manual testing).
Assuming we have a replicable database state that the user has manually tested and found to be correct, we could do the following: for each endpoint and parameter combination, send an actual request to the API and save the response. When generating the test cases, compare the response to the one we saved previously.
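The record-then-compare step above is essentially snapshot testing. A minimal sketch, assuming responses are stored as files keyed by a hash of the endpoint and parameters (the directory layout, function names, and key scheme are all hypothetical):

```python
import hashlib
import json
from pathlib import Path

# Hypothetical location for stored known-good responses.
SNAPSHOT_DIR = Path("snapshots")


def snapshot_key(endpoint: str, params: dict) -> str:
    """Derive a stable filename for an endpoint + parameter combination
    by hashing a canonical (sorted-key) representation."""
    canonical = endpoint + "?" + json.dumps(params, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest() + ".json"


def record_or_compare(endpoint: str, params: dict, response_body: str) -> bool:
    """First run: store the response as the known-good snapshot and
    return True. Later runs: return whether the fresh response matches
    the stored snapshot byte-for-byte."""
    path = SNAPSHOT_DIR / snapshot_key(endpoint, params)
    if not path.exists():
        SNAPSHOT_DIR.mkdir(exist_ok=True)
        path.write_text(response_body)
        return True  # nothing to compare against yet
    return path.read_text() == response_body
```

Byte-for-byte comparison is deliberately strict; in practice one would likely want to normalize volatile fields (timestamps, generated IDs) before comparing, which is exactly where the replicable database state mentioned above helps.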