Tests
-----
Run the regression tests with:

    make check

Run the TAP tests with:

    make -s install prove_installcheck

You can run individual TAP tests with:

    make PROVE_TESTS=t/050\* prove_installcheck

or similar.
Writing tests
-------------
Where possible, implement tests in `sql/` using `pg_regress`. Tests can be run
selectively on only some configurations by using conditional rules in the
`Makefile`.

If a test needs dynamic node management, configuration changes that require
a restart, etc., use a TAP test in `t/` instead.
Tests should seek to demonstrate the *desired* behaviour, not the *actual,
current* behaviour of the system. If you're writing a test to demonstrate a
bug, don't try to write a test that passes. Try to write a test that *will pass
once the bug is fixed*. If you plan to commit a test without a corresponding
bugfix, put it in the `FAILING_TESTS` suite in the `Makefile` so it won't run by
default, or add it as a TAP test marked `TODO` (see below).
All tests should:
* Have a comment at the start clearly explaining what they
are intended to test and why.
* Explain any relevant limitations of test coverage
or functionality.
* Test what *should* happen, not what *currently* happens.
* Seek to avoid unnecessary duplication between related
tests. Rather than copying & pasting code, move it into
a shared module with appropriate refactoring so that other
similar tests can use it later. With comments!
* If a test is created to exercise a specific bug, the bug # should be
mentioned in the test commit message and the bug described in the
test comments.
* For Perl tests, follow the guidelines in `src/test/perl/README`
in the postgres sources.
Running things from TAP tests
-----
Use `PostgresNode`, not direct `pg_ctl`, `psql`, etc. calls. Prefer `safe_psql`.
Use `IPC::Run::run`, not `system`.
Use `IPC::Run::start`, not the `popen` variants.
If you have to run `psql` directly (e.g. for background work), pass `-qAtX`
unless you have a reason not to.
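For example, here's a rough sketch of the preferred style. It assumes the stock
`PostgresNode` module from the PostgreSQL source tree; the node name and the
queries are made up for illustration:

    use strict;
    use warnings;
    use PostgresNode;
    use IPC::Run;
    use Test::More;

    # Create and start a throwaway node via PostgresNode, not pg_ctl
    my $node = get_new_node('example');
    $node->init;
    $node->start;

    # Prefer safe_psql: it dies on error and returns the query output
    my $count = $node->safe_psql('postgres', 'SELECT count(*) FROM pg_class');
    ok($count > 0, 'catalog query returned a row count');

    # If psql has to be run directly, pass -qAtX and drive it with IPC::Run
    my ($stdout, $stderr);
    IPC::Run::run(
        [ 'psql', '-qAtX',
          '-h', $node->host, '-p', $node->port,
          '-d', 'postgres', '-c', 'SELECT 1' ],
        '>', \$stdout, '2>', \$stderr)
        or die "psql failed: $stderr";

    $node->stop;
    done_testing();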
Tests for incomplete functionality or bugs
-----
Often you want to write a test to reproduce a bug, then fix the bug. But you
might not be the one fixing the bug, or you might be planning to commit the
fix later. In that case you can still add a test for the bug; you just have
to mark it as expected to fail.
For `pg_regress` tests, put it in the `FAILING_TESTS` suite in `Makefile.in`.
This suite isn't run by default; it only runs with:

    make RUN_FAILING_TESTS=1 check
For TAP tests you have better options. You can use `Test::More`'s `TODO` blocks
and its `todo_skip` function. See `perldoc Test::More` and the existing tests
for examples, but here's a simple one:
    # This will run but report "todo" instead of "fail"
    TODO: {
        local $TODO = 'requires foobar, which is not implemented yet';
        my $result = some_command();
        ok($result, 'some_command works!');
    };

    # This will not run at all, but shows the need for future work
    # (and can be done conditionally, like below):
    SKIP: {
        todo_skip("this will crash on windows, so skip for now", 1)
            if "$^O" eq "MSWin32";
        ok($node->do_something, 'it works');
    }
Use a `TODO` block unless doing so will make it impossible for the rest of the
test to run - for example, if the test will hang completely. In that case use
`todo_skip`.
Tests that can't run on some configurations or platforms
----
If a pg_regress test won't work on a given platform or configuration, use
conditional make rules to omit it from the list of tests to run. See the
existing `Makefile.in` for examples. You may have to add additional `configure`
tests to `configure.in` and/or mark existing ones as `AC_SUBST` so they get
substituted into the `Makefile` written from the `Makefile.in`. See how
`BDR_PG_MAJORVERSION` is substituted into the `Makefile` for an example.
For TAP tests, you can use `Test::More`'s skip support to avoid running the
relevant tests: use `plan skip_all` to skip a whole test file, or a `SKIP:`
block for a group of tests. See `perldoc Test::More` and the existing tests
for details. Perl's built-in magic variables like `$^O` are useful for
platform checks, and you can also shell out with `system` etc.
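Here's a rough sketch of both forms (the platform and version checks and the
test bodies are only illustrative):

    use strict;
    use warnings;
    use Test::More;

    # Skip the entire file on an unsupported platform
    if ($^O eq 'MSWin32')
    {
        plan skip_all => 'this test is not supported on Windows';
    }

    SKIP: {
        # Skip just this group of two tests on old Perls, for example
        skip 'needs Perl 5.10 or newer', 2 if $] < 5.010;

        ok(1, 'first test that needs the newer Perl');
        ok(1, 'second test that needs the newer Perl');
    }

    done_testing();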
Delays and waiting in tests
----
Try to avoid waiting with `sleep`. Instead, wait until something happens using
`PostgresNode::poll_query_until`, a loop, etc. PostgresNode also has handy
helpers like `wait_for_catchup` for common tasks, some of which are wrapped or
extended for BDR in `t/utils/nodemanagement.pm`.
For example, instead of:
    # wait until the node catches up (hopefully)
    sleep(30);

you should write

    # wait until all changes on node A have been replayed to node C
    $node_a->wait_for_apply($node_c);
See existing tests for more examples.
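Another common pattern is polling for a condition with `poll_query_until`
instead of sleeping. A rough sketch (the query is only illustrative, and
`$node_a` is the node from the example above):

    # Returns true once the query returns 't', false if it times out
    $node_a->poll_query_until('postgres',
        "SELECT count(*) >= 1 FROM pg_stat_replication")
        or die "timed out waiting for a replication connection";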
If you must use sleeps, use `Time::HiRes` for short sleeps. Perl's `sleep`
cannot sleep a fractional number of seconds.
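For example:

    use Time::HiRes ();

    # Time::HiRes::sleep accepts fractional seconds; usleep takes microseconds
    Time::HiRes::sleep(0.2);
    Time::HiRes::usleep(200_000);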
Timing in tests
----
You can use `Time::HiRes` for timing. It's available from Perl 5.8, so it's
safe.
Avoid making test results depend on timing outcomes.
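For example, to report how long an operation took (a sketch; the operation
itself is a placeholder, and the elapsed time is only logged, never asserted
on):

    use Time::HiRes qw(gettimeofday tv_interval);

    my $start = [gettimeofday];
    $node->safe_psql('postgres', 'CHECKPOINT');    # placeholder operation
    note 'checkpoint took ' . tv_interval($start) . 's';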
Giving up
----
If a test's setup fails, or some intermediate stage doesn't work as expected,
it may not make sense to continue at all.

In that case the test should use `Test::More`'s `BAIL_OUT('reason')`, e.g.:
    ok($value, 'it worked')
        or BAIL_OUT('without it working no later test makes sense');

Alternatively, a test can `confess()` (for a stack trace) or `croak()` (without
one). Or just `die` if you don't `use Carp` - but of course you use Carp,
right?
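For example, with `Carp` (the failure condition is made up):

    use strict;
    use warnings;
    use Carp;

    my $extension_created = 0;    # pretend a setup step failed

    # croak() dies reporting the caller's location, with no stack trace;
    # use confess() instead to get a full backtrace
    croak 'setup failed: could not create the extension'
        unless $extension_created;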
Reporting progress
----
Don't use `diag` unconditionally. Instead, if you're about to do something that
may silently take some time, use a `note` so the user doesn't think the test is
stuck forever. (`note` only prints in verbose mode).
`diag` is for providing extra info on failure, e.g.
    is($thing, 1, 'it worked')
        or diag "when attempting to insert $value";
You can also use `explain` for structured data. See the `Test::More` docs.
For general progress chatter, just use `print`. It'll go to the test log file
but not the display.
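For example (the messages and data are illustrative):

    # Shown with `prove -v` and in the log, but not in normal output
    note 'restarting node B to pick up configuration changes';

    # Dump a structure only when a comparison fails
    is_deeply($got_rows, $expected_rows, 'rows match')
        or diag explain $got_rows;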
Debugging tests
----
To trace what a test is doing, install Perl's `Devel::Trace`:

    dnf install perl-Devel-Trace
and then change the `#!` line to

    #!/usr/bin/env perl -d:Trace
to get detailed output on what the test is doing. It'll spam the display for a
bit then switch to outputting to the test log file.
Formatting
-----
If using vim, edit tests with:

    :set tabstop=4 shiftwidth=4 expandtab autoindent

or add to your `.vimrc`:

    autocmd BufRead,BufNewFile */postgres-bdr-extension/t/*.pl set tabstop=4 shiftwidth=4 expandtab autoindent
If using emacs, use vim. If using MS Word, use a brown paper bag.