Who is using model-based testing with xstate in production systems? #2704
-
Yep, so I had a load of issues with this relating to how I designed the first few iterations of the machine that runs our authentication, and it wasn't really feasible to use the model testing for quite a while. I now have it running and it's great, but with some caveats. I don't know how applicable this is, but I can describe exactly how I have it set up.
So that gives me one machine, with a minimal context, a minimal number of events, very few actions, and with states that map directly to screens in the end apps. The machine is initiated in a provider which exposes the result of the machine hook to the app. For the hook callbacks, I put all the implementations for my test app into a single file and import the relevant one into the relevant component that uses the relevant hook. For the tests I can then just mock that file out and replace all the implementations with spies. I need to do this because a couple of the callbacks don't take input (for example, checking sessions), so they need adjustment during the test runs. The model test can now be set up.
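Purely as a sketch of that shape (the machine, file, and callback names here are hypothetical stand-ins, and I'm assuming Jest):

```js
import { createModel } from '@xstate/test';
import { authMachine } from './authMachine'; // hypothetical machine module
import * as callbacks from './hookCallbacks'; // the single file of callback implementations

// Auto-mock the callback file so every implementation becomes a spy we
// can steer per test (needed for callbacks that take no input, such as
// the session check).
jest.mock('./hookCallbacks');

const model = createModel(authMachine).withEvents({
  async USERNAME_VALID() {
    // make the mocked callback succeed, then drive the UI to fire the event
    callbacks.submitUsername.mockResolvedValueOnce({ valid: true });
    // ...interact with the rendered screen here
  },
  // ...one entry per event the machine accepts
});

model.getSimplePathPlans().forEach((plan) => {
  describe(plan.description, () => {
    plan.paths.forEach((path) => {
      it(path.description, async () => {
        await path.test();
      });
    });
  });
});
```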
Once that's all set up, it has been incredibly easy to add tests. Some notes, anyway:
So, once the above was set up, the test files get pretty big. E.g. my state might be "SubmittingUsername"; a "USERNAME_VALID" event is sent and it goes to "SubmittingOtp". The user has three goes, so "OTP_INVALID" is sent and it goes to "OtpInvalidRetry1"; "OTP_INVALID" is sent again and it goes to "OtpInvalidRetry2"; "OTP_VALID" is sent and it goes to "SubmittingNewPin"; etc. There are going to be a lot of states for anything complex (or even for simple stuff). But this doesn't really matter: deviating from the model is fine, because the test should not be testing the underlying FSM model, it should be testing the implementation of that model.
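As a rough sketch of that slice of the graph (the `LockedOut` state is my guess at what follows a third failure; the rest comes straight from the description above):

```js
import { createMachine } from 'xstate';

// Each retry is its own named state, so the generated plans exercise
// every attempt as a distinct path through the graph.
const otpSlice = createMachine({
  id: 'otpSlice',
  initial: 'SubmittingUsername',
  states: {
    SubmittingUsername: {
      on: { USERNAME_VALID: 'SubmittingOtp' },
    },
    SubmittingOtp: {
      on: { OTP_VALID: 'SubmittingNewPin', OTP_INVALID: 'OtpInvalidRetry1' },
    },
    OtpInvalidRetry1: {
      on: { OTP_VALID: 'SubmittingNewPin', OTP_INVALID: 'OtpInvalidRetry2' },
    },
    OtpInvalidRetry2: {
      // assumption: a third failure ends the flow
      on: { OTP_VALID: 'SubmittingNewPin', OTP_INVALID: 'LockedOut' },
    },
    SubmittingNewPin: { /* ...continues as above */ },
    LockedOut: { type: 'final' },
  },
});
```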
By any chance do you have a lot of guard clauses and/or cycles? The point of the model-based test, IMO, is to exercise all possible paths through the graph, so manually specifying them would seem to indicate underlying design issues. Removing all guard clauses from my machines except for those that check values set at init time (i.e. no guards based on context) made it drastically easier to test (there's a sketch of the distinction below), though I appreciate that may not be feasible. All this might not be useful; sorry for the wall of text!
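For illustration, with made-up names: a guard that reads a value fixed when the machine is built keeps the graph stable, while a guard reading context makes reachability depend on runtime data.

```js
import { createMachine } from 'xstate';

// `config` is fixed when the machine is built, so this guard does not
// change which paths exist at model-generation time.
const buildLoginMachine = (config) =>
  createMachine(
    {
      id: 'login',
      initial: 'form',
      states: {
        form: {
          on: {
            SUBMIT: [
              { target: 'mfa', cond: 'mfaEnabled' }, // init-time value: fine
              { target: 'done' },
            ],
          },
        },
        mfa: { on: { MFA_VALID: 'done' } },
        done: { type: 'final' },
      },
    },
    {
      guards: {
        mfaEnabled: () => config.mfaEnabled,
      },
    }
  );

// By contrast, a guard like (ctx) => ctx.attempts < 3 ties reachability to
// runtime context; modelling retries as explicit states avoids that.
```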
-
I use @xstate/test to power the E2E tests for the Electron app I'm building at our company. So far I've avoided having any overly specific state representations, and I only test the happy paths through features. The test suite also isn't overly large (I think around 20-30 test cases generated), but it's been working out fairly well so far. I was unsatisfied with how @xstate/test models were built, so I built a wrapper around it that allows you to create nestable "test zones" and generate an @xstate/test model from those composed zones. The code for it is pretty hacked together, but if there is interest I could see about tidying it up and packaging it into an open-source repo. I've uploaded the README as a gist to give an example of it: https://gist.github.com/UberMouse/2fcfc2d714e0eef68b2b2cc98d194600
-
I use it in my framework's (Ember.js) default test suite (QUnit). I have established a pattern for it, and it has helped immensely when unit testing complex (many conditionals) components.

```js
import { module, test } from 'qunit';
import { setupRenderingTest } from 'ember-qunit';
import { render } from '@ember/test-helpers';
import { hbs } from 'ember-cli-htmlbars';
import { createMachine } from 'xstate';
import { createModel } from '@xstate/test';
// FooBarPageObject is my own page-object helper for the component.

// Wrap each state's test callback with a passing marker assertion that
// records which state/event combination is currently being exercised.
function testWrapper(testFn) {
return (testContext, state) => {
testContext.assert.ok(
true,
`§ Testing state ${JSON.stringify(state.value)} ← event ${JSON.stringify(state.event)}`,
);
return testFn(testContext, state);
};
}
module('Integration | Component | FooBar', function (hooks) {
setupRenderingTest(hooks);
const page = new FooBarPageObject('#test-subject');
const testMachine = createMachine({
id: 'fooBarTest',
initial: 'firstRender',
states: {
firstRender: {
on: { NEXT: 'secondStep' },
meta: {
test: testWrapper(function ({ assert }) {
assert.dom(page.element).exists();
assert.strictEqual(page.status, 'first');
}),
},
},
secondStep: {
meta: {
test: testWrapper(function ({ assert }) {
assert.strictEqual(page.status, 'second');
        }),
},
},
},
});
const testModel = createModel(testMachine).withEvents({
async NEXT() {
await page.clickNextButton();
},
});
const testPlans = testModel.getSimplePathPlans();
testPlans.forEach((plan) => {
module(plan.description, function () {
plan.paths.forEach((path) => {
test(path.description, async function (assert) {
await render(hbs`<FooBar id="test-subject" />`);
await path.test({ assert });
});
});
});
});
});
```

QUnit bundles assertions into a pile of steps inside the test output, and it was difficult to tease out which state/events led to a failed assertion. I use the `testWrapper` above to emit a marker assertion (the `§ Testing state … ← event …` line) before each state's checks, so a failed assertion can be traced back to the state/event that produced it.
-
At my work we've been experimenting with it for end-to-end tests (with `@xstate/test` + Cypress), and I'm starting to formulate a long-term testing strategy that integrates across product/dev/QA teams.

Probably the biggest obstacle in our experiments was maintaining a fast enough feedback loop. We found ourselves heavily compromising the fidelity of the test machine as a "user model" to avoid an explosion of test cases:

- `login-form` only has `idle`, `error` instead of representing the email/password field states
- `cases` explicitly with `subtype` to also trigger certain paths

All of this to avoid "test explosions" even for relatively simple pages (e.g. a login form). Something doesn't feel right?
Additionally, the resulting test machines start to deviate pretty heavily from the model in our product manager's head. Ideally, the test machine, when visualized, should directly match the "user model" as defined by the product team. In fact, if the product team were using state machine diagrams, I don't see a reason why this couldn't be matched exactly.
That being said, I've opened up a PR here that should allow much greater flexibility in using a test model with `@xstate/test` by "manually" defining paths with a sequence of events. I think it's the "missing link" to make model-based testing with xstate feasible at scale.

Curious to know if anyone else is using model-based testing with xstate in "real life"? And if so, what are your thoughts about it?
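For a rough idea of what that could look like (the method name `getPlanFromEvents` and the machine here are illustrative placeholders, not necessarily the PR's final API):

```js
import { createMachine } from 'xstate';
import { createModel } from '@xstate/test';

const machine = createMachine({
  id: 'login',
  initial: 'idle',
  states: {
    idle: { on: { SUBMIT_INVALID: 'error', SUBMIT_VALID: 'success' } },
    error: { on: { SUBMIT_VALID: 'success' } },
    success: { type: 'final' },
  },
});

const model = createModel(machine).withEvents({
  SUBMIT_VALID: async () => { /* drive the UI */ },
  SUBMIT_INVALID: async () => { /* drive the UI */ },
});

// Hand-pick the event sequence worth testing instead of generating
// every simple path from the graph.
const plan = model.getPlanFromEvents(
  [{ type: 'SUBMIT_INVALID' }, { type: 'SUBMIT_VALID' }],
  { target: 'success' }
);

// then, inside a test: await plan.paths[0].test(testContext);
```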