Over the last six months I've been having fun working with NServiceBus building our new app at work. It's been great splitting our system up into functional bits (that we can hopefully reuse). Our current requirements mean we have one big saga (long running process) that uses a whole lot of handlers to get the job done.
This is awesome, as we've been able to attack a problem by talking about what the solution should be, splitting up into pairs to work on our own bit of the solution, then getting back together to figure out our next bit of work - we just have to define our messaging requirements collaboratively before we start.
We've also invested in writing specification tests with SpecFlow - taking a business scenario and making sure that our system can handle it from end to end. While this has been fun to learn, it's been hard work to get running reliably due to the async nature of our system. What we've ended up doing is firing off the spec, waiting 10 seconds, and then checking our auditing system to see if the saga has completed. This works fine for most of the tests, but there are often one or two specs that fail even though, when you check the audits, everything has actually worked properly.
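A small improvement over a flat sleep would be to poll the audit store until the saga shows up as complete or a timeout expires. Here's a rough sketch of what I mean - AuditStore.SagaCompleted is a made-up stand-in for however your auditing system exposes completion, so treat it as an assumption:

using System;
using System.Diagnostics;
using System.Threading;

public static class WaitFor
{
    // Polls a condition every half second until it returns true or the timeout elapses.
    public static bool Completion(Func<bool> condition, TimeSpan timeout)
    {
        var stopwatch = Stopwatch.StartNew();
        while (stopwatch.Elapsed < timeout)
        {
            if (condition())
                return true;

            Thread.Sleep(500);
        }
        return condition(); // one last check after the timeout
    }
}

// In a SpecFlow "then" step (AuditStore.SagaCompleted is hypothetical):
// Assert.IsTrue(WaitFor.Completion(() => AuditStore.SagaCompleted(sagaId), TimeSpan.FromSeconds(30)));

Fast runs finish as soon as the audit appears, and slow runs get the full timeout instead of a hard-coded 10 seconds, which should cut down on those false failures.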
It would be awesome in this case to have the spec subscribe to the auditing events (currently there isn't an event published saying a saga has completed, but commands are sent to the auditor) - the test could then just wait for that message, though it would also need a timeout (for when the saga actually fails).
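Something like this is what I have in mind: the test host runs a handler that flips a wait handle when the auditor's "saga completed" command arrives, and the spec blocks on that handle with a timeout. SagaCompletedAudit is a made-up message name standing in for whatever the saga actually sends the auditor, so this is just a sketch:

using System;
using System.Threading;
using NServiceBus;

// Hypothetical audit message - stands in for whatever the saga sends to the auditor.
public class SagaCompletedAudit : IMessage
{
}

public static class SagaSignals
{
    public static readonly ManualResetEvent Completed = new ManualResetEvent(false);
}

// Runs in the test endpoint; signals the spec as soon as the audit message arrives.
public class SagaCompletedAuditHandler : IHandleMessages<SagaCompletedAudit>
{
    public void Handle(SagaCompletedAudit message)
    {
        SagaSignals.Completed.Set();
    }
}

// In the spec's "then" step: wait up to 30 seconds, fail the test if the saga never completed.
// Assert.IsTrue(SagaSignals.Completed.WaitOne(TimeSpan.FromSeconds(30)), "Saga did not complete in time");

That way the spec finishes the moment the saga does, and the timeout only bites when something has genuinely gone wrong.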
This brings me to one problem that I've had with my SpecFlow tests - the amount of time they take to run. These aren't like unit tests that we run all the time and that only take a few seconds; the suite of tests takes minutes to run, so I'm looking to see if there are any solutions to run them in parallel. At the moment we use the NUnit library to implement the tests, but I have seen that MbUnit has parallel test support which looks interesting.
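As I understand it, MbUnit lets you mark fixtures (or individual tests) as parallelizable and the runner can then execute them side by side. I haven't tried this yet, so treat the attribute usage below as an assumption to verify against the MbUnit docs, and the fixture names are purely hypothetical:

using MbUnit.Framework;

[TestFixture]
[Parallelizable]
public class PlaceOrderSpec // hypothetical fixture name
{
    [Test]
    public void Saga_completes_for_a_standard_order()
    {
        // end-to-end scenario body would go here
    }
}

[TestFixture]
[Parallelizable]
public class CancelOrderSpec // hypothetical fixture name
{
    [Test]
    public void Saga_compensates_when_the_order_is_cancelled()
    {
        // end-to-end scenario body would go here
    }
}

The catch for us would be making sure each spec runs against its own endpoints and data, since parallel end-to-end scenarios sharing queues could easily trip over each other.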
How do you do integration tests? Do you even do them?