
Tuesday, January 17, 2012

Asynchronous Testing Woes

Over the last six months I've been having fun working with NServiceBus, building our new app at work. It's been great splitting our system up into functional bits (that we can hopefully reuse). Our current requirements mean we have one big saga (long-running process) that uses a whole lot of handlers to get the job done.
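
To give a feel for the shape of it, here's a minimal sketch of a saga against the NServiceBus 2.x API - the message types, property names and steps are all made up, and our real saga is far bigger:

    using System;
    using NServiceBus;
    using NServiceBus.Saga;

    // Hypothetical messages - stand-ins for our real ones.
    public class StartOrder : IMessage { public Guid OrderId { get; set; } }
    public class ValidateOrder : IMessage { public Guid OrderId { get; set; } }
    public class OrderValidated : IMessage { public Guid OrderId { get; set; } }

    public class OrderSagaData : ISagaEntity
    {
        public Guid Id { get; set; }
        public string Originator { get; set; }
        public string OriginalMessageId { get; set; }
        public Guid OrderId { get; set; }
    }

    public class OrderSaga : Saga<OrderSagaData>,
        IAmStartedByMessages<StartOrder>,
        IHandleMessages<OrderValidated>
    {
        public override void ConfigureHowToFindSaga()
        {
            // Correlate replies back to the right saga instance.
            ConfigureMapping<OrderValidated>(s => s.OrderId, m => m.OrderId);
        }

        public void Handle(StartOrder message)
        {
            Data.OrderId = message.OrderId;
            Bus.Send(new ValidateOrder { OrderId = message.OrderId }); // first of many handlers
        }

        public void Handle(OrderValidated message)
        {
            // ...many more steps in real life...
            MarkAsComplete();
        }
    }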

This is awesome, as we've been able to attack a problem by talking about what the solution should be, splitting into pairs to work on our own bits of the solution, then getting back together to figure out our next piece of work - only having to define our messaging requirements collaboratively before we start.

We've also invested in writing specification tests with SpecFlow - taking a business scenario and making sure that our system can handle it from end to end. While this has been fun to learn, it's been hard to get working due to the async nature of our system. What we've ended up doing is firing off the spec and then waiting 10 seconds before we check our auditing system to see if the saga has completed. This works fine for most of the tests, but there are often one or two specs that fail even though, when you check the audits, everything has actually worked.
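
Concretely, our final SpecFlow step ends up looking roughly like this - the step text, the AuditStore helper and the scenario-context key are all illustrative:

    using System;
    using System.Threading;
    using NUnit.Framework;
    using TechTalk.SpecFlow;

    [Binding]
    public class SagaCompletionSteps
    {
        [Then(@"the saga should have completed")]
        public void ThenTheSagaShouldHaveCompleted()
        {
            // Cross our fingers that the saga has finished by now...
            Thread.Sleep(TimeSpan.FromSeconds(10));

            var sagaId = (Guid)ScenarioContext.Current["sagaId"];
            Assert.IsTrue(AuditStore.HasCompleted(sagaId),
                "No completion audit found for saga " + sagaId);
        }
    }

    // Stand-in for however you query your auditing system.
    public static class AuditStore
    {
        public static bool HasCompleted(Guid sagaId)
        {
            // e.g. look for a completion record in the audit database
            return false;
        }
    }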

It would be awesome in this case to have a spec subscribe to the auditing events (currently there isn't an event published saying a saga has completed, but commands are sent to the auditor). The test could then just wait for the publish, though it would also need a timeout for when the saga actually fails.
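
Something like this is what I have in mind - just a sketch, assuming a hypothetical SagaCompleted event we could subscribe to:

    using System;
    using System.Threading;
    using NUnit.Framework;

    public class SagaCompleted { }   // hypothetical event - doesn't exist yet

    public class SagaCompletionWaiter
    {
        private readonly ManualResetEventSlim completed = new ManualResetEventSlim();

        // Wire this up as the handler/subscription for the event.
        public void Handle(SagaCompleted message)
        {
            completed.Set();
        }

        public void AssertCompletedWithin(TimeSpan timeout)
        {
            // Returns as soon as the event arrives; the timeout catches
            // the case where the saga actually fails.
            Assert.IsTrue(completed.Wait(timeout),
                "Saga did not complete within " + timeout);
        }
    }

That way the happy path only takes as long as the saga itself, and a genuine failure shows up as a timeout rather than an intermittent red test.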

This brings me to one problem that I've had with my SpecFlow tests - the amount of time they take to run. These aren't like unit tests that we run all the time and that only take a few seconds; the suite takes minutes to run, so I'm looking to see if there are any solutions for running them in parallel. At the moment we use the NUnit library to implement the tests, but I have seen that MbUnit has parallel-execution support which looks interesting.
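
I haven't tried it yet, so take this with a grain of salt, but from what I've read MbUnit lets you opt fixtures in with a [Parallelizable] attribute and cap the concurrency at the assembly level - something like:

    using MbUnit.Framework;

    [assembly: DegreeOfParallelism(4)]  // cap the number of concurrent tests

    [TestFixture]
    [Parallelizable]  // allow this fixture to run alongside others
    public class OrderSpecs
    {
        [Test]
        public void Order_is_audited()
        {
            // ...spec body...
        }
    }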

How do you do integration tests? Do you even do them?

Thursday, November 4, 2010

Web Testing

I've had a bit of fun doing some web testing over the last week. Mostly I work on internal systems, so I don't often get to have fun with user interfaces.

I've been using Selenium RC, with the aid of Selenium IDE in Firefox to figure out some XPath details.

A couple of things that I've learnt over the last couple of weeks:

  • Using XPath to get elements in Internet Explorer is super slow.
  • Selenium RC is great, but it's handy to have the IDE around just to work through roadblocks.
  • The .Click event mimics a keypress on an element rather than a mouse click - I found some screen-reader popups that only appear on a keypress.
  • You can't interact with Print / Save dialogs directly (although there are other tools for testing these components).

My tests are mostly checking that elements exist and have the appropriate classes/IDs, so I'm yet to see how it works with positioning and styles being applied to the page.

I'm also keen to use the CSS element locator - it's supposed to be loads faster than finding elements with XPath.
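
For reference, here's roughly what my element checks look like via the Selenium RC .NET client, with both locator styles (the URL and element details are made up):

    using NUnit.Framework;
    using Selenium;

    [TestFixture]
    public class SummaryPanelTests
    {
        private ISelenium selenium;

        [SetUp]
        public void Start()
        {
            selenium = new DefaultSelenium("localhost", 4444, "*firefox", "http://myapp.local/");
            selenium.Start();
        }

        [Test]
        public void Summary_panel_is_present_with_expected_class()
        {
            selenium.Open("/orders");

            // XPath locator - painfully slow in IE:
            Assert.IsTrue(selenium.IsElementPresent(
                "xpath=//div[@id='summary' and contains(@class, 'panel')]"));

            // The equivalent CSS locator - reportedly much faster:
            Assert.IsTrue(selenium.IsElementPresent("css=div#summary.panel"));
        }

        [TearDown]
        public void Stop()
        {
            selenium.Stop();
        }
    }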

If you're interested in Selenium, check out the proposed Stack Exchange site and give it an up vote. 

Tuesday, June 22, 2010

Planning to test


[Photo: Traffic Lights, originally uploaded by Roo Reynolds]

When I code, I want to have confidence that my code does what I'm intending, and not being the most self-confident coder in the world, writing tests boosts my confidence (except when they all break). Part of this is writing tests that are true-to-life examples of how the system will be used.

I'm not quite on the test-driven development bandwagon yet - I would love to be, but I think I need to be more disciplined to make it work, and I really need to learn more about mocking and about properly interfacing out my objects.

For my projects there are a few types of test that I'll write:
  • Integration tests that are testing a few components, for example creating a new object, validating the data and then saving to the database.
  • Story based integration tests - usually testing the API or services that are available, ensuring they correctly accept data, validate, save and return.
  • Simple database connectivity tests - to ensure my NHibernate mapping is correct (see the sketch after this list).
  • Unit tests to test a single method.
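
As an example of the database connectivity kind, here's a sketch - the Customer entity is illustrative, and I'm assuming the session factory is configured from hibernate.cfg.xml:

    using System;
    using NHibernate;
    using NHibernate.Cfg;
    using NUnit.Framework;

    // Illustrative entity - assumed to have a matching NHibernate mapping.
    public class Customer
    {
        public virtual Guid Id { get; set; }
        public virtual string Name { get; set; }
    }

    [TestFixture]
    public class CustomerMappingTests
    {
        private ISessionFactory sessionFactory;

        [TestFixtureSetUp]
        public void SetUp()
        {
            sessionFactory = new Configuration().Configure().BuildSessionFactory();
        }

        [Test]
        public void Can_save_and_reload_a_customer()
        {
            object id;
            using (var session = sessionFactory.OpenSession())
            using (var tx = session.BeginTransaction())
            {
                id = session.Save(new Customer { Name = "Test" });
                tx.Commit();
            }

            using (var session = sessionFactory.OpenSession())
            {
                var loaded = session.Get<Customer>(id);
                Assert.IsNotNull(loaded);
                Assert.AreEqual("Test", loaded.Name);
            }
        }
    }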

There is probably a wide range of scenarios that I'm not checking, which is where working with a tester to manually write up the scenarios and work through them really helps. Over the last couple of weeks I've been working with a colleague, and have found this to be really useful.

Now we've got into our fourth round of testing, and testing is really starting to become labor-intensive, which brings me to the question: how do I automate all these tests? I would be confident building the tests in code, but how do you help others build confidence in your coded tests?

How does traditional test planning fit into an agile-type development project? And given I've written a large number of automated tests, where does the need to test manually come in?

For the continuing development, testing will obviously need to continue, and we'll need to keep automated tests up to date, which is a subject for another post.

Friday, June 11, 2010

Delivery is near!


"Tiny" Delivery Truck
Originally uploaded by El Struthio
The last 6 months or so have been rather hectic, and very unstructured, but the end is in sight!

This month I've committed to delivering 'complete' software by the end of the month, so the last week or so has been bug fixes and working with a colleague to do a variety of tests on the software I've written or integrated with.

Testing has been going pretty well overall. I've got a suite of unit / integration tests that I run myself, but just having someone else go over my project, confirming what I've tested and what's in my head, has been great. It's enabled me to sit back a bit, release some of the unnecessary information from my head and prepare for handover.

There is still a lot of work to do (see my previous post about builds), and integration that is yet to be fully tested, but it has been a great relief, and it's one of the reasons I'm keen on moving towards an agile work practice - handing over a tested and complete piece of software every cycle will hopefully allow me to leave work at work more effectively.

Tuesday, June 1, 2010

Builds, dependencies and testing



[Photo originally uploaded by +fatman+]

Over the last few weeks I've been moving back and forth between testing on my computer and deploying to our test environment. I did another build this afternoon, thought everything was good, and deployed to the test server at about 4:30 for my colleague to test. Well, after he tried to run the test it turned out that I'd forgotten to change a section of my config.

I watched an interesting video on InfoQ today about deployment from dev to production, which has encouraged me to investigate the process a bit more. At the moment I've just got sections in my config for each environment, and I comment out sections when they're not needed.
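
For the record, "commenting out sections" looks something like this in my config file (the server names and connection strings are made up) - which is clearly asking for exactly the mistake I made today:

    <!-- Dev - uncomment when running locally
    <connectionStrings>
      <add name="Main" connectionString="Server=localhost;Database=MyApp;Integrated Security=SSPI" />
    </connectionStrings>
    -->

    <!-- Test -->
    <connectionStrings>
      <add name="Main" connectionString="Server=testdb01;Database=MyApp;Integrated Security=SSPI" />
    </connectionStrings>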

So, there's much to do, including getting a build server up and running and a whole lot of research about deployment - hopefully I'll be writing about all of that soon.