Tuesday, June 22, 2010
Managing connectivity in a slow, slow world
My starting point was to use an external project with a utility to create a single incident with a number of settings. This worked fine until I needed to create three incidents, as each request was managing its own connection, setting up fields, and submitting the new object to the server. At this stage my connection was timing out - there had to be a better way!
The first thing to do was extract some of the common functionality - I wanted to create a base incident, which I could then tweak for each scenario. This worked well, because most of the fields stayed the same (each field change required a query to get the database ID based on the string I had). So the first incident takes a while to set up, but each subsequent incident may not require any further database interaction until all processes are ready to submit.
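A rough sketch of that "base incident plus cached lookups" idea in Python - the field names and the `query_server_for_id` stand-in are mine for illustration, not the actual system's API:

```python
from functools import lru_cache

# Stand-in for the real round-trip to the slow external system.
QUERY_COUNT = 0

def query_server_for_id(field_value):
    """Pretend server query: string -> database ID."""
    global QUERY_COUNT
    QUERY_COUNT += 1
    return hash(field_value) % 10000  # fake ID

@lru_cache(maxsize=None)
def field_id(field_value):
    """Cache lookups so repeated incidents never re-query the server."""
    return query_server_for_id(field_value)

def base_incident():
    """Build the shared skeleton; per-scenario code tweaks a copy."""
    return {
        "category_id": field_id("hardware"),
        "priority_id": field_id("high"),
        "assignee_id": field_id("network-team"),
    }

incidents = []
for scenario in ("router down", "switch flapping", "link saturated"):
    incident = dict(base_incident())  # copy the base, then tweak
    incident["description"] = scenario
    incidents.append(incident)
```

With the cache in place, building all three incidents only costs one server query per distinct field value, no matter how many incidents reuse it.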
Initially, I had ruled out using threads, as there is one process that can break all the other processes, but I may be able to extract that one out and thread the remaining processes - which could give a bit of a performance boost.
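One way that could look, assuming the fragile step really can be isolated (the `submit_incident` function here is a placeholder for the real server call):

```python
from concurrent.futures import ThreadPoolExecutor

def submit_incident(incident):
    """Placeholder for the real (slow) submission to the server."""
    return f"submitted {incident}"

def submit_all(fragile, independent):
    # Run the one process that can break the others on its own first...
    results = [submit_incident(fragile)]
    # ...then fan the remaining independent submissions out across threads.
    with ThreadPoolExecutor(max_workers=4) as pool:
        results.extend(pool.map(submit_incident, independent))
    return results
```

`pool.map` returns results in submission order, so the caller still sees a predictable sequence even though the work overlapped.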
Another option to consider - should I hardcode all the IDs so I don't need to do so many queries? If so, how do I manage the config for different environments (part of a bigger question)? This would save some time, but perhaps present some issues down the track when the system changes.
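If I did go the hardcoded route, one shape for it might be a per-environment lookup table - the environment names and IDs below are entirely made up, and in practice each map would probably live in its own config file:

```python
# Hypothetical ID maps, keyed by environment. When the system changes,
# only these tables (or the files behind them) need updating.
CONFIGS = {
    "dev":  {"category/hardware": 101,  "priority/high": 7},
    "prod": {"category/hardware": 2443, "priority/high": 12},
}

def load_ids(environment):
    """Pick the hardcoded ID map for the current environment."""
    return CONFIGS[environment]

ids = load_ids("dev")
```

The trade-off is exactly the one above: zero queries at runtime, but the tables drift out of date the moment someone changes the data in one environment.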
It's pretty interesting thinking about performance optimisation, and this has got me thinking about where I could do things better in my own code - places where the time taken may not be noticeable on its own, but all up could be better managed.
Do you develop against external systems that are a bit on the slow side? How do you manage multiple requests? How do you ensure you can provide a prompt response to your user, while ensuring all requests complete successfully?