System Integration Testing (SIT) is the phase of most projects where a component is connected to other external systems and an enterprise view of the solution emerges, spanning modules, systems and organizations. This phase is notoriously difficult to complete on time and on budget. It differs from the Integration Testing (IT) phase in that not all components are within the control of one team. It also differs from User Acceptance Testing (UAT) in that UAT's focus is business oriented, running the application in the user's natural environment, whereas SIT ends up with a largely technical focus.
Here are a few things often forgotten, based on 15 years of doing Enterprise Application Integration projects:
Make a clear chart of dependencies
In most organizations the application environment is a complex, heterogeneous mix of technologies, vendors and functionality. When you are putting these pieces together, you should definitely put that understanding on paper. Have a diagram that explains system dependencies, process dependencies and data dependencies. I emphasize dependencies here (vs. business rules, mappings, etc.) because they are the most relevant to keeping the schedule. These can turn into very large diagrams, and most people are reluctant to detail them out, but the consequences of not doing so include bugs in systems that apparently had "nothing to do with us" and difficulty in communicating the system's complexity to a large part of the team.
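One practical use of such a dependency chart is deciding the order in which systems must be ready before the things that depend on them can be tested. As a minimal sketch (the system names here are invented for illustration), the chart can be modeled as a graph and topologically sorted:

```python
from graphlib import TopologicalSorter  # Python 3.9+

# Hypothetical landscape: each key depends on the systems in its value set.
dependencies = {
    "erp": set(),
    "sso": set(),
    "middleware": {"erp"},
    "invoicing": {"erp", "middleware"},
    "portal": {"invoicing", "sso"},
}

# A topological order guarantees every system appears after the systems
# it depends on, which is a sensible readiness/testing sequence.
order = list(TopologicalSorter(dependencies).static_order())
print(order)  # dependencies always precede their dependents
```

The same idea applies to process and data dependencies; the point is that once the chart exists in a structured form, sequencing stops being guesswork.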
Believe it or not, since most of the focus stays on the functional aspects of the solution, simple things like which user accounts systems will use to talk to each other are frequently ignored. Make sure you list these explicitly and ask for enough users to be created ahead of time. These include the accounts applications use to integrate with each other as well as the ones people use to test things out.
Data has similar problems. Make sure the dependencies you have on data are well understood; if you haven't documented them, you don't understand them well enough. We ran into many situations where the data we needed was not there, nor could it be easily created. For example, if you are integrating invoices, you may need a set of POs already created in your ERP to create invoices against. The POs may in turn have other dependencies (e.g. shipments) that they are tied to. If you don't plan for this, you will often find managers complaining that the test cases planned for SIT are just the tip of the iceberg.
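One way to keep such data dependencies honest is to make each test-data builder create its own prerequisites. The sketch below is purely illustrative (all names and IDs are invented): seeding an invoice automatically seeds the PO and the shipment it is tied to, in the right order.

```python
# Record of what got created, in creation order.
created = []

def create_shipment():
    created.append("shipment")
    return {"id": "SHP-1"}

def create_po():
    shipment = create_shipment()      # POs are tied to shipments
    created.append("po")
    return {"id": "PO-1", "shipment": shipment["id"]}

def create_invoice():
    po = create_po()                  # invoices are created against POs
    created.append("invoice")
    return {"id": "INV-1", "po": po["id"]}

invoice = create_invoice()
print(created)  # ['shipment', 'po', 'invoice']
```

With this shape, asking for the top-level entity never fails because some upstream record was missing from the environment.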
Backward compatibility and Regression
Very rarely are completely new systems rolled out. For every new system that goes live, several upgrades will generally follow, enhancing and extending its capabilities. As technical components change, they can affect prior implementations, so backward compatibility becomes a key challenge for technical design and testing. You may be integrating A to B, but since C was also integrated with B and used the same components (which you have now changed), you will need to test C as well, even though it may seem to have nothing to do with your project. Your team may have no context on system C or its key scenarios, and you may not have estimated for the effort there.
Error handling across systems

The focus in most projects revolves around positive functional scenarios. That makes sense, since it is usually what the business cares most about. It is, however, easy to neglect the vast number of ways in which things can go wrong. One of the most common gaps is error planning and normalization. For example, we had a scenario where a customer-facing online-banking application talked to a backend core-banking system. The banking system would return errors containing usernames and cryptic codes, or error descriptions written by developers in their native language (Indian English is a native language :)). An error translation framework needs to be constructed to understand, transform and normalize errors across systems. This is no small feat and needs to be planned for.
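At its core, such a translation framework is a mapping layer between backend error codes and customer-safe messages. A minimal sketch, in which every code and message is invented for illustration:

```python
# Map backend-specific codes to normalized, customer-safe messages.
ERROR_MAP = {
    "CBS-0042": "Your account is temporarily unavailable. Please try again later.",
    "CBS-0099": "This transaction exceeds your daily limit.",
}
FALLBACK = "Something went wrong. Please contact support quoting reference {ref}."

def normalize_error(backend_code: str, raw_message: str, ref: str) -> str:
    # Never pass raw backend text (usernames, developer slang) through to
    # the user; unknown codes fall back to a generic message carrying a
    # reference the support team can correlate with the raw error in logs.
    return ERROR_MAP.get(backend_code, FALLBACK.format(ref=ref))

print(normalize_error("CBS-0042", "user jdoe account locked by admin", "T123"))
```

The real work in SIT is filling in that map: cataloguing what each backend actually returns, which is exactly the part that needs to be planned and budgeted for.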
If a single business process (a click in an application) results in actions across systems (very common in SOA scenarios), you will almost never find that all of those systems agree on a single outcome of the transaction. We have talked about designing for this in an earlier article on Integration challenges. From a testing point of view, covering scenarios where systems A and B succeeded but C failed is even more complex.
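The combinatorics are what make this expensive: for a transaction touching n systems there are 2^n - 1 outcomes other than "everything succeeded". A small sketch (system names are placeholders) that enumerates the negative combinations worth at least triaging for the SIT plan:

```python
from itertools import product

systems = ["A", "B", "C"]  # hypothetical systems touched by one transaction

# Every success/failure combination except "all succeed" is a partial- or
# total-failure scenario the test plan should at least consider.
outcomes = [dict(zip(systems, combo))
            for combo in product([True, False], repeat=len(systems))
            if not all(combo)]

print(len(outcomes))  # 7 negative combinations for 3 systems
```

Not every combination deserves a full test case, but enumerating them forces an explicit decision about which ones are covered and which are consciously skipped.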
Design to limit dependencies
Design your testing scenarios and your application to reduce the number of times the entire testing cycle stops because one thing is missing. In a recent project I worked on, we had some 60 major test cases that needed testing in SIT. If you had an issue on test 1, everything beyond it would stop. Luckily we had the option to bypass that particular module and move on with the rest of the testing while the issue was fixed. Note that in many cases the discussion around modularized testing spans more than one system and team, so it is a topic that needs to be raised early across the whole program.
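The idea can be sketched as a dependency-aware test runner: a failing case blocks only the cases that depend on it, while independent modules keep getting tested. This is a toy illustration under invented names, not a real framework:

```python
def run_suite(cases):
    """cases: list of (name, func, deps) tuples in execution order."""
    results = {}
    for name, func, deps in cases:
        if any(results.get(d) != "pass" for d in deps):
            results[name] = "skipped"   # a prerequisite failed or was skipped
            continue
        try:
            func()
            results[name] = "pass"
        except AssertionError:
            results[name] = "fail"
    return results

def simulated_defect():
    raise AssertionError("simulated defect in invoicing")

cases = [
    ("create_po",      lambda: None,     []),
    ("post_invoice",   simulated_defect, ["create_po"]),
    ("pay_invoice",    lambda: None,     ["post_invoice"]),
    ("sync_customers", lambda: None,     []),  # independent, keeps running
]
print(run_suite(cases))
# {'create_po': 'pass', 'post_invoice': 'fail', 'pay_invoice': 'skipped', 'sync_customers': 'pass'}
```

The declared dependencies here are the same dependencies charted earlier in the project; reusing them in the test plan is what keeps one defect from stalling the whole cycle.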
Have you recently gone through positive or negative experiences in SIT that we can learn from? If so, please share your thoughts with us in the comments below.