7 things to consider when integrating applications
The plethora of applications in organizations has grown significantly in the past decade. Enterprises, big and small, now have many choices for any given task, and the application portfolio seems to include everything from mammoth generalized ERPs to small specialized cloud apps. There are many reasons why companies need to maintain this application and vendor diversity, and these factors make managing it a constant challenge for IT management. Integrating these applications into one consistent and seamless business process is thus a challenge in itself.
Here are a few tips to design, architect and manage integration projects better:
You’re not just integrating apps
In most integration scenarios, you aren't just integrating applications; you are usually integrating organizations: departments within a company, or teams across vendors. These organizations have their own priorities and their own management and execution hierarchies. Most integration projects turn out to be much longer and more expensive than planned for exactly this reason: teams get fixated on the technical side of the issues and neglect the coordination and executive support necessary to execute integration projects successfully.
Build the first floor before you build the second
It sounds obvious, but you would be surprised how often integration happens between two applications that are supposed to get built during the integration project itself. Services are being customized, or new apps are being built, that are supposed to integrate with existing systems. Make sure you are architecturally clear on what the system of reference is for your data, and make sure that system is stable before you start building on top of it. Building against specifications is great in theory but can lead to very prolonged testing phases as teams play stop-and-go with each other while they fix dependencies.
Now for some architectural patterns that are very useful, and usually ignored, in integration projects:
Centralized Audit and Logging
One of the biggest challenges in integrated environments is the lack of end-to-end visibility of data as it passes through applications. When designing, make sure all systems provide visibility of key input/output data. Ideally this should be presented in a common logging format that can then be consolidated to paint a consistent picture.
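As a sketch of what a common logging format might look like, the snippet below emits one JSON line per event, keyed by a correlation ID that travels with the data from system to system. The field names and system names are illustrative assumptions, not a standard.

```python
import json
import logging
import uuid

# Each application logs in the same shape so the lines can later be
# consolidated (e.g. by a log aggregator) into one end-to-end trace.
logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("integration")

def log_event(correlation_id, system, step, payload):
    """Emit one log line in the shared format and return it."""
    line = json.dumps({
        "correlation_id": correlation_id,
        "system": system,
        "step": step,
        "payload": payload,
    })
    log.info(line)
    return line

# One id follows the same order through every application it touches.
cid = str(uuid.uuid4())
log_event(cid, "web-shop", "order-received", {"order_id": 42})
log_event(cid, "erp", "sales-order-created", {"order_id": 42})
log_event(cid, "warehouse", "picking-started", {"order_id": 42})
```

Filtering the consolidated logs on a single correlation ID then paints the consistent, end-to-end picture described above.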
Atomicity
This is a fancy term for a simple concept: loosely speaking, it means you either do something completely or don't do it at all. In service-oriented architectures (SOA) this is a particularly common problem. Most systems nowadays expose fine-grained services (create, read, update, delete) on entities, and consumers are left to manage complex sequences on their own. Coarse-grained services (those reflecting complex organizational business processes) will generally require more than one call. For example, a coarse-grained bank loan creation might need to call a fine-grained account creation, then a fine-grained customer creation, and finally a fine-grained loan creation. If one of the three calls fails, a rollback is needed to keep the loan creation transaction consistent. Very few application vendors provide transaction support in web services, so the only option left is to manually revert the data for any calls already made, i.e. fire a delete for a create, and so on. While specific scenarios vary, this is definitely something to think about.
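A minimal sketch of that manual-revert approach, assuming each fine-grained service has an inverse "compensating" call. The bank-loan service names below are illustrative placeholders, not a real API:

```python
def run_with_compensation(steps, data):
    """Run (call, compensate) pairs in order; on any failure, fire the
    compensators for everything already done, most recent first."""
    completed = []
    try:
        for call, compensate in steps:
            completed.append((compensate, call(data)))
        return [result for _, result in completed]
    except Exception:
        for compensate, result in reversed(completed):
            compensate(result)  # e.g. fire a delete for a create
        raise

# Illustrative fine-grained services for the loan example.
def create_account(d):  return {"account_id": 1}
def delete_account(r):  pass
def create_customer(d): return {"customer_id": 2}
def delete_customer(r): pass
def create_loan(d):     return {"loan_id": 3}
def delete_loan(r):     pass

loan_steps = [
    (create_account, delete_account),
    (create_customer, delete_customer),
    (create_loan, delete_loan),
]
results = run_with_compensation(loan_steps, {"name": "Jane"})
```

If the loan creation throws, the customer and account already created are deleted in reverse order, leaving the system as if the coarse-grained call had never happened.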
Idempotency
A very useful architectural trait, but one rarely considered in service design. Idempotent services are those that can be fired twice with the same message without any additional impact. Systems built on idempotent services exhibit higher resilience and reliability compared to non-idempotent ones. Idempotency also simplifies maintenance and manual rollback routines.
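One common way to make a create-style service idempotent is to have callers supply a request ID and to remember results per ID, so a replayed message returns the original outcome instead of creating a duplicate. A minimal sketch, with made-up names and an in-memory store standing in for persistence:

```python
# request_id -> result; a real system would persist this store.
_processed = {}

def create_payment(request_id, amount):
    """Idempotent create: firing the same message twice has no
    additional impact, it just returns the original result."""
    if request_id in _processed:
        return _processed[request_id]
    result = {"payment_id": len(_processed) + 1, "amount": amount}
    _processed[request_id] = result
    return result

first = create_payment("req-1", 100)
replay = create_payment("req-1", 100)  # same message fired twice
assert first == replay                 # still exactly one payment
```

This is also why idempotency simplifies recovery: after a timeout or crash, a consumer can simply retry without first checking whether the earlier attempt went through.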
Performance
Tightly knit in-process API calls are usually so efficient that the cost of executing them is rarely considered. Calls across systems (and sometimes across networks) are more expensive and need to be planned carefully. XML-based services (common in SOA scenarios) have bulky message payloads that are expensive to serialize, transport and process. Several aspects of performance matter:
- Latency: The time an operation takes to execute from start to end. Particularly concerning if the consumer is synchronous, for example a web application where the client is waiting for a reply. You can analogize this to the length of a pipe.
- Throughput: The average number of transactions the system can process per unit of time. Each transaction might have low or high latency, but overall capacity has to do with the width of the pipe. In most asynchronous scenarios (where no user is waiting for the transaction to complete) this is the more relevant measure.
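The pipe analogy can be made concrete with a back-of-the-envelope calculation (the numbers below are made up for illustration): latency caps what a single connection can push through, while concurrency widens the pipe.

```python
latency_s = 0.2   # each remote call takes 200 ms end to end
workers = 10      # concurrent connections / consumers

# One pipe: a worker that waits 200 ms per call can complete
# at most 1 / 0.2 = 5 transactions per second.
per_worker_tps = 1 / latency_s

# Ten pipes of the same length: latency is unchanged, but the
# system's throughput is ten times higher.
total_tps = workers * per_worker_tps

print(per_worker_tps, total_tps)  # 5.0 50.0
```

This is why an integration can have acceptable throughput yet still feel slow to a waiting user, or vice versa: the two measures are improved by different means.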
Synchronous vs Asynchronous
I refer here to the way the user interacts with the data. For example, when you place an order on an online shop, the system can either tell you immediately that your order has been processed, or it can kick off the order fulfillment process and tell you that you will be notified when it is done. As a general rule, synchronous processes are much simpler to implement than asynchronous ones; however, a system designed for asynchronous operation has much higher hardware utilization and resilience.
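A minimal sketch of the asynchronous variant of the shop example, using a queue and a background worker; the function names are illustrative. The front end acknowledges the order immediately while fulfillment happens later:

```python
import queue
import threading

orders = queue.Queue()
fulfilled = []

def place_order(order):
    """Synchronous-feeling front end: enqueue and return at once."""
    orders.put(order)
    return "Order received; you will be notified when it ships."

def worker():
    """Background consumer standing in for the fulfillment process."""
    while True:
        order = orders.get()
        if order is None:          # sentinel: shut the worker down
            break
        fulfilled.append(order)    # stand-in for slow fulfillment work
        orders.task_done()

t = threading.Thread(target=worker)
t.start()
ack = place_order({"item": "book"})  # returns immediately
orders.put(None)                     # stop the worker for this demo
t.join()
```

Because the slow work is decoupled from the user's request, load spikes fill the queue instead of exhausting threads, which is where the higher utilization and resilience come from.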
There are a few more of these which I will try to cover in a separate post.