IT history can be told as a story of overcoming challenges. A constraint limits IT capability (e.g. processor speed, memory, storage, or network bandwidth), and then a step forward in technology breaks through the constraint and moves us ahead. Until we hit the next constraint. Then the cycle repeats. Think of network bandwidth in the days of dial-up versus the ubiquitous high-speed connectivity, even wireless, available now.
One way to understand today's need for integration, and the solutions that suit it, is to trace the history of that need: the challenges at each point in time and how they were overcome. From there we can anticipate what challenges may come next and how we might overcome them.
This article moves through a series of challenges and the integration capability breakthroughs that helped overcome each one. Each is a tipping point in integration that reset the technology playing field. The coverage of each topic is intentionally brief and is not a full description of each capability; there is far more that could be discussed about every one of them.
In the beginning… Technical APIs
Fairly early in the history of programming, callable subroutines came into use. Eventually this code-encapsulation approach extended to programs themselves. A program that already exists and provides good value can be called rather than recreating its logic for each potential scenario. The question is: how do you integrate with this program?
The early answer to this question was often “not easily”. In many cases the original creators of the called program did not anticipate others using it in ways beyond its initial purpose. Many of these programs mixed user-interface, business-logic, and data code that was not easily separated. This led to some difficult and fragile integration scenarios based on screen-scraping techniques.
Moving forward, programs began to provide an application programming interface (API). But do not confuse this with the Business APIs or Web APIs more recently in vogue (we will get to these later). These original “technical APIs” were highly technical and provider focused, i.e. focused on what the invoked program could provide. A calling program used a remote procedure call (RPC) to access this application interface.
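To make the pattern concrete, here is a minimal sketch of the RPC style using Python's standard-library XML-RPC modules as a stand-in for the proprietary RPC mechanisms of the era; the service and method names are hypothetical.

```python
import threading
import xmlrpc.client
from xmlrpc.server import SimpleXMLRPCServer

def get_inventory_count(part_number: str) -> int:
    # Provider-focused interface: callers must know exactly what the
    # invoked program offers and how it expects to be called.
    return 17

# Expose the existing program logic over RPC.
server = SimpleXMLRPCServer(("localhost", 8000), logRequests=False)
server.register_function(get_inventory_count)
threading.Thread(target=server.serve_forever, daemon=True).start()

# The calling program invokes the remote procedure as if it were local.
proxy = xmlrpc.client.ServerProxy("http://localhost:8000/")
print(proxy.get_inventory_count("A-100"))  # -> 17
```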
Integration was born. Still there were significant limitations.
Business partners and early file interchanges
One such limitation involves how multiple businesses can communicate (e.g. for supply chain). Trying to use an RPC call across businesses was not going to work; security concerns and non-standard interfaces are just some of the issues. Defaulting to the lowest common denominator, businesses can always exchange a file. Company A builds a file with some number of records in an agreed-upon format and sends it to company B. A follow-on to this is the set of electronic data interchange (EDI) standards, which specify the formats for the files and the techniques and approaches for sending them between trading partners.
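As an illustration, here is a minimal sketch of such a file exchange, assuming a hypothetical fixed-width record layout agreed between the two companies (10-character part number, 5-digit quantity, 8-character date):

```python
# Company A writes order records in the agreed fixed-width layout.
orders = [("WIDGET-01", 25, "20240101"), ("GADGET-07", 3, "20240102")]

with open("orders_to_company_b.txt", "w") as f:
    for part, qty, date in orders:
        f.write(f"{part:<10}{qty:05d}{date:>8}\n")

# Company B parses the same agreed positions on receipt.
with open("orders_to_company_b.txt") as f:
    for line in f:
        part, qty, date = line[0:10].strip(), int(line[10:15]), line[15:23]
        print(part, qty, date)
```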
Value Added Networks (VANs) provide EDI services that avoid point-to-point dependencies and add asynchronous buffering into the EDI transfer stage. VANs also provide a hub-and-spoke pattern for files. In addition, EDI started process-based integration: the order and sequence of messages is defined. For example, an order is sent, an order acknowledgment is sent back, then an advance shipping notice, then an invoice.
Reliability/availability and the rise of messaging
As distributed computing took hold in the early 1990s, some of the same challenges affecting integration between businesses became challenges inside the company as well. Different applications running on different platforms need to be called to complete a business transaction. However, network reliability and system/application availability are concerns. What if the destination system, application, or network connection is unavailable? Enter messaging. Rather than requiring each application to manage outage issues or incomplete transactions, messaging middleware ensures that a message from application A reaches application B, wherever the applications are located, removing this concern from the application developer.
In addition to assured delivery, messaging also allowed for independent scalability of the applications. Additional messaging patterns - one-to-many, routing, and publish/subscribe (pub/sub) - enhanced the use cases that could be tackled.
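For a concrete feel, here is a minimal sketch of assured delivery using the open-source pika client for RabbitMQ as a stand-in for any messaging middleware; the queue name and host are hypothetical.

```python
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()
channel.queue_declare(queue="orders", durable=True)  # queue survives broker restarts

# Application A hands the message to the broker and moves on; the broker,
# not the application, is responsible for getting it to application B.
channel.basic_publish(
    exchange="",
    routing_key="orders",
    body=b'{"order_id": 42}',
    properties=pika.BasicProperties(delivery_mode=2),  # persist message to disk
)

# Application B consumes whenever it is available, acknowledging each message
# so the broker knows delivery is complete.
def on_message(ch, method, properties, body):
    print("received:", body)
    ch.basic_ack(delivery_tag=method.delivery_tag)

channel.basic_consume(queue="orders", on_message_callback=on_message)
channel.start_consuming()  # blocks, processing messages as they arrive
```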
Growth, Standardization, and Maintenance
As business application portfolios grow, additional concerns arise. Many applications maintain their own copy of data, which causes duplication. Each application may also reference the data differently and maintain it in different formats. Keeping these multiple definitions accurate and in sync is a challenge.
Also, while tailoring unique calls to an application or two is not overly burdensome for a programmer, tailoring calls to many applications is a problem. The number of point-to-point application-to-application connections becomes unmanageable and very fragile. Application connectivity diagrams look like a bowl of spaghetti. If any application is changed or upgraded, the maintenance to change all the calling systems is an obstacle to progress.
The “hub and spoke” integration pattern using message brokers began to address these issues. It helped with the de-spaghettification and introduced concepts such as canonical message forms, as well as process-driven integration in the hub. One key advantage of the hub is being able to centrally manage integrations and reuse connectivity and data formats. Hub-and-spoke brokers in combination with pub/sub and canonical forms helped eliminate the point-to-point issues.
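The canonical-form idea can be sketched in a few lines: each spoke's native format is mapped once to a shared representation, so N applications need N mappings rather than on the order of N*(N-1) point-to-point translations. The field names below are hypothetical.

```python
# Each spoke application keeps its native format; the hub maps it once
# into the shared canonical customer message.
def from_crm(record: dict) -> dict:
    # CRM spoke format: {"CustName": ..., "CustNo": ...}
    return {"customer_id": record["CustNo"], "name": record["CustName"]}

def from_billing(record: dict) -> dict:
    # Billing spoke format: {"acct": ..., "holder": ...}
    return {"customer_id": record["acct"], "name": record["holder"]}

canonical = from_crm({"CustName": "Acme Corp", "CustNo": "C-100"})
print(canonical)  # {'customer_id': 'C-100', 'name': 'Acme Corp'}
```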
Service Oriented Architecture (SOA) built on top of this by defining services, often using standards (web services), to provide a façade for traditional, new, or multiple back-end applications. Governance to drive the reuse of services, often using a registry and version control, added to the centralized capabilities. This manifested in the enterprise service bus (ESB) architectural approach, providing centralized integration capability. Calling applications simply connected to the ESB; the ESB then routed and transformed the requests to the correct versions of the services and returned the response, transformed back as required, to the calling application.
Separation of concerns is the guiding principle: let the application integration middleware handle the connectivity and allow the applications to focus on the business logic or information they provide. Application integration provides the transformation, protocol conversion, and routing to take a request from one application, invoke as many applications as required (each in the format it requires), gather the necessary information, and respond to the calling application with the reply in the format it expects.
A move to consumer-focused consumption and external exposure
While SOA solves many issues, it also introduces a few new challenges. Services are relatively costly to build because they try to fulfill any need that may arise from consumers, to drive reusability across disparate scenarios. Services are intentionally coarse grained to meet these varied consumer needs. The result is that services return significant amounts of information about the requested object, and perhaps related entities too (e.g. customer + orders, customer + accounts).
A service that provides all required information about an asset is useful if the calling program needs all or most of that information. However, it is entirely too cumbersome if the requestor needs only a fraction of what is returned. Combined with the difficulty of formulating the request and parsing through a huge response, the approach is not usable in situations where a small subset of data is needed.
This need for smaller interactions became common with the rise of mobile applications. The bandwidth required to send a large amount of unnecessary data is not acceptable for performance or processing on a mobile device. Additional challenges in SOA relate to:
- onboarding additional users for services which often requires human interactions and approvals,
- the additional load that these requests can put on back-end systems, and
- securing access to these systems as traffic from mobile devices is coming from outside the enterprise.
Business (or Web) APIs address these concerns. These lightweight interfaces are consumer-oriented, tailored to specific use cases or users’ requirements, while still accessing the services that already exist in SOA. Note that these business APIs differ from the technical APIs discussed earlier in that they are far simpler: consumer oriented rather than provider oriented, requiring only the inputs necessary for the consumer scenario and returning only the output data that scenario targets. The management tooling around Business APIs provides self-service onboarding for intended audiences (via a developer portal) and protects access to the back-end systems through a security gateway. Role-based analytics also provide insights into usage and patterns.
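Here is a minimal sketch of the idea, using Flask as a stand-in for any API implementation technology; the endpoint, fields, and back-end service call are all hypothetical.

```python
from flask import Flask, jsonify

app = Flask(__name__)

def fetch_customer_service(customer_id: str) -> dict:
    # Stand-in for the existing coarse-grained SOA service, which returns
    # far more data than a mobile consumer needs.
    return {
        "id": customer_id,
        "name": "Acme Corp",
        "billing_address": {"street": "1 Main St", "city": "Armonk"},
        "orders": [{"order_id": 1, "total": 99.0}],
        "credit_rating": "AA",
    }

@app.route("/mobile/v1/customers/<customer_id>/summary")
def customer_summary(customer_id):
    full = fetch_customer_service(customer_id)
    # The Business API returns only the fields this consumer scenario needs.
    return jsonify({"id": full["id"], "name": full["name"]})

if __name__ == "__main__":
    app.run()
```

In a production setting this endpoint would sit behind the security gateway and developer portal mentioned above, rather than being exposed directly.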
Business APIs also introduce new opportunities, commonly referred to as the API Economy, and many digital transformation initiatives are built by taking advantage of business APIs.
APIs provide another mechanism for partnering, giving business partners appropriate access while restricting visibility to only the data the partners are entitled to access. Business opportunities for ecosystems and marketplaces followed.
As API consumption increased, a secondary concern arose: too many API variants. Managing many APIs with only slight variations in the returned data can result in multiple APIs that seem to do almost the same thing. Some businesses address this through selected levels of granularity (small, medium, or large) without tailoring APIs to every consumer scenario individually, but other businesses want to return just the data the calling application needs. Allowing the consumer to dynamically specify the data they require is accomplished via GraphQL, a new variation on the definition of an “API”. The term “API” is no longer tied to a specific implementation technology (e.g. REST), but rather to the approach to managing asset consumption, regardless of technology.
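A sketch of what this looks like from the consumer side, posting a GraphQL query with the widely used requests library; the endpoint and schema are hypothetical.

```python
import requests

# The consumer names exactly the fields it needs; nothing more is returned.
query = """
query ($id: ID!) {
  customer(id: $id) {
    name
    openOrders {
      orderId
      total
    }
  }
}
"""

response = requests.post(
    "https://api.example.com/graphql",  # hypothetical GraphQL endpoint
    json={"query": query, "variables": {"id": "42"}},
)
print(response.json())  # contains only the requested fields
```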
Decentralization for agility
Business APIs started the path toward increased agility by providing an abstraction layer between consumers and providers, allowing each to progress independently without affecting the other. As new applications are introduced on the cloud (e.g. SaaS), and applications are developed or modernized using a microservice architecture, it is also necessary to connect these applications, both with each other and with existing applications. API gateways, application integration, and other integration styles historically had centralized runtimes, which worked well when the applications were co-located. But with the applications no longer in the same data center, traversing the network from a cloud to a central location and then back out to another (or the same) location does not make sense. These added network hops increase latency and introduce complexity and potential security risk.
Additionally, application development approaches commonly use an agile methodology that relies on small distributed teams to develop solutions quickly. Having these teams rely on a central team of experts as the only source of integration skills created an unacceptable bottleneck.
The solution to these integration challenges is to use the same microservice technology and cloud-native approaches for the integration solutions as are used for the applications themselves. This allows the integration runtimes to be deployed as microservices near the applications they need to integrate, on any cloud or on-premises. Common integration governance and automated policies are established to allow the distributed teams to scale integration development.
Scalability and Performance
In most architectures today, applications are hosted in a variety of locations: one or more clouds and on-premises. If an application needs data or transactions hosted in another location, co-locating the integration code near the invoking application removes one network hop, but still leaves the other network connections in place, as the systems in the other locations are accessed in real time to obtain the necessary information. As the number of applications and their usage increase, the number of back-end transactions increases as well. Handling this scale with the necessary performance is costly.
The solution to this issue is to move the data that the application needs to the application's location in advance of the request, thus eliminating the real-time network traversal and providing the necessary scalability and performance. This is accomplished using events.
Events use a publish/subscribe (pub/sub) model to allow applications to subscribe to the data they need from the original back-end source. As data changes on the back end, subscribed applications are sent the data they care about. This event history provides an alternative source of truth from which new front-end data stores can be built (and indeed rebuilt) without needing an initial migration exercise from the back-end sources.
These front-end, local data stores are a projection of the back-end data sources, providing only the information required by the consuming application. Applications can now access the data locally, removing the latency of accessing it in real time for every transaction. It also reduces the scale requirements on the back end, since the data is published once as it changes rather than in response to each individual request.
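A minimal sketch of building such a projection, using the kafka-python client as one possible pub/sub transport; the topic, broker address, and field names are hypothetical.

```python
import json
from kafka import KafkaConsumer

# Subscribe to change events published by the back-end source of truth.
consumer = KafkaConsumer(
    "customer-changes",               # hypothetical topic of customer updates
    bootstrap_servers="localhost:9092",
    group_id="mobile-projection",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)

projection = {}  # local, consumer-specific view of the back-end data

for event in consumer:
    customer = event.value
    # Project only the fields this application actually needs; reads are
    # then served locally with no real-time call to the back end.
    projection[customer["id"]] = {"name": customer["name"], "tier": customer["tier"]}
```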
What’s next?
I have attempted to succinctly describe the challenges and the integration capabilities developed to solve each one. We can debate whether I was succinct – probably too little for some readers and too much for others!
Quiz question:
In this article we identified the following types of integration capabilities: screen-scraping, file, EDI, messaging, application integration, Business APIs (with a security gateway), and events. What can we notice about this full list of integration capabilities?
The answer is that they are all still in use today. Subsequent integration capabilities do not replace the preceding ones; each still exists in some form to address the need that drove its creation. Each new capability adds another tool to our integration toolkit, to be used for the appropriate challenge. In fact, most situations use several of these capabilities together. That combination is the focus I plan to address next.
In part 2 of this series, A Perspective on Current Integration Scenarios and What Might Follow, I discuss common integration patterns and take out my crystal ball to predict the next set of challenges and where integration is headed.
If you have questions, please let me know. Connect with me through comments here or via Twitter @Arglick to continue the discussion.
Graphics courtesy of Pixabay.com – Myriams-Fotos, Gerd Altmann, Shafin_Protic